    [PATCH] timers fixes/improvements (commit 55c888d6)
    Oleg Nesterov authored
    
    
    This patch tries to solve the following problems:
    
    1. del_timer_sync() is racy. The timer can be fired again after
       del_timer_sync() has checked all CPUs and before it rechecks
       timer_pending(). (A sketch of the pre-patch logic follows this
       list.)
    
    2. It has scalability problems. All CPUs are scanned to determine
       whether the timer is running on any of them.

       With this patch del_timer_sync() is O(1) and no slower than plain
       del_timer(pending_timer), unless it has to actually wait for
       completion of the currently running timer.

       The only restriction is that a recurring timer should not use
       add_timer_on().
    
    3. Timers are not serialized with respect to themselves.

       If CPU_0 does mod_timer(jiffies+1) while the timer is currently
       running on CPU_1, it is quite possible that the local timer
       interrupt on CPU_0 will start that timer before it has finished
       on CPU_1.
    
    4. The timer locking is suboptimal. __mod_timer() takes 3 locks at
       once and still requires wmb() in del_timer()/__run_timers().

       The new implementation takes 2 locks sequentially and does not
       need memory barriers.
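
    For reference, the pre-patch del_timer_sync() has roughly the structure
    sketched below. This is an illustrative reconstruction of the
    scan-all-CPUs-then-recheck scheme from points 1 and 2, not the exact
    old code:

            /*
             * Illustrative sketch of the OLD del_timer_sync(): delete the
             * timer, scan every CPU waiting for a running handler, then
             * recheck the pending state.
             */
            int del_timer_sync(struct timer_list *timer)
            {
                    int cpu, ret = 0;

            again:
                    ret += del_timer(timer);

                    /* O(nr_cpus): wait until no CPU runs this handler */
                    for_each_online_cpu(cpu) {
                            tvec_base_t *base = &per_cpu(tvec_bases, cpu);

                            while (base->running_timer == timer)
                                    cpu_relax();
                    }

                    /*
                     * The race from point 1: the timer can be fired again
                     * after the scan above and before this recheck, so we
                     * may return while the handler is still running.
                     */
                    if (timer_pending(timer))
                            goto again;

                    return ret;
            }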
    
    Currently ->base != NULL means that the timer is pending. In that case
    ->base->lock is used to lock the timer. __mod_timer() also takes
    timer->lock because ->base can be == NULL.
    
    This patch uses timer->entry.next != NULL as the indication that the
    timer is pending. So it does __list_del(); entry->next = NULL instead
    of list_del() when the timer is deleted.
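
    In sketch form (the detach_timer() name and its clear_pending argument
    are illustrative, not necessarily the exact hunk):

            /* Sketch: "pending" now means "linked into some ->tvX list" */
            static inline int timer_pending(const struct timer_list *timer)
            {
                    return timer->entry.next != NULL;
            }

            /*
             * Sketch of the detach helper: __list_del() only unlinks the
             * entry; ->next is set to NULL (i.e. "not pending") only when
             * the timer is really going away, not when __mod_timer() is
             * about to re-add it.
             */
            static inline void detach_timer(struct timer_list *timer,
                                            int clear_pending)
            {
                    struct list_head *entry = &timer->entry;

                    __list_del(entry->prev, entry->next);
                    if (clear_pending)
                            entry->next = NULL;
            }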
    
    The ->base field is used for hashed locking only; it is initialized
    in init_timer(), which sets ->base = per_cpu(tvec_bases). When
    tvec_bases.lock is locked, it means that all timers which are tied
    to this base via timer->base are locked, and the base itself is
    locked too.
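
    So init_timer() becomes roughly the following sketch (the exact per-cpu
    accessor and the .t_base member, which is described at the end of this
    message, may differ in detail):

            void init_timer(struct timer_list *timer)
            {
                    timer->entry.next = NULL;       /* not pending */
                    /* hashed locking only: point at the local CPU's base */
                    timer->base = &per_cpu(tvec_bases, raw_smp_processor_id()).t_base;
            }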
    
    So __run_timers/migrate_timers can safely modify all timers which could
    be found on ->tvX lists (pending timers).
    
    When the timer's base is locked and the timer is removed from the
    ->entry list (which means that __run_timers/migrate_timers can't see
    this timer), it is possible to set timer->base = NULL and drop the
    lock: the timer remains locked.
    
    This patch adds a lock_timer_base() helper, which waits for
    ->base != NULL, locks that base, and checks that it is still the same.
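
    A sketch of that helper, reusing the struct timer_base_s naming
    introduced at the end of this message (details may differ from the
    actual hunk):

            static struct timer_base_s *lock_timer_base(struct timer_list *timer,
                                                        unsigned long *flags)
            {
                    struct timer_base_s *base;

                    for (;;) {
                            base = timer->base;
                            if (likely(base != NULL)) {
                                    spin_lock_irqsave(&base->lock, *flags);
                                    if (likely(base == timer->base))
                                            return base;
                                    /* the timer is migrating, retry */
                                    spin_unlock_irqrestore(&base->lock, *flags);
                            }
                            cpu_relax();
                    }
            }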
    
    __mod_timer() schedules the timer on the local CPU and changes its
    base. However, it does not lock both the old and new bases at once.
    It locks the timer via lock_timer_base(), deletes the timer, sets
    ->base = NULL, and unlocks the old base. Then __mod_timer() locks
    new_base, sets ->base = new_base, and adds this timer. This simplifies
    the code, because an AB-BA deadlock is not possible. __mod_timer()
    also ensures that the timer's base is not changed while the timer's
    handler is running on the old base.
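
    Roughly, as a sketch: internal_add_timer() is the existing helper that
    hashes the timer into the ->tvX lists, and the distinction between
    struct timer_base_s and the containing per-cpu tvec base is glossed
    over here:

            int __mod_timer(struct timer_list *timer, unsigned long expires)
            {
                    struct timer_base_s *base, *new_base;
                    unsigned long flags;
                    int ret = 0;

                    base = lock_timer_base(timer, &flags);

                    if (timer_pending(timer)) {
                            detach_timer(timer, 0);         /* stays logically pending */
                            ret = 1;
                    }

                    new_base = &__get_cpu_var(tvec_bases).t_base;

                    if (base != new_base) {
                            if (unlikely(base->running_timer == timer)) {
                                    /*
                                     * The handler runs on the old base: keep
                                     * ->base unchanged so del_timer_sync()
                                     * and self-serialization keep working.
                                     */
                                    new_base = base;
                            } else {
                                    /* see the comment in lock_timer_base() */
                                    timer->base = NULL;
                                    spin_unlock(&base->lock);
                                    spin_lock(&new_base->lock);
                                    timer->base = new_base;
                                    base = new_base;
                            }
                    }

                    timer->expires = expires;
                    /* simplified: re-hash into the containing base's ->tvX lists */
                    internal_add_timer(base, timer);
                    spin_unlock_irqrestore(&base->lock, flags);

                    return ret;
            }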
    
    __run_timers() and del_timer() do not change ->base anymore, they only
    clear the pending flag.
    
    So del_timer_sync() can test timer->base->running_timer == timer to detect
    whether it is running or not.
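
    So the new del_timer_sync() boils down to something like this sketch,
    again using the illustrative detach_timer() helper from above:

            int del_timer_sync(struct timer_list *timer)
            {
                    struct timer_base_s *base;
                    unsigned long flags;
                    int ret;

                    for (;;) {
                            base = lock_timer_base(timer, &flags);

                            if (base->running_timer != timer) {
                                    /* handler not running: safe to delete */
                                    ret = 0;
                                    if (timer_pending(timer)) {
                                            detach_timer(timer, 1);
                                            ret = 1;
                                    }
                                    spin_unlock_irqrestore(&base->lock, flags);
                                    return ret;
                            }

                            /* wait for the running handler and try again */
                            spin_unlock_irqrestore(&base->lock, flags);
                            cpu_relax();
                    }
            }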
    
    We don't need timer_list->lock anymore, this patch kills it.
    
    We also don't need barriers. del_timer() and __run_timers() used
    smp_wmb() before clearing the timer's pending flag. It was needed
    because __mod_timer() did not lock old_base if the timer was not
    pending, so __mod_timer()->list_add() could race with
    del_timer()->list_del(). With this patch these functions are
    serialized through base->lock.
    
    One problem: TIMER_INITIALIZER can't use per_cpu(tvec_bases). So this
    patch adds a global
    
            struct timer_base_s {
                    spinlock_t lock;
                    struct timer_list *running_timer;
            } __init_timer_base;
    
    which is used by TIMER_INITIALIZER. The corresponding fields in the
    tvec_t_base_s struct are replaced by a struct timer_base_s t_base
    member.
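
    The static initializer then just points at that global base; roughly
    (other timer_list fields are omitted from this sketch):

            #define TIMER_INITIALIZER(_function, _expires, _data) {        \
                            .function = (_function),                       \
                            .expires = (_expires),                         \
                            .data = (_data),                               \
                            .base = &__init_timer_base,                    \
                    }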
    
    It is indeed ugly. But this can't have scalability problems. The global
    __init_timer_base.lock is used only when __mod_timer() is called for the
    first time AND the timer was compile-time initialized. After that the
    timer migrates to the local CPU.
    
    Signed-off-by: Oleg Nesterov <oleg@tv-sign.ru>
    Acked-by: Ingo Molnar <mingo@elte.hu>
    Signed-off-by: Renaud Lienhart <renaud.lienhart@free.fr>
    Signed-off-by: Andrew Morton <akpm@osdl.org>
    Signed-off-by: Linus Torvalds <torvalds@osdl.org>