    locking: Optimize lock_bh functions · 9ea4c380
    Peter Zijlstra authored
    
    
    Currently all _bh_ lock functions do two preempt_count operations:
    
      local_bh_disable();
      preempt_disable();
    
    and for the unlock:
    
      preempt_enable_no_resched();
      local_bh_enable();
    
Since it's a waste of perfectly good cycles to modify the same variable
twice when you can do it in one go, use the new
__local_bh_{dis,en}able_ip() functions, which allow us to provide a
preempt_count value to add/sub.
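For reference, both helpers take the caller's instruction pointer (for
tracing) and the preempt_count delta to apply; a sketch of the
prototypes as they live in kernel/softirq.c:

  /* add/sub 'cnt' to/from preempt_count in a single operation */
  extern void __local_bh_disable_ip(unsigned long ip, unsigned int cnt);
  extern void __local_bh_enable_ip(unsigned long ip, unsigned int cnt);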
    
So define SOFTIRQ_LOCK_OFFSET as the offset a _bh_ lock needs to
add/sub so that the whole operation can be done in one go, as sketched
below.
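A minimal sketch of the resulting _bh_ spinlock paths, each collapsing
into a single preempt_count update (modelled on the
__raw_spin_{lock,unlock}_bh helpers this patch touches; the exact
constant names are assumed from the kernel of that era):

  #define SOFTIRQ_LOCK_OFFSET (SOFTIRQ_DISABLE_OFFSET + PREEMPT_CHECK_OFFSET)

  static inline void __raw_spin_lock_bh(raw_spinlock_t *lock)
  {
          /* one preempt_count op disables both softirqs and preemption */
          __local_bh_disable_ip(_RET_IP_, SOFTIRQ_LOCK_OFFSET);
          spin_acquire(&lock->dep_map, 0, 0, _RET_IP_);
          LOCK_CONTENDED(lock, do_raw_spin_trylock, do_raw_spin_lock);
  }

  static inline void __raw_spin_unlock_bh(raw_spinlock_t *lock)
  {
          spin_release(&lock->dep_map, 1, _RET_IP_);
          do_raw_spin_unlock(lock);
          /* one op re-enables both; no preempt_enable_no_resched() */
          __local_bh_enable_ip(_RET_IP_, SOFTIRQ_LOCK_OFFSET);
  }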
    
    As a bonus it gets rid of the preempt_enable_no_resched() usage.
    
This reduces 1000 loops of:
    
      spin_lock_bh(&bh_lock);
      spin_unlock_bh(&bh_lock);
    
from 53596 cycles to 51995 cycles. I didn't do enough measurements to
say with absolute certainty that the result is significant, but the few
runs I did for each suggest it is.
    
Reviewed-by: Thomas Gleixner <tglx@linutronix.de>
Signed-off-by: Peter Zijlstra <peterz@infradead.org>
    Cc: jacob.jun.pan@linux.intel.com
    Cc: Mike Galbraith <bitbucket@online.de>
    Cc: hpa@zytor.com
    Cc: Arjan van de Ven <arjan@linux.intel.com>
    Cc: lenb@kernel.org
    Cc: rjw@rjwysocki.net
    Cc: rui.zhang@intel.com
    Cc: Linus Torvalds <torvalds@linux-foundation.org>
    Cc: Andrew Morton <akpm@linux-foundation.org>
    Link: http://lkml.kernel.org/r/20131119151338.GF3694@twins.programming.kicks-ass.net
    
    
Signed-off-by: Ingo Molnar <mingo@kernel.org>