  1. Jul 26, 2011
  2. May 06, 2011
  3. Mar 04, 2011
  4. Oct 05, 2010
  5. Aug 19, 2010
    • rcu: define __rcu address space modifier for sparse · ca5ecddf
      Paul E. McKenney authored
      
      This commit provides definitions for the __rcu annotation defined earlier.
      This annotation permits sparse to check for correct use of RCU-protected
      pointers.  If a pointer that is annotated with __rcu is accessed
      directly (as opposed to via rcu_dereference(), rcu_assign_pointer(),
      or one of their variants), sparse can be made to complain.  To enable
      such complaints, use the new default-disabled CONFIG_SPARSE_RCU_POINTER
      kernel configuration option.  Please note that these sparse complaints are
      intended to be a debugging aid, -not- a code-style-enforcement mechanism.
      
      There are special rcu_dereference_protected() and rcu_access_pointer()
      accessors for use when RCU read-side protection is not required, for
      example, when no other CPU has access to the data structure in question
      or while the current CPU holds the update-side lock.
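
      For illustration, here is a minimal usage sketch; the struct, variable,
      and function names are made up, and gbl_foo is assumed to have been
      initialized elsewhere:

      	struct foo {
      		int val;
      	};

      	static struct foo __rcu *gbl_foo;	/* sparse now checks accesses */
      	static DEFINE_SPINLOCK(gbl_foo_lock);

      	int foo_get_val(void)			/* reader */
      	{
      		int val;

      		rcu_read_lock();
      		val = rcu_dereference(gbl_foo)->val;
      		rcu_read_unlock();
      		return val;
      	}

      	void foo_replace(struct foo *newp)	/* updater */
      	{
      		struct foo *oldp;

      		spin_lock(&gbl_foo_lock);
      		oldp = rcu_dereference_protected(gbl_foo,
      				lockdep_is_held(&gbl_foo_lock));
      		rcu_assign_pointer(gbl_foo, newp);
      		spin_unlock(&gbl_foo_lock);
      		synchronize_rcu();	/* wait for pre-existing readers */
      		kfree(oldp);
      	}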
      
      This patch also updates a number of docbook comments that were showing
      their age.
      
      Signed-off-by: Arnd Bergmann <arnd@arndb.de>
      Signed-off-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
      Cc: Christopher Li <sparse@chrisli.org>
      Reviewed-by: Josh Triplett <josh@joshtriplett.org>
  6. Jun 14, 2010
    • tree/tiny rcu: Add debug RCU head objects · 551d55a9
      Mathieu Desnoyers authored
      
      Helps find racy users of call_rcu(), which result in hangs because list
      entries are overwritten and/or skipped.
      
      Changelog since v4:
      - Bisectability is now OK
      - Now generate a WARN_ON_ONCE() for non-initialized rcu_head passed to
        call_rcu(). Statically initialized objects are detected with
        object_is_static().
      - Rename rcu_head_init_on_stack to init_rcu_head_on_stack.
      - Remove init_rcu_head() completely.
      
      Changelog since v3:
      - Include comments from Lai Jiangshan
      
      This new patch version is based on the debugobjects with the newly introduced
      "active state" tracker.
      
      Non-initialized entries are all considered as "statically initialized". An
      activation fixup (triggered by call_rcu()) takes care of performing the debug
      object initialization without issuing any warning. Since we cannot increase the
      size of struct rcu_head, I don't see much room to put an identifier for
      statically initialized rcu_head structures. So for now, we have to live without
      "activation without explicit init" detection. But the main purpose of this debug
      option is to detect double-activations (double call_rcu() use of a rcu_head
      before the callback is executed), which is correctly addressed here.
      
      This also detects potential internal RCU callback corruption, which would cause
      the callbacks to be executed twice.
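
      For illustration, a hypothetical double-call_rcu() bug of the kind this
      option catches (the struct and function names are made up):

      	struct foo {
      		struct rcu_head rcu;
      		int data;
      	};

      	static void foo_reclaim(struct rcu_head *head)
      	{
      		kfree(container_of(head, struct foo, rcu));
      	}

      	void foo_release(struct foo *p)
      	{
      		call_rcu(&p->rcu, foo_reclaim);
      		/*
      		 * BUG: the same rcu_head is passed to call_rcu() again
      		 * before the first callback has run.  With this debug
      		 * option enabled, the activation check warns here instead
      		 * of silently corrupting the callback list.
      		 */
      		call_rcu(&p->rcu, foo_reclaim);
      	}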
      
      Signed-off-by: Mathieu Desnoyers <mathieu.desnoyers@efficios.com>
      CC: David S. Miller <davem@davemloft.net>
      CC: "Paul E. McKenney" <paulmck@linux.vnet.ibm.com>
      CC: akpm@linux-foundation.org
      CC: mingo@elte.hu
      CC: laijs@cn.fujitsu.com
      CC: dipankar@in.ibm.com
      CC: josh@joshtriplett.org
      CC: dvhltc@us.ibm.com
      CC: niv@us.ibm.com
      CC: tglx@linutronix.de
      CC: peterz@infradead.org
      CC: rostedt@goodmis.org
      CC: Valdis.Kletnieks@vt.edu
      CC: dhowells@redhat.com
      CC: eric.dumazet@gmail.com
      CC: Alexey Dobriyan <adobriyan@gmail.com>
      Signed-off-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
      Reviewed-by: Lai Jiangshan <laijs@cn.fujitsu.com>
  7. May 10, 2010
  8. May 06, 2010
  9. Apr 19, 2010
  10. Mar 16, 2010
  11. Feb 26, 2010
    • rcu: Export rcu_scheduler_active · f5f65409
      Paul E. McKenney authored
      
      Kernel modules using rcu_read_lock_sched_held() must now have
      access to rcu_scheduler_active, so it must be exported.
      
      This should fix the fix for the boot-time RCU-lockdep splat.
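
      For illustration, the kind of (hypothetical) module code affected: the
      debug check below compiles rcu_read_lock_sched_held() -- which reads
      rcu_scheduler_active -- into the module, so that symbol must be
      exported:

      	static void foo_update(void)
      	{
      		/* Complain if the caller forgot sched-RCU protection. */
      		WARN_ON_ONCE(!rcu_read_lock_sched_held());
      		/* ... carry out an update protected by sched-RCU ... */
      	}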
      
      Signed-off-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
      Cc: laijs@cn.fujitsu.com
      Cc: dipankar@in.ibm.com
      Cc: mathieu.desnoyers@polymtl.ca
      Cc: josh@joshtriplett.org
      Cc: dvhltc@us.ibm.com
      Cc: niv@us.ibm.com
      Cc: peterz@infradead.org
      Cc: rostedt@goodmis.org
      Cc: Valdis.Kletnieks@vt.edu
      Cc: dhowells@redhat.com
      LKML-Reference: <20100226030230.GA7743@linux.vnet.ibm.com>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
    • rcu: Make rcu_read_lock_sched_held() take boot time into account · d9f1bb6a
      Paul E. McKenney authored
      
      Before the scheduler starts, all tasks are non-preemptible by
      definition. So, during that time, rcu_read_lock_sched_held()
      needs to always return "true".  This patch makes that be so.
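
      A sketch of the idea (simplified, not the exact kernel code): treat the
      entire pre-scheduler portion of boot as one long sched-RCU read-side
      critical section:

      	int rcu_read_lock_sched_held(void)
      	{
      		if (!rcu_scheduler_active)
      			return 1;	/* boot: nothing is preemptible yet */
      		return preempt_count() != 0 || irqs_disabled();
      	}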
      
      Signed-off-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
      Cc: laijs@cn.fujitsu.com
      Cc: dipankar@in.ibm.com
      Cc: mathieu.desnoyers@polymtl.ca
      Cc: josh@joshtriplett.org
      Cc: dvhltc@us.ibm.com
      Cc: niv@us.ibm.com
      Cc: peterz@infradead.org
      Cc: rostedt@goodmis.org
      Cc: Valdis.Kletnieks@vt.edu
      Cc: dhowells@redhat.com
      LKML-Reference: <1267135607-7056-2-git-send-email-paulmck@linux.vnet.ibm.com>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
  12. Feb 25, 2010
    • rcu: Introduce lockdep-based checking to RCU read-side primitives · 632ee200
      Paul E. McKenney authored
      
      Inspection is proving insufficient to catch all RCU misuses,
      which is understandable given that rcu_dereference() might be
      protected by any of four different flavors of RCU (RCU, RCU-bh,
      RCU-sched, and SRCU), and might also/instead be protected by any
      of a number of locking primitives. It is therefore time to
      enlist the aid of lockdep.
      
      This set of patches is inspired by earlier work by Peter
      Zijlstra and Thomas Gleixner, and takes the following approach:
      
      o	Set up separate lockdep classes for RCU, RCU-bh, and RCU-sched.
      
      o	Set up separate lockdep classes for each instance of SRCU.
      
      o	Create primitives that check for being in an RCU read-side
      	critical section.  These return exact answers if lockdep is
      	fully enabled, but if unsure, report being in an RCU read-side
      	critical section.  (We want to avoid false positives!)
      	The primitives are:
      
      	For RCU: rcu_read_lock_held(void)
      
      	For RCU-bh: rcu_read_lock_bh_held(void)
      
      	For RCU-sched: rcu_read_lock_sched_held(void)
      
      	For SRCU: srcu_read_lock_held(struct srcu_struct *sp)
      
      o	Add rcu_dereference_check(), which takes a second argument
      	in which one places a boolean expression based on the above
      	primitives and/or lockdep_is_held().
      
      o	A new kernel configuration parameter, CONFIG_PROVE_RCU, enables
      	rcu_dereference_check().  This depends on CONFIG_PROVE_LOCKING,
      	and should be quite helpful during the transition period while
      	CONFIG_PROVE_RCU-unaware patches are in flight.
      
      The existing rcu_dereference() primitive does no checking, but
      upcoming patches will change that.
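
      For illustration, a minimal (hypothetical) use of rcu_dereference_check()
      for a pointer that may legitimately be accessed either under
      rcu_read_lock() or with the update-side lock held:

      	struct foo;

      	static struct foo *gbl_foo;
      	static DEFINE_SPINLOCK(gbl_foo_lock);

      	static struct foo *get_gbl_foo(void)
      	{
      		return rcu_dereference_check(gbl_foo,
      					     rcu_read_lock_held() ||
      					     lockdep_is_held(&gbl_foo_lock));
      	}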
      
      Signed-off-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
      Cc: laijs@cn.fujitsu.com
      Cc: dipankar@in.ibm.com
      Cc: mathieu.desnoyers@polymtl.ca
      Cc: josh@joshtriplett.org
      Cc: dvhltc@us.ibm.com
      Cc: niv@us.ibm.com
      Cc: peterz@infradead.org
      Cc: rostedt@goodmis.org
      Cc: Valdis.Kletnieks@vt.edu
      Cc: dhowells@redhat.com
      LKML-Reference: <1266887105-1528-1-git-send-email-paulmck@linux.vnet.ibm.com>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
  13. Nov 22, 2009
    • rcu: Re-arrange code to reduce #ifdef pain · 6ebb237b
      Paul E. McKenney authored
      
      Remove #ifdefs from kernel/rcupdate.c and
      include/linux/rcupdate.h by moving code to
      include/linux/rcutiny.h, include/linux/rcutree.h, and
      kernel/rcutree.c.
      
      Also remove some definitions that are no longer used.
      
      Signed-off-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
      Cc: laijs@cn.fujitsu.com
      Cc: dipankar@in.ibm.com
      Cc: mathieu.desnoyers@polymtl.ca
      Cc: josh@joshtriplett.org
      Cc: dvhltc@us.ibm.com
      Cc: niv@us.ibm.com
      Cc: peterz@infradead.org
      Cc: rostedt@goodmis.org
      Cc: Valdis.Kletnieks@vt.edu
      Cc: dhowells@redhat.com
      LKML-Reference: <1258908830885-git-send-email->
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
    • rcu: Eliminate unneeded function wrapping · 9f680ab4
      Paul E. McKenney authored
      
      The function rcu_init() is a wrapper for __rcu_init(), and also
      sets up the CPU-hotplug notifier for rcu_barrier_cpu_hotplug().
      But TINY_RCU doesn't need CPU-hotplug notification, and the
      rcu_barrier_cpu_hotplug() is a simple wrapper for
      rcu_cpu_notify().
      
      So push rcu_init() out to kernel/rcutree.c and kernel/rcutiny.c
      and get rid of the wrapper function rcu_barrier_cpu_hotplug().
      
      Signed-off-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
      Cc: laijs@cn.fujitsu.com
      Cc: dipankar@in.ibm.com
      Cc: mathieu.desnoyers@polymtl.ca
      Cc: josh@joshtriplett.org
      Cc: dvhltc@us.ibm.com
      Cc: niv@us.ibm.com
      Cc: peterz@infradead.org
      Cc: rostedt@goodmis.org
      Cc: Valdis.Kletnieks@vt.edu
      Cc: dhowells@redhat.com
      LKML-Reference: <12589088302320-git-send-email->
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
  14. Oct 26, 2009
    • rcu: "Tiny RCU", The Bloatwatch Edition · 9b1d82fa
      Paul E. McKenney authored
      This patch is a version of RCU designed for !SMP, provided as a
      small-footprint RCU implementation.  In particular, the
      implementation of synchronize_rcu() is extremely lightweight and
      high performance.  It passes rcutorture testing in each of the
      four relevant configurations (combinations of NO_HZ and PREEMPT)
      on x86.  This saves about 1K bytes compared to old Classic RCU
      (which is no longer in mainline), and more than three kilobytes
      compared to Hierarchical RCU (updated to 2.6.30):
      
      	CONFIG_TREE_RCU:

      	   text    data     bss     dec     filename
      	    183       4       0     187     kernel/rcupdate.o
      	   2783     520      36    3339     kernel/rcutree.o
      	                           3526     Total (vs 4565 for v7)

      	CONFIG_TREE_PREEMPT_RCU:

      	   text    data     bss     dec     filename
      	    263       4       0     267     kernel/rcupdate.o
      	   4594     776      52    5422     kernel/rcutree.o
      	                           5689     Total (6155 for v7)

      	CONFIG_TINY_RCU:

      	   text    data     bss     dec     filename
      	     96       4       0     100     kernel/rcupdate.o
      	    734      24       0     758     kernel/rcutiny.o
      	                            858     Total (vs 848 for v7)
      
      The above is for x86.  Your mileage may vary on other platforms.
      Further compression is possible, but is being procrastinated.
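
      For illustration, the UP-only reasoning that lets synchronize_rcu()
      shrink so far (a sketch of the idea, not the actual rcutiny code):

      	void synchronize_rcu(void)
      	{
      		/*
      		 * On a single CPU, the mere fact that the caller can block
      		 * here means it is not inside an RCU read-side critical
      		 * section, so any required grace period has already elapsed.
      		 */
      	}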
      
      Changes from v7 (http://lkml.org/lkml/2009/10/9/388)
      
      o	Apply Lai Jiangshan's review comments (aside from might_sleep()
      	in synchronize_sched(), which is covered by SMP builds).
      
      o	Fix up expedited primitives.
      
      Changes from v6 (http://lkml.org/lkml/2009/9/23/293).
      
      o	Forward ported to put it into the 2.6.33 stream.
      
      o	Added lockdep support.
      
      o	Make lightweight rcu_barrier.
      
      Changes from v5 (http://lkml.org/lkml/2009/6/23/12).
      
      o	Ported to latest pre-2.6.32 merge window kernel.
      
      	- Renamed rcu_qsctr_inc() to rcu_sched_qs().
      	- Renamed rcu_bh_qsctr_inc() to rcu_bh_qs().
      	- Provided trivial rcu_cpu_notify().
      	- Provided trivial exit_rcu().
      	- Provided trivial rcu_needs_cpu().
      	- Fixed up the rcu_*_enter/exit() functions in linux/hardirq.h.
      
      o	Removed the dependence on EMBEDDED, with a view to making
      	TINY_RCU default for !SMP at some time in the future.
      
      o	Added (trivial) support for expedited grace periods.
      
      Changes from v4 (http://lkml.org/lkml/2009/5/2/91) include:
      
      o	Squeeze the size down a bit further by removing the
      	->completed field from struct rcu_ctrlblk.
      
      o	This permits synchronize_rcu() to become the empty function.
      	Previous concerns about rcutorture were unfounded, as
      	rcutorture correctly handles a constant value from
      	rcu_batches_completed() and rcu_batches_completed_bh().
      
      Changes from v3 (http://lkml.org/lkml/2009/3/29/221) include:
      
      o	Changed rcu_batches_completed(), rcu_batches_completed_bh()
      	rcu_enter_nohz(), rcu_exit_nohz(), rcu_nmi_enter(), and
      	rcu_nmi_exit(), to be static inlines, as suggested by David
      	Howells.  Doing this saves about 100 bytes from rcutiny.o.
      	(The numbers between v3 and this v4 of the patch are not directly
      	comparable, since they are against different versions of Linux.)
      
      Changes from v2 (http://lkml.org/lkml/2009/2/3/333) include:
      
      o	Fix whitespace issues.
      
      o	Change short-circuit "||" operator to instead be "+" in order to
      	fix performance bug noted by "kraai" on LWN.
      
      		(http://lwn.net/Articles/324348/)
      
      Changes from v1 (http://lkml.org/lkml/2009/1/13/440) include:
      
      o	This version depends on EMBEDDED as well as !SMP, as suggested
      	by Ingo.
      
      o	Updated rcu_needs_cpu() to unconditionally return zero,
      	permitting the CPU to enter dynticks-idle mode at any time.
      	This works because callbacks can be invoked upon entry to
      	dynticks-idle mode.
      
      o	Paul is now OK with this being included, based on a poll at the
      	Kernel Miniconf at linux.conf.au, where about ten people said
      	that they cared about saving 900 bytes on single-CPU systems.
      
      o	Applies to both mainline and tip/core/rcu.
      
      Signed-off-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
      Acked-by: David Howells <dhowells@redhat.com>
      Acked-by: Josh Triplett <josh@joshtriplett.org>
      Reviewed-by: Lai Jiangshan <laijs@cn.fujitsu.com>
      Cc: dipankar@in.ibm.com
      Cc: mathieu.desnoyers@polymtl.ca
      Cc: dvhltc@us.ibm.com
      Cc: niv@us.ibm.com
      Cc: peterz@infradead.org
      Cc: rostedt@goodmis.org
      Cc: Valdis.Kletnieks@vt.edu
      Cc: avi@redhat.com
      Cc: mtosatti@redhat.com
      LKML-Reference: <12565226351355-git-send-email->
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
  15. Oct 07, 2009
    • rcu: Move rcu_barrier() to rcutree · d0ec774c
      Paul E. McKenney authored
      
      Move the existing rcu_barrier() implementation to rcutree.c,
      consistent with the fact that the rcu_barrier() implementation is
      tied quite tightly to the RCU implementation.
      
      This opens the way to simplify and fix rcutree.c's rcu_barrier()
      implementation in a later patch.
      
      Signed-off-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
      Cc: laijs@cn.fujitsu.com
      Cc: dipankar@in.ibm.com
      Cc: akpm@linux-foundation.org
      Cc: mathieu.desnoyers@polymtl.ca
      Cc: josh@joshtriplett.org
      Cc: dvhltc@us.ibm.com
      Cc: niv@us.ibm.com
      Cc: peterz@infradead.org
      Cc: rostedt@goodmis.org
      Cc: Valdis.Kletnieks@vt.edu
      Cc: dhowells@redhat.com
      LKML-Reference: <12548908982563-git-send-email->
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
  16. Oct 05, 2009
  17. Sep 23, 2009
  18. Sep 19, 2009
    • rcu: Fix whitespace inconsistencies · a71fca58
      Paul E. McKenney authored
      
      Fix a number of whitespace ^Ierrors in the include/linux/rcu*
      and the kernel/rcu* files.
      
      Signed-off-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
      Cc: laijs@cn.fujitsu.com
      Cc: dipankar@in.ibm.com
      Cc: akpm@linux-foundation.org
      Cc: mathieu.desnoyers@polymtl.ca
      Cc: josh@joshtriplett.org
      Cc: dvhltc@us.ibm.com
      Cc: niv@us.ibm.com
      Cc: peterz@infradead.org
      Cc: rostedt@goodmis.org
      Cc: Valdis.Kletnieks@vt.edu
      LKML-Reference: <20090918172819.GA24405@linux.vnet.ibm.com>
      [ did more checkpatch fixlets ]
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
  19. Sep 17, 2009
    • rcu: Fix synchronize_rcu() for TREE_PREEMPT_RCU · 16e30811
      Paul E. McKenney authored
      
      The redirection of synchronize_sched() to synchronize_rcu() was
      appropriate for TREE_RCU, but not for TREE_PREEMPT_RCU.
      
      Fix this by creating an underlying synchronize_sched().  TREE_RCU
      then redirects synchronize_rcu() to synchronize_sched(), while
      TREE_PREEMPT_RCU has its own version of synchronize_rcu().
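
      For illustration, a sketch of the resulting redirection (simplified, not
      the literal header code):

      	/* The underlying primitive is always available: */
      	extern void synchronize_sched(void);

      	#ifdef CONFIG_TREE_PREEMPT_RCU
      	extern void synchronize_rcu(void);	/* preemptible RCU needs the real thing */
      	#else
      	static inline void synchronize_rcu(void)
      	{
      		synchronize_sched();	/* non-preemptible: a sched GP suffices */
      	}
      	#endif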
      
      Signed-off-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
      Cc: laijs@cn.fujitsu.com
      Cc: dipankar@in.ibm.com
      Cc: akpm@linux-foundation.org
      Cc: mathieu.desnoyers@polymtl.ca
      Cc: josh@joshtriplett.org
      Cc: dvhltc@us.ibm.com
      Cc: niv@us.ibm.com
      Cc: peterz@infradead.org
      Cc: rostedt@goodmis.org
      Cc: Valdis.Kletnieks@vt.edu
      LKML-Reference: <12528585111916-git-send-email->
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
  20. Aug 19, 2009
    • rcu: Delay rcu_barrier() wait until beginning of next CPU-hotunplug operation. · 1423cc03
      Paul E. McKenney authored
      
      Ingo Molnar reported this lockup:
      
       [  200.380003] Hangcheck: hangcheck value past margin!
       [  248.192003] INFO: task S99local:2974 blocked for more than 120 seconds.
       [  248.194532] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
       [  248.202330] S99local      D 0000000c  6256  2974   2687 0x00000000
       [  248.208929]  9c7ebe90 00000086 6b67ef8b 0000000c 9f25a610 81a69869 00000001 820b6990
       [  248.216123]  820b6990 820b6990 9c6e4c20 9c6e4eb4 82c78990 00000000 6b993559 0000000c
       [  248.220616]  9c7ebe90 8105f22a 9c6e4eb4 9c6e4c20 00000001 9c7ebe98 9c7ebeb4 81a65cb3
       [  248.229990] Call Trace:
       [  248.234049]  [<81a69869>] ? _spin_unlock_irqrestore+0x22/0x37
       [  248.239769]  [<8105f22a>] ? prepare_to_wait+0x48/0x4e
       [  248.244796]  [<81a65cb3>] rcu_barrier_cpu_hotplug+0xaa/0xc9
       [  248.250343]  [<8105f029>] ? autoremove_wake_function+0x0/0x38
       [  248.256063]  [<81062cf2>] notifier_call_chain+0x49/0x71
       [  248.261263]  [<81062da0>] raw_notifier_call_chain+0x11/0x13
       [  248.266809]  [<81a0b475>] _cpu_down+0x272/0x288
       [  248.271316]  [<81a0b4d5>] cpu_down+0x4a/0xa2
       [  248.275563]  [<81a0c48a>] store_online+0x2a/0x5e
       [  248.280156]  [<81a0c460>] ? store_online+0x0/0x5e
       [  248.284836]  [<814ddc35>] sysdev_store+0x20/0x28
       [  248.289429]  [<8112e403>] sysfs_write_file+0xb8/0xe3
       [  248.294369]  [<8112e34b>] ? sysfs_write_file+0x0/0xe3
       [  248.299396]  [<810e4c8f>] vfs_write+0x91/0x120
       [  248.303817]  [<810e4dc1>] sys_write+0x40/0x65
       [  248.308150]  [<81002d73>] sysenter_do_call+0x12/0x28
      
      This change moves an RCU grace period delay off of the
      critical path for CPU-hotunplug operations.
      
      Since RCU callback migration is only performed on
      CPU-hotunplug operations, and since the rcu_barrier() race is
      provoked only by consecutive CPU-hotunplug operations, it is
      not necessary to delay the end of a given CPU-hotunplug
      operation.
      
      We can instead choose to delay the beginning of the next
      CPU-hotunplug operation.
      
      Reported-by: Ingo Molnar <mingo@elte.hu>
      Signed-off-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
      Cc: Josh Triplett <josht@linux.vnet.ibm.com>
      Cc: laijs@cn.fujitsu.com
      Cc: dipankar@in.ibm.com
      Cc: akpm@linux-foundation.org
      Cc: mathieu.desnoyers@polymtl.ca
      Cc: dvhltc@us.ibm.com
      Cc: niv@us.ibm.com
      Cc: peterz@infradead.org
      Cc: rostedt@goodmis.org
      Cc: hugh.dickins@tiscali.co.uk
      Cc: benh@kernel.crashing.org
      LKML-Reference: <20090819060614.GA14383@linux.vnet.ibm.com>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
  21. Aug 15, 2009
    • rcu: Simplify RCU CPU-hotplug notification · 2e597558
      Paul E. McKenney authored
      
      Use the new cpu_notifier() API to simplify RCU's CPU-hotplug
      notifiers, collapsing down to a single such notifier.
      
      This makes it trivial to provide the notifier-ordering
      guarantee that rcu_barrier() depends on.
      
      Also remove redundant open_softirq() calls from Hierarchical
      RCU notifier.
      
      Signed-off-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
      Cc: laijs@cn.fujitsu.com
      Cc: dipankar@in.ibm.com
      Cc: josht@linux.vnet.ibm.com
      Cc: akpm@linux-foundation.org
      Cc: mathieu.desnoyers@polymtl.ca
      Cc: dvhltc@us.ibm.com
      Cc: niv@us.ibm.com
      Cc: peterz@infradead.org
      Cc: rostedt@goodmis.org
      Cc: hugh.dickins@tiscali.co.uk
      Cc: benh@kernel.crashing.org
      LKML-Reference: <12503552312510-git-send-email->
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
  22. Jul 03, 2009
    • rcu: Add synchronize_sched_expedited() primitive · 03b042bf
      Paul E. McKenney authored
      
      This adds the synchronize_sched_expedited() primitive that
      implements the "big hammer" expedited RCU grace periods.
      
      This primitive is placed in kernel/sched.c rather than
      kernel/rcupdate.c due to its need to interact closely with the
      migration_thread() kthread.
      
      The idea is to wake up this kthread with req->task set to NULL,
      in response to which the kthread reports the quiescent state
      resulting from the kthread having been scheduled.
      
      Because this patch needs to fall back to the slow versions of
      the primitives in response to some races with CPU onlining and
      offlining, a new synchronize_rcu_bh() primitive is added as
      well.
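
      For illustration, a hypothetical update-side caller that trades CPU
      overhead for latency by using the expedited primitive (foo_unpublish()
      is a made-up helper; readers are assumed to traverse the structure with
      preemption disabled):

      	void foo_remove_and_free(struct foo *p)
      	{
      		foo_unpublish(p);		/* no new readers can find p */
      		synchronize_sched_expedited();	/* low-latency "big hammer" GP */
      		kfree(p);
      	}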
      
      Signed-off-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
      Cc: akpm@linux-foundation.org
      Cc: torvalds@linux-foundation.org
      Cc: davem@davemloft.net
      Cc: dada1@cosmosbay.com
      Cc: zbr@ioremap.net
      Cc: jeff.chua.linux@gmail.com
      Cc: paulus@samba.org
      Cc: laijs@cn.fujitsu.com
      Cc: jengelh@medozas.de
      Cc: r000n@r000n.net
      Cc: benh@kernel.crashing.org
      Cc: mathieu.desnoyers@polymtl.ca
      LKML-Reference: <12459460982947-git-send-email->
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
  23. Apr 15, 2009
    • RCU: Don't try and predeclare inline funcs as it upsets some versions of gcc · 5b1d07ed
      David Howells authored
      
      Don't try and predeclare inline funcs like this:
      
      	static inline void wait_migrated_callbacks(void)
      	...
      	static void _rcu_barrier(enum rcu_barrier type)
      	{
      		...
      		wait_migrated_callbacks();
      	}
      	...
      	static inline void wait_migrated_callbacks(void)
      	{
      		wait_event(rcu_migrate_wq, !atomic_read(&rcu_migrate_type_count));
      	}
      
      as it upsets some versions of gcc under some circumstances:
      
      	kernel/rcupdate.c: In function `_rcu_barrier':
      	kernel/rcupdate.c:125: sorry, unimplemented: inlining failed in call to 'wait_migrated_callbacks': function body not available
      	kernel/rcupdate.c:152: sorry, unimplemented: called from here
      
      This can be dealt with by simply putting the static variables (rcu_migrate_*)
      at the top, and moving the implementation of the function up so that it
      replaces its forward declaration.
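
      In other words, after the fix the file reads roughly like this (sketch):

      	static atomic_t rcu_migrate_type_count = ATOMIC_INIT(0);
      	static DECLARE_WAIT_QUEUE_HEAD(rcu_migrate_wq);

      	/* Full definition first, so no forward declaration is needed. */
      	static inline void wait_migrated_callbacks(void)
      	{
      		wait_event(rcu_migrate_wq, !atomic_read(&rcu_migrate_type_count));
      	}

      	static void _rcu_barrier(enum rcu_barrier type)
      	{
      		...
      		wait_migrated_callbacks();
      	}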
      
      Signed-off-by: David Howells <dhowells@redhat.com>
      Cc: Dipankar Sarma <dipankar@in.ibm.com>
      Cc: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
  24. Mar 30, 2009
    • rcu: rcu_barrier VS cpu_hotplug: Ensure callbacks in dead cpu are migrated to online cpu · f69b17d7
      Lai Jiangshan authored
      
      CPU hotplug may happen asynchronously, and some RCU callbacks may
      still be pending on the dead CPU.  rcu_barrier() also needs to wait
      for these RCU callbacks to complete, so we must ensure that callbacks
      on a dead CPU are migrated to an online CPU.
      
      Paul E. McKenney's review:
      
        Good stuff, Lai!!!  Simpler than any of the approaches that I was
        considering, and, better yet, independent of the underlying RCU
        implementation!!!
      
        I was initially worried that wake_up() might wake only one of two
        possible wait_event()s, namely rcu_barrier() and the CPU_POST_DEAD code,
        but the fact that wait_event() clears WQ_FLAG_EXCLUSIVE avoids that issue.
        I was also worried about the fact that different RCU implementations have
        different mappings of call_rcu(), call_rcu_bh(), and call_rcu_sched(), but
        this is OK as well because we just get an extra (harmless) callback in the
        case that they map together (for example, Classic RCU has call_rcu_sched()
        mapping to call_rcu()).
      
        Overlap of CPU-hotplug operations is prevented by cpu_add_remove_lock,
        and any stray callbacks that arrive (for example, from irq handlers
        running on the dying CPU) either are ahead of the CPU_DYING callbacks on
        the one hand (and thus accounted for), or happened after the rcu_barrier()
        started on the other (and thus don't need to be accounted for).
      
      Signed-off-by: Lai Jiangshan <laijs@cn.fujitsu.com>
      Reviewed-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
      Cc: Peter Zijlstra <peterz@infradead.org>
      LKML-Reference: <49C36476.1010400@cn.fujitsu.com>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
  25. Feb 25, 2009
    • rcu: Teach RCU that idle task is not quiescent state at boot · a6826048
      Paul E. McKenney authored
      
      This patch fixes a bug located by Vegard Nossum with the aid of
      kmemcheck, updated based on review comments from Nick Piggin,
      Ingo Molnar, and Andrew Morton.  And cleans up the variable-name
      and function-name language.  ;-)
      
      The boot CPU runs in the context of its idle thread during boot-up.
      During this time, idle_cpu(0) will always return nonzero, which will
      fool Classic and Hierarchical RCU into deciding that a large chunk of
      the boot-up sequence is a big long quiescent state.  This in turn causes
      RCU to prematurely end grace periods during this time.
      
      This patch changes the rcutree.c and rcuclassic.c rcu_check_callbacks()
      function to ignore the idle task as a quiescent state until the
      system has started up the scheduler in rest_init(), introducing a
      new non-API function rcu_idle_now_means_idle() to inform RCU of this
      transition.  RCU maintains an internal rcu_idle_cpu_truthful variable
      to track this state, which is then used by rcu_check_callback() to
      determine if it should believe idle_cpu().
      
      Because this patch has the effect of disallowing RCU grace periods
      during long stretches of the boot-up sequence, this patch also introduces
      Josh Triplett's UP-only optimization that makes synchronize_rcu() be a
      no-op if num_online_cpus() returns 1.  This allows boot-time code that
      calls synchronize_rcu() to proceed normally.  Note, however, that RCU
      callbacks registered by call_rcu() will likely queue up until later in
      the boot sequence.  Although rcuclassic and rcutree can also use this
      same optimization after boot completes, rcupreempt must restrict its
      use of this optimization to the portion of the boot sequence before the
      scheduler starts up, given that an rcupreempt RCU read-side critical
      section may be preempted.
      
      In addition, this patch takes Nick Piggin's suggestion to make the
      system_state global variable be __read_mostly.
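
      For illustration, a sketch of the UP-only optimization described above
      (simplified; rcu_blocking_is_gp() is discussed in the changelog below):

      	static inline int rcu_blocking_is_gp(void)
      	{
      		return num_online_cpus() == 1;
      	}

      	void synchronize_rcu(void)
      	{
      		if (rcu_blocking_is_gp())
      			return;	/* one CPU: this call is itself a grace period */
      		/* ... otherwise wait for a full grace period ... */
      	}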
      
      Changes since v4:
      
      o	Changes the name of the introduced function and variable to
      	be less emotional.  ;-)
      
      Changes since v3:
      
      o	WARN_ON(nr_context_switches() > 0) to verify that RCU
      	switches out of boot-time mode before the first context
      	switch, as suggested by Nick Piggin.
      
      Changes since v2:
      
      o	Created rcu_blocking_is_gp() internal-to-RCU API that
      	determines whether a call to synchronize_rcu() is itself
      	a grace period.
      
      o	The definition of rcu_blocking_is_gp() for rcuclassic and
      	rcutree checks to see if but a single CPU is online.
      
      o	The definition of rcu_blocking_is_gp() for rcupreempt
      	checks to see both if but a single CPU is online and if
      	the system is still in early boot.
      
      	This allows rcupreempt to again work correctly if running
      	on a single CPU after booting is complete.
      
      o	Added check to rcupreempt's synchronize_sched() for there
      	being but one online CPU.
      
      Tested all three variants both SMP and !SMP, booted fine, passed a short
      rcutorture test on both x86 and Power.
      
      Located-by: Vegard Nossum <vegard.nossum@gmail.com>
      Tested-by: Vegard Nossum <vegard.nossum@gmail.com>
      Tested-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
      Signed-off-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
  26. Jan 05, 2009
  27. Oct 21, 2008
    • rcupdate: fix bug of rcu_barrier*() · 5f865151
      Lai Jiangshan authored
      
      The current rcu_barrier_bh() is like this:
      
      void rcu_barrier_bh(void)
      {
      	BUG_ON(in_interrupt());
      	/* Take cpucontrol mutex to protect against CPU hotplug */
      	mutex_lock(&rcu_barrier_mutex);
      	init_completion(&rcu_barrier_completion);
      	atomic_set(&rcu_barrier_cpu_count, 0);
      	/*
      	 * The queueing of callbacks in all CPUs must be atomic with
      	 * respect to RCU, otherwise one CPU may queue a callback,
      	 * wait for a grace period, decrement barrier count and call
      	 * complete(), while other CPUs have not yet queued anything.
      	 * So, we need to make sure that grace periods cannot complete
      	 * until all the callbacks are queued.
      	 */
      	rcu_read_lock();
      	on_each_cpu(rcu_barrier_func, (void *)RCU_BARRIER_BH, 1);
      	rcu_read_unlock();
      	wait_for_completion(&rcu_barrier_completion);
      	mutex_unlock(&rcu_barrier_mutex);
      }
      
      The inconsistency between the code and the comments shows a bug here.
      rcu_read_lock() cannot make sure that "grace periods for RCU_BH
      cannot complete until all the callbacks are queued";
      it only makes sure that grace periods for RCU cannot complete
      until all the callbacks are queued.
      
      So we must use rcu_read_lock_bh() for rcu_barrier_bh(),
      like this:
      
      void rcu_barrier_bh(void)
      {
      	......
      	rcu_read_lock_bh();
      	on_each_cpu(rcu_barrier_func, (void *)RCU_BARRIER_BH, 1);
      	rcu_read_unlock_bh();
      	......
      }
      
      If rcu_barrier() and rcu_barrier_sched() are also implemented like this,
      it will bring a lot of duplicate code.  My patch uses another way to
      fix this bug; please see the comment in my patch.
      Thanks to Paul E. McKenney, who rewrote the comment.
      
      Signed-off-by: Lai Jiangshan <laijs@cn.fujitsu.com>
      Reviewed-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
  28. Aug 21, 2008
  29. Jun 26, 2008
  30. May 19, 2008
    • rcu: add rcu_barrier_sched() and rcu_barrier_bh() · 70f12f84
      Paul E. McKenney authored
      
      Add rcu_barrier_sched() and rcu_barrier_bh().  With these in place,
      rcutorture no longer gives the occasional oops when repeatedly starting
      and stopping torturing rcu_bh.  Also adds the API needed to flush out
      pre-existing call_rcu_sched() callbacks.
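
      For illustration, a hypothetical module-unload path using the new API to
      flush out pending call_rcu_sched() callbacks:

      	static void __exit foo_exit(void)
      	{
      		/* ... stop posting new call_rcu_sched() callbacks ... */
      		rcu_barrier_sched();	/* wait for the pending ones to finish */
      		/* the callback functions can no longer be invoked */
      	}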
      
      Signed-off-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
      Signed-off-by: Mathieu Desnoyers <mathieu.desnoyers@polymtl.ca>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
      Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
    • rcu: add call_rcu_sched() · 4446a36f
      Paul E. McKenney authored
      
      Fourth cut of the patch providing call_rcu_sched().  This is, again, to
      synchronize_sched() as call_rcu() is to synchronize_rcu().
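
      For illustration, a minimal (hypothetical) caller, analogous to the usual
      call_rcu() deferred-free pattern but for readers that run with
      preemption disabled:

      	struct foo {
      		struct rcu_head rcu;
      		int data;
      	};

      	static void foo_free_cb(struct rcu_head *head)
      	{
      		kfree(container_of(head, struct foo, rcu));
      	}

      	void foo_defer_free(struct foo *p)
      	{
      		call_rcu_sched(&p->rcu, foo_free_cb);
      	}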
      
      Should be fine for experimental and -rt use, but not ready for inclusion.
      With some luck, I will be able to tell Andrew to come out of hiding on
      the next round.
      
      Passes multi-day rcutorture sessions with concurrent CPU hotplugging.
      
      Fixes since the first version include a bug that could result in
      indefinite blocking (spotted by Gautham Shenoy), better resiliency
      against CPU-hotplug operations, and other minor fixes.
      
      Fixes since the second version include reworking grace-period detection
      to avoid deadlocks that could happen when running concurrently with
      CPU hotplug, adding Mathieu's fix to avoid the softlockup messages,
      as well as Mathieu's fix to allow use earlier in boot.
      
      Fixes since the third version include a wrong-CPU bug spotted by
      Andrew, getting rid of the obsolete synchronize_kernel API that somehow
      snuck back in, merging spin_unlock() and local_irq_restore() in a
      few places, commenting the code that checks for quiescent states based
      on interrupting from user-mode execution or the idle loop, removing
      some inline attributes, and some code-style changes.
      
      Known/suspected shortcomings:
      
      o	I still do not entirely trust the sleep/wakeup logic.  Next step
      	will be to use a private snapshot of the CPU online mask in
      	rcu_sched_grace_period() -- if the CPU wasn't there at the start
      	of the grace period, we don't need to hear from it.  And the
      	bit about accounting for changes in online CPUs inside of
      	rcu_sched_grace_period() is ugly anyway.
      
      o	It might be good for rcu_sched_grace_period() to invoke
      	resched_cpu() when a given CPU wasn't responding quickly,
      	but resched_cpu() is declared static...
      
      This patch also fixes a long-standing bug in the earlier preemptable-RCU
      implementation of synchronize_rcu() that could result in loss of
      concurrent external changes to a task's CPU affinity mask.  I still cannot
      remember who reported this...
      
      Signed-off-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
      Signed-off-by: Mathieu Desnoyers <mathieu.desnoyers@polymtl.ca>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
      Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
  31. Feb 13, 2008
  32. Jan 25, 2008