  1. Mar 21, 2011
    • Prevent rt_sigqueueinfo and rt_tgsigqueueinfo from spoofing the signal code · da48524e
      Julien Tinnes authored
      
      Userland should be able to trust the pid and uid of the sender of a
      signal if the si_code is SI_TKILL.
      
      Unfortunately, the kernel has historically allowed sigqueueinfo() to
      send any si_code at all (as long as it was negative - to distinguish it
      from kernel-generated signals like SIGILL etc), so it could spoof a
      SI_TKILL with incorrect siginfo values.
      
      Happily, it looks like glibc has always set si_code to the appropriate
      SI_QUEUE, so there is probably no actual user code that ever uses
      anything but the appropriate SI_QUEUE flag.
      
      So just tighten the check for si_code (we used to allow any negative
      value), and add a (one-time) warning in case there are binaries out
      there that might depend on using other si_code values.
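      
      A minimal sketch of the tightened check, assuming the shape of the
      rt_sigqueueinfo path in kernel/signal.c (variable names here are
      illustrative, not the exact diff):
      
          /* Only SI_QUEUE is accepted from userland; other negative codes
           * used to be allowed, so warn once in case a binary out there
           * still depends on them. */
          if (info.si_code != SI_QUEUE) {
                  WARN_ON_ONCE(info.si_code < 0);
                  return -EPERM;
          }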
      
      Signed-off-by: Julien Tinnes <jln@google.com>
      Acked-by: Oleg Nesterov <oleg@redhat.com>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
  2. Mar 17, 2011
    • smp_call_function_interrupt: use typedef and %pf · c8def554
      Milton Miller authored
      
      Use the newly added smp_call_func_t in smp_call_function_interrupt for
      the func variable, and make the comment above the WARN more assertive
      and explicit.  Also, func is a function pointer and does not need an
      offset, so use %pf not %pS.
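      
      A minimal sketch of both changes, assuming the smp_call_func_t
      typedef from <linux/smp.h>; the warning text is illustrative, not
      the actual kernel/smp.c message:
      
          smp_call_func_t func = data->csd.func;  /* was: void (*func)(void *info) */
      
          /* func points at the start of a function, so %pf (function name,
           * no offset) is the right printk specifier here, not %pS (symbol
           * plus offset). */
          pr_warn("unexpected state after calling %pf\n", func);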
      
      Signed-off-by: Milton Miller <miltonm@bga.com>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
    • smp_call_function_many: handle concurrent clearing of mask · 723aae25
      Milton Miller authored
      
      Mike Galbraith reported finding a lockup ("perma-spin bug") where the
      cpumask passed to smp_call_function_many was cleared by other cpu(s)
      while a cpu was preparing its call_data block, resulting in no cpu
      being left to clear the last ref and unlock the block.
      
      Having cpus clear their bit asynchronously could be useful on a mask of
      cpus that might have a translation context, or cpus that need a push to
      complete an rcu window.
      
      Instead of adding a BUG_ON and requiring yet another cpumask copy, just
      detect the race and handle it.
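      
      A sketch of the handling, using names assumed from kernel/smp.c
      (csd_unlock and the call_function_data layout are assumptions, not
      the exact diff):
      
          /* Snapshot the caller's mask into the call_data block; other cpus
           * may clear their bits in the caller's mask while we do this. */
          cpumask_and(data->cpumask, mask, cpu_online_mask);
          cpumask_clear_cpu(this_cpu, data->cpumask);
          refs = cpumask_weight(data->cpumask);
      
          /* If the mask raced to empty, nobody is left to drop the last ref
           * and unlock the block, so bail out instead of spinning forever. */
          if (unlikely(!refs)) {
                  csd_unlock(&data->csd);
                  return;
          }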
      
      Note: arch_send_call_function_ipi_mask must still handle an empty
      cpumask because the data block is globally visible before that arch
      callback is made.  And (obviously) there are no guarantees as to
      which cpus are notified if the mask is changed during the call; only
      cpus that were online and had their mask bit set during the whole
      call are guaranteed to be called.
      
      Reported-by: Mike Galbraith <efault@gmx.de>
      Reported-by: Jan Beulich <JBeulich@novell.com>
      Acked-by: Jan Beulich <jbeulich@novell.com>
      Cc: stable@kernel.org
      Signed-off-by: Milton Miller <miltonm@bga.com>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
    • call_function_many: add missing ordering · 45a57919
      Milton Miller authored
      
      Paul McKenney's review pointed out two problems with the barriers in the
      2.6.38 update to the smp call function many code.
      
      First, a barrier that would force the func and info members of data
      to be visible before their consumption in the interrupt handler was
      missing.  This can be solved by adding an smp_wmb between setting
      the func and info members and setting the cpumask; this will pair
      with the existing and required smp_rmb ordering the cpumask read
      before the read of refs.  This placement avoids the need for a
      second smp_rmb in the interrupt handler, which would be executed on
      each of the N cpus executing the call request.  (I had thought this
      barrier was present, but it was not.)
      
      Second, the previous write to refs (establishing the zero that the
      interrupt handler was testing from all cpus) was performed by a
      third-party cpu.  This would invoke transitivity, which, as a recent
      or concurrent addition to memory-barriers.txt now explicitly states,
      would require a full smp_mb().
      
      However, we know the cpumask will only be set by one cpu (the data
      owner), and any previous iteration of the mask would have been
      cleared by the reading cpu.  By redundantly writing refs to 0 on the
      owning cpu before the smp_wmb, the write to refs will follow the
      same path as the writes that set the cpumask, which in turn allows
      us to keep the barrier in the interrupt handler an smp_rmb instead
      of promoting it to an smp_mb (which would be executed by N cpus for
      each of the possible M elements on the list).
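      
      A sketch of the resulting ordering on both sides, with names assumed
      from kernel/smp.c rather than quoted from the diff:
      
          /* Owner cpu: establish the zero in refs and publish func/info
           * before the cpumask that the interrupt handler tests. */
          data->csd.func = func;
          data->csd.info = info;
          atomic_set(&data->refs, 0);  /* redundant, but now on the same cpu
                                        * as the cpumask writes below */
          smp_wmb();                   /* order func/info/refs=0 before mask */
          cpumask_and(data->cpumask, mask, cpu_online_mask);
          cpumask_clear_cpu(this_cpu, data->cpumask);
          /* ... the element is queued and refs is then set to the number of
           * cpus remaining in the mask ... */
      
          /* Interrupt handler: the existing smp_rmb pairs with the smp_wmb
           * above, ordering the cpumask read before the read of refs. */
          if (!cpumask_test_cpu(cpu, data->cpumask))
                  continue;
          smp_rmb();
          if (!atomic_read(&data->refs))
                  continue;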
      
      I moved the comment about our (ab)use of the rcu list primitives for
      the concurrent walk to earlier in this function and expanded it.  I
      considered moving the first two paragraphs to the queue list head
      and lock, but felt it would have been too disconnected from the
      code.
      
      Cc: Paul McKenney <paulmck@linux.vnet.ibm.com>
      Cc: stable@kernel.org (2.6.32 and later)
      Signed-off-by: Milton Miller <miltonm@bga.com>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>