1. 13 Sep, 2016 1 commit
  2. 09 Sep, 2016 1 commit
  3. 30 Aug, 2016 1 commit
    • mm/usercopy: get rid of CONFIG_DEBUG_STRICT_USER_COPY_CHECKS · 0d025d27
      Josh Poimboeuf authored
      There are three usercopy warnings which are currently being silenced for
      gcc 4.6 and newer:
      
      1) "copy_from_user() buffer size is too small" compile warning/error
      
         This is a static warning which happens when object size and copy size
         are both const, and copy size > object size.  I didn't see any false
         positives for this one.  So the function warning attribute seems to
         be working fine here.
      
         Note this scenario is always a bug and so I think it should be
         changed to *always* be an error, regardless of
         CONFIG_DEBUG_STRICT_USER_COPY_CHECKS.
      
      2) "copy_from_user() buffer size is not provably correct" compile warning
      
         This is another static warning which happens when I enable
         __compiletime_object_size() for new compilers (and
         CONFIG_DEBUG_STRICT_USER_COPY_CHECKS).  It happens when object size
         is const, but copy size is *not*.  In this case there's no way to
         compare the two at build time, so it gives the warning.  (Note the
         warning is a byproduct of the fact that gcc has no way of knowing
         whether the overflow function will be called, so the call isn't dead
         code and the warning attribute is activated.)
      
         So this warning seems to only indicate "this is an unusual pattern,
         maybe you should check it out" rather than "this is a bug".
      
         I get 102(!) of these warnings with allyesconfig and the
         __compiletime_object_size() gcc check removed.  I don't know if there
         are any real bugs hiding in there, but from looking at a small
         sample, I didn't see any.  According to Kees, it does sometimes find
         real bugs.  But the false positive rate seems high.
      
      3) "Buffer overflow detected" runtime warning
      
         This is a runtime warning where object size is const, and copy size >
         object size.
      
      All three warnings (both static and runtime) were completely disabled
      for gcc 4.6 with the following commit:
      
        2fb0815c ("gcc4: disable __compiletime_object_size for GCC 4.6+")
      
      That commit mistakenly assumed that the false positives were caused by a
      gcc bug in __compiletime_object_size().  But in fact,
      __compiletime_object_size() seems to be working fine.  The false
      positives were instead triggered by #2 above.  (Though I don't have an
      explanation for why the warnings supposedly only started showing up in
      gcc 4.6.)
      
      So remove warning #2 to get rid of all the false positives, and re-enable
      warnings #1 and #3 by reverting the above commit.
      
      Furthermore, since #1 is a real bug which is detected at compile time,
      upgrade it to always be an error.
      
      Having done all that, CONFIG_DEBUG_STRICT_USER_COPY_CHECKS is no longer
      needed.
      Signed-off-by: Josh Poimboeuf <jpoimboe@redhat.com>
      Cc: Kees Cook <keescook@chromium.org>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: Ingo Molnar <mingo@kernel.org>
      Cc: "H . Peter Anvin" <hpa@zytor.com>
      Cc: Andy Lutomirski <luto@amacapital.net>
      Cc: Steven Rostedt <rostedt@goodmis.org>
      Cc: Brian Gerst <brgerst@gmail.com>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Frederic Weisbecker <fweisbec@gmail.com>
      Cc: Byungchul Park <byungchul.park@lge.com>
      Cc: Nilay Vaish <nilayvaish@gmail.com>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      0d025d27
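      A minimal sketch of the warning #1 pattern (the user pointer and
      surrounding function are hypothetical):

	char buf[16];

	/* Both object size and copy size are compile-time constants and
	 * 32 > sizeof(buf), so this is always a bug; after this change it
	 * fails the build instead of just warning. */
	if (copy_from_user(buf, user_ptr, 32))
		return -EFAULT;
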
  4. 10 Aug, 2016 1 commit
  5. 26 Jul, 2016 2 commits
  6. 14 Jul, 2016 2 commits
    • vmlinux.lds: account for destructor sections · e41f501d
      Dmitry Vyukov authored
      If CONFIG_KASAN is enabled and gcc is configured with
      --disable-initfini-array and/or gold linker is used, gcc emits
      .ctors/.dtors and .text.startup/.text.exit sections instead of
      .init_array/.fini_array.  The .dtors section is not explicitly accounted
      for in the linker script and messes up the vvar/percpu layout.
      
      We want:
        ffffffff822bfd80 D _edata
        ffffffff822c0000 D __vvar_beginning_hack
        ffffffff822c0000 A __vvar_page
        ffffffff822c0080 0000000000000098 D vsyscall_gtod_data
        ffffffff822c1000 A __init_begin
        ffffffff822c1000 D init_per_cpu__irq_stack_union
        ffffffff822c1000 A __per_cpu_load
        ffffffff822d3000 D init_per_cpu__gdt_page
      
      We got:
        ffffffff8279a600 D _edata
        ffffffff8279b000 A __vvar_page
        ffffffff8279c000 A __init_begin
        ffffffff8279c000 D init_per_cpu__irq_stack_union
        ffffffff8279c000 A __per_cpu_load
        ffffffff8279e000 D __vvar_beginning_hack
        ffffffff8279e080 0000000000000098 D vsyscall_gtod_data
        ffffffff827ae000 D init_per_cpu__gdt_page
      
      This happens because __vvar_page and .vvar get different addresses in
      arch/x86/kernel/vmlinux.lds.S:
      
      	. = ALIGN(PAGE_SIZE);
      	__vvar_page = .;
      
      	.vvar : AT(ADDR(.vvar) - LOAD_OFFSET) {
      		/* work around gold bug 13023 */
      		__vvar_beginning_hack = .;
      
      Discard .dtors/.fini_array/.text.exit, since we don't call dtors.
      Merge .text.startup into init text.
      
      Link: http://lkml.kernel.org/r/1467386363-120030-1-git-send-email-dvyukov@google.com
      Signed-off-by: Dmitry Vyukov <dvyukov@google.com>
      Reviewed-by: Andrey Ryabinin <aryabinin@virtuozzo.com>
      Cc: <stable@vger.kernel.org>	[4.0+]
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      e41f501d
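      In linker-script form, the fix looks roughly like this (a sketch;
      the exact placement in the generic linker script differs):

	/DISCARD/ : {
		*(.dtors .dtors.*)	/* we never run destructors */
		*(.fini_array)
		*(.text.exit)
	}
	/* .text.startup is instead folded into the init text section. */
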
    • sched/cputime: Count actually elapsed irq & softirq time · 57430218
      Rik van Riel authored
      Currently, if there was any irq or softirq time during 'ticks'
      jiffies, the entire period will be accounted as irq or softirq
      time.
      
      This is inaccurate if only a subset of the time was actually spent
      handling irqs, and could conceivably mis-count all of the ticks during
      a period as irq time, when there was some irq and some softirq time.
      
      This can actually happen when irqtime_account_process_tick() is called
      from account_idle_ticks(), which can pass a large number of ticks down
      all at once.
      
      Fix this by changing irqtime_account_hi_update(), irqtime_account_si_update(),
      and steal_account_process_ticks() to work with cputime_t time units, and
      return the amount of time spent in each mode.
      
      Rename steal_account_process_ticks() to steal_account_process_time(), to
      reflect that time is now accounted in cputime_t, instead of ticks.
      
      Additionally, have irqtime_account_process_tick() take into account how
      much time was spent in each of steal, irq, and softirq time.
      
      The latter could help improve the accuracy of cputime
      accounting when returning from idle on a NO_HZ_IDLE CPU.
      
      Properly accounting how much time was spent in hardirq and
      softirq time will also allow the NO_HZ_FULL code to re-use
      these same functions for hardirq and softirq accounting.
      Signed-off-by: Rik van Riel <riel@redhat.com>
      [ Make nsecs_to_cputime64() actually return cputime64_t. ]
      Signed-off-by: Frederic Weisbecker <fweisbec@gmail.com>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Mike Galbraith <efault@gmx.de>
      Cc: Paolo Bonzini <pbonzini@redhat.com>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Radim Krcmar <rkrcmar@redhat.com>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: Wanpeng Li <wanpeng.li@hotmail.com>
      Link: http://lkml.kernel.org/r/1468421405-20056-2-git-send-email-fweisbec@gmail.com
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
      57430218
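      A hedged sketch of the reworked helper shape (symbol names follow
      the description above; details differ from the actual patch):

	static cputime_t irqtime_account_hi_update(cputime_t maxtime)
	{
		u64 *cpustat = kcpustat_this_cpu->cpustat;
		cputime_t irq_cputime;

		/* hardirq time not yet accounted, capped at the time
		 * actually available in this interval */
		irq_cputime = nsecs_to_cputime64(this_cpu_read(cpu_hardirq_time))
			      - cpustat[CPUTIME_IRQ];
		irq_cputime = min(irq_cputime, maxtime);
		cpustat[CPUTIME_IRQ] += irq_cputime;
		return irq_cputime;
	}
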
  7. 07 Jul, 2016 1 commit
    • locking/atomic: Introduce inc/dec variants for the atomic_fetch_$op() API · f0662863
      Davidlohr Bueso authored
      With the inclusion of atomic FETCH-OP variants, many places in the
      kernel can make use of atomic_fetch_$op() to serve callers that
      need the value/state from _before_ the operation.
      
      Peter Zijlstra laid out the machinery, but we are still missing the
      simpler inc()/dec() variants (which future patches will make use of).
      
      This patch only deals with the generic code, as at least right now
      no arch actually implements them -- which is similar to what the
      OP-RETURN primitives currently do.
      Signed-off-by: Davidlohr Bueso <dbueso@suse.de>
      Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
      Cc: Andrew Morton <akpm@linux-foundation.org>
      Cc: James.Bottomley@HansenPartnership.com
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: awalls@md.metrocast.net
      Cc: bp@alien8.de
      Cc: cw00.choi@samsung.com
      Cc: davem@davemloft.net
      Cc: dledford@redhat.com
      Cc: dougthompson@xmission.com
      Cc: gregkh@linuxfoundation.org
      Cc: hans.verkuil@cisco.com
      Cc: heiko.carstens@de.ibm.com
      Cc: jikos@kernel.org
      Cc: kys@microsoft.com
      Cc: mchehab@osg.samsung.com
      Cc: pfg@sgi.com
      Cc: schwidefsky@de.ibm.com
      Cc: sean.hefty@intel.com
      Cc: sumit.semwal@linaro.org
      Link: http://lkml.kernel.org/r/20160628215651.GA20048@linux-80c1.suse
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
      f0662863
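      The generic variants reduce to the existing fetch_add/fetch_sub
      primitives, along these lines (a sketch of the fallback pattern):

	#ifndef atomic_fetch_inc
	#define atomic_fetch_inc(v)	atomic_fetch_add(1, (v))
	#endif

	#ifndef atomic_fetch_dec
	#define atomic_fetch_dec(v)	atomic_fetch_sub(1, (v))
	#endif
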
  8. 28 Jun, 2016 2 commits
  9. 20 Jun, 2016 1 commit
  10. 16 Jun, 2016 2 commits
    • locking/atomic: Remove linux/atomic.h:atomic_fetch_or() · b53d6bed
      Peter Zijlstra authored
      Since all architectures have this implemented now natively, remove this
      dead code.
      Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
      Cc: Andrew Morton <akpm@linux-foundation.org>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: linux-arch@vger.kernel.org
      Cc: linux-kernel@vger.kernel.org
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
      b53d6bed
    • locking/atomic: Implement atomic{,64,_long}_fetch_{add,sub,and,andnot,or,xor}{,_relaxed,_acquire,_release}() · 28aa2bda
      Peter Zijlstra authored
      Now that all the architectures have implemented support for these new
      atomic primitives, add the generic infrastructure to expose and
      use them.
      Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
      Cc: Andrew Morton <akpm@linux-foundation.org>
      Cc: Arnd Bergmann <arnd@arndb.de>
      Cc: Boqun Feng <boqun.feng@gmail.com>
      Cc: Borislav Petkov <bp@suse.de>
      Cc: Davidlohr Bueso <dave@stgolabs.net>
      Cc: Frederic Weisbecker <fweisbec@gmail.com>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: Will Deacon <will.deacon@arm.com>
      Cc: linux-arch@vger.kernel.org
      Cc: linux-kernel@vger.kernel.org
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
      28aa2bda
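      Unlike the OP-RETURN forms, the FETCH-OP forms return the value the
      variable had _before_ the operation, e.g.:

	atomic_t v = ATOMIC_INIT(3);
	int old;

	old = atomic_fetch_add(2, &v);	/* old == 3, v is now 5 */
	old = atomic_add_return(2, &v);	/* old == 7, v is now 7 */
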
  11. 14 Jun, 2016 2 commits
    • locking/spinlock, arch: Update and fix spin_unlock_wait() implementations · 726328d9
      Peter Zijlstra authored
      This patch updates/fixes all spin_unlock_wait() implementations.
      
      The update is in semantics; where it previously was only a control
      dependency, we now upgrade to a full load-acquire to match the
      store-release from the spin_unlock() we waited on. This ensures that
      when spin_unlock_wait() returns, we're guaranteed to observe the full
      critical section we waited on.
      
      This fixes a number of spin_unlock_wait() users that (not
      unreasonably) rely on this.
      
      I also fixed a number of ticket lock versions to only wait on the
      current lock holder, instead of for a full unlock, as this is
      sufficient.
      
      Furthermore, again for ticket locks, I added an smp_rmb() in between
      the initial ticket load and the spin loop testing the current value
      because I could not convince myself the address dependency is
      sufficient, esp. if the loads are of different sizes.
      
      I'm more than happy to remove this smp_rmb() again if people are
      certain the address dependency does indeed work as expected.
      
      Note: PPC32 will be fixed independently.
      Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
      Cc: Andrew Morton <akpm@linux-foundation.org>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: chris@zankel.net
      Cc: cmetcalf@mellanox.com
      Cc: davem@davemloft.net
      Cc: dhowells@redhat.com
      Cc: james.hogan@imgtec.com
      Cc: jejb@parisc-linux.org
      Cc: linux@armlinux.org.uk
      Cc: mpe@ellerman.id.au
      Cc: ralf@linux-mips.org
      Cc: realmz6@gmail.com
      Cc: rkuo@codeaurora.org
      Cc: rth@twiddle.net
      Cc: schwidefsky@de.ibm.com
      Cc: tony.luck@intel.com
      Cc: vgupta@synopsys.com
      Cc: ysato@users.sourceforge.jp
      Cc: linux-kernel@vger.kernel.org
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
      726328d9
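      The strengthened guarantee, as a two-CPU sketch (x is an ordinary
      variable, initially 0):

	CPU0				CPU1
	spin_lock(&l);			spin_unlock_wait(&l);
	WRITE_ONCE(x, 1);		r = READ_ONCE(x);
	spin_unlock(&l);

      If spin_unlock_wait() observed the lock held, the new load-acquire
      pairs with the unlock's store-release, so r must be 1.
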
    • locking/barriers: Move smp_cond_load_acquire() to asm-generic/barrier.h · 7cb45c0f
      Peter Zijlstra authored
      Since all asm/barrier.h should/must include asm-generic/barrier.h, the
      latter is a good place for generic infrastructure like this.
      
      This also allows archs to override the new smp_acquire__after_ctrl_dep().
      Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
      Cc: Andrew Morton <akpm@linux-foundation.org>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
      7cb45c0f
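      Usage sketch of the moved primitive (pointer and condition are
      illustrative):

	/* Spin until *ptr becomes non-zero; VAL names the freshly loaded
	 * value inside the condition.  The final load is an ACQUIRE. */
	val = smp_cond_load_acquire(ptr, VAL != 0);
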
  12. 13 Jun, 2016 1 commit
  13. 08 Jun, 2016 5 commits
    • locking/qspinlock: Use atomic_sub_return_release() in queued_spin_unlock() · ca50e426
      Pan Xinhui authored
      The existing version uses a heavy barrier while only release semantics
      are required. So use atomic_sub_return_release() instead.
      Suggested-by: Peter Zijlstra (Intel) <peterz@infradead.org>
      Signed-off-by: Pan Xinhui <xinhui.pan@linux.vnet.ibm.com>
      Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
      Cc: Andrew Morton <akpm@linux-foundation.org>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: arnd@arndb.de
      Cc: waiman.long@hp.com
      Link: http://lkml.kernel.org/r/1464943094-3129-1-git-send-email-xinhui.pan@linux.vnet.ibm.com
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
      ca50e426
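      The resulting unlock fast path, roughly:

	static __always_inline void queued_spin_unlock(struct qspinlock *lock)
	{
		/* RELEASE ordering is all an unlock needs; the previous
		 * full-barrier variant was overkill. */
		(void)atomic_sub_return_release(_Q_LOCKED_VAL, &lock->val);
	}
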
    • locking/mutex: Optimize mutex_trylock() fast-path · 6428671b
      Peter Zijlstra authored
      A while back Viro posted a number of 'interesting' mutex_is_locked()
      users on IRC, one of those was RCU.
      
      RCU seems to use mutex_is_locked() to avoid doing mutex_trylock(), the
      regular load-before-modify pattern.
      
      While the use isn't wrong per se, it's curious that it's needed at all:
      mutex_trylock() should be good enough on its own to avoid the pointless
      cacheline bounces.
      
      So fix those and remove the mutex_is_locked() (ab)use from RCU.
      Reported-by: Al Viro <viro@ZenIV.linux.org.uk>
      Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
      Acked-by: Paul McKenney <paulmck@linux.vnet.ibm.com>
      Acked-by: Davidlohr Bueso <dave@stgolabs.net>
      Cc: Andrew Morton <akpm@linux-foundation.org>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: Waiman Long <Waiman.Long@hpe.com>
      Link: http://lkml.kernel.org/r/20160601185815.GW3190@twins.programming.kicks-ass.net
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
      6428671b
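      The pattern being removed, as a sketch (m is a hypothetical mutex):

	/* Before: manual load-before-modify. */
	if (!mutex_is_locked(&m) && mutex_trylock(&m))
		do_work();

	/* After: the optimized trylock fast path performs the cheap
	 * test itself before attempting the atomic operation. */
	if (mutex_trylock(&m))
		do_work();
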
    • locking/rwsem: Remove rwsem_atomic_add() and rwsem_atomic_update() · d157bd86
      Jason Low authored
      The rwsem-xadd count has been converted to an atomic variable and the
      rwsem code now directly uses atomic_long_add() and
      atomic_long_add_return(), so we can remove the arch implementations of
      rwsem_atomic_add() and rwsem_atomic_update().
      Signed-off-by: Jason Low <jason.low2@hpe.com>
      Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
      Cc: Andrew Morton <akpm@linux-foundation.org>
      Cc: Arnd Bergmann <arnd@arndb.de>
      Cc: Christoph Lameter <cl@linux.com>
      Cc: Davidlohr Bueso <dave@stgolabs.net>
      Cc: Fenghua Yu <fenghua.yu@intel.com>
      Cc: Heiko Carstens <heiko.carstens@de.ibm.com>
      Cc: Ivan Kokshaysky <ink@jurassic.park.msu.ru>
      Cc: Jason Low <jason.low2@hp.com>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Martin Schwidefsky <schwidefsky@de.ibm.com>
      Cc: Matt Turner <mattst88@gmail.com>
      Cc: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
      Cc: Peter Hurley <peter@hurleysoftware.com>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Richard Henderson <rth@twiddle.net>
      Cc: Terry Rudd <terry.rudd@hpe.com>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: Tim Chen <tim.c.chen@linux.intel.com>
      Cc: Tony Luck <tony.luck@intel.com>
      Cc: Waiman Long <Waiman.Long@hpe.com>
      Cc: linux-kernel@vger.kernel.org
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
      d157bd86
    • locking/rwsem: Convert sem->count to 'atomic_long_t' · 8ee62b18
      Jason Low authored
      Convert the rwsem count variable to an atomic_long_t since we use it
      as an atomic variable. This also allows us to remove the
      rwsem_atomic_{add,update}() "abstraction", which would now be an unnecessary
      level of indirection. In follow-up patches, we also remove the
      rwsem_atomic_{add,update}() definitions across the various architectures.
      Suggested-by: Peter Zijlstra <peterz@infradead.org>
      Signed-off-by: Jason Low <jason.low2@hpe.com>
      [ Build warning fixes on various architectures. ]
      Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
      Cc: Andrew Morton <akpm@linux-foundation.org>
      Cc: Davidlohr Bueso <dave@stgolabs.net>
      Cc: Fenghua Yu <fenghua.yu@intel.com>
      Cc: Heiko Carstens <heiko.carstens@de.ibm.com>
      Cc: Jason Low <jason.low2@hp.com>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Martin Schwidefsky <schwidefsky@de.ibm.com>
      Cc: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
      Cc: Peter Hurley <peter@hurleysoftware.com>
      Cc: Terry Rudd <terry.rudd@hpe.com>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: Tim Chen <tim.c.chen@linux.intel.com>
      Cc: Tony Luck <tony.luck@intel.com>
      Cc: Waiman Long <Waiman.Long@hpe.com>
      Link: http://lkml.kernel.org/r/1465017963-4839-2-git-send-email-jason.low2@hpe.com
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
      8ee62b18
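      The conversion in sketch form (fields abbreviated):

	struct rw_semaphore {
		atomic_long_t count;		/* was: long count; */
		struct list_head wait_list;
		raw_spinlock_t wait_lock;
	};

	/* Callers now use the atomic API directly, e.g.: */
	tmp = atomic_long_add_return(RWSEM_ACTIVE_READ_BIAS, &sem->count);
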
    • locking/qspinlock: Fix spin_unlock_wait() some more · 2c610022
      Peter Zijlstra authored
      While this prior commit:
      
        54cf809b ("locking,qspinlock: Fix spin_is_locked() and spin_unlock_wait()")
      
      ... fixes spin_is_locked() and spin_unlock_wait() for the usage
      in ipc/sem and netfilter, it does not in fact work right for the
      usage in task_work and futex.
      
      So while the 2 locks crossed problem:
      
      	spin_lock(A)		spin_lock(B)
      	if (!spin_is_locked(B)) spin_unlock_wait(A)
      	  foo()			foo();
      
      ... works with the smp_mb() injected by both spin_is_locked() and
      spin_unlock_wait(), this is not sufficient for:
      
      	flag = 1;
      	smp_mb();		spin_lock()
      	spin_unlock_wait()	if (!flag)
      				  // add to lockless list
      	// iterate lockless list
      
      ... because in this scenario, the store from spin_lock() can be delayed
      past the load of flag, uncrossing the variables and losing the
      guarantee.
      
      This patch reworks spin_is_locked() and spin_unlock_wait() to work in
      both cases by exploiting the observation that while the lock byte
      store can be delayed, the contender must have registered itself
      visibly in other state contained in the word.
      
      It also allows for architectures to override both functions, as PPC
      and ARM64 have an additional issue for which we currently have no
      generic solution.
      Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
      Cc: Andrew Morton <akpm@linux-foundation.org>
      Cc: Boqun Feng <boqun.feng@gmail.com>
      Cc: Davidlohr Bueso <dave@stgolabs.net>
      Cc: Giovanni Gherdovich <ggherdovich@suse.com>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Pan Xinhui <xinhui.pan@linux.vnet.ibm.com>
      Cc: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: Waiman Long <waiman.long@hpe.com>
      Cc: Will Deacon <will.deacon@arm.com>
      Cc: stable@vger.kernel.org # v4.2 and later
      Fixes: 54cf809b ("locking,qspinlock: Fix spin_is_locked() and spin_unlock_wait()")
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
      2c610022
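      A simplified sketch of the reworked wait loop:

	for (;;) {
		val = atomic_read(&lock->val);

		if (!val)			/* fully unlocked: done */
			break;
		if (val & _Q_LOCKED_MASK)	/* locked: wait for unlock */
			break;

		/* Neither locked nor free: a contender has registered
		 * itself (pending/tail bits) but its lock-byte store is
		 * still delayed; wait until we observe it. */
		cpu_relax();
	}
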
  14. 03 Jun, 2016 2 commits
  15. 31 May, 2016 2 commits
  16. 20 May, 2016 1 commit
  17. 19 May, 2016 1 commit
    • arch: fix has_transparent_hugepage() · fd8cfd30
      Hugh Dickins authored
      I've just discovered that the useful-sounding has_transparent_hugepage()
      is actually an architecture-dependent minefield: on some arches it only
      builds if CONFIG_TRANSPARENT_HUGEPAGE=y, on others it's also there when
      not, but on some of those (arm and arm64) it then gives the wrong
      answer; and on mips alone it's marked __init, which would crash if
      called later (but so far it has not been called later).
      
      Straighten this out: make it available to all configs, with a sensible
      default in asm-generic/pgtable.h, removing its definitions from those
      arches (arc, arm, arm64, sparc, tile) which are served by the default,
      adding #define has_transparent_hugepage has_transparent_hugepage to
      those (mips, powerpc, s390, x86) which need to override the default at
      runtime, and removing the __init from mips (but maybe that kind of code
      should be avoided after init: set a static variable the first time it's
      called).
      Signed-off-by: Hugh Dickins <hughd@google.com>
      Cc: "Kirill A. Shutemov" <kirill.shutemov@linux.intel.com>
      Cc: Andrea Arcangeli <aarcange@redhat.com>
      Cc: Andres Lagar-Cavilla <andreslc@google.com>
      Cc: Yang Shi <yang.shi@linaro.org>
      Cc: Ning Qu <quning@gmail.com>
      Cc: Mel Gorman <mgorman@techsingularity.net>
      Cc: Konstantin Khlebnikov <koct9i@gmail.com>
      Acked-by: David S. Miller <davem@davemloft.net>
      Acked-by: Vineet Gupta <vgupta@synopsys.com>		[arch/arc]
      Acked-by: Gerald Schaefer <gerald.schaefer@de.ibm.com>	[arch/s390]
      Acked-by: Ingo Molnar <mingo@kernel.org>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      fd8cfd30
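      The generic default, roughly (asm-generic/pgtable.h):

	#ifndef has_transparent_hugepage
	#ifdef CONFIG_TRANSPARENT_HUGEPAGE
	#define has_transparent_hugepage() 1
	#else
	#define has_transparent_hugepage() 0
	#endif
	#endif
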
  18. 17 May, 2016 1 commit
  19. 13 May, 2016 2 commits
    • SIGNAL: Move generic copy_siginfo() to signal.h · ca9eb49a
      James Hogan authored
      The generic copy_siginfo() is currently defined in
      asm-generic/siginfo.h, after including uapi/asm-generic/siginfo.h, which
      defines the generic struct siginfo. However, this makes it awkward for an
      architecture to use it if it has to define its own struct siginfo (e.g.
      MIPS and potentially IA64), since it means that asm-generic/siginfo.h
      can only be included after defining the arch-specific siginfo, which may
      be problematic if the arch-specific definition needs definitions from
      uapi/asm-generic/siginfo.h.
      
      It is possible to work around this by first including
      uapi/asm-generic/siginfo.h to get the constants before defining the
      arch-specific siginfo, and include asm-generic/siginfo.h after. However
      uapi headers can't be included by other uapi headers, so that first
      include has to be in an ifdef __kernel__, with the non __kernel__ case
      including the non-UAPI header instead.
      
      Instead of that mess, move the generic copy_siginfo() definition into
      linux/signal.h, which allows an arch-specific uapi/asm/siginfo.h to
      include asm-generic/siginfo.h and define the arch-specific siginfo, and
      for the generic copy_siginfo() to see that arch-specific definition.
      Signed-off-by: James Hogan <james.hogan@imgtec.com>
      Cc: Arnd Bergmann <arnd@arndb.de>
      Cc: Ralf Baechle <ralf@linux-mips.org>
      Cc: Petr Malat <oss@malat.biz>
      Cc: Tony Luck <tony.luck@intel.com>
      Cc: Fenghua Yu <fenghua.yu@intel.com>
      Cc: Christopher Ferris <cferris@google.com>
      Cc: linux-arch@vger.kernel.org
      Cc: linux-mips@linux-mips.org
      Cc: linux-ia64@vger.kernel.org
      Cc: linux-kernel@vger.kernel.org
      Cc: <stable@vger.kernel.org> # 4.0-
      Patchwork: https://patchwork.linux-mips.org/patch/12478/
      Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
      ca9eb49a
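      The relocated helper, roughly (linux/signal.h):

	static inline void copy_siginfo(struct siginfo *to, struct siginfo *from)
	{
		if (from->si_code < 0)
			memcpy(to, from, sizeof(*to));
		else
			/* _sigchld is currently the largest union member */
			memcpy(to, from, __ARCH_SI_PREAMBLE_SIZE +
					 sizeof(from->_sifields._sigchld));
	}
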
    • seccomp: Get compat syscalls from asm-generic header · c983f0e8
      Matt Redfearn authored
      Move retrieval of compat syscall numbers into an inline function defined
      in an asm-generic header so that arches may override it.
      
      [ralf@linux-mips.org: Resolve merge conflict.]
      Suggested-by: Paul Burton <paul.burton@imgtec.com>
      Signed-off-by: Matt Redfearn <matt.redfearn@imgtec.com>
      Acked-by: Kees Cook <keescook@chromium.org>
      Cc: IMG-MIPSLinuxKerneldevelopers@imgtec.com
      Cc: Arnd Bergmann <arnd@arndb.de>
      Cc: Andy Lutomirski <luto@amacapital.net>
      Cc: Will Drewry <wad@chromium.org>
      Cc: linux-arch@vger.kernel.org
      Cc: linux-kernel@vger.kernel.org
      Patchwork: https://patchwork.linux-mips.org/patch/12978/
      Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
      c983f0e8
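      A hedged sketch of the asm-generic helper shape (names illustrative;
      arches override by providing their own definition first):

	#ifndef get_compat_mode1_syscalls
	static inline const int *get_compat_mode1_syscalls(void)
	{
		static const int mode1_syscalls_32[] = {
			__NR_seccomp_read_32, __NR_seccomp_write_32,
			__NR_seccomp_exit_32, __NR_seccomp_sigreturn_32,
			0,	/* null terminated */
		};
		return mode1_syscalls_32;
	}
	#endif
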
  20. 03 May, 2016 1 commit
    • io-64-nonatomic: Add relaxed accessor variants · e511267b
      Robin Murphy authored
      Whilst commit 9439eb3a ("asm-generic: io: implement relaxed
      accessor macros as conditional wrappers") makes the *_relaxed forms of
      I/O accessors universally available to drivers, in cases where writeq()
      is implemented via the io-64-nonatomic helpers, writeq_relaxed() will
      end up falling back to writel() regardless of whether writel_relaxed()
      is available (identically for s/write/read/).
      
      Add corresponding relaxed forms of the nonatomic helpers to delegate
      to the equivalent 32-bit accessors as appropriate. We also need to fix
      io.h to avoid defining default relaxed variants if the basic accessors
      themselves don't exist.
      
      Cc: Christoph Hellwig <hch@lst.de>
      Cc: Darren Hart <dvhart@linux.intel.com>
      Cc: Hitoshi Mitake <mitake.hitoshi@lab.ntt.co.jp>
      Acked-by: Arnd Bergmann <arnd@arndb.de>
      Signed-off-by: Robin Murphy <robin.murphy@arm.com>
      Signed-off-by: Will Deacon <will.deacon@arm.com>
      e511267b
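      The lo-hi read helper in relaxed form, roughly:

	static inline __u64 lo_hi_readq_relaxed(const volatile void __iomem *addr)
	{
		const volatile u32 __iomem *p = addr;
		u32 low, high;

		/* two 32-bit relaxed accesses, low word first; no extra
		 * ordering against other memory is implied */
		low = readl_relaxed(p);
		high = readl_relaxed(p + 1);

		return low + ((u64)high << 32);
	}
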
  21. 21 Apr, 2016 1 commit
  22. 13 Apr, 2016 3 commits
    • x86/asm: Make sure verify_cpu() has a good stack · 91ed140d
      Borislav Petkov authored
      04633df0 ("x86/cpu: Call verify_cpu() after having entered long mode too")
      added the call to verify_cpu() for sanitizing CPU configuration.
      
      The latter uses the stack minimally, and it can happen that we land in
      startup_64() directly from a 64-bit bootloader. Then we want to use our
      own, known-good stack.
      
      Do that.
      
      APs don't need this as the trampoline sets up a stack for them.
      Reported-by: Tom Lendacky <thomas.lendacky@amd.com>
      Signed-off-by: Borislav Petkov <bp@suse.de>
      Cc: Brian Gerst <brgerst@gmail.com>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Mika Penttilä <mika.penttila@nextfour.com>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Link: http://lkml.kernel.org/r/1459434062-31055-1-git-send-email-bp@alien8.de
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
      91ed140d
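      In sketch form (head_64.S; the stack symbol here is illustrative,
      not necessarily the one the patch uses):

	/* verify_cpu() uses the stack, so point RSP at a known-good
	 * stack before calling it. */
	movq	initial_stack(%rip), %rsp
	call	verify_cpu
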
    • locking/rwsem: Introduce basis for down_write_killable() · d4799608
      Michal Hocko authored
      Introduce a generic implementation necessary for down_write_killable().
      
      This is a trivial extension of the already existing down_write() call,
      which can now additionally be interrupted by SIGKILL. This patch doesn't
      provide down_write_killable() yet because arches have to provide the
      necessary pieces first.
      
      rwsem_down_write_failed() which is a generic slow path for the
      write lock is extended to take a task state and renamed to
      __rwsem_down_write_failed_common(). The return value is either a valid
      semaphore pointer or ERR_PTR(-EINTR).
      
      rwsem_down_write_failed_killable() is exported as a new way to wait for
      the lock and be killable.
      
      For the rwsem-spinlock implementation, the current __down_write() is
      updated in a similar way to __rwsem_down_write_failed_common(), except it
      doesn't need new exports, just a visible __down_write_killable().
      
      Architectures which are not using the generic rwsem implementation are
      supposed to provide their __down_write_killable() implementation and
      use rwsem_down_write_failed_killable() for the slow path.
      Signed-off-by: Michal Hocko <mhocko@suse.com>
      Cc: Andrew Morton <akpm@linux-foundation.org>
      Cc: Chris Zankel <chris@zankel.net>
      Cc: David S. Miller <davem@davemloft.net>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Max Filippov <jcmvbkbc@gmail.com>
      Cc: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Davidlohr Bueso <dbueso@suse.de>
      Cc: Jason Low <jason.low2@hp.com>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: Tony Luck <tony.luck@intel.com>
      Cc: linux-alpha@vger.kernel.org
      Cc: linux-arch@vger.kernel.org
      Cc: linux-ia64@vger.kernel.org
      Cc: linux-s390@vger.kernel.org
      Cc: linux-sh@vger.kernel.org
      Cc: linux-xtensa@linux-xtensa.org
      Cc: sparclinux@vger.kernel.org
      Link: http://lkml.kernel.org/r/1460041951-22347-7-git-send-email-mhocko@kernel.org
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
      d4799608
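      A sketch of the killable write-lock fast path described above
      (asm-generic flavour; arch details vary):

	static inline int __down_write_killable(struct rw_semaphore *sem)
	{
		long tmp;

		tmp = atomic_long_add_return(RWSEM_ACTIVE_WRITE_BIAS,
					     (atomic_long_t *)&sem->count);
		if (unlikely(tmp != RWSEM_ACTIVE_WRITE_BIAS))
			if (IS_ERR(rwsem_down_write_failed_killable(sem)))
				return -EINTR;	/* killed while waiting */
		return 0;
	}
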
    • locking/rwsem: Get rid of __down_write_nested() · f8e04d85
      Michal Hocko authored
      This is no longer used anywhere and all callers (__down_write()) use
      0 as a subclass. Ditch __down_write_nested() to make the code easier
      to follow.
      
      This shouldn't introduce any functional change.
      Signed-off-by: Michal Hocko <mhocko@suse.com>
      Acked-by: Davidlohr Bueso <dave@stgolabs.net>
      Cc: Andrew Morton <akpm@linux-foundation.org>
      Cc: Chris Zankel <chris@zankel.net>
      Cc: David S. Miller <davem@davemloft.net>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Max Filippov <jcmvbkbc@gmail.com>
      Cc: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Davidlohr Bueso <dbueso@suse.de>
      Cc: Jason Low <jason.low2@hp.com>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: Tony Luck <tony.luck@intel.com>
      Cc: linux-alpha@vger.kernel.org
      Cc: linux-arch@vger.kernel.org
      Cc: linux-ia64@vger.kernel.org
      Cc: linux-s390@vger.kernel.org
      Cc: linux-sh@vger.kernel.org
      Cc: linux-xtensa@linux-xtensa.org
      Cc: sparclinux@vger.kernel.org
      Link: http://lkml.kernel.org/r/1460041951-22347-2-git-send-email-mhocko@kernel.org
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
      f8e04d85
  23. 25 Mar, 2016 1 commit
  24. 21 Mar, 2016 1 commit
    • bitops: Do not default to __clear_bit() for __clear_bit_unlock() · f75d4864
      Peter Zijlstra authored
      __clear_bit_unlock() is a special little snowflake. While it carries the
      non-atomic '__' prefix, it is specifically documented to pair with
      test_and_set_bit() and therefore should be 'somewhat' atomic.
      
      Therefore the generic implementation of __clear_bit_unlock() cannot use
      the fully non-atomic __clear_bit() as a default.
      
      If an arch is able to do better, it must provide an implementation of
      __clear_bit_unlock() itself.
      
      Specifically, this came up as a result of hackbench livelock'ing in
      slab_lock() on ARC with SMP + SLUB + !LLSC.
      
      The issue was incorrect pairing of atomic ops.
      
       slab_lock() -> bit_spin_lock() -> test_and_set_bit()
       slab_unlock() -> __bit_spin_unlock() -> __clear_bit()
      
      The non-serializing __clear_bit() was getting "lost":
      
       80543b8e:	ld_s       r2,[r13,0] <--- (A) Finds PG_locked is set
       80543b90:	or         r3,r2,1    <--- (B) other core unlocks right here
       80543b94:	st_s       r3,[r13,0] <--- (C) sets PG_locked (overwrites unlock)
      
      Fixes ARC STAR 9000817404 (and probably more).
      Reported-by: Vineet Gupta <Vineet.Gupta1@synopsys.com>
      Tested-by: Vineet Gupta <Vineet.Gupta1@synopsys.com>
      Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
      Cc: Andrew Morton <akpm@linux-foundation.org>
      Cc: Christoph Lameter <cl@linux.com>
      Cc: David Rientjes <rientjes@google.com>
      Cc: Helge Deller <deller@gmx.de>
      Cc: James E.J. Bottomley <jejb@parisc-linux.org>
      Cc: Joonsoo Kim <iamjoonsoo.kim@lge.com>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Noam Camus <noamc@ezchip.com>
      Cc: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
      Cc: Pekka Enberg <penberg@kernel.org>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: stable@vger.kernel.org
      Link: http://lkml.kernel.org/r/20160309114054.GJ6356@twins.programming.kicks-ass.net
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
      f75d4864
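      The generic default after this change, roughly
      (include/asm-generic/bitops/lock.h):

	/* A barrier plus a fully atomic clear_bit(): still release-like,
	 * but a concurrent RMW on the same word can no longer be lost. */
	#define __clear_bit_unlock(nr, addr)	\
	do {					\
		smp_mb__before_atomic();	\
		clear_bit(nr, addr);		\
	} while (0)
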
  25. 17 Mar, 2016 2 commits