1. 15 Jul, 2016 1 commit
  2. 13 Jul, 2016 3 commits
  3. 12 Jul, 2016 1 commit
    • fcoe: convert to kworker · 4b9bc86d
      Sebastian Andrzej Siewior authored
      The driver creates its own per-CPU threads, which are updated based on
      CPU hotplug events. It is also possible to use kworkers and remove some
      of the kthread infrastructure.
      
      The code checked ->thread to decide whether there is an active per-CPU
      thread. With the kworker infrastructure this is no longer possible (or
      required). The thread pointer is saved in `kthread' instead of
      `thread', so anything still using `thread' is caught by the compiler.
      Currently only the bnx2fc driver uses struct fcoe_percpu_s and the
      kthread member.
      
      After a CPU goes offline, we may still enqueue items on the "offline"
      CPU. This isn't much of a problem: the work will simply be done on a
      random CPU. The allocated crc_eof_page page won't be cleaned up, but
      the CPU is expected to come back up at some point, so this should not
      be a problem either. The crc_eof_page memory is, of course, released
      once the module is removed.
      
      This patch was only compile-tested due to -ENODEV.
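
      The conversion boils down to replacing the hand-rolled per-CPU kthreads
      with per-CPU work items scheduled by the workqueue core. A minimal
      sketch of that pattern (illustrative only; the function and variable
      names below are made up and not taken from the patch):

          /* Sketch of the per-CPU kworker pattern -- not the actual fcoe
           * code; fcoe_rx_work and fcoe_recv_work_fn are invented names. */
          #include <linux/percpu.h>
          #include <linux/workqueue.h>

          static DEFINE_PER_CPU(struct work_struct, fcoe_rx_work);

          static void fcoe_recv_work_fn(struct work_struct *work)
          {
                  /* process the frames queued for this CPU */
          }

          static void fcoe_percpu_init(void)
          {
                  unsigned int cpu;

                  for_each_possible_cpu(cpu)
                          INIT_WORK(per_cpu_ptr(&fcoe_rx_work, cpu),
                                    fcoe_recv_work_fn);
          }

          static void fcoe_queue_rx(unsigned int cpu)
          {
                  /* The workqueue core copes with CPU hotplug: if the target
                   * CPU is offline, the work simply runs on another CPU. */
                  schedule_work_on(cpu, per_cpu_ptr(&fcoe_rx_work, cpu));
          }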
      
      Cc: Vasu Dev <vasu.dev@intel.com>
      Cc: "James E.J. Bottomley" <jejb@linux.vnet.ibm.com>
      Cc: "Martin K. Petersen" <martin.petersen@oracle.com>
      Cc: Christoph Hellwig <hch@lst.de>
      Cc: fcoe-devel@open-fcoe.org
      Cc: linux-scsi@vger.kernel.org
      Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
      Tested-by: Johannes Thumshirn <jthumshirn@suse.de>
      Reviewed-by: Johannes Thumshirn <jthumshirn@suse.de>
      Signed-off-by: Martin K. Petersen <martin.petersen@oracle.com>
  4. 11 Jul, 2016 1 commit
  5. 01 Jul, 2016 5 commits
  6. 30 Jun, 2016 3 commits
  7. 29 Jun, 2016 8 commits
  8. 28 Jun, 2016 2 commits
  9. 27 Jun, 2016 1 commit
    • arm64: KVM: fix build with CONFIG_ARM_PMU disabled · 0efce9da
      Sudeep Holla authored
      When CONFIG_ARM_PMU is disabled, we get the following build error:
      
      arch/arm64/kvm/sys_regs.c: In function 'pmu_counter_idx_valid':
      arch/arm64/kvm/sys_regs.c:564:27: error: 'ARMV8_PMU_CYCLE_IDX' undeclared (first use in this function)
        if (idx >= val && idx != ARMV8_PMU_CYCLE_IDX)
                                 ^
      arch/arm64/kvm/sys_regs.c:564:27: note: each undeclared identifier is reported only once for each function it appears in
      arch/arm64/kvm/sys_regs.c: In function 'access_pmu_evcntr':
      arch/arm64/kvm/sys_regs.c:592:10: error: 'ARMV8_PMU_CYCLE_IDX' undeclared (first use in this function)
          idx = ARMV8_PMU_CYCLE_IDX;
                ^
      arch/arm64/kvm/sys_regs.c: In function 'access_pmu_evtyper':
      arch/arm64/kvm/sys_regs.c:638:14: error: 'ARMV8_PMU_CYCLE_IDX' undeclared (first use in this function)
         if (idx == ARMV8_PMU_CYCLE_IDX)
                    ^
      arch/arm64/kvm/hyp/switch.c:86:15: error: 'ARMV8_PMU_USERENR_MASK' undeclared (first use in this function)
        write_sysreg(ARMV8_PMU_USERENR_MASK, pmuserenr_el0);
      
      This patch fixes the build with CONFIG_ARM_PMU disabled.
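
      A hedged sketch of the usual remedy for this kind of breakage: keep the
      constants that sys_regs.c and hyp/switch.c rely on visible even when
      the PMU support itself is configured out (the actual upstream change
      may be arranged differently; values shown for illustration only):

          /* Keep the register constants outside the Kconfig guard ... */
          #define ARMV8_PMU_MAX_COUNTERS  32
          #define ARMV8_PMU_CYCLE_IDX     (ARMV8_PMU_MAX_COUNTERS - 1)
          #define ARMV8_PMU_USERENR_MASK  0xf     /* EN, SW, CR, ER */

          /* ... while the PMU emulation proper stays conditional. */
          #ifdef CONFIG_ARM_PMU
          /* real kvm_pmu_*() implementations */
          #else
          /* static inline stubs that return false / do nothing */
          #endif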
      
      Cc: Christoffer Dall <christoffer.dall@linaro.org>
      Cc: Marc Zyngier <marc.zyngier@arm.com>
      Signed-off-by: Sudeep Holla <sudeep.holla@arm.com>
      Signed-off-by: Christoffer Dall <christoffer.dall@linaro.org>
  10. 24 Jun, 2016 6 commits
    • Revert "mm: make faultaround produce old ptes" · 315d09bf
      Kirill A. Shutemov authored
      This reverts commit 5c0a85fa.
      
      The commit causes a ~6% regression in unixbench.
      
      Let's revert it for now and consider another solution for the reclaim
      problem later.
      
      Link: http://lkml.kernel.org/r/1465893750-44080-2-git-send-email-kirill.shutemov@linux.intel.com
      Signed-off-by: Kirill A. Shutemov <kirill.shutemov@linux.intel.com>
      Reported-by: "Huang, Ying" <ying.huang@intel.com>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Rik van Riel <riel@redhat.com>
      Cc: Mel Gorman <mgorman@suse.de>
      Cc: Michal Hocko <mhocko@suse.com>
      Cc: Minchan Kim <minchan@kernel.org>
      Cc: Vinayak Menon <vinmenon@codeaurora.org>
      Cc: Dave Hansen <dave.hansen@linux.intel.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
    • mm: mempool: kasan: don't put mempool objects in quarantine · 9b75a867
      Andrey Ryabinin authored
      Currently we may put elements reserved by a mempool into quarantine via
      kasan_kfree().  This is totally wrong, since the quarantine may really
      free these objects.  So when the mempool tries to use such an element,
      a use-after-free will happen.  Or the mempool may decide that it no
      longer needs that element and double-free it.
      
      So don't put the object into quarantine in kasan_kfree(); just poison
      it.  Rename kasan_kfree() to kasan_poison_kfree() to reflect that.
      
      Also, we shouldn't use kasan_slab_alloc()/kasan_krealloc() in
      kasan_unpoison_element(), because those functions may update the
      allocation stacktrace.  This would be wrong for most of the
      remove_element() call sites.
      
      (The only call site where we may want to update the alloc stacktrace is
       in mempool_alloc(). Kmemleak solves this by calling
       kmemleak_update_trace(), so we could do something like that too.
       But this is out of scope of this patch.)
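
      To make the failure mode concrete, a hedged illustration of the
      scenario described above (not code from the patch; error handling
      omitted):

          #include <linux/mempool.h>

          static void mempool_kasan_example(void)
          {
                  /* A mempool keeps min_nr elements in reserve. */
                  mempool_t *pool = mempool_create_kmalloc_pool(4, 128);
                  void *elem = mempool_alloc(pool, GFP_KERNEL);

                  /* If the pool is below min_nr, mempool_free() keeps the
                   * element in the pool's reserve instead of freeing it.
                   * Handing it to the KASAN quarantine at this point would
                   * be wrong: the quarantine may really kfree() it later. */
                  mempool_free(elem, pool);

                  /* A later allocation can hand out the reserved element
                   * again -- a use-after-free if the quarantine freed it in
                   * the meantime.  Poisoning it instead keeps the memory
                   * owned by the pool while still catching stray accesses. */
                  elem = mempool_alloc(pool, GFP_KERNEL);

                  mempool_free(elem, pool);
                  mempool_destroy(pool);
          }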
      
      Fixes: 55834c59 ("mm: kasan: initial memory quarantine implementation")
      Link: http://lkml.kernel.org/r/575977C3.1010905@virtuozzo.com
      Signed-off-by: Andrey Ryabinin <aryabinin@virtuozzo.com>
      Reported-by: Kuthonuzo Luruo <kuthonuzo.luruo@hpe.com>
      Acked-by: Alexander Potapenko <glider@google.com>
      Cc: Dmitriy Vyukov <dvyukov@google.com>
      Cc: Kostya Serebryany <kcc@google.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
    • fix up initial thread stack pointer vs thread_info confusion · 7f1a00b6
      Linus Torvalds authored
      The INIT_TASK() initializer had the same stack vs thread_info confusion
      that the allocators had, which was fixed in commit b235beea ("Clarify
      naming of thread info/stack allocators").
      
      The task ->stack pointer only incidentally ends up having the same value
      as the thread_info, and in fact that will change.
      
      So fix the initial task struct initializer to point to 'init_stack'
      instead of 'init_thread_info', and make sure the ia64 definition for
      that exists.
      
      This actually makes the ia64 tsk->stack pointer be sensible for the
      initial task, but not for any other task.  As mentioned in commit
      b235beea, that whole pointer isn't actually used on ia64, since
      task_stack_page() there just points to the (single) allocation.
      
      All the other architectures seem to have copied the 'init_stack'
      definition, even if it tended to be generally unused.
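
      Conceptually the fix is a one-line change in the initializer (hedged
      sketch; the surrounding INIT_TASK() fields are elided):

          #define INIT_TASK(tsk)                                      \
          {                                                           \
                  /* ... */                                           \
                  .stack  = init_stack,   /* was &init_thread_info */ \
                  /* ... */                                           \
          }
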
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
    • USB: EHCI: declare hostpc register as zero-length array · 7e8b3dfe
      Alan Stern authored
      The HOSTPC extension registers found in some EHCI implementations form
      a variable-length array, with one element for each port.  Therefore
      the hostpc field in struct ehci_regs should be declared as a
      zero-length array, not a single-element array.
      
      This fixes a problem reported by UBSAN.
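
      The declaration change itself is tiny (sketch; the neighbouring fields
      of struct ehci_regs are elided):

          struct ehci_regs {
                  /* ... */

                  /* HOSTPC: one register per port, hence a variable-length
                   * array rather than a single element */
                  u32     hostpc[0];      /* was: u32 hostpc[1]; */

                  /* ... */
          };
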
      Signed-off-by: Alan Stern <stern@rowland.harvard.edu>
      Reported-by: Wilfried Klaebe <linux-kernel@lebenslange-mailadresse.de>
      Tested-by: Wilfried Klaebe <linux-kernel@lebenslange-mailadresse.de>
      CC: <stable@vger.kernel.org>
      Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
    • Clarify naming of thread info/stack allocators · b235beea
      Linus Torvalds authored
      We've had the thread info allocated together with the thread stack for
      most architectures for a long time (since the thread_info was split off
      from the task struct), but that is about to change.
      
      But the patches that move the thread info to be off-stack (and a part of
      the task struct instead) made it clear how confused the allocator and
      freeing functions are.
      
      Because the common case was that we share an allocation with the thread
      stack and the thread_info, the two pointers were identical.  That
      identity then meant that we would have things like
      
      	ti = alloc_thread_info_node(tsk, node);
      	...
      	tsk->stack = ti;
      
      which certainly _worked_ (since stack and thread_info have the same
      value), but is rather confusing: why are we assigning a thread_info to
      the stack? And if we move the thread_info away, the "confusing" code
      just becomes entirely bogus.
      
      So remove all this confusion, and make it clear that we are doing the
      stack allocation by renaming and clarifying the function names to be
      about the stack.  The fact that the thread_info then shares the
      allocation is an implementation detail, and not really about the
      allocation itself.
      
      This is a pure renaming and type fix: we pass in the same pointer, it's
      just that we clarify what the pointer means.
      
      The ia64 code that actually only has one single allocation (for all of
      task_struct, thread_info and kernel thread stack) now looks a bit odd,
      but since "tsk->stack" is actually not even used there, that oddity
      doesn't matter.  Cleaning that up would be a separate change; I
      intentionally left the ia64 changes as a pure brute-force renaming and
      type change.
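
      After the rename, the same assignment reads as a stack allocation
      (sketch of the resulting pattern; the helper name follows the renaming
      described above):

      	stack = alloc_thread_stack_node(tsk, node);
      	...
      	tsk->stack = stack;
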
      Acked-by: Andy Lutomirski <luto@amacapital.net>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
    • locking/static_key: Fix concurrent static_key_slow_inc() · 4c5ea0a9
      Paolo Bonzini authored
      The following scenario is possible:
      
          CPU 1                                   CPU 2
          static_key_slow_inc()
           atomic_inc_not_zero()
            -> key.enabled == 0, no increment
           jump_label_lock()
           atomic_inc_return()
            -> key.enabled == 1 now
                                                  static_key_slow_inc()
                                                   atomic_inc_not_zero()
                                                    -> key.enabled == 1, inc to 2
                                                   return
                                                  ** static key is wrong!
           jump_label_update()
           jump_label_unlock()
      
      Testing the static key at the point marked by (**) will follow the
      wrong path for jumps that have not been patched yet.  This can
      actually happen when creating many KVM virtual machines with userspace
      LAPIC emulation; just run several copies of the following program:
      
          #include <fcntl.h>
          #include <unistd.h>
          #include <sys/ioctl.h>
          #include <linux/kvm.h>
      
          int main(void)
          {
              for (;;) {
                  int kvmfd = open("/dev/kvm", O_RDONLY);
                  int vmfd = ioctl(kvmfd, KVM_CREATE_VM, 0);
                  close(ioctl(vmfd, KVM_CREATE_VCPU, 1));
                  close(vmfd);
                  close(kvmfd);
              }
              return 0;
          }
      
      Every KVM_CREATE_VCPU ioctl will attempt a static_key_slow_inc() call.
      The static key's purpose is to skip NULL pointer checks and indeed one
      of the processes eventually dereferences NULL.
      
      As explained in the commit that introduced the bug:
      
        706249c2 ("locking/static_keys: Rework update logic")
      
      jump_label_update() needs key.enabled to be true.  The solution adopted
      here is to temporarily make key.enabled == -1, and to go down the
      slow path when key.enabled <= 0.
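
      A simplified sketch of the resulting logic (hedged; the upstream code
      carries additional checks and comments):

          void static_key_slow_inc(struct static_key *key)
          {
                  int v, v1;

                  /* Fast path: take a reference only if the key is already
                   * enabled (enabled > 0); enabled <= 0 means "disabled" or
                   * "transition in progress". */
                  for (v = atomic_read(&key->enabled); v > 0; v = v1) {
                          v1 = atomic_cmpxchg(&key->enabled, v, v + 1);
                          if (v1 == v)
                                  return;
                  }

                  jump_label_lock();
                  if (atomic_read(&key->enabled) == 0) {
                          atomic_set(&key->enabled, -1);  /* claim the transition */
                          jump_label_update(key);
                          atomic_set(&key->enabled, 1);   /* publish after patching */
                  } else {
                          atomic_inc(&key->enabled);
                  }
                  jump_label_unlock();
          }
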
      Reported-by: Dmitry Vyukov <dvyukov@google.com>
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
      Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
      Cc: <stable@vger.kernel.org> # v4.3+
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Fixes: 706249c2 ("locking/static_keys: Rework update logic")
      Link: http://lkml.kernel.org/r/1466527937-69798-1-git-send-email-pbonzini@redhat.com
      [ Small stylistic edits to the changelog and the code. ]
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
  11. 23 Jun, 2016 4 commits
  12. 22 Jun, 2016 1 commit
  13. 19 Jun, 2016 1 commit
  14. 18 Jun, 2016 2 commits
  15. 17 Jun, 2016 1 commit