1. 18 Nov, 2015 1 commit
  2. 17 Nov, 2015 1 commit
    • arm64: kernel: pause/unpause function graph tracer in cpu_suspend() · de818bd4
      Lorenzo Pieralisi authored
      The function graph tracer adds instrumentation that is required to trace
      both entry and exit of a function. In particular the function graph
      tracer updates the "return address" of a function in order to insert
      a trace callback on function exit.
      
      Kernel power management functions like cpu_suspend() are passed
      "finisher" functions on power-down entry; the finishers are called
      in turn to trigger the power-down sequence, but they may not
      return to the kernel through the normal return path.
      
      When the core resumes from low power it returns to the
      cpu_suspend() function through the cpu_resume path, which leaves
      the trace stack frame set up by the function graph tracer in an
      inconsistent state upon return to the kernel when tracing is
      enabled.
      
      This patch fixes the issue by pausing/resuming the function graph
      tracer on the thread executing cpu_suspend() (i.e. the function
      call that subsequently triggers the "suspend finishers"), so that
      the function graph tracer state is kept consistent across
      functions that enter power-down states and never return, by
      effectively disabling the graph tracer while they are executing.
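
      A minimal sketch of the approach, simplified from
      arch/arm64/kernel/suspend.c (the surrounding suspend/resume logic
      is elided; pause_graph_tracing()/unpause_graph_tracing() come
      from linux/ftrace.h):

        #include <linux/ftrace.h>

        int cpu_suspend(unsigned long arg, int (*fn)(unsigned long))
        {
                int ret;

                /*
                 * Pause function graph tracing on this thread: the
                 * finisher may power the core down and never return
                 * through the normal path, which would leave the
                 * tracer's shadow return stack inconsistent.
                 */
                pause_graph_tracing();
                ret = fn(arg);  /* suspend finisher; may not return */
                unpause_graph_tracing();

                return ret;
        }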
      
      Fixes: 819e50e2 ("arm64: Add ftrace support")
      Signed-off-by: Lorenzo Pieralisi <lorenzo.pieralisi@arm.com>
      Reported-by: Catalin Marinas <catalin.marinas@arm.com>
      Reported-by: AKASHI Takahiro <takahiro.akashi@linaro.org>
      Suggested-by: Steven Rostedt <rostedt@goodmis.org>
      Acked-by: Steven Rostedt <rostedt@goodmis.org>
      Cc: Will Deacon <will.deacon@arm.com>
      Cc: <stable@vger.kernel.org> # 3.16+
      Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
  3. 12 Nov, 2015 4 commits
  4. 30 Oct, 2015 1 commit
  5. 29 Oct, 2015 2 commits
  6. 28 Oct, 2015 5 commits
    • arm64: cpufeature: declare enable_cpu_capabilities as static · fde4a59f
      Will Deacon authored
      enable_cpu_capabilities is only called from within cpufeature.c, so it
      can be declared static.
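
      In C terms the change is a one-word storage-class addition; a
      sketch, with the parameter list approximated from cpufeature.c:

        struct arm64_cpu_capabilities;  /* defined in asm/cpufeature.h */

        /* Internal linkage: the symbol is no longer visible outside
         * cpufeature.c. */
        static void enable_cpu_capabilities(const struct arm64_cpu_capabilities *caps);
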
      Signed-off-by: Will Deacon <will.deacon@arm.com>
      Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
    • Revert "ARM64: unwind: Fix PC calculation" · 9702970c
      Will Deacon authored
      This reverts commit e306dfd0.

      With this patch applied, we were the only architecture making this sort
      of adjustment to the PC calculation in the unwinder. This causes
      problems for ftrace, where the PC values are matched against the
      contents of the stack frames in the callchain and fail to match any
      records after the address adjustment.
      
      Whilst there has been some effort to change ftrace to work around
      this, those patches are not yet ready for mainline and, since
      we're the odd architecture out in this regard, let's just fall
      into line with the other architectures (like arch/arm/) for now.
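
      For illustration, a simplified sketch of unwind_frame() from
      arch/arm64/kernel/stacktrace.c (the frame-pointer validity checks
      are elided), showing the line whose adjustment is being reverted:

        struct stackframe {
                unsigned long fp;
                unsigned long sp;
                unsigned long pc;
        };

        int unwind_frame(struct stackframe *frame)
        {
                unsigned long fp = frame->fp;

                frame->sp = fp + 0x10;
                frame->fp = *(unsigned long *)(fp);
                /*
                 * The reverted commit subtracted one instruction here
                 * so that pc pointed at the call site rather than the
                 * saved return address; ftrace matches stack frame
                 * contents against the unmodified return address, so
                 * the subtraction is dropped.
                 */
                frame->pc = *(unsigned long *)(fp + 8);

                return 0;
        }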
      
      Cc: <stable@vger.kernel.org>
      Signed-off-by: Will Deacon <will.deacon@arm.com>
    • arm64: kernel: fix tcr_el1.t0sz restore on systems with extended idmap · e13d918a
      Lorenzo Pieralisi authored
      Commit dd006da2 ("arm64: mm: increase VA range of identity map")
      introduced a mechanism to extend the virtual memory map range in
      order to support arm64 systems with system RAM located at a very
      high offset, where the identity mapping used to enable/disable
      the MMU requires additional translation levels to map the
      physical memory at an equal virtual offset.
      
      The kernel detects at boot time the tcr_el1.t0sz value required
      by the identity mapping and sets up the tcr_el1.t0sz register
      field accordingly whenever the identity map is required in the
      kernel (i.e. when enabling the MMU).
      
      After enabling the MMU, in the cold boot path the kernel resets
      tcr_el1.t0sz to its default value (i.e. the value configured for
      the system's virtual address space) so that the memory space
      translated by ttbr0_el1 is restored as expected.
      
      Commit dd006da2 ("arm64: mm: increase VA range of identity map")
      also added code to set up the tcr_el1.t0sz value when the kernel
      resumes from low-power states with the MMU off through
      cpu_resume(), in order to use the identity mapping to enable the
      MMU, but it failed to add the code required to restore
      tcr_el1.t0sz to its default value when the core returns to the
      kernel with the MMU enabled. As a result, the kernel may end up
      running with a tcr_el1.t0sz value set up for the identity
      mapping, which can be lower than the value required by the actual
      virtual address space, resulting in an erroneous set-up.
      
      This patch adds code in the resume path that restores the
      tcr_el1.t0sz default value upon core resume, mirroring the cold
      boot path behaviour and thereby fixing the issue.
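
      A sketch of the resume path after this fix (the function name
      here is hypothetical; cpu_set_reserved_ttbr0() and
      cpu_set_default_tcr_t0sz() come from asm/mmu_context.h, the
      latter introduced by dd006da2, and local_flush_tlb_all() from
      asm/tlbflush.h):

        static void __cpu_suspend_exit_sketch(void)
        {
                /*
                 * Back in the kernel with the MMU on: replace the
                 * idmap t0sz with the default value for the kernel
                 * virtual address space, mirroring the cold boot path.
                 */
                cpu_set_reserved_ttbr0();
                local_flush_tlb_all();
                cpu_set_default_tcr_t0sz();
        }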
      
      Cc: <stable@vger.kernel.org>
      Cc: Catalin Marinas <catalin.marinas@arm.com>
      Fixes: dd006da2 ("arm64: mm: increase VA range of identity map")
      Acked-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
      Signed-off-by: Lorenzo Pieralisi <lorenzo.pieralisi@arm.com>
      Signed-off-by: James Morse <james.morse@arm.com>
      Signed-off-by: Will Deacon <will.deacon@arm.com>
    • arm64: compat: fix stxr failure case in SWP emulation · 589cb22b
      Will Deacon authored
      If the STXR instruction fails in the SWP emulation code, we leave
      *data overwritten with the loaded value, thereby corrupting the
      data written by a subsequent, successful attempt.
      
      This patch re-jigs the code so that we only write back to *data once we
      know that the update has happened.
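
      A sketch of the fixed pattern in C (the real code in
      arch/arm64/kernel/armv8_deprecated.c uses inline assembly;
      load_exclusive()/store_exclusive() are hypothetical stand-ins for
      the LDXR/STXR instructions):

        /* Hypothetical wrappers around the exclusive-access insns. */
        extern unsigned long load_exclusive(unsigned long *addr);
        extern unsigned long store_exclusive(unsigned long *addr,
                                             unsigned long val);

        static int emulate_swp(unsigned long *addr, unsigned long *data)
        {
                unsigned long temp, status;

                do {
                        temp = load_exclusive(addr);            /* LDXR */
                        status = store_exclusive(addr, *data);  /* STXR: 0 on success */
                } while (status);

                /*
                 * Publish the loaded value only after a successful
                 * update, so a failed STXR cannot clobber the value
                 * we still need to store on the retry.
                 */
                *data = temp;
                return 0;
        }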
      
      Cc: <stable@vger.kernel.org>
      Fixes: bd35a4ad ("arm64: Port SWP/SWPB emulation support from arm")
      Reported-by: Shengjiu Wang <shengjiu.wang@freescale.com>
      Reported-by: Vladimir Murzin <vladimir.murzin@arm.com>
      Signed-off-by: Will Deacon <will.deacon@arm.com>
    • efi: Use correct type for struct efi_memory_map::phys_map · 44511fb9
      Ard Biesheuvel authored
      We have been getting away with using a void* for the physical
      address of the UEFI memory map, since, even on 32-bit platforms
      with 64-bit physical addresses, no truncation takes place if the
      memory map has been allocated by the firmware (which only uses
      1:1 virtually addressable memory), which is usually the case.
      
      However, commit 0f96a99d ("efi: Add "efi_fake_mem" boot option")
      adds code that clones and modifies the UEFI memory map, and the
      clone may live above 4 GB on 32-bit platforms.
      
      This means our use of void* for struct efi_memory_map::phys_map has
      graduated from 'incorrect but working' to 'incorrect and
      broken', and we need to fix it.
      
      So redefine struct efi_memory_map::phys_map as phys_addr_t, and
      get rid of a bunch of casts that are now unneeded.
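
      A sketch of the corrected definition (the real struct in
      include/linux/efi.h has further members, elided here):

        #include <linux/types.h>

        struct efi_memory_map {
                phys_addr_t phys_map;   /* was: void *phys_map */
                /* ... */
        };

      phys_addr_t is 64 bits wide whenever the platform selects
      CONFIG_PHYS_ADDR_T_64BIT, even on 32-bit kernels, so a memory map
      clone living above 4 GB remains representable.
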
      Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
      Reviewed-by: Matt Fleming <matt@codeblueprint.co.uk>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: izumi.taku@jp.fujitsu.com
      Cc: kamezawa.hiroyu@jp.fujitsu.com
      Cc: linux-efi@vger.kernel.org
      Cc: matt.fleming@intel.com
      Link: http://lkml.kernel.org/r/1445593697-1342-1-git-send-email-ard.biesheuvel@linaro.org
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
  7. 21 Oct, 2015 17 commits
  8. 19 Oct, 2015 6 commits
  9. 16 Oct, 2015 2 commits
  10. 12 Oct, 2015 1 commit
    • arm64: add KASAN support · 39d114dd
      Andrey Ryabinin authored
      This patch adds arch specific code for kernel address sanitizer
      (see Documentation/kasan.txt).
      
      1/8 of the kernel address space is reserved for shadow memory.
      There was no hole big enough for this, so the virtual addresses
      for the shadow were stolen from the vmalloc area.
      
      At the early boot stage the whole shadow region is populated with
      just one physical page (kasan_zero_page). Later, this page is
      reused as a read-only zero shadow for memory that KASan does not
      currently track (vmalloc). After the physical memory has been
      mapped, pages for the shadow memory are allocated and mapped.
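
      The generic shadow translation behind the 1/8 reservation
      mentioned above, as in kasan_mem_to_shadow() from
      include/linux/kasan.h (the offset value below is illustrative;
      the real arm64 one depends on the virtual address layout):

        #define KASAN_SHADOW_SCALE_SHIFT  3     /* 1 shadow byte per 8 bytes */
        #define KASAN_SHADOW_OFFSET       0xdfff200000000000UL  /* illustrative */

        static inline void *kasan_mem_to_shadow(const void *addr)
        {
                return (void *)((unsigned long)addr >> KASAN_SHADOW_SCALE_SHIFT)
                        + KASAN_SHADOW_OFFSET;
        }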
      
      Functions like memset/memmove/memcpy perform a lot of memory
      accesses. If a bad pointer is passed to one of these functions,
      it is important to catch it. The compiler's instrumentation
      cannot do this, since these functions are written in assembly, so
      KASan replaces the memory functions with manually instrumented
      variants. The original functions are declared as weak symbols so
      that the strong definitions in mm/kasan/kasan.c can replace them;
      the original functions also have aliases with a '__' prefix in
      their name, so the non-instrumented variants can still be called
      when needed (see the sketch below).

      Some files are built without KASan instrumentation (e.g.
      mm/slub.c); in such files the original mem* functions are
      replaced (via #define) with the prefixed variants to disable the
      memory access checks.
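
      A sketch of the weak-alias arrangement in C (the arm64 variants
      are implemented in assembly under arch/arm64/lib/; the byte-copy
      body here is purely illustrative):

        #include <stddef.h>

        /* Always-uninstrumented variant, reachable via the '__' prefix. */
        void *__memcpy(void *dest, const void *src, size_t n)
        {
                char *d = dest;
                const char *s = src;

                while (n--)
                        *d++ = *s++;
                return dest;
        }

        /*
         * Weak symbol: the strong, instrumented definition in
         * mm/kasan/kasan.c overrides it when KASan is enabled.
         */
        void *memcpy(void *dest, const void *src, size_t n)
                __attribute__((weak, alias("__memcpy")));
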
      Signed-off-by: Andrey Ryabinin <ryabinin.a.a@gmail.com>
      Tested-by: Linus Walleij <linus.walleij@linaro.org>
      Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
      Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>