1. 19 Oct, 2015 1 commit
  2. 07 Oct, 2015 2 commits
  3. 20 Aug, 2015 1 commit
  4. 05 Aug, 2015 1 commit
    • arm64: mm: ensure patched kernel text is fetched from PoU · 8ec41987
      Will Deacon authored
      
      
      The arm64 booting document requires that the bootloader has cleaned the
      kernel image to the PoC. However, when a CPU re-enters the kernel due to
      either a CPU hotplug "on" event or resuming from a low-power state (e.g.
      cpuidle), the kernel text may in fact be dirty at the PoU due to things
      like alternative patching or even module loading.
      
      Thanks to I-cache speculation with the MMU off, stale instructions could
      be fetched prior to enabling the MMU, potentially leading to crashes
      when executing regions of code that have been modified at runtime.
      
      This patch addresses the issue by ensuring that the local I-cache is
      invalidated immediately after a CPU has enabled its MMU but before
      jumping out of the identity mapping. Any stale instructions fetched from
      the PoC will then be discarded and refetched correctly from the PoU.
      Patching kernel text executed prior to the MMU being enabled is
      prohibited, so the early entry code will always be clean.
      Reviewed-by: Mark Rutland <mark.rutland@arm.com>
      Tested-by: Mark Rutland <mark.rutland@arm.com>
      Signed-off-by: Will Deacon <will.deacon@arm.com>
      8ec41987
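      A minimal sketch of the local I-cache maintenance this commit describes,
      written as a hypothetical C inline-assembly helper (the actual change
      lives in the early assembly entry path):

          /*
           * Discard any instructions speculated from the PoC so that later
           * fetches observe text that has been cleaned to the PoU.
           */
          static inline void local_icache_inval_all(void)
          {
                  asm volatile(
                  "       ic      iallu\n"        /* invalidate the whole local I-cache */
                  "       dsb     nsh\n"          /* wait for the invalidation to complete */
                  "       isb"                    /* resynchronise the instruction stream */
                  : : : "memory");
          }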
  5. 27 Jul, 2015 2 commits
    • arm64: force CONFIG_SMP=y and remove redundant #ifdefs · 4b3dc967
      Will Deacon authored
      
      
      Nobody seems to be producing !SMP systems anymore, so UP support is just
      becoming a source of kernel bugs, particularly if people want to use
      coherent DMA with non-shared pages.
      
      This patch forces CONFIG_SMP=y for arm64, removing a modest amount of
      code in the process.
      Signed-off-by: Will Deacon <will.deacon@arm.com>
      4b3dc967
    • arm64: Add support for hardware updates of the access and dirty pte bits · 2f4b829c
      Catalin Marinas authored
      
      
      The ARMv8.1 architecture extensions introduce support for hardware
      updates of the access and dirty information in page table entries. With
      TCR_EL1.HA enabled, when the CPU accesses an address with the PTE_AF bit
      cleared in the page table, instead of raising an access flag fault the
      CPU sets the bit directly in the page table entry. To ensure that kernel
      modifications to the page tables do not inadvertently revert a change
      introduced by hardware updates, the exclusive monitor (ldxr/stxr) is
      adopted in the pte accessors.
      
      When TCR_EL1.HD is enabled, a write access to a memory location with the
      DBM (Dirty Bit Management) bit set in the corresponding pte
      automatically clears the read-only bit (AP[2]). This DBM bit maps onto
      the Linux PTE_WRITE bit and, to check whether a writable (DBM set) page
      is dirty, the kernel tests the PTE_RDONLY bit. In order to allow pages
      that are both read-only and dirty, the kernel needs to preserve the
      software dirty bit. The hardware dirty status is transferred to the
      software dirty bit in ptep_set_wrprotect() (using a load/store exclusive
      loop) and in pte_modify().
      Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
      Signed-off-by: Will Deacon <will.deacon@arm.com>
      2f4b829c
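      As a rough illustration of the exclusive-monitor technique mentioned
      above (a sketch with simplified types and an assumed helper name, not
      the kernel's actual pte accessors), an update that cannot lose a
      concurrent hardware AF/DBM change looks roughly like this:

          /* Clear and set bits in a live pte; if the store-exclusive fails
           * because the CPU updated the entry under us, reload and retry. */
          static inline unsigned long pte_update_bits(unsigned long *ptep,
                                                      unsigned long clr,
                                                      unsigned long set)
          {
                  unsigned long old, new;
                  unsigned int fail;

                  do {
                          asm volatile(
                          "       ldxr    %0, %3\n"       /* load pte, claim the exclusive monitor */
                          "       bic     %1, %0, %4\n"   /* drop the bits being cleared */
                          "       orr     %1, %1, %5\n"   /* add the bits being set */
                          "       stxr    %w2, %1, %3"    /* fails if the pte changed meanwhile */
                          : "=&r" (old), "=&r" (new), "=&r" (fail), "+Q" (*ptep)
                          : "r" (clr), "r" (set)
                          : "memory");
                  } while (fail);

                  return old;
          }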
  6. 19 May, 2015 1 commit
  7. 23 Mar, 2015 1 commit
  8. 27 Jan, 2015 1 commit
    • arm64: kernel: remove ARM64_CPU_SUSPEND config option · af3cfdbf
      Lorenzo Pieralisi authored
      
      
      The ARM64_CPU_SUSPEND config option was introduced to make the code providing
      context save/restore selectable only on platforms requiring power
      management capabilities.
      
      Currently ARM64_CPU_SUSPEND depends on the PM_SLEEP config option which
      in turn is set by the SUSPEND config option.
      
      The introduction of CPU_IDLE for arm64 requires the code configured by
      ARM64_CPU_SUSPEND (context save/restore) to be compiled in, so that the
      CPU idle driver can rely on CPU operations to carry out the context
      save/restore.
      
      The ARM64_CPUIDLE config option (the ARM64 generic idle driver) is
      therefore forced to select ARM64_CPU_SUSPEND, even though the latter's
      dependencies (i.e. PM_SLEEP) may not be met, which is not a clean way of
      handling the kernel configuration.
      
      For these reasons, this patch removes the ARM64_CPU_SUSPEND config option
      and makes the context save/restore dependent on CPU_PM, which is selected
      whenever either SUSPEND or CPU_IDLE are configured, cleaning up dependencies
      in the process.
      
      This way, code previously configured through ARM64_CPU_SUSPEND is
      compiled in whenever a power management subsystem requires it to be
      present in the kernel (SUSPEND || CPU_IDLE), which is the behaviour
      expected on ARM64 kernels.
      
      The cpu_suspend and cpu_init_idle CPU operations are added only if
      CPU_IDLE is selected, since they are CPU_IDLE specific methods and
      should be grouped and defined accordingly.
      
      PSCI CPU operations are updated to reflect the introduced changes.
      Signed-off-by: Lorenzo Pieralisi <lorenzo.pieralisi@arm.com>
      Cc: Arnd Bergmann <arnd@arndb.de>
      Cc: Will Deacon <will.deacon@arm.com>
      Cc: Krzysztof Kozlowski <k.kozlowski@samsung.com>
      Cc: Daniel Lezcano <daniel.lezcano@linaro.org>
      Cc: Mark Rutland <mark.rutland@arm.com>
      Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
      af3cfdbf
  9. 23 Jan, 2015 1 commit
  10. 08 Sep, 2014 1 commit
    • arm64: convert part of soft_restart() to assembly · 5e051531
      Arun Chandran authored
      
      
      The current soft_restart() and setup_restart implementations incorrectly
      assume that the compiler will not spill/fill values to/from the stack.
      However, this assumption turns out to be wrong, as revealed by the
      disassembly of the current code (v3.16) built with Linaro GCC 4.9-2014.05.
      
      ffffffc000085224 <soft_restart>:
      ffffffc000085224:  a9be7bfd  stp    x29, x30, [sp,#-32]!
      ffffffc000085228:  910003fd  mov    x29, sp
      ffffffc00008522c:  f9000fa0  str    x0, [x29,#24]
      ffffffc000085230:  94003d21  bl     ffffffc0000946b4 <setup_mm_for_reboot>
      ffffffc000085234:  94003b33  bl     ffffffc000093f00 <flush_cache_all>
      ffffffc000085238:  94003dfa  bl     ffffffc000094a20 <cpu_cache_off>
      ffffffc00008523c:  94003b31  bl     ffffffc000093f00 <flush_cache_all>
      ffffffc000085240:  b0003321  adrp   x1, ffffffc0006ea000 <reset_devices>
      
      ffffffc000085244:  f9400fa0  ldr    x0, [x29,#24] ----> spilled addr
      ffffffc000085248:  f942fc22  ldr    x2, [x1,#1528] ----> global memstart_addr
      
      ffffffc00008524c:  f0000061  adrp   x1, ffffffc000094000 <__inval_cache_range+0x40>
      ffffffc000085250:  91290021  add    x1, x1, #0xa40
      ffffffc000085254:  8b010041  add    x1, x2, x1
      ffffffc000085258:  d2c00802  mov    x2, #0x4000000000           // #274877906944
      ffffffc00008525c:  8b020021  add    x1, x1, x2
      ffffffc000085260:  d63f0020  blr    x1
      ...
      
      Here the compiler generates memory accesses after the cache is disabled,
      loading stale values for both the spilled address and the global variable.
      As we cannot control when the compiler will access memory, we must rewrite
      these functions in assembly, stashing the values we need in registers
      before the cache is disabled and thereby avoiding any use of memory.
      Reviewed-by: Mark Rutland <mark.rutland@arm.com>
      Signed-off-by: Arun Chandran <achandran@mvista.com>
      Signed-off-by: Will Deacon <will.deacon@arm.com>
      5e051531
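      The essence of the assembly conversion can be sketched as follows
      (illustrative only; the function name and structure are assumptions and
      this is not the kernel's soft_restart): everything needed after the
      D-cache goes off is stashed in a register inside one assembly sequence,
      leaving the compiler no opportunity to insert stack accesses:

          static void __attribute__((noreturn)) jump_with_dcache_off(unsigned long entry)
          {
                  asm volatile(
                  /* Stash the branch target while memory is still coherent. */
                  "       mov     x19, %0\n"
                  /* Clear SCTLR_EL1.C; from here on, loads bypass the D-cache
                   * and could observe stale data, so memory must not be touched. */
                  "       mrs     x1, sctlr_el1\n"
                  "       bic     x1, x1, #(1 << 2)\n"
                  "       msr     sctlr_el1, x1\n"
                  "       isb\n"
                  /* Branch using only the stashed register: no loads, no stack. */
                  "       br      x19"
                  : : "r" (entry) : "x1", "x19", "memory");
                  __builtin_unreachable();
          }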
  11. 09 May, 2014 1 commit
  12. 03 Apr, 2014 1 commit
  13. 13 Mar, 2014 1 commit
  14. 03 Mar, 2014 1 commit
    • arm64: remove unnecessary cache flush at boot · bff70595
      Mark Rutland authored
      
      
      Currently we flush the entire dcache at boot within __cpu_setup, but
      this is unnecessary as the booting protocol demands that the dcache is
      invalid and off upon entering the kernel. The presence of the cache
      flush only serves to hide bugs in bootloaders, and is not safe in the
      presence of SMP.
      
      In an SMP boot scenario the CPUs enter coherency outside of the kernel,
      and the primary CPU enables its caches before bringing up secondary
      CPUs. Therefore if any secondary CPU has an entry in its cache (in
      violation of the boot protocol), the primary CPU might snoop it even if
      the secondary CPU's cache is disabled. The boot-time cache flush only
      serves to hide a firmware bug, and slows down CPU boot unnecessarily.
      
      This patch removes the unnecessary boot-time cache flush.
      Signed-off-by: Mark Rutland <mark.rutland@arm.com>
      Acked-by: Will Deacon <will.deacon@arm.com>
      [catalin.marinas@arm.com: make __flush_dcache_all local only]
      Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
      bff70595
  15. 27 Jan, 2014 1 commit
  16. 16 Dec, 2013 1 commit
    • arm64: kernel: suspend/resume registers save/restore · 6732bc65
      Lorenzo Pieralisi authored
      
      
      Power management software requires the kernel to save and restore
      CPU registers while going through suspend and resume operations
      triggered by kernel subsystems like CPU idle and suspend to RAM.
      
      This patch implements code that provides a save and restore mechanism
      for the ARMv8 implementation. Memory for the context is passed as a
      parameter to both the cpu_do_suspend and cpu_do_resume functions, which
      allows the callers to implement context allocation as they deem fit.
      
      The registers that are saved and restored correspond to the register set
      actually required for the kernel to be up and running, which represents a
      subset of the v8 ISA.
      Signed-off-by: Lorenzo Pieralisi <lorenzo.pieralisi@arm.com>
      6732bc65
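      A minimal sketch of the save half under this contract (hypothetical
      names and a deliberately truncated register list; the real routine saves
      the full set needed to bring the kernel back up):

          struct sleep_ctx {                      /* caller-allocated context buffer */
                  unsigned long tcr_el1;          /* translation control */
                  unsigned long ttbr1_el1;        /* kernel page table base */
                  unsigned long sctlr_el1;        /* system control */
          };

          static inline void save_sleep_ctx(struct sleep_ctx *ctx)
          {
                  asm volatile("mrs %0, tcr_el1"   : "=r" (ctx->tcr_el1));
                  asm volatile("mrs %0, ttbr1_el1" : "=r" (ctx->ttbr1_el1));
                  asm volatile("mrs %0, sctlr_el1" : "=r" (ctx->sctlr_el1));
          }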
  17. 06 Dec, 2013 1 commit
    • arm64: ensure completion of TLB invalidation · 3cea71bc
      Mark Rutland authored
      
      
      Currently there is no dsb between the tlbi in __cpu_setup and the write
      to SCTLR_EL1 which enables the MMU in __turn_mmu_on. This means that the
      TLB invalidation is not guaranteed to have completed at the point
      address translation is enabled, leading to a number of possible issues
      including incorrect translations and TLB conflict faults.
      
      This patch moves the tlbi in __cpu_setup above an existing dsb used to
      synchronise I-cache invalidation, ensuring that the TLBs have been
      invalidated at the point the MMU is enabled.
      Signed-off-by: Mark Rutland <mark.rutland@arm.com>
      Cc: Catalin Marinas <catalin.marinas@arm.com>
      Cc: Will Deacon <will.deacon@arm.com>
      Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
      3cea71bc
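      The required ordering can be sketched as a local-only helper (the name
      is an assumption; the actual change just reorders existing instructions
      in __cpu_setup):

          static inline void local_flush_tlb_then_sync(void)
          {
                  asm volatile(
                  "       tlbi    vmalle1\n"      /* invalidate this CPU's stage-1 TLB entries */
                  "       dsb     nsh\n"          /* wait for the invalidation to complete... */
                  "       isb"                    /* ...before any later instruction, e.g. the MMU enable */
                  : : : "memory");
          }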
  18. 25 Oct, 2013 1 commit
  19. 03 Sep, 2013 1 commit
  20. 02 Sep, 2013 1 commit
  21. 13 May, 2013 1 commit
    • arm64: debug: clear mdscr_el1 instead of taking the OS lock · 9c413e25
      Will Deacon authored
      
      
      During boot, we take the debug OS lock before interrupts are enabled.
      This is required to prevent clearing of PSTATE.D on the interrupt entry
      path, which could result in spurious debug exceptions before we've got
      round to resetting things like the hardware breakpoint registers to a
      sane state.
      
      A problem with this approach is that taking the OS lock prevents an
      external JTAG debugger from debugging the system, which is especially
      irritating during boot, where JTAG debugging can be most useful.
      
      This patch clears mdscr_el1 rather than taking the lock, clearing the
      MDE and KDE bits and preventing self-hosted hardware debug exceptions
      from occurring.
      Tested-by: Mark Rutland <mark.rutland@arm.com>
      Signed-off-by: Will Deacon <will.deacon@arm.com>
      Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
      Cc: stable@vger.kernel.org
      9c413e25
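      In essence (a sketch with an assumed helper name, not the boot code
      itself):

          static inline void disable_self_hosted_debug(void)
          {
                  /* Zeroing MDSCR_EL1 clears MDE and KDE, so no self-hosted
                   * hardware debug exception can be generated; the OS lock is
                   * left untouched for an external JTAG debugger. */
                  asm volatile("msr mdscr_el1, xzr");
                  asm volatile("isb");
          }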
  22. 24 Sep, 2012 1 commit
  23. 17 Sep, 2012 1 commit