1. 28 Apr, 2016 2 commits
  2. 26 Feb, 2016 1 commit
  3. 16 Feb, 2016 1 commit
    • arm64: mm: add code to safely replace TTBR1_EL1 · 50e1881d
      Mark Rutland authored
      If page tables are modified without suitable TLB maintenance, the ARM
      architecture permits multiple TLB entries to be allocated for the same
      VA. When this occurs, it is permitted that TLB conflict aborts are
      raised in response to synchronous data/instruction accesses, and/or an
      amalgamation of the TLB entries may be used as the result of a TLB lookup.
      
      The presence of conflicting TLB entries may result in a variety of
      behaviours detrimental to the system (e.g. erroneous physical addresses
      may be used by I-cache fetches and/or page table walks). Some of these
      cases may result in unexpected changes of hardware state, and/or result
      in the (asynchronous) delivery of SError.
      
      To avoid these issues, we must avoid situations where conflicting
      entries may be allocated into TLBs. For user and module mappings we can
      follow a strict break-before-make approach, but this cannot work for
      modifications to the swapper page tables that cover the kernel text and
      data.
      
      Instead, this patch adds code which is intended to be executed from the
      idmap, which can safely unmap the swapper page tables as it only
      requires the idmap to be active. This enables us to uninstall the active
      TTBR1_EL1 entry, invalidate TLBs, then install a new TTBR1_EL1 entry
      without potentially unmapping code or data required for the sequence.
      This avoids the risk of conflict, but requires that updates are staged
      in a copy of the swapper page tables prior to being installed.
      Signed-off-by: Mark Rutland <mark.rutland@arm.com>
      Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
      Tested-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
      Reviewed-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
      Tested-by: Jeremy Linton <jeremy.linton@arm.com>
      Cc: Laura Abbott <labbott@fedoraproject.org>
      Cc: Will Deacon <will.deacon@arm.com>
      Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
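      A minimal sketch of the sequence (not the kernel's exact code; it
      assumes execution from the idmap with interrupts masked, and a
      reserved empty_zero_page to park TTBR1_EL1 on while the TLBs are
      invalidated):
      
          adrp    x1, empty_zero_page    // a table that maps nothing
          msr     ttbr1_el1, x1          // uninstall the active tables
          isb
          tlbi    vmalle1                // invalidate stale TLB entries
          dsb     nsh
          isb
          msr     ttbr1_el1, x0          // x0 = address of the new tables
          isb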
  4. 25 Jan, 2016 1 commit
  5. 21 Dec, 2015 1 commit
    • arm64: kernel: enforce pmuserenr_el0 initialization and restore · 60792ad3
      Lorenzo Pieralisi authored
      The pmuserenr_el0 register value is architecturally UNKNOWN on reset.
      The current kernel code resets that register only if the core PMU
      device is correctly probed. On platforms with missing DT pmu nodes
      (or with perf events disabled in the kernel), the PMU is not probed,
      so the pmuserenr_el0 register is never reset and retains its
      architecturally UNKNOWN value (the system may run with e.g.
      pmuserenr_el0 == 0x1, granting EL0 access to the PMU counters, which
      must be disallowed).
      
      This patch adds code that resets pmuserenr_el0 on cold boot and restores
      it on core resume from shutdown, so that the pmuserenr_el0 setup is
      always enforced in the kernel.
      
      Cc: <stable@vger.kernel.org>
      Cc: Mark Rutland <mark.rutland@arm.com>
      Signed-off-by: Lorenzo Pieralisi <lorenzo.pieralisi@arm.com>
      Signed-off-by: Will Deacon <will.deacon@arm.com>
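      The fix boils down to a single register write on both the cold boot
      and resume paths; a sketch:
      
          msr     pmuserenr_el0, xzr     // disable EL0 access to the PMU counters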
  6. 11 Dec, 2015 1 commit
    • arm64: mm: place __cpu_setup in .text · f00083ca
      Mark Rutland authored
      We currently drop __cpu_setup into .text.init, which ends up being
      part of .text. The .text.init section name is a legacy one which has
      been unused elsewhere for a long time.
      
      The ".text.init" name is misleading if read as a synonym for
      ".init.text". Any CPU may execute __cpu_setup before turning the MMU on,
      so it should simply live in .text.
      
      Remove the pointless section assignment. This will leave __cpu_setup in
      the .text section.
      Signed-off-by: Mark Rutland <mark.rutland@arm.com>
      Cc: Catalin Marinas <catalin.marinas@arm.com>
      Cc: Will Deacon <will.deacon@arm.com>
      Signed-off-by: Will Deacon <will.deacon@arm.com>
  7. 19 Oct, 2015 1 commit
  8. 07 Oct, 2015 2 commits
  9. 20 Aug, 2015 1 commit
  10. 08 Aug, 2015 1 commit
  11. 05 Aug, 2015 1 commit
    • arm64: mm: ensure patched kernel text is fetched from PoU · 8ec41987
      Will Deacon authored
      The arm64 booting document requires that the bootloader has cleaned the
      kernel image to the PoC. However, when a CPU re-enters the kernel due to
      either a CPU hotplug "on" event or resuming from a low-power state (e.g.
      cpuidle), the kernel text may in fact be dirty at the PoU due to things
      like alternative patching or even module loading.
      
      Thanks to I-cache speculation with the MMU off, stale instructions could
      be fetched prior to enabling the MMU, potentially leading to crashes
      when executing regions of code that have been modified at runtime.
      
      This patch addresses the issue by ensuring that the local I-cache is
      invalidated immediately after a CPU has enabled its MMU but before
      jumping out of the identity mapping. Any stale instructions fetched from
      the PoC will then be discarded and refetched correctly from the PoU.
      Patching kernel text executed prior to the MMU being enabled is
      prohibited, so the early entry code will always be clean.
      Reviewed-by: Mark Rutland <mark.rutland@arm.com>
      Tested-by: Mark Rutland <mark.rutland@arm.com>
      Signed-off-by: Will Deacon <will.deacon@arm.com>
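      A sketch of the sequence, sitting immediately after the SCTLR_EL1
      write that enables the MMU (register use illustrative):
      
          msr     sctlr_el1, x0          // enable the MMU
          isb
          ic      iallu                  // discard instructions fetched from the PoC
          dsb     nsh                    // complete the invalidation ...
          isb                            // ... before any further instruction fetches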
  12. 27 Jul, 2015 2 commits
    • arm64: force CONFIG_SMP=y and remove redundant #ifdefs · 4b3dc967
      Will Deacon authored
      Nobody seems to be producing !SMP systems anymore, so this is just
      becoming a source of kernel bugs, particularly if people want to use
      coherent DMA with non-shared pages.
      
      This patch forces CONFIG_SMP=y for arm64, removing a modest amount of
      code in the process.
      Signed-off-by: Will Deacon <will.deacon@arm.com>
    • arm64: Add support for hardware updates of the access and dirty pte bits · 2f4b829c
      Catalin Marinas authored
      The ARMv8.1 architecture extensions introduce support for hardware
      updates of the access and dirty information in page table entries. With
      TCR_EL1.HA enabled, when the CPU accesses an address with the PTE_AF bit
      cleared in the page table, instead of raising an access flag fault the
      CPU sets the actual page table entry bit. To ensure that kernel
      modifications to the page tables do not inadvertently revert a change
      introduced by hardware updates, the exclusive monitor (ldxr/stxr) is
      adopted in the pte accessors.
      
      When TCR_EL1.HD is enabled, a write access to a memory location with
      the DBM (Dirty Bit Management) bit set in the corresponding pte
      automatically clears the read-only bit (AP[2]). The DBM bit maps onto
      the Linux PTE_WRITE bit, and to check whether a writable (DBM set)
      page is dirty the kernel tests the PTE_RDONLY bit. In order to allow
      pages that are read-only yet dirty, the kernel needs to preserve the
      software dirty bit. The hardware dirty status is transferred to the
      software dirty bit in ptep_set_wrprotect() (using a load/store
      exclusive loop) and in pte_modify().
      Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
      Signed-off-by: Will Deacon <will.deacon@arm.com>
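      A sketch of the exclusive-monitor update in the pte accessors
      (simplified: register use and the PTE_WRITE mask in x3 are
      illustrative, and the transfer of the hardware dirty status to the
      software dirty bit is elided):
      
      1:  ldxr    x0, [x2]               // x2 = address of the pte
          bic     x0, x0, x3             // x3 = PTE_WRITE: make the pte read-only
          stxr    w1, x0, [x2]           // fails if hardware changed the pte meanwhile
          cbnz    w1, 1b                 // retry, reloading the hardware's update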
  13. 19 May, 2015 1 commit
  14. 23 Mar, 2015 1 commit
  15. 27 Jan, 2015 1 commit
    • arm64: kernel: remove ARM64_CPU_SUSPEND config option · af3cfdbf
      Lorenzo Pieralisi authored
      The ARM64_CPU_SUSPEND config option was introduced to make the code
      providing context save/restore selectable only on platforms requiring
      power management capabilities.
      
      Currently ARM64_CPU_SUSPEND depends on the PM_SLEEP config option which
      in turn is set by the SUSPEND config option.
      
      The introduction of CPU_IDLE for arm64 requires the code configured
      by ARM64_CPU_SUSPEND (context save/restore) to be compiled in, so
      that the CPU idle driver can rely on the CPU operations carrying out
      context save/restore.
      
      The ARM64_CPUIDLE config option (the ARM64 generic idle driver) is
      therefore forced to select ARM64_CPU_SUSPEND even when its
      dependencies (i.e. PM_SLEEP) are not met, which is not a clean way of
      handling a kernel configuration option.
      
      For these reasons, this patch removes the ARM64_CPU_SUSPEND config
      option and makes the context save/restore dependent on CPU_PM, which
      is selected whenever either SUSPEND or CPU_IDLE is configured,
      cleaning up dependencies in the process.
      
      This way, code previously configured through ARM64_CPU_SUSPEND is
      compiled in whenever a power management subsystem requires it to be
      present in the kernel (SUSPEND || CPU_IDLE), which is the behaviour
      expected on ARM64 kernels.
      
      The cpu_suspend and cpu_init_idle CPU operations are added only if
      CPU_IDLE is selected, since they are CPU_IDLE specific methods and
      should be grouped and defined accordingly.
      
      PSCI CPU operations are updated to reflect the introduced changes.
      Signed-off-by: Lorenzo Pieralisi <lorenzo.pieralisi@arm.com>
      Cc: Arnd Bergmann <arnd@arndb.de>
      Cc: Will Deacon <will.deacon@arm.com>
      Cc: Krzysztof Kozlowski <k.kozlowski@samsung.com>
      Cc: Daniel Lezcano <daniel.lezcano@linaro.org>
      Cc: Mark Rutland <mark.rutland@arm.com>
      Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
  16. 23 Jan, 2015 1 commit
  17. 08 Sep, 2014 1 commit
    • arm64: convert part of soft_restart() to assembly · 5e051531
      Arun Chandran authored
      The current soft_restart() and setup_restart implementations
      incorrectly assume that the compiler will not spill/fill values
      to/from the stack. This assumption is wrong, as revealed by the
      disassembly of the current code (v3.16) built with Linaro GCC
      4.9-2014.05.
      
      ffffffc000085224 <soft_restart>:
      ffffffc000085224:  a9be7bfd  stp    x29, x30, [sp,#-32]!
      ffffffc000085228:  910003fd  mov    x29, sp
      ffffffc00008522c:  f9000fa0  str    x0, [x29,#24]
      ffffffc000085230:  94003d21  bl     ffffffc0000946b4 <setup_mm_for_reboot>
      ffffffc000085234:  94003b33  bl     ffffffc000093f00 <flush_cache_all>
      ffffffc000085238:  94003dfa  bl     ffffffc000094a20 <cpu_cache_off>
      ffffffc00008523c:  94003b31  bl     ffffffc000093f00 <flush_cache_all>
      ffffffc000085240:  b0003321  adrp   x1, ffffffc0006ea000 <reset_devices>
      
      ffffffc000085244:  f9400fa0  ldr    x0, [x29,#24] ----> spilled addr
      ffffffc000085248:  f942fc22  ldr    x2, [x1,#1528] ----> global memstart_addr
      
      ffffffc00008524c:  f0000061  adrp   x1, ffffffc000094000 <__inval_cache_range+0x40>
      ffffffc000085250:  91290021  add    x1, x1, #0xa40
      ffffffc000085254:  8b010041  add    x1, x2, x1
      ffffffc000085258:  d2c00802  mov    x2, #0x4000000000           // #274877906944
      ffffffc00008525c:  8b020021  add    x1, x1, x2
      ffffffc000085260:  d63f0020  blr    x1
      ...
      
      Here the compiler generates memory accesses after the cache is
      disabled, loading stale values for the spilled address and the global
      variable. As we cannot control when the compiler will access memory,
      we must rewrite the functions in assembly, stashing the values we
      need in registers prior to disabling the cache and thereby avoiding
      the use of memory.
      Reviewed-by: Mark Rutland <mark.rutland@arm.com>
      Signed-off-by: Arun Chandran <achandran@mvista.com>
      Signed-off-by: Will Deacon <will.deacon@arm.com>
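      The shape of the assembly fix, sketched: everything needed after the
      cache is disabled is stashed in a callee-saved register first, so no
      stack or global accesses can follow (symbol names as in the
      disassembly above; register choice illustrative):
      
          mov     x19, x0                // stash the reset address in a register
          bl      flush_cache_all
          bl      cpu_cache_off
          bl      flush_cache_all
          br      x19                    // no memory accesses once the cache is off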
  18. 09 May, 2014 1 commit
  19. 03 Apr, 2014 1 commit
  20. 13 Mar, 2014 1 commit
  21. 03 Mar, 2014 1 commit
    • arm64: remove unnecessary cache flush at boot · bff70595
      Mark Rutland authored
      Currently we flush the entire dcache at boot within __cpu_setup, but
      this is unnecessary as the booting protocol demands that the dcache is
      invalid and off upon entering the kernel. The presence of the cache
      flush only serves to hide bugs in bootloaders, and is not safe in the
      presence of SMP.
      
      In an SMP boot scenario the CPUs enter coherency outside of the kernel,
      and the primary CPU enables its caches before bringing up secondary
      CPUs. Therefore if any secondary CPU has an entry in its cache (in
      violation of the boot protocol), the primary CPU might snoop it even if
      the secondary CPU's cache is disabled. The boot-time cache flush only
      serves to hide a firmware bug, and slows down CPU boot unnecessarily.
      
      This patch removes the unnecessary boot-time cache flush.
      Signed-off-by: Mark Rutland <mark.rutland@arm.com>
      Acked-by: Will Deacon <will.deacon@arm.com>
      [catalin.marinas@arm.com: make __flush_dcache_all local only]
      Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
  22. 27 Jan, 2014 1 commit
  23. 16 Dec, 2013 1 commit
    • arm64: kernel: suspend/resume registers save/restore · 6732bc65
      Lorenzo Pieralisi authored
      Power management software requires the kernel to save and restore
      CPU registers while going through suspend and resume operations
      triggered by kernel subsystems like CPU idle and suspend to RAM.
      
      This patch implements code that provides a save and restore mechanism
      for the ARMv8 implementation. Memory for the context is passed as a
      parameter to both the cpu_do_suspend and cpu_do_resume functions,
      allowing the callers to implement context allocation as they deem
      fit.
      
      The registers saved and restored correspond to the set of registers
      actually required by the kernel to be up and running, which
      represents a subset of the v8 ISA state.
      Signed-off-by: Lorenzo Pieralisi <lorenzo.pieralisi@arm.com>
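      A sketch of the save side (a few representative registers only;
      register numbering illustrative, and the real set is larger):
      
      ENTRY(cpu_do_suspend)
          mrs     x2, ttbr1_el1          // EL1 state the kernel needs restored
          mrs     x3, tcr_el1
          mrs     x4, vbar_el1
          mrs     x5, sctlr_el1
          stp     x2, x3, [x0]           // x0 = context buffer from the caller
          stp     x4, x5, [x0, #16]
          ret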
  24. 06 Dec, 2013 1 commit
    • arm64: ensure completion of TLB invalidation · 3cea71bc
      Mark Rutland authored
      Currently there is no dsb between the tlbi in __cpu_setup and the write
      to SCTLR_EL1 which enables the MMU in __turn_mmu_on. This means that the
      TLB invalidation is not guaranteed to have completed at the point
      address translation is enabled, leading to a number of possible issues
      including incorrect translations and TLB conflict faults.
      
      This patch moves the tlbi in __cpu_setup above an existing dsb used to
      synchronise I-cache invalidation, ensuring that the TLBs have been
      invalidated at the point the MMU is enabled.
      Signed-off-by: Mark Rutland <mark.rutland@arm.com>
      Cc: Catalin Marinas <catalin.marinas@arm.com>
      Cc: Will Deacon <will.deacon@arm.com>
      Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
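      The resulting ordering, sketched (the SCTLR_EL1 write happens later,
      in __turn_mmu_on, but the existing dsb now sits between it and the
      tlbi):
      
          tlbi    vmalle1is              // invalidate the TLBs ...
          dsb     sy                     // ... and wait for completion (also
                                         // synchronises the I-cache invalidation)
          ...
          msr     sctlr_el1, x0          // only now enable the MMU
          isb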
  25. 25 Oct, 2013 1 commit
  26. 03 Sep, 2013 1 commit
  27. 02 Sep, 2013 1 commit
  28. 13 May, 2013 1 commit
    • arm64: debug: clear mdscr_el1 instead of taking the OS lock · 9c413e25
      Will Deacon authored
      During boot, we take the debug OS lock before interrupts are enabled.
      This is required to prevent clearing of PSTATE.D on the interrupt entry
      path, which could result in spurious debug exceptions before we've got
      round to resetting things like the hardware breakpoints registers to a
      sane state.
      
      A problem with this approach is that taking the OS lock prevents an
      external JTAG debugger from debugging the system, which is especially
      irritating during boot, where JTAG debugging can be most useful.
      
      This patch clears mdscr_el1 rather than taking the lock, clearing the
      MDE and KDE bits and preventing self-hosted hardware debug exceptions
      from occurring.
      Tested-by: Mark Rutland <mark.rutland@arm.com>
      Signed-off-by: Will Deacon <will.deacon@arm.com>
      Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
      Cc: stable@vger.kernel.org
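      The replacement is a single write in the boot path; sketched:
      
          msr     mdscr_el1, xzr         // clear MDE/KDE: no self-hosted debug
                                         // exceptions, and the OS lock is left
                                         // untouched for an external JTAG debugger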
  29. 24 Sep, 2012 1 commit
  30. 17 Sep, 2012 1 commit