1. 26 Apr, 2016 5 commits
  2. 14 Apr, 2016 1 commit
• arm64: move early boot code to the .init segment · 546c8c44
      Ard Biesheuvel authored
      Apart from the arm64/linux and EFI header data structures, there is nothing
      in the .head.text section that must reside at the beginning of the Image.
      So let's move it to the .init section where it belongs.
      Note that this involves some minor tweaking of the EFI header, primarily
      because the address of 'stext' no longer coincides with the start of the
      .text section. It also requires a couple of relocated symbol references
      to be slightly rewritten or their definition moved to the linker script.
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Will Deacon <will.deacon@arm.com>
  3. 24 Mar, 2016 1 commit
  4. 21 Mar, 2016 1 commit
• arm64: fix KASLR boot-time I-cache maintenance · b90b4a60
      Mark Rutland authored
      Commit f80fb3a3 ("arm64: add support for kernel ASLR") missed a
      DSB necessary to complete I-cache maintenance in the primary boot path,
      and hence stale instructions may still be present in the I-cache and may
      be executed until the I-cache maintenance naturally completes.
Since commit 8ec41987 ("arm64: mm: ensure patched kernel text is
fetched from PoU"), all CPUs invalidate their I-caches after their MMU
is enabled. Prior to a CPU's MMU being enabled, arbitrary lines may
      have been fetched from the PoC into I-caches. We never patch text
      expected to be executed with the MMU off. Thus, it is unnecessary to
      perform broadcast I-cache maintenance in the primary boot path.
      This patch reduces the scope of the I-cache maintenance to the local
      CPU, and adds the missing DSB with similar scope, matching prior
      maintenance in the primary boot path.
Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Acked-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Cc: Will Deacon <will.deacon@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
  5. 29 Feb, 2016 1 commit
  6. 25 Feb, 2016 1 commit
• arm64: Handle early CPU boot failures · bb905274
      Suzuki K Poulose authored
A secondary CPU could fail to come online due to insufficient
capabilities and could simply die or loop in the kernel. For example, a
CPU with no support for the selected kernel PAGE_SIZE will loop in the
kernel with the MMU turned off, and a hotplugged CPU which lacks one of
the advertised system capabilities will die during activation.
There is no way to synchronise the status of the failing CPU
back to the master. This patch solves the issue by adding a
field to the secondary_data which can be updated by the failing
CPU. If the secondary CPU fails even before turning the MMU on,
it updates the status in a special variable reserved in the head.text
section to make sure that the update can be cache-invalidated safely
without possible sharing of the cache writeback granule.
Here are the possible states:
       -1. CPU_MMU_OFF - Initial value set by the master CPU, this value
      indicates that the CPU could not turn the MMU on, hence the status
      could not be reliably updated in the secondary_data. Instead, the
      CPU has updated the status @ __early_cpu_boot_status.
       0. CPU_BOOT_SUCCESS - CPU has booted successfully.
       1. CPU_KILL_ME - CPU has invoked cpu_ops->die, indicating the
      master CPU to synchronise by issuing a cpu_ops->cpu_kill.
       2. CPU_STUCK_IN_KERNEL - CPU couldn't invoke die(), instead is
      looping in the kernel. This information could be used by say,
      kexec to check if it is really safe to do a kexec reboot.
       3. CPU_PANIC_KERNEL - CPU detected some serious issues which
      requires kernel to crash immediately. The secondary CPU cannot
      call panic() until it has initialised the GIC. This flag can
      be used to instruct the master to do so.
      Cc: Mark Rutland <mark.rutland@arm.com>
Acked-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Suzuki K Poulose <suzuki.poulose@arm.com>
      [catalin.marinas@arm.com: conflict resolution]
      [catalin.marinas@arm.com: converted "status" from int to long]
      [catalin.marinas@arm.com: updated update_early_cpu_boot_status to use str_l]
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
  7. 24 Feb, 2016 4 commits
• arm64: add support for kernel ASLR · f80fb3a3
      Ard Biesheuvel authored
This implements support for KASLR, based on entropy provided by
      the bootloader in the /chosen/kaslr-seed DT property. Depending on the size
      of the address space (VA_BITS) and the page size, the entropy in the
      virtual displacement is up to 13 bits (16k/2 levels) and up to 25 bits (all
      4 levels), with the sidenote that displacements that result in the kernel
      image straddling a 1GB/32MB/512MB alignment boundary (for 4KB/16KB/64KB
      granule kernels, respectively) are not allowed, and will be rounded up to
      an acceptable value.
      If CONFIG_RANDOMIZE_MODULE_REGION_FULL is enabled, the module region is
      randomized independently from the core kernel. This makes it less likely
      that the location of core kernel data structures can be determined by an
      adversary, but causes all function calls from modules into the core kernel
      to be resolved via entries in the module PLTs.
      If CONFIG_RANDOMIZE_MODULE_REGION_FULL is not enabled, the module region is
      randomized by choosing a page aligned 128 MB region inside the interval
      [_etext - 128 MB, _stext + 128 MB). This gives between 10 and 14 bits of
      entropy (depending on page size), independently of the kernel randomization,
      but still guarantees that modules are within the range of relative branch
      and jump instructions (with the caveat that, since the module region is
      shared with other uses of the vmalloc area, modules may need to be loaded
further away if the module region is exhausted).
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
• arm64: add support for building vmlinux as a relocatable PIE binary · 1e48ef7f
      Ard Biesheuvel authored
      This implements CONFIG_RELOCATABLE, which links the final vmlinux
      image with a dynamic relocation section, allowing the early boot code
      to perform a relocation to a different virtual address at runtime.
      This is a prerequisite for KASLR (CONFIG_RANDOMIZE_BASE).
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
• arm64: avoid dynamic relocations in early boot code · 2bf31a4a
      Ard Biesheuvel authored
      Before implementing KASLR for arm64 by building a self-relocating PIE
      executable, we have to ensure that values we use before the relocation
      routine is executed are not subject to dynamic relocation themselves.
      This applies not only to virtual addresses, but also to values that are
supplied by the linker at build time and relocated using R_AARCH64_ABS64
relocations. So instead, use assemble-time constants, or force the use of
static relocations by folding the constants into the instructions.
Reviewed-by: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
• arm64: avoid R_AARCH64_ABS64 relocations for Image header fields · 6ad1fe5d
      Ard Biesheuvel authored
      Unfortunately, the current way of using the linker to emit build time
      constants into the Image header will no longer work once we switch to
      the use of PIE executables. The reason is that such constants are emitted
      into the binary using R_AARCH64_ABS64 relocations, which are resolved at
      runtime, not at build time, and the places targeted by those relocations
      will contain zeroes before that.
      So refactor the endian swapping linker script constant generation code so
      that it emits the upper and lower 32-bit words separately.
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
  8. 18 Feb, 2016 2 commits
• arm64: allow kernel Image to be loaded anywhere in physical memory · a7f8de16
      Ard Biesheuvel authored
      This relaxes the kernel Image placement requirements, so that it
      may be placed at any 2 MB aligned offset in physical memory.
      This is accomplished by ignoring PHYS_OFFSET when installing
      memblocks, and accounting for the apparent virtual offset of
      the kernel Image. As a result, virtual address references
      below PAGE_OFFSET are correctly mapped onto physical references
      into the kernel Image regardless of where it sits in memory.
      Special care needs to be taken for dealing with memory limits passed
      via mem=, since the generic implementation clips memory top down, which
      may clip the kernel image itself if it is loaded high up in memory. To
      deal with this case, we simply add back the memory covering the kernel
      image, which may result in more memory to be retained than was passed
      as a mem= parameter.
      Since mem= should not be considered a production feature, a panic notifier
handler is installed that dumps the memory limit at panic time if one was
set.
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
• arm64: introduce KIMAGE_VADDR as the virtual base of the kernel region · ab893fb9
      Ard Biesheuvel authored
      This introduces the preprocessor symbol KIMAGE_VADDR which will serve as
      the symbolic virtual base of the kernel region, i.e., the kernel's virtual
      offset will be KIMAGE_VADDR + TEXT_OFFSET. For now, we define it as being
      equal to PAGE_OFFSET, but in the future, it will be moved below it once
      we move the kernel virtual mapping out of the linear mapping.
Reviewed-by: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
  9. 16 Feb, 2016 1 commit
• arm64: mm: place empty_zero_page in bss · 5227cfa7
      Mark Rutland authored
      Currently the zero page is set up in paging_init, and thus we cannot use
      the zero page earlier. We use the zero page as a reserved TTBR value
      from which no TLB entries may be allocated (e.g. when uninstalling the
      idmap). To enable such usage earlier (as may be required for invasive
      changes to the kernel page tables), and to minimise the time that the
idmap is active, we need to be able to use the zero page before
paging_init has run.
This patch follows the example set by x86, by allocating the zero page
      at compile time, in .bss. This means that the zero page itself is
      available immediately upon entry to start_kernel (as we zero .bss before
      this), and also means that the zero page takes up no space in the raw
      Image binary. The associated struct page is allocated in bootmem_init,
      and remains unavailable until this time.
      Outside of arch code, the only users of empty_zero_page assume that the
      empty_zero_page symbol refers to the zeroed memory itself, and that
      ZERO_PAGE(x) must be used to acquire the associated struct page,
      following the example of x86. This patch also brings arm64 inline with
      these assumptions.
Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
Tested-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Reviewed-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Tested-by: Jeremy Linton <jeremy.linton@arm.com>
      Cc: Laura Abbott <labbott@fedoraproject.org>
      Cc: Will Deacon <will.deacon@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
  10. 25 Jan, 2016 1 commit
  11. 06 Jan, 2016 1 commit
• arm64: head.S: use memset to clear BSS · 2a803c4d
      Mark Rutland authored
      Currently we use an open-coded memzero to clear the BSS. As it is a
      trivial implementation, it is sub-optimal.
Our optimised memset doesn't use the stack, is position-independent, and
for the memzero case can use DC ZVA to clear large blocks
      efficiently. In __mmap_switched the MMU is on and there are no live
      caller-saved registers, so we can safely call an uninstrumented memset.
      This patch changes __mmap_switched to use memset when clearing the BSS.
      We use the __pi_memset alias so as to avoid any instrumentation in all
      kernel configurations.
      Cc: Catalin Marinas <catalin.marinas@arm.com>
      Cc: Marc Zyngier <marc.zyngier@arm.com>
Reviewed-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
  12. 08 Dec, 2015 1 commit
  13. 19 Oct, 2015 3 commits
  14. 12 Oct, 2015 2 commits
• arm64: add KASAN support · 39d114dd
      Andrey Ryabinin authored
      This patch adds arch specific code for kernel address sanitizer
      (see Documentation/kasan.txt).
1/8 of the kernel address space is reserved for shadow memory. There was
no hole big enough for this, so the virtual addresses for the shadow were
stolen from the vmalloc area.
At the early boot stage, the whole shadow region is populated with just
one physical page (kasan_zero_page). Later, this page is reused
as a read-only zero shadow for memory that KASan does not currently
track (vmalloc).
After mapping the physical memory, pages for shadow memory are
allocated and mapped.
Functions like memset/memmove/memcpy do a lot of memory accesses.
If a bad pointer is passed to one of these functions, it is important
to catch this. Compiler instrumentation cannot do so, since
these functions are written in assembly.
KASan replaces the memory functions with manually instrumented variants.
The original functions are declared as weak symbols so that the strong
definitions in mm/kasan/kasan.c can replace them. The original functions
also have aliases with a '__' prefix in the name, so the non-instrumented
variants can be called if needed.
Some files are built without KASan instrumentation (e.g. mm/slub.c).
In such files the original mem* functions are replaced (via #define) with
the prefixed variants to disable memory access checks.
Signed-off-by: Andrey Ryabinin <ryabinin.a.a@gmail.com>
Tested-by: Linus Walleij <linus.walleij@linaro.org>
Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
• arm64/efi: isolate EFI stub from the kernel proper · e8f3010f
      Ard Biesheuvel authored
      Since arm64 does not use a builtin decompressor, the EFI stub is built
      into the kernel proper. So far, this has been working fine, but actually,
      since the stub is in fact a PE/COFF relocatable binary that is executed
      at an unknown offset in the 1:1 mapping provided by the UEFI firmware, we
      should not be seamlessly sharing code with the kernel proper, which is a
      position dependent executable linked at a high virtual offset.
      So instead, separate the contents of libstub and its dependencies, by
      putting them into their own namespace by prefixing all of its symbols
      with __efistub. This way, we have tight control over what parts of the
      kernel proper are referenced by the stub.
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Reviewed-by: Matt Fleming <matt.fleming@intel.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
  15. 09 Oct, 2015 1 commit
  16. 15 Sep, 2015 1 commit
  17. 05 Aug, 2015 1 commit
• arm64: mm: ensure patched kernel text is fetched from PoU · 8ec41987
      Will Deacon authored
      The arm64 booting document requires that the bootloader has cleaned the
      kernel image to the PoC. However, when a CPU re-enters the kernel due to
      either a CPU hotplug "on" event or resuming from a low-power state (e.g.
      cpuidle), the kernel text may in-fact be dirty at the PoU due to things
      like alternative patching or even module loading.
      Thanks to I-cache speculation with the MMU off, stale instructions could
      be fetched prior to enabling the MMU, potentially leading to crashes
      when executing regions of code that have been modified at runtime.
      This patch addresses the issue by ensuring that the local I-cache is
      invalidated immediately after a CPU has enabled its MMU but before
      jumping out of the identity mapping. Any stale instructions fetched from
      the PoC will then be discarded and refetched correctly from the PoU.
      Patching kernel text executed prior to the MMU being enabled is
      prohibited, so the early entry code will always be clean.
Reviewed-by: Mark Rutland <mark.rutland@arm.com>
Tested-by: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
  18. 27 Jul, 2015 1 commit
  19. 02 Jun, 2015 2 commits
• arm64: reduce ID map to a single page · 5dfe9d7d
      Ard Biesheuvel authored
Commit ea8c2e11 ("arm64: Extend the idmap to the whole kernel
image") changed the early page table code so that the entire kernel
Image is covered by the identity map. This allows functions that
need to enable or disable the MMU to reside anywhere in the kernel
Image.
However, this change has the unfortunate side effect that the Image
      cannot cross a physical 512 MB alignment boundary anymore, since the
      early page table code cannot deal with the Image crossing a /virtual/
      512 MB alignment boundary.
      So instead, reduce the ID map to a single page, that is populated by
      the contents of the .idmap.text section. Only three functions reside
      there at the moment: __enable_mmu(), cpu_resume_mmu() and cpu_reset().
      If new code is introduced that needs to manipulate the MMU state, it
      should be added to this section as well.
Reviewed-by: Mark Rutland <mark.rutland@arm.com>
Tested-by: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
• arm64: use fixmap region for permanent FDT mapping · 61bd93ce
      Ard Biesheuvel authored
      Currently, the FDT blob needs to be in the same 512 MB region as
      the kernel, so that it can be mapped into the kernel virtual memory
      space very early on using a minimal set of statically allocated
      translation tables.
      Now that we have early fixmap support, we can relax this restriction,
      by moving the permanent FDT mapping to the fixmap region instead.
      This way, the FDT blob may be anywhere in memory.
      This also moves the vetting of the FDT to mmu.c, since the early
      init code in head.S does not handle mapping of the FDT anymore.
      At the same time, fix up some comments in head.S that have gone stale.
Reviewed-by: Mark Rutland <mark.rutland@arm.com>
Tested-by: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
  20. 24 Mar, 2015 2 commits
• arm64: head.S: ensure idmap_t0sz is visible · 0c20856c
      Mark Rutland authored
We write idmap_t0sz with SCTLR_EL1.{C,M} clear, but we only have the
guarantee that the kernel Image is clean, not invalidated, in the caches,
and therefore we might read a stale value once the MMU is enabled.
      This patch ensures we invalidate the corresponding cacheline after the
write as we do for all other data written before we set SCTLR_EL1.{C,M},
      guaranteeing that the value will be visible later. We rely on the DSBs
      in __create_page_tables to complete the maintenance.
Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: Will Deacon <will.deacon@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
• arm64: head.S: ensure visibility of page tables · 91d57155
      Mark Rutland authored
      After writing the page tables, we use __inval_cache_range to invalidate
      any stale cache entries. Strongly Ordered memory accesses are not
      ordered w.r.t. cache maintenance instructions, and hence explicit memory
      barriers are required to provide this ordering. However,
      __inval_cache_range was written to be used on Normal Cacheable memory
      once the MMU and caches are on, and does not have any barriers prior to
      the DC instructions.
      This patch adds a DMB between the page tables being written and the
      corresponding cachelines being invalidated, ensuring that the
      invalidation makes the new data visible to subsequent cacheable
      accesses. A barrier is not required before the prior invalidate as we do
      not access the page table memory area prior to this, and earlier
barriers in preserve_boot_args and set_cpu_boot_mode_flag ensure
ordering w.r.t. any stores performed prior to entering Linux.
Signed-off-by: Mark Rutland <mark.rutland@arm.com>
      Cc: Catalin Marinas <catalin.marinas@arm.com>
      Cc: Will Deacon <will.deacon@arm.com>
Fixes: c218bca7 ("arm64: Relax the kernel cache requirements for boot")
Signed-off-by: Will Deacon <will.deacon@arm.com>
  21. 23 Mar, 2015 1 commit
  22. 19 Mar, 2015 6 commits