1. 23 Jul, 2014 5 commits
  2. 10 Jul, 2014 4 commits
    • arm64: Enable TEXT_OFFSET fuzzing · da57a369
      Mark Rutland authored

      The arm64 Image header contains a text_offset field which bootloaders
      are supposed to read to determine the offset (from a 2MB aligned "start
      of memory" per booting.txt) at which to load the kernel. The offset is
      not well respected by bootloaders at present, and due to the lack of
      variation there is little incentive to support it. This is unfortunate
      for the sake of future kernels where we may wish to vary the text offset
      (even zeroing it).
      
      This patch adds options to arm64 to enable fuzz-testing of text_offset.
      CONFIG_ARM64_RANDOMIZE_TEXT_OFFSET forces the text offset to a random
      16-byte aligned value in the range [0..2MB) at build time. It is
      recommended that distribution kernels enable randomization to test
      bootloaders such that any compliance issues can be fixed early.
      Signed-off-by: Mark Rutland <mark.rutland@arm.com>
      Acked-by: Tom Rini <trini@ti.com>
      Acked-by: Will Deacon <will.deacon@arm.com>
      Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
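The randomization described above can be sketched as follows. The 2MB window and 16-byte alignment come from the commit text; the function name is illustrative (the kernel actually derives the value in its build scripts, not in Python):

```python
import secrets

SZ_2M = 2 * 1024 * 1024

def random_text_offset() -> int:
    """Pick a random 16-byte aligned offset in [0, 2MB), as
    CONFIG_ARM64_RANDOMIZE_TEXT_OFFSET does at kernel build time."""
    # There are 2MB / 16 possible slots; pick one uniformly and scale up.
    return secrets.randbelow(SZ_2M // 16) * 16
```

A bootloader that honours text_offset must work for any value this produces, which is the point of the fuzzing.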
    • arm64: Update the Image header · a2c1d73b
      Mark Rutland authored

      Currently the kernel Image is stripped of everything past the initial
      stack, and at runtime the memory is initialised and used by the kernel.
      This makes the effective minimum memory footprint of the kernel larger
      than the size of the loaded binary, though bootloaders have no mechanism
      to identify how large this minimum memory footprint is. This makes it
      difficult to choose safe locations to place both the kernel and other
      binaries required at boot (DTB, initrd, etc), such that the kernel won't
      clobber said binaries or other reserved memory during initialisation.
      
      Additionally when big endian support was added the image load offset was
      overlooked, and is currently of an arbitrary endianness, which makes it
      difficult for bootloaders to make use of it. It seems that bootloaders
      aren't respecting the image load offset at present anyway, and are
      assuming that offset 0x80000 will always be correct.
      
      This patch adds an effective image size to the kernel header which
      describes the amount of memory from the start of the kernel Image binary
      which the kernel expects to use before detecting memory and handling any
      memory reservations. This can be used by bootloaders to choose suitable
      locations to load the kernel and/or other binaries such that the kernel
      will not clobber any memory unexpectedly. As before, memory reservations
      are required to prevent the kernel from clobbering these locations
      later.
      
      Both the image load offset and the effective image size are forced to be
      little-endian regardless of the native endianness of the kernel to
      enable bootloaders to load a kernel of arbitrary endianness. Bootloaders
      which wish to make use of the load offset can inspect the effective
      image size field for a non-zero value to determine if the offset is of a
      known endianness. To enable software to determine the endianness of the
      kernel as may be required for certain use-cases, a new flags field (also
      little-endian) is added to the kernel header to export this information.
      
      The documentation is updated to clarify these details. To discourage
      future assumptions regarding the value of text_offset, the value at this
      point in time is removed from the main flow of the documentation (though
      kept as a compatibility note). Some minor formatting issues in the
      documentation are also corrected.
      Signed-off-by: Mark Rutland <mark.rutland@arm.com>
      Acked-by: Tom Rini <trini@ti.com>
      Cc: Geoff Levand <geoff@infradead.org>
      Cc: Kevin Hilman <kevin.hilman@linaro.org>
      Acked-by: Will Deacon <will.deacon@arm.com>
      Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
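A bootloader-side view of the header this commit defines might look like the sketch below. The field layout and little-endian encoding follow Documentation/arm64/booting.txt (text_offset, image_size, and flags as 64-bit LE values after the first two instruction words, with the "ARM\x64" magic at offset 56); the function name and the fallback policy are illustrative:

```python
import struct

def parse_arm64_header(hdr: bytes):
    """Decode the fixed 64-byte arm64 Image header. Multi-byte
    fields are little-endian regardless of kernel endianness."""
    (code0, code1, text_offset, image_size, flags,
     _res2, _res3, _res4, magic, _res5) = struct.unpack("<IIQQQQQQII", hdr[:64])
    if magic != 0x644D5241:            # "ARM\x64" read as a LE u32
        raise ValueError("not an arm64 kernel Image")
    if image_size == 0:
        # Per the commit: a zero image_size means text_offset is of
        # unknown endianness, so fall back to the legacy 0x80000.
        text_offset = 0x80000
    return text_offset, image_size, flags
```

Bit 0 of flags carries the kernel endianness, which is the "new flags field" the commit message refers to.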
    • arm64: place initial page tables above the kernel · bd00cd5f
      Mark Rutland authored

      Currently we place swapper_pg_dir and idmap_pg_dir below the kernel
      image, between PHYS_OFFSET and (PHYS_OFFSET + TEXT_OFFSET). However,
      bootloaders may use portions of this memory below the kernel and we do
      not parse the memory reservation list until after the MMU has been
      enabled. As such we may clobber some memory a bootloader wishes to have
      preserved.
      
      To enable the use of all of this memory by bootloaders (when the
      required memory reservations are communicated to the kernel) it is
      necessary to move our initial page tables elsewhere. As we currently
      have an effectively unbounded requirement for memory at the end of the
      kernel image for .bss, we can place the page tables here.
      
      This patch moves the initial page table to the end of the kernel image,
      after the BSS. As they do not consist of any initialised data they will
      be stripped from the kernel Image as with the BSS. The BSS clearing
      routine is updated to stop at __bss_stop rather than _end so as to not
      clobber the page tables, and memory reservations made redundant by the
      new organisation are removed.
      Signed-off-by: Mark Rutland <mark.rutland@arm.com>
      Tested-by: Laura Abbott <lauraa@codeaurora.org>
      Acked-by: Will Deacon <will.deacon@arm.com>
      Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
    • arm64: head.S: remove unnecessary function alignment · 909a4069
      Mark Rutland authored

      Currently __turn_mmu_on is aligned to 64 bytes to ensure that it doesn't
      span any page boundary, which simplifies the idmap and spares us
      requiring an additional page table to map half of the function. In
      keeping with other important requirements in architecture code, this
      fact is undocumented.
      
      Additionally, as the function consists of three instructions totalling
      12 bytes with no literal pool data, a smaller alignment of 16 bytes
      would be sufficient.
      
      This patch reduces the alignment to 16 bytes and documents the
      underlying reason for the alignment. This reduces the required alignment
      of the entire .head.text section from 64 bytes to 16 bytes, though it
      may still be aligned to a larger value depending on TEXT_OFFSET.
      Signed-off-by: Mark Rutland <mark.rutland@arm.com>
      Tested-by: Laura Abbott <lauraa@codeaurora.org>
      Acked-by: Will Deacon <will.deacon@arm.com>
      Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
  3. 04 Jul, 2014 1 commit
  4. 09 May, 2014 1 commit
    • arm64: head: fix cache flushing and barriers in set_cpu_boot_mode_flag · d0488597
      Will Deacon authored

      set_cpu_boot_mode_flag is used to identify which exception levels are
      encountered across the system by CPUs trying to enter the kernel. The
      basic algorithm is: if a CPU is booting at EL2, it will set a flag at
      an offset of #4 from __boot_cpu_mode, a cacheline-aligned variable.
      Otherwise, a flag is set at an offset of zero into the same cacheline.
      This enables us to check that all CPUs booted at the same exception
      level.
      
      This cacheline is written with the stage-1 MMU off (that is, via a
      strongly-ordered mapping) and will bypass any clean lines in the cache,
      leading to potential coherence problems when the variable is later
      checked via the normal, cacheable mapping of the kernel image.
      
      This patch reworks the broken flushing code so that we:
      
        (1) Use a DMB to order the strongly-ordered write of the cacheline
            against the subsequent cache-maintenance operation (by-VA
            operations only hazard against normal, cacheable accesses).
      
        (2) Use a single dc ivac instruction to invalidate any clean lines
            containing a stale copy of the line after it has been updated.
      Acked-by: Catalin Marinas <catalin.marinas@arm.com>
      Signed-off-by: Will Deacon <will.deacon@arm.com>
      Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
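The flag scheme the commit message describes can be modelled as below. This is a sketch of the logic only (the real code is assembly in arch/arm64/kernel/head.S, and the cache-maintenance subtleties are the whole point of the fix); the Python names mirror the kernel's but the mismatch check is a simplification:

```python
# Illustrative model of the __boot_cpu_mode cacheline described above.
BOOT_CPU_MODE_EL1 = 0xE11
BOOT_CPU_MODE_EL2 = 0xE12

boot_cpu_mode = [0, 0]   # word 0: EL1 flag (offset #0); word 1: EL2 flag (offset #4)

def set_cpu_boot_mode_flag(mode: int) -> None:
    """Record the exception level this CPU entered the kernel at."""
    if mode == BOOT_CPU_MODE_EL2:
        boot_cpu_mode[1] = mode      # offset #4 from __boot_cpu_mode
    else:
        boot_cpu_mode[0] = mode      # offset #0

def booted_at_mixed_levels() -> bool:
    """CPUs disagree iff both flag words were written."""
    return boot_cpu_mode[0] != 0 and boot_cpu_mode[1] != 0
```

In the kernel, the write happens with the MMU off, which is why the DMB and the dc ivac invalidation are needed before the cacheable mapping reads the line back.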
  5. 30 Apr, 2014 1 commit
  6. 07 Apr, 2014 1 commit
  7. 05 Apr, 2014 1 commit
    • arm64: Relax the kernel cache requirements for boot · c218bca7
      Catalin Marinas authored

      With system caches for the host OS, or architected caches for a guest
      OS, we cannot easily guarantee that there are no dirty or stale cache
      lines for the areas of memory written by the kernel during boot with
      the MMU off (and therefore via non-cacheable accesses).
      
      This patch adds the necessary cache maintenance during boot and relaxes
      the booting requirements.
      Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
  8. 26 Feb, 2014 1 commit
    • arm64: Extend the idmap to the whole kernel image · ea8c2e11
      Catalin Marinas authored

      This patch changes the idmap page table creation during boot to cover
      the whole kernel image, allowing functions like cpu_reset() to be safely
      called with the physical address.
      
      This patch also simplifies the create_block_map asm macro to no longer
      take an idmap argument and always use the phys/virt/end parameters. For
      the idmap case, phys == virt.
      Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
  9. 20 Dec, 2013 1 commit
  10. 06 Dec, 2013 1 commit
  11. 25 Oct, 2013 3 commits
    • arm64: big-endian: set correct endianness on kernel entry · 9cf71728
      Matthew Leach authored

      The endianness of memory accesses at EL2 and EL1 is configured by
      SCTLR_EL2.EE and SCTLR_EL1.EE respectively. When the kernel is booted,
      the state of SCTLR_EL{2,1}.EE is unknown, and thus the kernel must
      ensure that they are set before performing any memory accesses.
      
      This patch ensures that SCTLR_EL{2,1} are configured appropriately at
      boot for kernels of either endianness.
      Acked-by: Will Deacon <will.deacon@arm.com>
      Signed-off-by: Matthew Leach <matthew.leach@arm.com>
      [catalin.marinas@arm.com: fix SCTLR_EL1.E0E bit setting in head.S]
      Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
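The bit manipulation involved can be sketched as follows. The EE (bit 25) and E0E (bit 24) positions come from the ARMv8 SCTLR_EL1 definition; the kernel does this in head.S with CPU_BE/CPU_LE assembly macros rather than anything like the hypothetical helper below:

```python
SCTLR_EE  = 1 << 25   # data/translation-table walk endianness at EL1/EL2
SCTLR_E0E = 1 << 24   # EL0 data access endianness (SCTLR_EL1 only)

def fixup_sctlr(sctlr: int, big_endian: bool) -> int:
    """Force the EE (and, for EL1, E0E) bits to match the kernel's
    endianness, since their state is unknown at kernel entry."""
    if big_endian:
        return sctlr | SCTLR_EE | SCTLR_E0E
    return sctlr & ~(SCTLR_EE | SCTLR_E0E)
```

Because the reset value of these bits is unknown, the fixup must run before any data access that depends on memory endianness.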
    • arm64: head: create a new function for setting the boot_cpu_mode flag · 828e9834
      Matthew Leach authored

      Currently, the code for setting the __cpu_boot_mode flag is munged in
      with el2_setup. This makes things difficult on a BE bringup, as a
      memory access has to occur before el2_setup, which is where we'd like
      to set the endianness for the current EL.
      
      Create a new function for setting __cpu_boot_mode and have el2_setup
      return the mode the CPU booted in. Also define a new constant in
      virt.h, BOOT_CPU_MODE_EL1, for readability.
      Acked-by: Marc Zyngier <marc.zyngier@arm.com>
      Acked-by: Will Deacon <will.deacon@arm.com>
      Signed-off-by: Matthew Leach <matthew.leach@arm.com>
      Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
    • arm64: factor out spin-table boot method · 652af899
      Mark Rutland authored

      The arm64 kernel has an internal holding pen, which is necessary for
      some systems where we can't bring CPUs online individually and must hold
      multiple CPUs in a safe area until the kernel is able to handle them.
      The current SMP infrastructure for arm64 is closely coupled to this
      holding pen, and alternative boot methods must launch CPUs into the pen,
      where they sit before they are launched into the kernel proper.
      
      With PSCI (and possibly other future boot methods), we can bring CPUs
      online individually, and need not perform the secondary_holding_pen
      dance. Instead, this patch factors the holding pen management code out
      to the spin-table boot method code, as it is the only boot method
      requiring the pen.
      
      A new entry point for secondaries, secondary_entry, is added for other
      boot methods to use, which bypasses the holding pen and its associated
      overhead when bringing CPUs online. The smp.pen.text section is also
      removed, as the pen can live in head.text without problem.
      
      The cpu_operations structure is extended with two new functions,
      cpu_boot and cpu_postboot, for bringing a cpu into the kernel and
      performing any post-boot cleanup required by a bootmethod (e.g.
      resetting the secondary_holding_pen_release to INVALID_HWID).
      Documentation is added for cpu_operations.
      Signed-off-by: Mark Rutland <mark.rutland@arm.com>
      Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
  12. 22 Aug, 2013 1 commit
    • arm64: Expand arm64 image header · 4370eec0
      Roy Franz authored

      Expand the arm64 image header to allow for coexistence with the
      PE/COFF header required by the EFI stub. The PE/COFF format
      requires the "MZ" header to be at offset 0, and the offset
      to the PE/COFF header to be at offset 0x3c. The image
      header is expanded to allow 2 instructions at the beginning
      to accommodate a benign instruction at offset 0 that includes
      the "MZ" header, a magic number, and the offset to the PE/COFF
      header.
      Signed-off-by: Roy Franz <roy.franz@linaro.org>
      Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
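The PE/COFF constraints named above ("MZ" at offset 0, PE-header offset at 0x3c) can be checked with a short sketch; the function name is hypothetical, and this only inspects the two markers rather than fully validating a PE/COFF image:

```python
import struct

def efi_capable(image: bytes) -> bool:
    """Check for the PE/COFF markers the EFI stub relies on:
    "MZ" at offset 0, and the "PE\\0\\0" signature at the file
    offset stored as a 32-bit LE value at 0x3c."""
    if image[:2] != b"MZ":
        return False
    (pe_off,) = struct.unpack_from("<I", image, 0x3C)
    return image[pe_off:pe_off + 4] == b"PE\x00\x00"
```

On arm64 the trick is that the instruction encoding placed at offset 0 is benign to execute while its first two bytes read as "MZ".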
  13. 20 Mar, 2013 1 commit
  14. 22 Jan, 2013 1 commit
    • arm64: Add simple earlyprintk support · 2475ff9d
      Catalin Marinas authored

      This patch adds support for "earlyprintk=" parameter on the kernel
      command line. The format is:
      
        earlyprintk=<name>[,<addr>][,<options>]
      
      where <name> is the name of the (UART) device, e.g. "pl011", <addr> is
      the I/O address. The <options> aren't currently used.
      
      The mapping of the earlyprintk device is done very early during kernel
      boot and there are restrictions on which functions it can call. A
      special early_io_map() function is added which creates the mapping from
      the pre-defined EARLY_IOBASE to the device I/O address passed via the
      kernel parameter. The pgd entry corresponding to EARLY_IOBASE is
      pre-populated in head.S during kernel boot.
      
      Only PL011 is currently supported and it is assumed that the interface
      is already initialised by the boot loader before the kernel is started.
      Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
      Acked-by: Arnd Bergmann <arnd@arndb.de>
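The earlyprintk=<name>[,<addr>][,<options>] format splits naturally on commas; a sketch of the parsing (the kernel's parser is in C, and the function name here is illustrative):

```python
def parse_earlyprintk(arg: str):
    """Split an earlyprintk=<name>[,<addr>][,<options>] value.
    Per the commit text, only the name and address are used;
    <options> is accepted but ignored."""
    parts = arg.split(",")
    name = parts[0]
    # int(x, 0) accepts "0x..." hex as well as decimal.
    addr = int(parts[1], 0) if len(parts) > 1 and parts[1] else None
    options = parts[2] if len(parts) > 2 else None
    return name, addr, options
```

For example, `earlyprintk=pl011,0x9000000` names the PL011 driver and its MMIO base (the address here is illustrative, not a fixed platform value).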
  15. 05 Dec, 2012 4 commits
  16. 17 Sep, 2012 1 commit