  1. 12 Oct, 2015 4 commits
    • ARM64: kasan: print memory assignment · ee7f881b
      Linus Walleij authored

      This prints out the virtual memory assigned to KASan in the
      boot crawl along with other memory assignments, if and only
      if KASan is activated.
      
      Example dmesg from the Juno Development board:
      
      Memory: 1691156K/2080768K available (5465K kernel code, 444K rwdata,
      2160K rodata, 340K init, 217K bss, 373228K reserved, 16384K cma-reserved)
      Virtual kernel memory layout:
          kasan   : 0xffffff8000000000 - 0xffffff9000000000   (    64 GB)
          vmalloc : 0xffffff9000000000 - 0xffffffbdbfff0000   (   182 GB)
          vmemmap : 0xffffffbdc0000000 - 0xffffffbfc0000000   (     8 GB maximum)
                    0xffffffbdc2000000 - 0xffffffbdc3fc0000   (    31 MB actual)
          fixed   : 0xffffffbffabfd000 - 0xffffffbffac00000   (    12 KB)
          PCI I/O : 0xffffffbffae00000 - 0xffffffbffbe00000   (    16 MB)
          modules : 0xffffffbffc000000 - 0xffffffc000000000   (    64 MB)
          memory  : 0xffffffc000000000 - 0xffffffc07f000000   (  2032 MB)
            .init : 0xffffffc0007f5000 - 0xffffffc00084a000   (   340 KB)
            .text : 0xffffffc000080000 - 0xffffffc0007f45b4   (  7634 KB)
            .data : 0xffffffc000850000 - 0xffffffc0008bf200   (   445 KB)
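      The extra kasan line comes from the virtual memory layout banner that
      arch/arm64/mm/init.c prints at boot. A minimal sketch of how such a line
      can be emitted when KASan is built in (the real patch folds it into the
      single pr_notice() that prints the whole table, so this is illustrative
      rather than the exact hunk):

          #ifdef CONFIG_KASAN
                  pr_notice("    kasan   : 0x%16lx - 0x%16lx   (%6ld GB)\n",
                            KASAN_SHADOW_START, KASAN_SHADOW_END,
                            (KASAN_SHADOW_END - KASAN_SHADOW_START) >> 30);
          #endif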
      Signed-off-by: Linus Walleij <linus.walleij@linaro.org>
      Signed-off-by: Andrey Ryabinin <ryabinin.a.a@gmail.com>
      Acked-by: Catalin Marinas <catalin.marinas@arm.com>
      Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
    • arm64: add KASAN support · 39d114dd
      Andrey Ryabinin authored

      This patch adds arch specific code for kernel address sanitizer
      (see Documentation/kasan.txt).
      
      1/8 of the kernel address space is reserved for shadow memory. There was
      no hole big enough for this, so the virtual addresses for the shadow were
      taken from the vmalloc area.
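      The 1/8 ratio comes from KASan's shadow translation, in which one shadow
      byte describes eight bytes of address space. A sketch of the mapping
      helper as it appears in include/linux/kasan.h (reconstructed from memory;
      the offset value below is purely illustrative, not the arm64 one):

          /* One shadow byte covers 8 bytes of memory, hence the 1/8 ratio. */
          #define KASAN_SHADOW_SCALE_SHIFT  3
          #define KASAN_SHADOW_OFFSET       0xdffffc0000000000UL  /* example only */

          static inline void *kasan_mem_to_shadow(const void *addr)
          {
                  return (void *)((unsigned long)addr >> KASAN_SHADOW_SCALE_SHIFT)
                          + KASAN_SHADOW_OFFSET;
          }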
      
      At the early boot stage the whole shadow region is populated with just
      one physical page (kasan_zero_page). Later, this page is reused as a
      read-only zero shadow for memory that KASan does not currently track
      (vmalloc). After the physical memory has been mapped, pages for the
      shadow memory are allocated and mapped.
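      A much simplified, single-level sketch of the zero-shadow trick (the real
      code in arch/arm64/mm/kasan_init.c walks all page-table levels; the helper
      name here is made up for illustration):

          /*
           * Point every early shadow PTE at the same zeroed, read-only page,
           * so every shadow load returns 0 ("fully accessible") until real
           * shadow pages are allocated once the linear map is up.
           */
          static void __init kasan_early_map_shadow(pte_t *ptep,
                                                    unsigned long start,
                                                    unsigned long end)
          {
                  pte_t pte = pfn_pte(page_to_pfn(virt_to_page(kasan_zero_page)),
                                      PAGE_KERNEL_RO);
                  unsigned long addr;

                  for (addr = start; addr < end; addr += PAGE_SIZE)
                          set_pte(&ptep[pte_index(addr)], pte);
          }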
      
      Functions like memset/memmove/memcpy perform a lot of memory accesses,
      and it is important to catch a bad pointer passed to one of them.
      Compiler instrumentation cannot do this because these functions are
      written in assembly, so KASan replaces them with manually instrumented
      variants. The original functions are declared as weak symbols so that
      the strong definitions in mm/kasan/kasan.c replace them; the originals
      also keep '__'-prefixed aliases so the non-instrumented variants can
      still be called when needed.
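      On the C side the replacement pattern looks roughly like the sketch below
      (a from-memory approximation of mm/kasan/kasan.c, not a verbatim quote;
      check_memory_region() stands in for KASan's internal access check):

          /* Strong definition that overrides the weak assembly memcpy. */
          void *memcpy(void *dest, const void *src, size_t len)
          {
                  check_memory_region((unsigned long)src, len, false);  /* read  */
                  check_memory_region((unsigned long)dest, len, true);  /* write */

                  return __memcpy(dest, src, len);  /* uninstrumented alias */
          }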
      Some files are built without KASan instrumentation (e.g. mm/slub.c).
      For such files the original mem* functions are replaced (via #define)
      with the prefixed variants so that no memory access checks are emitted.
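      The #define redirection lives in the arch string header; a sketch of the
      idea (guard condition reproduced from memory):

          /*
           * KASan is enabled, but this file is compiled without instrumentation
           * (__SANITIZE_ADDRESS__ unset): route the plain names to the
           * uninstrumented '__' variants so no access checks are emitted.
           */
          #if defined(CONFIG_KASAN) && !defined(__SANITIZE_ADDRESS__)
          #define memcpy(dst, src, len)   __memcpy(dst, src, len)
          #define memmove(dst, src, len)  __memmove(dst, src, len)
          #define memset(s, c, n)         __memset(s, c, n)
          #endif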
      Signed-off-by: Andrey Ryabinin <ryabinin.a.a@gmail.com>
      Tested-by: Linus Walleij <linus.walleij@linaro.org>
      Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
      Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
    • arm64: move PGD_SIZE definition to pgalloc.h · fd2203dd
      Andrey Ryabinin authored

      This will be used by KASAN later.
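      For reference, the definition being moved is the usual one-liner (shown
      from memory):

          /* Size in bytes of a full top-level page table. */
          #define PGD_SIZE  (PTRS_PER_PGD * sizeof(pgd_t))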
      Signed-off-by: Andrey Ryabinin <ryabinin.a.a@gmail.com>
      Acked-by: Catalin Marinas <catalin.marinas@arm.com>
      Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
    • arm64: use ENDPIPROC() to annotate position independent assembler routines · 20791846
      Ard Biesheuvel authored

      For more control over which functions are called with the MMU off or
      with the UEFI 1:1 mapping active, annotate some assembler routines as
      position independent. This is done by introducing ENDPIPROC(), which
      replaces the ENDPROC() declaration of those routines.
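      A sketch of what such a macro can look like in asm/assembler.h,
      reconstructed from memory rather than quoted: it emits an extra
      __pi_-prefixed alias so position-independent callers can reach the
      routine, then falls through to the normal ENDPROC().

          #define ENDPIPROC(x)                    \
                  .globl  __pi_##x;               \
                  .type   __pi_##x, %function;    \
                  .set    __pi_##x, x;            \
                  .size   __pi_##x, . - x;        \
                  ENDPROC(x)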
      Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
      Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
  2. 05 Aug, 2015 1 commit
    • arm64: mm: ensure patched kernel text is fetched from PoU · 8ec41987
      Will Deacon authored

      The arm64 booting document requires that the bootloader has cleaned the
      kernel image to the PoC. However, when a CPU re-enters the kernel due to
      either a CPU hotplug "on" event or resuming from a low-power state (e.g.
      cpuidle), the kernel text may in-fact be dirty at the PoU due to things
      like alternative patching or even module loading.
      
      Thanks to I-cache speculation with the MMU off, stale instructions could
      be fetched prior to enabling the MMU, potentially leading to crashes
      when executing regions of code that have been modified at runtime.
      
      This patch addresses the issue by ensuring that the local I-cache is
      invalidated immediately after a CPU has enabled its MMU but before
      jumping out of the identity mapping. Any stale instructions fetched from
      the PoC will then be discarded and refetched correctly from the PoU.
      Patching kernel text executed prior to the MMU being enabled is
      prohibited, so the early entry code will always be clean.
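      The fix itself lives in the assembly entry code (head.S), immediately
      after the SCTLR_EL1 write that turns the MMU on. A C rendering of the
      sequence, for illustration only:

          /* Invalidate the local I-cache so stale lines fetched from the PoC
           * are discarded and instructions are refetched from the PoU. */
          static inline void local_icache_invalidate_all(void)
          {
                  asm volatile(
                  "       ic      iallu\n"        /* invalidate all, local CPU */
                  "       dsb     nsh\n"          /* complete the invalidation */
                  "       isb\n"                  /* before any further ifetch */
                  : : : "memory");
          }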
      Reviewed-by: Mark Rutland <mark.rutland@arm.com>
      Tested-by: Mark Rutland <mark.rutland@arm.com>
      Signed-off-by: Will Deacon <will.deacon@arm.com>
  3. 28 Jul, 2015 1 commit
    • arm64: mm: mark create_mapping as __init · c53e0baa
      Mark Rutland authored

      Currently create_mapping is marked with __ref, apparently because it
      refers to early_alloc. However, create_mapping has no logic to prevent
      erroneous use of early_alloc after it has been freed, and is only ever
      called by __init functions anyway. Thus the __ref marker is misleading
      and unnecessary.
      
      Instead, this patch marks create_mapping as __init, resulting in
      warnings if it is used from a non-__init function, and allowing its
      memory to be reclaimed.
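      The change itself is just the section annotation on the definition; a
      sketch (parameter list abbreviated from memory):

          /* Before: __ref silences section warnings without preventing misuse. */
          static void __ref create_mapping(phys_addr_t phys, unsigned long virt,
                                           phys_addr_t size, pgprot_t prot);

          /* After: __init puts the function in .init.text, so modpost warns on
           * any call from non-__init code and the text is freed after boot. */
          static void __init create_mapping(phys_addr_t phys, unsigned long virt,
                                            phys_addr_t size, pgprot_t prot);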
      Signed-off-by: Mark Rutland <mark.rutland@arm.com>
      Cc: Catalin Marinas <catalin.marinas@arm.com>
      Cc: Will Deacon <will.deacon@arm.com>
      Signed-off-by: Will Deacon <will.deacon@arm.com>
  4. 24 Jun, 2015 1 commit
    • mm/hugetlb: reduce arch dependent code about huge_pmd_unshare · e81f2d22
      Zhang Zhen authored

      Currently we have many duplicates in definitions of huge_pmd_unshare.  In
      all architectures this function just returns 0 when
      CONFIG_ARCH_WANT_HUGE_PMD_SHARE is N.
      
      This patch puts the default implementation in mm/hugetlb.c and lets these
      architectures use the common code.
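      The shared fallback is a trivial stub; a sketch of the default that ends
      up in mm/hugetlb.c (guard and signature reproduced from memory):

          #ifndef CONFIG_ARCH_WANT_HUGE_PMD_SHARE
          /* Architectures that never share huge PMDs have nothing to unshare,
           * so report that no entry was cleared. */
          int huge_pmd_unshare(struct mm_struct *mm, unsigned long *addr,
                               pte_t *ptep)
          {
                  return 0;
          }
          #endif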
      Signed-off-by: Zhang Zhen <zhenzhang.zhang@huawei.com>
      Cc: Russell King <linux@arm.linux.org.uk>
      Cc: Catalin Marinas <catalin.marinas@arm.com>
      Cc: Tony Luck <tony.luck@intel.com>
      Cc: James Hogan <james.hogan@imgtec.com>
      Cc: Ralf Baechle <ralf@linux-mips.org>
      Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org>
      Cc: Martin Schwidefsky <schwidefsky@de.ibm.com>
      Cc: Chris Metcalf <cmetcalf@ezchip.com>
      Cc: David Rientjes <rientjes@google.com>
      Cc: James Yang <James.Yang@freescale.com>
      Cc: Aneesh Kumar <aneesh.kumar@linux.vnet.ibm.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
  5. 17 Jun, 2015 1 commit
    • arm64: mm: Fix freeing of the wrong memmap entries with !SPARSEMEM_VMEMMAP · b9bcc919
      Dave P Martin authored

      The memmap freeing code in free_unused_memmap() computes the end of
      each memblock by adding the memblock size onto the base.  However,
      if SPARSEMEM is enabled then the value (start) used for the base
      may already have been rounded downwards to work out which memmap
      entries to free after the previous memblock.
      
      This may cause memmap entries that are in use to get freed.
      
      In general, you're not likely to hit this problem unless there
      are at least 2 memblocks and one of them is not aligned to a
      sparsemem section boundary.  Note that carve-outs can increase
      the number of memblocks by splitting the regions listed in the
      device tree.
      
      This problem doesn't occur with SPARSEMEM_VMEMMAP, because the
      vmemmap code deals with freeing the unused regions of the memmap
      instead of requiring the arch code to do it.
      
      This patch gets the memblock base out of the memblock directly when
      computing the block end address to ensure the correct value is used.
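      Conceptually the change is one line in the loop; a hedged sketch of the
      relevant part of free_unused_memmap() (helper names as in
      arch/arm64/mm/init.c of that era, reproduced from memory):

          for_each_memblock(memory, reg) {
                  start = __phys_to_pfn(reg->base);

          #ifdef CONFIG_SPARSEMEM
                  /* May pull 'start' down to a section boundary ... */
                  start = min(start, ALIGN(prev_end, PAGES_PER_SECTION));
          #endif
                  if (prev_end && prev_end < start)
                          free_memmap(prev_end, start);

                  /*
                   * ... so derive the block end from the memblock region itself
                   * rather than from the possibly rounded-down 'start'.
                   */
                  prev_end = ALIGN(__phys_to_pfn(reg->base + reg->size),
                                   MAX_ORDER_NR_PAGES);
          }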
      Signed-off-by: Dave Martin <Dave.Martin@arm.com>
      Cc: <stable@vger.kernel.org>
      Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>