1. 19 May, 2015 1 commit
  2. 05 May, 2015 1 commit
  3. 29 Apr, 2015 1 commit
    • arm64: add missing PAGE_ALIGN() to __dma_free() · 2cff98b9
      Dean Nelson authored
      
      
      __dma_alloc() does a PAGE_ALIGN() on the passed in size argument before
      doing anything else. __dma_free() does not. And because it doesn't, it is
      possible to leak memory should size not be an integer multiple of PAGE_SIZE.
      
      The solution is to add a PAGE_ALIGN() to __dma_free() like is done in
      __dma_alloc().
      
      Additionally, this patch removes a redundant PAGE_ALIGN() from
      __dma_alloc_coherent(), since __dma_alloc_coherent() can only be called
      from __dma_alloc(), which already does a PAGE_ALIGN() before the call.
      
      Cc: stable@vger.kernel.org
      Acked-by: Catalin Marinas <catalin.marinas@arm.com>
      Signed-off-by: Dean Nelson <dnelson@redhat.com>
      Signed-off-by: Will Deacon <will.deacon@arm.com>
      2cff98b9
  4. 27 Apr, 2015 1 commit
    • arm64: dma-mapping: always clear allocated buffers · 6829e274
      Marek Szyprowski authored
      
      
      Buffers allocated by dma_alloc_coherent() are always zeroed on Alpha,
      ARM (32bit), MIPS, PowerPC, x86/x86_64 and probably other architectures.
      It turned out that some drivers rely on this 'feature'. The allocated
      buffer might also be exposed to userspace with a dma_mmap() call, so
      clearing it is desirable from a security point of view, to avoid
      exposing random memory to userspace. This patch unifies
      dma_alloc_coherent() behaviour on the ARM64 architecture with the other
      implementations by unconditionally zeroing the allocated buffer.
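      An illustrative sketch of the unified behaviour (the helper below is
      hypothetical; the real change lives in the arm64 dma_alloc path):

          #include <linux/types.h>
          #include <linux/string.h>    /* memset */

          /* Hypothetical post-allocation step: zero whatever the backend
           * handed back, regardless of whether __GFP_ZERO was passed. */
          static void *example_zero_coherent_buffer(void *ptr, size_t size)
          {
                  if (ptr)
                          memset(ptr, 0, size);  /* match other architectures */
                  return ptr;
          }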
      
      Cc: <stable@vger.kernel.org> # v3.14+
      Signed-off-by: Marek Szyprowski <m.szyprowski@samsung.com>
      Signed-off-by: Will Deacon <will.deacon@arm.com>
      6829e274
  5. 14 Apr, 2015 4 commits
  6. 23 Mar, 2015 1 commit
  7. 20 Mar, 2015 1 commit
  8. 19 Mar, 2015 2 commits
  9. 06 Mar, 2015 1 commit
    • arm64: Don't use is_module_addr in setting page attributes · 8b5f5a07
      Laura Abbott authored
      
      
      The set_memory_* functions currently only support module
      addresses. The addresses are validated using is_module_addr.
      That function is special, though, and relies on internal state in the
      module subsystem. At the time of module initialization, when
      set_memory_* is called, it is too early for is_module_addr to work
      properly, so it always returns false. Rather than be subject to the
      whims of the module state,
      just bounds check against the module virtual address range.
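      A sketch of the bounds check described (MODULES_VADDR and MODULES_END
      are the arm64 constants; the wrapper function is illustrative):

          #include <linux/types.h>
          #include <asm/memory.h>      /* MODULES_VADDR, MODULES_END */

          /* Illustrative: validate against the module VA window directly
           * instead of calling is_module_addr(), which needs module-subsystem
           * state that is not yet available when set_memory_*() is called. */
          static bool example_addr_in_module_range(unsigned long addr)
          {
                  return addr >= MODULES_VADDR && addr < MODULES_END;
          }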
      Signed-off-by: Laura Abbott <lauraa@codeaurora.org>
      Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
      8b5f5a07
  10. 27 Feb, 2015 1 commit
  11. 11 Feb, 2015 1 commit
    • mm/hugetlb: reduce arch dependent code around follow_huge_* · 61f77eda
      Naoya Horiguchi authored
      
      
      Currently we have many duplicates in definitions around
      follow_huge_addr(), follow_huge_pmd(), and follow_huge_pud(), so this
      patch tries to remove them.  The basic idea is to put the default
      implementation for these functions in mm/hugetlb.c as weak symbols
      (regardless of CONFIG_ARCH_WANT_GENERAL_HUGETLB), and to implement
      arch-specific code only when the arch needs it.
      
      For follow_huge_addr(), only powerpc and ia64 have their own
      implementation, and in all other architectures this function just returns
      ERR_PTR(-EINVAL).  So this patch sets returning ERR_PTR(-EINVAL) as
      default.
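      The weak default described above looks along these lines (a sketch of
      the mm/hugetlb.c fallback, not a verbatim copy):

          #include <linux/mm.h>
          #include <linux/hugetlb.h>
          #include <linux/err.h>

          /* Weak default: architectures with a real implementation (powerpc,
           * ia64) override this symbol with their own definition. */
          struct page * __weak
          follow_huge_addr(struct mm_struct *mm, unsigned long address,
                           int write)
          {
                  return ERR_PTR(-EINVAL);
          }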
      
      As for follow_huge_(pmd|pud)(), if (pmd|pud)_huge() is implemented to
      always return 0 in your architecture (like in ia64 or sparc), it's never
      called (the callsite is optimized away) no matter how it is implemented.
      So in such architectures, we don't need an arch-specific implementation.
      
      In some architectures (like mips, s390 and tile), the current
      arch-specific follow_huge_(pmd|pud)() is effectively identical to the
      common code, so this patch lets these architectures use the common code.
      
      One exception is metag, where pmd_huge() could return non-zero but it
      expects follow_huge_pmd() to always return NULL.  This means that we need
      arch-specific implementation which returns NULL.  This behavior looks
      strange to me (because non-zero pmd_huge() implies that the architecture
      supports PMD-based hugepages, so follow_huge_pmd() can/should return some
      relevant value), but that's beyond this cleanup patch, so let's keep it.
      
      Justification of non-trivial changes:
      - in s390, follow_huge_pmd() checks !MACHINE_HAS_HPAGE at first, and this
        patch removes the check. This is OK because we can assume MACHINE_HAS_HPAGE
        is true when follow_huge_pmd() can be called (note that pmd_huge() has
        the same check and always returns 0 for !MACHINE_HAS_HPAGE.)
      - in s390 and mips, we use HPAGE_MASK instead of PMD_MASK as done in common
        code. This patch forces these archs to use PMD_MASK, but it's OK because
        they are identical in both archs.
        In s390, both HPAGE_SHIFT and PMD_SHIFT are 20.
        In mips, HPAGE_SHIFT is defined as (PAGE_SHIFT + PAGE_SHIFT - 3) and
        PMD_SHIFT is defined as (PAGE_SHIFT + PAGE_SHIFT + PTE_ORDER - 3), but
        PTE_ORDER is always 0, so these are identical.
      Signed-off-by: Naoya Horiguchi <n-horiguchi@ah.jp.nec.com>
      Acked-by: Hugh Dickins <hughd@google.com>
      Cc: James Hogan <james.hogan@imgtec.com>
      Cc: David Rientjes <rientjes@google.com>
      Cc: Mel Gorman <mel@csn.ul.ie>
      Cc: Johannes Weiner <hannes@cmpxchg.org>
      Cc: Michal Hocko <mhocko@suse.cz>
      Cc: Rik van Riel <riel@redhat.com>
      Cc: Andrea Arcangeli <aarcange@redhat.com>
      Cc: Luiz Capitulino <lcapitulino@redhat.com>
      Cc: Nishanth Aravamudan <nacc@linux.vnet.ibm.com>
      Cc: Lee Schermerhorn <lee.schermerhorn@hp.com>
      Cc: Steve Capper <steve.capper@linaro.org>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      61f77eda
  12. 29 Jan, 2015 1 commit
  13. 28 Jan, 2015 3 commits
    • arm64: mm: use *_sect to check for section maps · a1c76574
      Mark Rutland authored
      
      
      The {pgd,pud,pmd}_bad family of macros have slightly fuzzy
      cross-architecture semantics, and seem to imply a populated entry that
      is not a next-level table, rather than a particular type of entry (e.g.
      a section map).
      
      In arm64 code, for those cases where we care about whether an entry is a
      section mapping, we can instead use the {pud,pmd}_sect macros to
      explicitly check for this case. This helps to document precisely what we
      care about, making the code easier to read, and allows for future
      relaxation of the *_bad macros to check for other "bad" entries.
      
      To that end this patch updates the table dumping and initial table setup
      to check for section mappings with {pud,pmd}_sect, and adds/restores
      BUG_ON(*_bad(*p)) checks after we've handled the *_sect and *_none
      cases so as to catch remaining "bad" cases.
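      Roughly, the resulting check pattern is (an illustrative walker step,
      not the literal dump/setup code):

          #include <linux/bug.h>
          #include <asm/pgtable.h>

          /* Handle empty entries and section maps explicitly, then treat any
           * remaining non-table entry as a bug before descending. */
          static void example_visit_pmd(pmd_t *pmd)
          {
                  if (pmd_none(*pmd) || pmd_sect(*pmd))
                          return;                /* nothing to descend into */

                  BUG_ON(pmd_bad(*pmd));         /* catch remaining "bad" entries */
                  /* ... walk the pte table referenced by this entry ... */
          }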
      
      In the fault handling code, show_pte is left with *_bad checks as it
      only cares about whether it can walk the next level table, and this path
      is used for both kernel and userspace fault handling. The former case
      will be followed by a die() where we'll report the address that
      triggered the fault, which can be useful context for debugging.
      Signed-off-by: Mark Rutland <mark.rutland@arm.com>
      Acked-by: Steve Capper <steve.capper@linaro.org>
      Cc: Ard Biesheuvel <ard.biesheuvel@linaro.org>
      Cc: Kees Cook <keescook@chromium.org>
      Cc: Laura Abbott <lauraa@codeaurora.org>
      Cc: Will Deacon <will.deacon@arm.com>
      Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
      a1c76574
    • arm64: drop unnecessary cache+tlb maintenance · a3bba370
      Mark Rutland authored
      
      
      In paging_init, we call flush_cache_all, but this is backed by Set/Way
      operations which may not achieve anything in the presence of cache line
      migration and/or system caches. If the caches are already in an
      inconsistent state at this point, there is nothing we can do (short of
      flushing the entire physical address space by VA) to empty architected
      and system caches. As such, flush_cache_all only serves to mask other
      potential bugs. Hence, this patch removes the boot-time call to
      flush_cache_all.
      
      Immediately after the cache maintenance we flush the TLBs, but this is
      also unnecessary. Before enabling the MMU, the TLBs are invalidated, and
      thus are initially clean. When changing the contents of active tables
      (e.g. in fixup_executable() for DEBUG_RODATA) we perform the required
      TLB maintenance following the update, and therefore no additional
      maintenance is required to ensure the new table entries are in effect.
      Since activating the MMU we will not have modified system register
      fields permitted to be cached in a TLB, and therefore do not need
      maintenance for any cached system register fields. Hence, the TLB flush
      is unnecessary.
      
      Shortly after the unnecessary TLB flush, we update TTBR0 to point to an
      empty zero page rather than the idmap, and flush the TLBs. This
      maintenance is necessary to remove the global idmap entries from the
      TLBs (as they would conflict with userspace mappings), and is retained.
      Signed-off-by: Mark Rutland <mark.rutland@arm.com>
      Acked-by: Marc Zyngier <marc.zyngier@arm.com>
      Acked-by: Steve Capper <steve.capper@linaro.org>
      Cc: Will Deacon <will.deacon@arm.com>
      Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
      a3bba370
    • arm64:mm: free the useless initial page table · 523d6e9f
      zhichang.yuan authored
      
      
      For a 64K page system, after mapping a PMD section, the corresponding
      initial page table is no longer needed and that page can be freed.
      Signed-off-by: Zhichang Yuan <zhichang.yuan@linaro.org>
      [catalin.marinas@arm.com: added BUG_ON() to catch late memblock freeing]
      Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
      523d6e9f
  14. 27 Jan, 2015 1 commit
    • arm64: kernel: remove ARM64_CPU_SUSPEND config option · af3cfdbf
      Lorenzo Pieralisi authored
      
      
      ARM64_CPU_SUSPEND config option was introduced to make code providing
      context save/restore selectable only on platforms requiring power
      management capabilities.
      
      Currently ARM64_CPU_SUSPEND depends on the PM_SLEEP config option which
      in turn is set by the SUSPEND config option.
      
      The introduction of CPU_IDLE for arm64 requires that the code configured
      by ARM64_CPU_SUSPEND (context save/restore) be compiled in, so that the
      CPU idle driver can rely on CPU operations carrying out context
      save/restore.
      
      The ARM64_CPUIDLE config option (ARM64 generic idle driver) is therefore
      forced to select ARM64_CPU_SUSPEND, even if there may be failed
      dependencies (i.e. PM_SLEEP), which is not a clean way of handling the
      kernel configuration option.
      
      For these reasons, this patch removes the ARM64_CPU_SUSPEND config option
      and makes the context save/restore dependent on CPU_PM, which is selected
      whenever either SUSPEND or CPU_IDLE are configured, cleaning up dependencies
      in the process.
      
      This way, code previously configured through ARM64_CPU_SUSPEND is
      compiled in whenever a power management subsystem requires it to be
      present in the kernel (SUSPEND || CPU_IDLE), which is the behaviour
      expected on ARM64 kernels.
      
      The cpu_suspend and cpu_init_idle CPU operations are added only if
      CPU_IDLE is selected, since they are CPU_IDLE specific methods and
      should be grouped and defined accordingly.
      
      PSCI CPU operations are updated to reflect the introduced changes.
      Signed-off-by: Lorenzo Pieralisi <lorenzo.pieralisi@arm.com>
      Cc: Arnd Bergmann <arnd@arndb.de>
      Cc: Will Deacon <will.deacon@arm.com>
      Cc: Krzysztof Kozlowski <k.kozlowski@samsung.com>
      Cc: Daniel Lezcano <daniel.lezcano@linaro.org>
      Cc: Mark Rutland <mark.rutland@arm.com>
      Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
      af3cfdbf
  15. 23 Jan, 2015 6 commits
    • arm64: Combine coherent and non-coherent swiotlb dma_ops · 9d3bfbb4
      Catalin Marinas authored
      
      
      Since dev_archdata now has a dma_coherent state, combine the two
      coherent and non-coherent operations and remove their declaration,
      together with set_dma_ops, from the arch dma-mapping.h file.
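      Illustratively, a single set of ops can branch on the per-device flag
      instead of installing separate coherent/non-coherent dma_map_ops (a
      sketch, relying only on the dev_archdata dma_coherent state mentioned
      above):

          #include <linux/device.h>

          static bool example_device_is_coherent(struct device *dev)
          {
                  return dev->archdata.dma_coherent;
          }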
      Acked-by: Will Deacon <will.deacon@arm.com>
      Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
      9d3bfbb4
    • arm64: Fix SCTLR_EL1 initialisation · 9f71ac96
      Suzuki K. Poulose authored
      
      
      We initialise the SCTLR_EL1 value by read-modify-writeback
      of the desired bits, leaving the other bits (including reserved
      bits (RESx)) untouched. However, sometimes the boot monitor could
      leave garbage values in the RESx bits which could have different
      implications. This patch makes sure that all the bits, including
      the RESx bits, are set to the proper state, except for the
      'endianness' control bits, EE(25) & E0E(24), which are set early
      in el2_setup.
      
      Also update the comment for Bit[6] to reflect that it is RES0.
      Signed-off-by: Suzuki K. Poulose <suzuki.poulose@arm.com>
      Acked-by: Will Deacon <will.deacon@arm.com>
      Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
      9f71ac96
    • arm64: add ioremap physical address information · da1f2b82
      Min-Hua Chen authored
      
      
      It is useful for /proc/vmallocinfo to show the physical address of each
      ioremap'd region. Add this physical address information to the arm64
      ioremap implementation.
      
      0xffffc900047f2000-0xffffc900047f4000    8192 _nv013519rm+0x57/0xa0
      [nvidia] phys=f8100000 ioremap
      0xffffc900047f4000-0xffffc900047f6000    8192 _nv013519rm+0x57/0xa0
      [nvidia] phys=f8008000 ioremap
      0xffffc90004800000-0xffffc90004821000  135168 e1000_probe+0x22c/0xb95
      [e1000e] phys=f4300000 ioremap
      0xffffc900049c0000-0xffffc900049e1000  135168 _nv013521rm+0x4d/0xd0
      [nvidia] phys=e0140000 ioremap
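      A sketch of how such information typically gets recorded (illustrative;
      the real change is in the arm64 ioremap implementation):

          #include <linux/vmalloc.h>
          #include <linux/io.h>

          /* Stash the physical address in the vm_struct so /proc/vmallocinfo
           * can print a "phys=" field for the region. */
          static void __iomem *example_ioremap(phys_addr_t phys_addr, size_t size,
                                               const void *caller)
          {
                  struct vm_struct *area =
                          get_vm_area_caller(size, VM_IOREMAP, caller);

                  if (!area)
                          return NULL;
                  area->phys_addr = phys_addr;   /* the added information */

                  /* ... set up the mapping and return the virtual address ... */
                  return (void __iomem *)area->addr;
          }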
      Signed-off-by: Min-Hua Chen <orca.chen@gmail.com>
      Acked-by: Will Deacon <will.deacon@arm.com>
      Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
      da1f2b82
    • arm64: mm: dump: add missing includes · 764011ca
      Mark Rutland authored
      
      
      The arm64 dump code is currently relying on some definitions which are
      pulled in via transitive dependencies. It seems we have implicit
      dependencies on the following definitions:
      
      * MODULES_VADDR         (asm/memory.h)
      * MODULES_END           (asm/memory.h)
      * PAGE_OFFSET           (asm/memory.h)
      * PTE_*                 (asm/pgtable-hwdef.h)
      * ENOMEM                (linux/errno.h)
      * device_initcall       (linux/init.h)
      
      This patch ensures we explicitly include the relevant headers for the
      above items, fixing the observed build issue and hopefully preventing
      future issues as headers are refactored.
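      The explicit includes map one-to-one onto the list above, roughly:

          #include <linux/errno.h>          /* ENOMEM */
          #include <linux/init.h>           /* device_initcall */
          #include <asm/memory.h>           /* MODULES_VADDR, MODULES_END, PAGE_OFFSET */
          #include <asm/pgtable-hwdef.h>    /* PTE_* */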
      Signed-off-by: Mark Rutland <mark.rutland@arm.com>
      Reported-by: Mark Brown <broonie@kernel.org>
      Acked-by: Steve Capper <steve.capper@linaro.org>
      Cc: Laura Abbott <lauraa@codeaurora.org>
      Cc: Will Deacon <will.deacon@arm.com>
      Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
      764011ca
    • arm64: Fix overlapping VA allocations · aa03c428
      Mark Rutland authored
      PCI IO space was intended to be 16MiB, at 32MiB below MODULES_VADDR, but
      commit d1e6dc91 ("arm64: Add architectural support for PCI") extended
      this to cover the full 32MiB. The final 8KiB of this 32MiB is also
      allocated for the fixmap, allowing for potential clashes between the two.
      
      This change was masked by assumptions in mem_init and the page table
      dumping code, which assumed the I/O space to be 16MiB long through
      separate hard-coded definitions.
      
      This patch changes the definition of the PCI I/O space allocation to
      live in asm/memory.h, along with the other VA space allocations. As the
      fixmap allocation depends on the number of fixmap entries, this is moved
      below the PCI I/O space allocation. Both the fixmap and PCI I/O space
      are guarded with 2MB of padding. Sites that assumed the I/O space was
      16MiB are moved over to use the new PCI_IO_{START,END} definitions, which
      will stay in sync with the size of the I/O space (now restored to 16MiB).
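      A sketch of the kind of layout definitions described (the exact
      expressions in asm/memory.h may differ; treat these as illustrative,
      hence the EXAMPLE_ prefix):

          #include <linux/sizes.h>     /* SZ_2M, SZ_16M */
          #include <asm/memory.h>      /* MODULES_VADDR */

          /* Top down from MODULES_VADDR:
           *   2MB guard | 16MiB PCI I/O space | 2MB guard | fixmap ...
           */
          #define EXAMPLE_PCI_IO_SIZE   SZ_16M
          #define EXAMPLE_PCI_IO_END    (MODULES_VADDR - SZ_2M)
          #define EXAMPLE_PCI_IO_START  (EXAMPLE_PCI_IO_END - EXAMPLE_PCI_IO_SIZE)
          #define EXAMPLE_FIXADDR_TOP   (EXAMPLE_PCI_IO_START - SZ_2M)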
      
      As a useful side effect, the use of the new PCI_IO_{START,END}
      definitions prevents a build issue in the dumping code due to a (now
      redundant) missing include of io.h for PCI_IOBASE.
      Signed-off-by: Mark Rutland <mark.rutland@arm.com>
      Cc: Kees Cook <keescook@chromium.org>
      Cc: Laura Abbott <lauraa@codeaurora.org>
      Cc: Liviu Dudau <liviu.dudau@arm.com>
      Cc: Steve Capper <steve.capper@linaro.org>
      Cc: Will Deacon <will.deacon@arm.com>
      [catalin.marinas@arm.com: reorder FIXADDR and PCI_IO address_markers_idx enum]
      Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
      aa03c428
    • arm64: dump: Fix implicit inclusion of definition for PCI_IOBASE · 284be285
      Mark Brown authored
      Since c9465b4e ("arm64: add support to dump the kernel page tables"),
      allmodconfig has failed to build on arm64 as a result of:
      
      ../arch/arm64/mm/dump.c:55:20: error: 'PCI_IOBASE' undeclared here (not in a function)
      
      Fix this by explicitly including io.h to ensure that a definition is
      present.
      Signed-off-by: Mark Brown <broonie@kernel.org>
      Signed-off-by: Will Deacon <will.deacon@arm.com>
      284be285
  16. 22 Jan, 2015 2 commits
  17. 16 Jan, 2015 2 commits
  18. 15 Jan, 2015 1 commit
  19. 13 Jan, 2015 1 commit
    • arm64: remove broken cachepolicy code · 26a945ca
      Mark Rutland authored
      
      
      The cachepolicy kernel parameter was intended to aid in the debugging of
      coherency issues, but it is fundamentally broken for several reasons:
      
       * On SMP platforms, only the boot CPU's tcr_el1 is altered. Secondary
         CPUs may therefore differ w.r.t. the attributes they apply to
         MT_NORMAL memory, resulting in a loss of coherency.
      
       * The cache maintenance using flush_dcache_all (based on Set/Way
         operations) is not guaranteed to empty a given CPU's cache hierarchy
         while said CPU has caches enabled; it cannot empty the caches of
         other coherent PEs, nor is it guaranteed to flush data to the PoC
         even when caches are disabled.
      
       * The TLBs are not invalidated around the modification of MAIR_EL1 and
         TCR_EL1, as required by the architecture (as both are permitted to be
         cached in a TLB). This may result in CPUs using attributes other than
         those expected for some memory accesses, resulting in a loss of
         coherency.
      
       * Exclusive accesses are not architecturally guaranteed to function as
         expected on memory marked as Write-Through or Non-Cacheable. Thus
         changing the attributes of MT_NORMAL away from the (architecturally
         safe) defaults may cause uses of these instructions (e.g. atomics) to
         behave erratically.
      
      Given this, the cachepolicy code cannot be used for debugging purposes
      as it alone is likely to cause coherency issues. This patch removes the
      broken cachepolicy code.
      Signed-off-by: Mark Rutland <mark.rutland@arm.com>
      Acked-by: Will Deacon <will.deacon@arm.com>
      Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
      26a945ca
  20. 12 Jan, 2015 3 commits
  21. 11 Dec, 2014 2 commits
    • arm64: mm: dump: don't skip final region · fb59d007
      Mark Rutland authored
      
      
      If the final page table entry we walk is a valid mapping, the page table
      dumping code will not log the region this entry is part of, as the final
      note_page call in ptdump_show will trigger an early return. Luckily this
      isn't seen on contemporary systems as they typically don't have enough
      RAM to extend the linear mapping right to the end of the address space.
      
      In note_page, we log a region when we reach its end (i.e. we hit an
      entry immediately afterwards which has different prot bits or is
      invalid). The final entry has no subsequent entry, so we will not log
      this immediately. We try to cater for this with a subsequent call to
      note_page in ptdump_show, but this returns early as 0 < LOWEST_ADDR, and
      hence we will skip a valid mapping if it spans to the final entry we
      note.
      
      Unlike 32-bit ARM, the pgd with the kernel mapping is never shared with
      user mappings, so we do not need the check to ensure we don't log user
      page tables. Due to the way addr is constructed in the walk_* functions,
      it can never be less than LOWEST_ADDR when walking the page tables, so
      it is not necessary to avoid dereferencing invalid table addresses. The
      existing checks for st->current_prot and st->marker[1].start_address are
      sufficient to ensure we will not print and/or dereference garbage when
      trying to log information.
      
      This patch removes the unnecessary check against LOWEST_ADDR, ensuring
      we log all regions in the kernel page table, including those which span
      right to the end of the address space.
      
      Cc: Kees Cook <keescook@chromium.org>
      Acked-by: Laura Abbott <lauraa@codeaurora.org>
      Acked-by: Steve Capper <steve.capper@linaro.org>
      Signed-off-by: Mark Rutland <mark.rutland@arm.com>
      Signed-off-by: Will Deacon <will.deacon@arm.com>
      fb59d007
    • arm64: mm: dump: fix shift warning · 35545f0c
      Mark Rutland authored
      
      
      When building with 48-bit VAs, it's possible to get the following
      warning when building the arm64 page table dumping code:
      
      arch/arm64/mm/dump.c: In function ‘walk_pgd’:
      arch/arm64/mm/dump.c:266:2: warning: right shift count >= width of type
        pgd_t *pgd = pgd_offset(mm, 0);
        ^
      
      As pgd_offset is a macro and the second argument is not cast to any
      particular type, the zero will be given integer type by the compiler.
      As pgd_offset passes the argument to pgd_index, we then try to shift
      the 32-bit integer by at least 39 bits (for 4k pages).
      
      Elsewhere the pgd_offset is passed a second argument of unsigned long
      type, so let's do the same here by passing '0UL' rather than '0'.
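      Concretely, the change amounts to something like this (a sketch of the
      start of walk_pgd, not the literal dump.c code):

          #include <linux/mm.h>
          #include <asm/pgtable.h>

          static void example_walk_pgd(struct mm_struct *mm)
          {
                  /* Pass an unsigned long so the shift inside pgd_index() is
                   * done on a 64-bit value, not on a 32-bit int literal. */
                  pgd_t *pgd = pgd_offset(mm, 0UL);  /* was: pgd_offset(mm, 0) */

                  /* ... iterate over the pgd entries from here ... */
                  (void)pgd;
          }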
      
      Cc: Kees Cook <keescook@chromium.org>
      Acked-by: Laura Abbott <lauraa@codeaurora.org>
      Acked-by: Steve Capper <steve.capper@arm.com>
      Signed-off-by: Mark Rutland <mark.rutland@arm.com>
      Signed-off-by: Will Deacon <will.deacon@arm.com>
      35545f0c
  22. 05 Dec, 2014 1 commit
  23. 01 Dec, 2014 1 commit
  24. 26 Nov, 2014 1 commit