
  1. 03 Jun, 2016 1 commit
    • arm64: mm: dump: log span level · 48dd73c5
      Mark Rutland authored
      The page table dump code logs spans of entries at the same level
      (pgd/pud/pmd/pte) which have the same attributes. While we log the
      (decoded) attributes, we don't log the level, which leaves the output
      ambiguous and/or confusing in some cases.
      
      For example:
      
      0xffff800800000000-0xffff800980000000           6G       RW NX SHD AF        BLK UXN MEM/NORMAL
      
      If using 4K pages, this may describe a span of six 1G block entries at
      the PGD/PUD level, or 3072 2M block entries at the PMD level.
      
      This patch adds the page table level to each output line, removing this
      ambiguity. For the example above, this will produce:
      
      0xffff800800000000-0xffff800980000000           6G PUD       RW NX SHD AF        BLK UXN MEM/NORMAL
      
      When 3 level tables are in use, and we use the asm-generic/nopud.h
      definitions, the dump code treats each entry in the PGD as a 1 element
      table at the PUD level, and logs spans as being PUDs, which can be
      confusing. To counteract this, the "PUD" mnemonic is replaced with "PGD"
      when CONFIG_PGTABLE_LEVELS <= 3. Likewise for "PMD" when
      CONFIG_PGTABLE_LEVELS <= 2.
      Signed-off-by: Mark Rutland <mark.rutland@arm.com>
      Cc: Catalin Marinas <catalin.marinas@arm.com>
      Cc: Huang Shijie <shijie.huang@arm.com>
      Cc: Laura Abbott <labbott@fedoraproject.org>
      Cc: Steve Capper <steve.capper@arm.com>
      Cc: Will Deacon <will.deacon@arm.com>
      Signed-off-by: Will Deacon <will.deacon@arm.com>
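      As a hedged illustration of the mnemonic substitution described above, a
      minimal C sketch (the level numbering and the level_name() helper are
      assumptions for illustration, not the actual dump.c code):

          /* Map a walk depth to the mnemonic printed on each line. Folded
           * levels are reported under the name of the level that actually
           * holds the entry. */
          static const char *level_name(unsigned int level)
          {
                  static const char *const names[] = { "PGD", "PUD", "PMD", "PTE" };

                  if (CONFIG_PGTABLE_LEVELS <= 3 && level == 1)
                          return "PGD";   /* folded PUD */
                  if (CONFIG_PGTABLE_LEVELS <= 2 && level == 2)
                          return "PGD";   /* folded PMD */
                  return names[level];
          }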
  2. 25 Apr, 2016 2 commits
  3. 14 Apr, 2016 1 commit
  4. 26 Feb, 2016 1 commit
  5. 18 Feb, 2016 1 commit
    • arm64: move kernel image to base of vmalloc area · f9040773
      Ard Biesheuvel authored
      This moves the module area to right before the vmalloc area, and moves
      the kernel image to the base of the vmalloc area. This is an intermediate
      step towards implementing KASLR, which allows the kernel image to be
      located anywhere in the vmalloc area.
      
      Since other subsystems such as hibernate may still need to refer to the
      kernel text or data segments via their linear addresses, both are mapped
      in the linear region as well. The linear alias of the text region is
      mapped read-only/non-executable to prevent inadvertent modification or
      execution.
      Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
      Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
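      A hedged sketch of the read-only linear alias described above (the
      create_mapping() helper and the exact prot value are assumptions for
      illustration, not the literal patch):

          /* Map the linear-region alias of [_text, _etext) read-only and
           * non-executable, so the kernel text cannot be modified or
           * executed through the linear map. */
          create_mapping(__pa(_text), __phys_to_virt(__pa(_text)),
                         (unsigned long)(_etext - _text), PAGE_KERNEL_RO);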
  6. 16 Feb, 2016 1 commit
  7. 25 Jan, 2016 1 commit
  8. 08 Oct, 2015 1 commit
  9. 05 May, 2015 1 commit
  10. 28 Jan, 2015 1 commit
    • arm64: mm: use *_sect to check for section maps · a1c76574
      Mark Rutland authored
      The {pgd,pud,pmd}_bad family of macros have slightly fuzzy
      cross-architecture semantics, and seem to imply a populated entry that
      is not a next-level table, rather than a particular type of entry (e.g.
      a section map).
      
      In arm64 code, for those cases where we care about whether an entry is a
      section mapping, we can instead use the {pud,pmd}_sect macros to
      explicitly check for this case. This helps to document precisely what we
      care about, making the code easier to read, and allows for future
      relaxation of the *_bad macros to check for other "bad" entries.
      
      To that end this patch updates the table dumping and initial table setup
      to check for section mappings with {pud,pmd}_sect, and adds/restores
      BUG_ON(*_bad(*p)) checks after we've handled the *_sect and *_none
      cases so as to catch remaining "bad" cases.
      
      In the fault handling code, show_pte is left with *_bad checks as it
      only cares about whether it can walk the next level table, and this path
      is used for both kernel and userspace fault handling. The former case
      will be followed by a die() where we'll report the address that
      triggered the fault, which can be useful context for debugging.
      Signed-off-by: Mark Rutland <mark.rutland@arm.com>
      Acked-by: Steve Capper <steve.capper@linaro.org>
      Cc: Ard Biesheuvel <ard.biesheuvel@linaro.org>
      Cc: Kees Cook <keescook@chromium.org>
      Cc: Laura Abbott <lauraa@codeaurora.org>
      Cc: Will Deacon <will.deacon@arm.com>
      Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
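      The check pattern described above, as a hedged sketch at the PMD level
      (the surrounding walk loop and the note_section() helper are
      illustrative; the {pmd,pud}_* macros are arm64's own):

          if (pmd_none(*pmd))
                  continue;                           /* nothing mapped here */
          if (pmd_sect(*pmd)) {
                  note_section(addr, pmd_val(*pmd));  /* section mapping */
                  continue;
          }
          BUG_ON(pmd_bad(*pmd));  /* anything left must be a next-level table */
          walk_pte(st, pmd, addr);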
  11. 23 Jan, 2015 3 commits
    • arm64: mm: dump: add missing includes · 764011ca
      Mark Rutland authored
      The arm64 dump code is currently relying on some definitions which are
      pulled in via transitive dependencies. It seems we have implicit
      dependencies on the following definitions:
      
      * MODULES_VADDR         (asm/memory.h)
      * MODULES_END           (asm/memory.h)
      * PAGE_OFFSET           (asm/memory.h)
      * PTE_*                 (asm/pgtable-hwdef.h)
      * ENOMEM                (linux/errno.h)
      * device_initcall       (linux/init.h)
      
      This patch ensures we explicitly include the relevant headers for the
      above items, fixing the observed build issue and hopefully preventing
      future issues as headers are refactored.
      Signed-off-by: Mark Rutland <mark.rutland@arm.com>
      Reported-by: Mark Brown <broonie@kernel.org>
      Acked-by: Steve Capper <steve.capper@linaro.org>
      Cc: Laura Abbott <lauraa@codeaurora.org>
      Cc: Will Deacon <will.deacon@arm.com>
      Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
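      Spelled out, the explicit includes implied by the list above would be:

          #include <linux/errno.h>        /* ENOMEM */
          #include <linux/init.h>         /* device_initcall */

          #include <asm/memory.h>         /* MODULES_VADDR, MODULES_END, PAGE_OFFSET */
          #include <asm/pgtable-hwdef.h>  /* PTE_* */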
    • arm64: Fix overlapping VA allocations · aa03c428
      Mark Rutland authored
      PCI IO space was intended to be 16MiB, at 32MiB below MODULES_VADDR, but
      commit d1e6dc91 ("arm64: Add architectural support for PCI")
      extended this to cover the full 32MiB. The final 8KiB of this 32MiB is
      also allocated for the fixmap, allowing for potential clashes between
      the two.
      
      This change was masked by assumptions in mem_init and the page table
      dumping code, which assumed the I/O space to be 16MiB long through
      separate hard-coded definitions.
      
      This patch changes the definition of the PCI I/O space allocation to
      live in asm/memory.h, along with the other VA space allocations. As the
      fixmap allocation depends on the number of fixmap entries, this is moved
      below the PCI I/O space allocation. Both the fixmap and PCI I/O space
      are guarded with 2MB of padding. Sites assuming the I/O space was 16MiB
      are moved over to use the new PCI_IO_{START,END} definitions, which will keep
      in sync with the size of the IO space (now restored to 16MiB).
      
      As a useful side effect, the use of the new PCI_IO_{START,END}
      definitions prevents a build issue in the dumping code due to a (now
      redundant) missing include of io.h for PCI_IOBASE.
      Signed-off-by: Mark Rutland <mark.rutland@arm.com>
      Cc: Kees Cook <keescook@chromium.org>
      Cc: Laura Abbott <lauraa@codeaurora.org>
      Cc: Liviu Dudau <liviu.dudau@arm.com>
      Cc: Steve Capper <steve.capper@linaro.org>
      Cc: Will Deacon <will.deacon@arm.com>
      [catalin.marinas@arm.com: reorder FIXADDR and PCI_IO address_markers_idx enum]
      Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
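      A hedged sketch of the resulting asm/memory.h layout (the exact
      expressions and SZ_* spellings in the real header may differ):

          /* Carved out below the module area: a 2MB guard, 16MiB of PCI I/O
           * space, another 2MB guard, then the fixmap below that. */
          #define PCI_IO_SIZE     SZ_16M
          #define PCI_IO_END      (MODULES_VADDR - SZ_2M)
          #define PCI_IO_START    (PCI_IO_END - PCI_IO_SIZE)
          #define FIXADDR_TOP     (PCI_IO_START - SZ_2M)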
    • arm64: dump: Fix implicit inclusion of definition for PCI_IOBASE · 284be285
      Mark Brown authored
      Since c9465b4e (arm64: add support to dump the kernel page tables)
      allmodconfig has failed to build on arm64 as a result of:
      
      ../arch/arm64/mm/dump.c:55:20: error: 'PCI_IOBASE' undeclared here (not in a function)
      
      Fix this by explicitly including io.h to ensure that a definition is
      present.
      Signed-off-by: Mark Brown <broonie@kernel.org>
      Signed-off-by: Will Deacon <will.deacon@arm.com>
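      The fix itself amounts to one explicit include:

          #include <linux/io.h>   /* PCI_IOBASE */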
  12. 11 Dec, 2014 2 commits
    • arm64: mm: dump: don't skip final region · fb59d007
      Mark Rutland authored
      If the final page table entry we walk is a valid mapping, the page table
      dumping code will not log the region this entry is part of, as the final
      note_page call in ptdump_show will trigger an early return. Luckily this
      isn't seen on contemporary systems as they typically don't have enough
      RAM to extend the linear mapping right to the end of the address space.
      
      In note_page, we log a region when we reach its end (i.e. we hit an
      entry immediately afterwards which has different prot bits or is
      invalid). The final entry has no subsequent entry, so we will not log
      this immediately. We try to cater for this with a subsequent call to
      note_page in ptdump_show, but this returns early as 0 < LOWEST_ADDR, and
      hence we will skip a valid mapping if it spans to the final entry we
      note.
      
      Unlike 32-bit ARM, the pgd with the kernel mapping is never shared with
      user mappings, so we do not need the check to ensure we don't log user
      page tables. Due to the way addr is constructed in the walk_* functions,
      it can never be less than LOWEST_ADDR when walking the page tables, so
      it is not necessary to avoid dereferencing invalid table addresses. The
      existing checks for st->current_prot and st->marker[1].start_address are
      sufficient to ensure we will not print and/or dereference garbage when
      trying to log information.
      
      This patch removes the unnecessary check against LOWEST_ADDR, ensuring
      we log all regions in the kernel page table, including those which span
      right to the end of the address space.
      
      Cc: Kees Cook <keescook@chromium.org>
      Acked-by: Laura Abbott <lauraa@codeaurora.org>
      Acked-by: Steve Capper <steve.capper@linaro.org>
      Signed-off-by: Mark Rutland <mark.rutland@arm.com>
      Signed-off-by: Will Deacon <will.deacon@arm.com>
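      A hedged sketch of the guard being removed (the exact form in note_page
      may have differed slightly):

          /* Removed: addr is never below LOWEST_ADDR while walking kernel
           * page tables, but the final flushing call from ptdump_show passes
           * addr == 0 and was rejected here, losing the last region. */
          if (addr < LOWEST_ADDR)
                  return;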
    • arm64: mm: dump: fix shift warning · 35545f0c
      Mark Rutland authored
      When building with 48-bit VAs, it's possible to get the following
      warning when building the arm64 page table dumping code:
      
      arch/arm64/mm/dump.c: In function ‘walk_pgd’:
      arch/arm64/mm/dump.c:266:2: warning: right shift count >= width of type
        pgd_t *pgd = pgd_offset(mm, 0);
        ^
      
      As pgd_offset is a macro and the second argument is not cast to any
      particular type, the zero will be given integer type by the compiler.
      As pgd_offset passes the argument on to pgd_index, we then try to shift
      the 32-bit integer by at least 39 bits (for 4K pages).
      
      Elsewhere the pgd_offset is passed a second argument of unsigned long
      type, so let's do the same here by passing '0UL' rather than '0'.
      
      Cc: Kees Cook <keescook@chromium.org>
      Acked-by: Laura Abbott <lauraa@codeaurora.org>
      Acked-by: Steve Capper <steve.capper@arm.com>
      Signed-off-by: Mark Rutland <mark.rutland@arm.com>
      Signed-off-by: Will Deacon <will.deacon@arm.com>
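      Expressed as a diff, the fix the message describes:

          -        pgd_t *pgd = pgd_offset(mm, 0);
          +        pgd_t *pgd = pgd_offset(mm, 0UL);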
  13. 26 Nov, 2014 1 commit
  14. 24 Jul, 2014 1 commit
    • ARM: 8109/1: mm: Modify pte_write and pmd_write logic for LPAE · ded94779
      Steve Capper authored
      For LPAE, we have the following means for encoding writable or dirty
      ptes:
                                    L_PTE_DIRTY       L_PTE_RDONLY
          !pte_dirty && !pte_write        0               1
          !pte_dirty && pte_write         0               1
          pte_dirty && !pte_write         1               1
          pte_dirty && pte_write          1               0
      
      So we can't distinguish between writeable clean ptes and read only
      ptes. This can cause problems with ptes being incorrectly flagged as
      read only when they are writeable but not dirty.
      
      This patch renumbers L_PTE_RDONLY from AP[2] to a software bit #58,
      and adds additional logic to set AP[2] whenever the pte is read only
      or not dirty. That way we can distinguish between clean writeable ptes
      and read only ptes.
      
      HugeTLB pages will use this new logic automatically.
      
      We need to add some logic to Transparent HugePages to ensure that they
      correctly interpret the revised pgprot permissions (L_PTE_RDONLY has
      moved and no longer matches PMD_SECT_AP2). In the process of revising
      THP, the names of the PMD software bits have been prefixed with L_ to
      make them easier to distinguish from their hardware bit counterparts.
      Signed-off-by: Steve Capper <steve.capper@linaro.org>
      Reviewed-by: Will Deacon <will.deacon@arm.com>
      Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
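      A hedged sketch of the revised encoding in C pseudologic (the real
      change lives in the LPAE pte-setting code, and PTE_AP2 here is a
      stand-in name for the hardware AP[2] bit):

          /* A pte is hardware-writable only if it is both software-writable
           * and dirty; otherwise set AP[2] (hardware read-only). */
          hw_pte &= ~PTE_AP2;
          if (!pte_write(pte) || !pte_dirty(pte))
                  hw_pte |= PTE_AP2;      /* clean or read-only: RO in hardware */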
  15. 07 Apr, 2014 1 commit
  16. 18 Feb, 2014 1 commit
  17. 11 Dec, 2013 1 commit