- 23 Jan, 2015 2 commits
-
-
Suzuki K. Poulose authored
This patch keeps track of the mixed endian EL0 support across the system and provides helper functions to export it. The status is a boolean indicating whether all the CPUs on the system support mixed endian at EL0.
Signed-off-by: Suzuki K. Poulose <suzuki.poulose@arm.com>
Cc: Will Deacon <will.deacon@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
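As a minimal illustration of the tracking described above, here is a standalone C sketch. The exported helper name follows the commit; the per-CPU update function and the boolean-AND bookkeeping are assumptions for illustration, not the kernel's code:

    #include <stdbool.h>

    /* System-wide status: assumed true until some CPU disagrees. */
    static bool mixed_endian_el0 = true;

    /* Called once per CPU with that CPU's mixed-endian-at-EL0 capability. */
    void update_mixed_endian_el0(bool cpu_supports)
    {
        mixed_endian_el0 = mixed_endian_el0 && cpu_supports;
    }

    /* Helper exporting the status: true only if every CPU supports it. */
    bool system_supports_mixed_endian_el0(void)
    {
        return mixed_endian_el0;
    }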
-
Robin Murphy authored
Add the necessary call to of_iommu_init.
Acked-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Robin Murphy <robin.murphy@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
-
- 22 Jan, 2015 3 commits
-
-
Ard Biesheuvel authored
Now that the create_mapping() code in mm/mmu.c is able to support setting up kernel page tables at initcall time, we can move the whole virtmap creation to arm64_enable_runtime_services() instead of having a distinct stage during early boot. This also allows us to drop the arm64-specific EFI_VIRTMAP flag.
Signed-off-by: Ard Biesheuvel <ard.biesheuvel-QSEj5FYQhm4dnm+yROfE0A@public.gmane.org>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
-
Laura Abbott authored
Add page protections for arm64 similar to those in arm. This is for security reasons, to prevent certain classes of exploits. The current method:
- Map all memory as either RWX or RW. We round to the nearest section to avoid creating page tables before everything is mapped.
- Once everything is mapped, if either end of the RWX section should not be X, we split the PMD and remap as necessary.
- When initmem is to be freed, we change the permissions back to RW (using stop_machine if necessary to flush the TLB).
- If CONFIG_DEBUG_RODATA is set, the read-only sections are set read-only.
Acked-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Tested-by: Kees Cook <keescook@chromium.org>
Tested-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Laura Abbott <lauraa@codeaurora.org>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
-
Laura Abbott authored
When kernel text is marked as read only, it cannot be modified directly. Use a fixmap to modify the text instead, in a similar manner to x86 and arm.
Reviewed-by: Kees Cook <keescook@chromium.org>
Reviewed-by: Mark Rutland <mark.rutland@arm.com>
Tested-by: Kees Cook <keescook@chromium.org>
Tested-by: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Laura Abbott <lauraa@codeaurora.org>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
-
- 16 Jan, 2015 2 commits
-
-
Mark Rutland authored
When booting with EFI, we acquire the EFI memory map after parsing the early params. This unfortunately renders the option useless, as we call memblock_enforce_memory_limit (which uses memblock_remove_range behind the scenes) before we've added any memblocks. We end up removing nothing, then adding all of memory later when efi_init calls reserve_regions. Instead, we can log the limit and apply it later when we do the rest of the memblock work in memblock_init, which should work regardless of the presence of EFI. At the same time we may as well move the early parameter into arm64's mm/init.c, close to arm64_memblock_init. Any memory which must be mapped (e.g. for use by EFI runtime services) must be mapped explicitly rather than relying on the linear mapping, which may be truncated as a result of a mem= option passed on the kernel command line.
Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Acked-by: Catalin Marinas <catalin.marinas@arm.com>
Acked-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Tested-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Cc: Leif Lindholm <leif.lindholm@linaro.org>
Cc: Will Deacon <will.deacon@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
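The record-now-apply-later pattern the patch describes can be sketched as standalone C (function and variable names are illustrative, not the kernel's):

    #include <stdint.h>
    #include <stdio.h>

    static uint64_t memory_limit = UINT64_MAX;  /* "no limit" sentinel */

    /* Early: the mem= handler only records the limit. */
    void parse_mem_param(uint64_t bytes)
    {
        memory_limit = bytes;
    }

    /* Late: enforce the limit once the memory map actually exists. */
    void apply_memory_limit(uint64_t *ram_end)
    {
        if (memory_limit != UINT64_MAX && *ram_end > memory_limit)
            *ram_end = memory_limit;
    }

    int main(void)
    {
        uint64_t end = 8ULL << 30;      /* 8 GiB of RAM discovered */
        parse_mem_param(2ULL << 30);    /* mem=2G seen early */
        apply_memory_limit(&end);       /* applied after memblock init */
        printf("usable RAM ends at %llu bytes\n", (unsigned long long)end);
        return 0;
    }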
-
Ard Biesheuvel authored
When remapping the UEFI memory map using ioremap_cache(), we have to deal with potential failure. Note that, even if the common case is for ioremap_cache() to return the existing linear mapping of the memory map, we cannot rely on that always being the case, e.g., in the presence of a mem= kernel parameter. At the same time, remove a stale comment and move the memmap code together.
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Acked-by: Mark Rutland <mark.rutland@arm.com>
Acked-by: Mark Salter <msalter@redhat.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
-
- 15 Jan, 2015 3 commits
-
-
Mark Rutland authored
To aid the developer when something triggers an unexpected exception, decode the ESR_ELx.EC field when logging an ESR_ELx value. This doesn't tell the developer the specifics of the exception encoded in the remaining IL and ISS bits, but it can be helpful to distinguish between exception classes (e.g. SError and a data abort) without having to manually decode the field, which can be tiresome.
Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Acked-by: Catalin Marinas <catalin.marinas@arm.com>
Reviewed-by: Christoffer Dall <christoffer.dall@linaro.org>
Cc: Marc Zyngier <marc.zyngier@arm.com>
Cc: Peter Maydell <peter.maydell@linaro.org>
Cc: Will Deacon <will.deacon@arm.com>
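The decode itself is a table lookup on ESR_ELx bits [31:26] (the EC field, per the ARMv8 ARM). A standalone sketch with a small illustrative subset of classes, not the kernel's full table:

    #include <stdint.h>
    #include <stdio.h>

    static const char *esr_class_str(uint32_t esr)
    {
        switch ((esr >> 26) & 0x3f) {       /* EC is ESR[31:26] */
        case 0x15: return "SVC (AArch64)";
        case 0x24: return "DABT (lower EL)";
        case 0x25: return "DABT (current EL)";
        case 0x2f: return "SError";
        default:   return "UNRECOGNIZED EC";
        }
    }

    int main(void)
    {
        uint32_t esr = 0x96000045;          /* a kernel-mode data abort */
        printf("ESR 0x%08x -> %s\n", esr, esr_class_str(esr));
        return 0;
    }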
-
Mark Rutland authored
Now that we have common ESR_ELx_* macros, move the core arm64 code over to them. There should be no functional change as a result of this patch.
Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Acked-by: Catalin Marinas <catalin.marinas@arm.com>
Reviewed-by: Christoffer Dall <christoffer.dall@linaro.org>
Cc: Marc Zyngier <marc.zyngier@arm.com>
Cc: Peter Maydell <peter.maydell@linaro.org>
Cc: Will Deacon <will.deacon@arm.com>
-
Sudeep Holla authored
This patch adds support for cacheinfo on ARM64. On ARMv8, the cache hierarchy can be identified through the Cache Level ID (CLIDR) register, while the cache geometry is provided by the Cache Size ID (CCSIDR) register. Since the architecture doesn't provide any way of detecting the CPUs sharing a particular cache, the device tree is used for that purpose.
Signed-off-by: Sudeep Holla <sudeep.holla@arm.com>
Cc: Will Deacon <will.deacon@arm.com>
Cc: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
-
- 12 Jan, 2015 3 commits
-
-
Ard Biesheuvel authored
Now that we have moved the call to SetVirtualAddressMap() to the stub, UEFI has no use for the ID map, so we can drop the code that installs ID mappings for UEFI memory regions.
Acked-by: Leif Lindholm <leif.lindholm@linaro.org>
Acked-by: Will Deacon <will.deacon@arm.com>
Tested-by: Leif Lindholm <leif.lindholm@linaro.org>
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
-
Ard Biesheuvel authored
Now that we are calling SetVirtualAddressMap() from the stub, there is no need to reserve boot-only memory regions, which implies that there is also no reason to free them again later.
Acked-by: Leif Lindholm <leif.lindholm@linaro.org>
Acked-by: Will Deacon <will.deacon@arm.com>
Tested-by: Leif Lindholm <leif.lindholm@linaro.org>
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
-
Ard Biesheuvel authored
In order to support kexec, the kernel needs to be able to deal with the state of the UEFI firmware after SetVirtualAddressMap() has been called. To avoid having separate code paths for non-kexec and kexec, let's move the call to SetVirtualAddressMap() to the stub: this guarantees that it will only be called once (since the stub is not executed during kexec), and ensures that the UEFI state is identical between kexec and normal boot. This implies that the layout of the virtual mapping needs to be created by the stub as well. All regions are rounded up to a naturally aligned multiple of 64 KB (for compatibility with 64 KB page kernels) and recorded in the UEFI memory map. The kernel proper reads those values and installs the mappings in a dedicated set of page tables that are swapped in during UEFI Runtime Services calls.
Acked-by: Leif Lindholm <leif.lindholm@linaro.org>
Acked-by: Matt Fleming <matt.fleming@intel.com>
Tested-by: Leif Lindholm <leif.lindholm@linaro.org>
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
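The 64 KB rounding mentioned above amounts to aligning both ends of each region outward to the 64 KB granule, so the region can be mapped by a 64 KB page kernel. A minimal sketch (the stub's real bookkeeping is more involved):

    #include <stdint.h>

    #define SZ_64K 0x10000ULL

    /* Round a region [base, base + size) outward to 64 KB boundaries. */
    static uint64_t round_down_64k(uint64_t x)
    {
        return x & ~(SZ_64K - 1);
    }

    static uint64_t round_up_64k(uint64_t x)
    {
        return (x + SZ_64K - 1) & ~(SZ_64K - 1);
    }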
-
- 09 Jan, 2015 1 commit
-
-
Andy Lutomirski authored
On x86_64, at least, task_pt_regs may be only partially initialized in many contexts, so x86_64 should not use it without extra care from interrupt context, let alone NMI context. This will allow x86_64 to override the logic and will supply some scratch space to use to make a cleaner copy of user regs.
Tested-by: Jiri Olsa <jolsa@kernel.org>
Signed-off-by: Andy Lutomirski <luto@amacapital.net>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Cc: Stephane Eranian <eranian@google.com>
Cc: chenggang.qcg@taobao.com
Cc: Wu Fengguang <fengguang.wu@intel.com>
Cc: Namhyung Kim <namhyung@gmail.com>
Cc: Mike Galbraith <efault@gmx.de>
Cc: Arjan van de Ven <arjan@linux.intel.com>
Cc: David Ahern <dsahern@gmail.com>
Cc: Arnaldo Carvalho de Melo <acme@kernel.org>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: Jean Pihet <jean.pihet@linaro.org>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Mark Salter <msalter@redhat.com>
Cc: Russell King <linux@arm.linux.org.uk>
Cc: Will Deacon <will.deacon@arm.com>
Cc: linux-arm-kernel@lists.infradead.org
Link: http://lkml.kernel.org/r/e431cd4c18c2e1c44c774f10758527fb2d1025c4.1420396372.git.luto@amacapital.net
Signed-off-by: Ingo Molnar <mingo@kernel.org>
-
- 08 Jan, 2015 1 commit
-
-
Ard Biesheuvel authored
The early ioremap support introduced by patch bf4b558e ("arm64: add early_ioremap support") failed to add a call to early_ioremap_reset() at an appropriate time. Without this call, invocations of early_ioremap etc. that are done too late will go unnoticed and may cause corruption. This is exactly what happened when the first user of this feature was added in patch f84d0275 ("arm64: add EFI runtime services"). The early mapping of the EFI memory map is unmapped during an early initcall, at which time the early ioremap support is long gone. Fix by adding the missing call to early_ioremap_reset() to setup_arch(), and move the offending early_memunmap() to right after the point where the early mapping of the EFI memory map is last used.
Fixes: f84d0275 ("arm64: add EFI runtime services")
Cc: <stable@vger.kernel.org>
Signed-off-by: Leif Lindholm <leif.lindholm@linaro.org>
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Will Deacon <will.deacon@arm.com>
-
- 07 Jan, 2015 3 commits
-
-
Paul Walmsley authored
On next-20150105, defconfig compilation breaks with:
arch/arm64/kernel/smp_spin_table.c:80:2: error: implicit declaration of function ‘ioremap_cache’ [-Werror=implicit-function-declaration]
arch/arm64/kernel/smp_spin_table.c:92:2: error: implicit declaration of function ‘writeq_relaxed’ [-Werror=implicit-function-declaration]
arch/arm64/kernel/smp_spin_table.c:101:2: error: implicit declaration of function ‘iounmap’ [-Werror=implicit-function-declaration]
Fix by including asm/io.h, which contains definitions or prototypes for these macros or functions. This second version incorporates a comment from Mark Rutland <mark.rutland@arm.com> to keep the includes in alphabetical order by filename.
Signed-off-by: Paul Walmsley <paul@pwsan.com>
Cc: Paul Walmsley <pwalmsley@nvidia.com>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: Will Deacon <will.deacon@arm.com>
Acked-by: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
-
Paul Walmsley authored
On next-20150105, defconfig compilation breaks with:
arch/arm64/kernel/module.c:408:4: error: implicit declaration of function ‘apply_alternatives’ [-Werror=implicit-function-declaration]
Fix by including asm/alternative.h, where the apply_alternatives() prototype is declared. This second version incorporates a comment from Mark Rutland <mark.rutland@arm.com> to keep the includes in alphabetical order by filename.
Signed-off-by: Paul Walmsley <paul@pwsan.com>
Cc: Paul Walmsley <pwalmsley@nvidia.com>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: Will Deacon <will.deacon@arm.com>
Acked-by: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
-
Mark Rutland authored
We don't currently check a number of registers exposed to AArch32 guests (MVFR{0,1,2}_EL1 and ID_DFR0_EL1), despite the fact that these describe AArch32 feature support exposed to userspace and KVM guests similarly to the AArch64 registers which we do check. We do not expect these registers to vary across a set of CPUs. This patch adds said registers to the cpuinfo framework and sanity checks. No sanity check failures have been observed on a current ARMv8 big.LITTLE platform (Juno).
Cc: Catalin Marinas <catalin.marinas@arm.com>
Reported-by: Suzuki K. Poulose <suzuki.poulose@arm.com>
Signed-off-by: Suzuki K. Poulose <suzuki.poulose@arm.com>
Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
-
- 23 Dec, 2014 1 commit
-
-
Lorenzo Pieralisi authored
On arm64 the TTBR0_EL1 register is set either to the reserved TTBR0 page tables on boot or to the active_mm mappings belonging to user space processes; it must never be set to swapper_pg_dir page table mappings. When a CPU is booted, its active_mm is set to init_mm even though its TTBR0_EL1 points at the reserved TTBR0 page mappings. This implies that when __cpu_suspend is triggered, the active_mm can point at init_mm even if the current TTBR0_EL1 register contains the reserved TTBR0_EL1 mappings. Therefore, the mm save and restore executed in __cpu_suspend might turn out to be erroneous: if current->active_mm corresponds to init_mm, on resume from low power it ends up restoring in TTBR0_EL1 the init_mm mappings, which are global and can cause speculation of TLB entries that end up being propagated to user space. This patch fixes the issue by checking the active_mm pointer before restoring the TTBR0 mappings. If the current active_mm == &init_mm, the code sets TTBR0_EL1 to the reserved TTBR0 mapping instead of switching back to the active_mm, which is the expected behaviour corresponding to the TTBR0_EL1 settings when __cpu_suspend was entered.
Fixes: 95322526 ("arm64: kernel: cpu_{suspend/resume} implementation")
Cc: <stable@vger.kernel.org> # 3.14+: 18ab7db6
Cc: <stable@vger.kernel.org> # 3.14+: 714f5992
Cc: <stable@vger.kernel.org> # 3.14+: c3684fbb
Cc: <stable@vger.kernel.org> # 3.14+
Cc: Will Deacon <will.deacon@arm.com>
Signed-off-by: Lorenzo Pieralisi <lorenzo.pieralisi@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
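The shape of the fix, as a hedged kernel-context fragment (not standalone code; cpu_set_reserved_ttbr0() and cpu_switch_mm() are existing arm64 helpers, but the surrounding resume path is elided):

    struct mm_struct *mm = current->active_mm;

    if (mm == &init_mm)
        cpu_set_reserved_ttbr0();   /* never load init_mm's global mappings into TTBR0_EL1 */
    else
        cpu_switch_mm(mm->pgd, mm); /* restore the user mappings in place at suspend time */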
-
- 11 Dec, 2014 1 commit
-
-
Krzysztof Kozlowski authored
Fix build failure of defconfig when PM_SLEEP is disabled (e.g. by disabling SUSPEND) and CPU_IDLE enabled:
arch/arm64/kernel/psci.c:543:2: error: unknown field ‘cpu_suspend’ specified in initializer
  .cpu_suspend = cpu_psci_cpu_suspend,
  ^
arch/arm64/kernel/psci.c:543:2: warning: initialization from incompatible pointer type [enabled by default]
arch/arm64/kernel/psci.c:543:2: warning: (near initialization for ‘cpu_psci_ops.cpu_prepare’) [enabled by default]
make[1]: *** [arch/arm64/kernel/psci.o] Error 1
The cpu_operations.cpu_suspend field exists only if ARM64_CPU_SUSPEND is defined, not CPU_IDLE.
Signed-off-by: Krzysztof Kozlowski <k.kozlowski@samsung.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
-
- 04 Dec, 2014 3 commits
-
-
Andre Przywara authored
Currently the kernel patches all necessary instructions once at boot time, so modules are not covered by this. Change the apply_alternatives() function to take a beginning and an end pointer and introduce a new variant (apply_alternatives_all()) to cover the existing use case for the static kernel image section. Add a module_finalize() function to arm64 to check for an alternatives section in a module and patch only the instructions from that specific area. Since that module code is not touched before the module initialization has ended, we don't need to halt the machine before doing the patching in the module's code.
Signed-off-by: Andre Przywara <andre.przywara@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
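A hedged sketch of the module path described above (kernel context, not standalone; find_altinstr_section() is a hypothetical helper standing in for the ELF section walk, and the two-pointer apply_alternatives() signature follows the commit's description):

    int module_finalize(const Elf_Ehdr *hdr, const Elf_Shdr *sechdrs,
                        struct module *me)
    {
        /* Hypothetical helper: locate this module's .altinstructions. */
        const Elf_Shdr *s = find_altinstr_section(hdr, sechdrs);

        /* Patch only this module's range; no need to halt the machine,
         * since the module is not yet live. */
        if (s)
            apply_alternatives((void *)s->sh_addr,
                               (void *)(s->sh_addr + s->sh_size));
        return 0;
    }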
-
Daniel Thompson authored
If the overflow threshold for a counter is set above or near the 0xffffffff boundary then the kernel may lose track of the overflow, causing only events that occur *after* the overflow to be recorded. Specifically, the problem occurs when the value of the performance counter overtakes its original programmed value due to wrap-around. Typical solutions to this problem are either to avoid programming values likely to be overtaken, or to treat the overflow bit as the 33rd bit of the counter. It's somewhat fiddly to refactor the code to correctly handle the 33rd bit during irqsave sections (context switches, for example), so instead we take the simpler approach of avoiding values likely to be overtaken. We set the limit to half of max_period because this matches the limit imposed in __hw_perf_event_init(). This causes a doubling of the interrupt rate for large threshold values; however, even with a very fast counter ticking at 4GHz the interrupt rate would only be ~1Hz.
Signed-off-by: Daniel Thompson <daniel.thompson@linaro.org>
Signed-off-by: Will Deacon <will.deacon@arm.com>
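The mitigation amounts to clamping the programmed period, roughly as below (standalone sketch; the constant and function name are illustrative):

    #include <stdint.h>

    #define MAX_PERIOD 0xffffffffULL    /* full range of a 32-bit counter */

    /* Never program more than half the counter range, so the overflow
     * cannot be overtaken before the interrupt is serviced. */
    uint64_t clamp_sample_period(uint64_t period)
    {
        if (period > MAX_PERIOD >> 1)
            period = MAX_PERIOD >> 1;
        return period;
    }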
-
Fabio Estevam authored
Building arm64.allmodconfig leads to the following warning:
usb/gadget/function/f_ncm.c:203:0: warning: "NCAPS" redefined
 #define NCAPS (USB_CDC_NCM_NCAP_ETH_FILTER | USB_CDC_NCM_NCAP_CRC_MODE)
 ^
In file included from /home/build/work/batch/arch/arm64/include/asm/io.h:32:0,
 from /home/build/work/batch/include/linux/clocksource.h:19,
 from /home/build/work/batch/include/clocksource/arm_arch_timer.h:19,
 from /home/build/work/batch/arch/arm64/include/asm/arch_timer.h:27,
 from /home/build/work/batch/arch/arm64/include/asm/timex.h:19,
 from /home/build/work/batch/include/linux/timex.h:65,
 from /home/build/work/batch/include/linux/sched.h:19,
 from /home/build/work/batch/arch/arm64/include/asm/compat.h:25,
 from /home/build/work/batch/arch/arm64/include/asm/stat.h:23,
 from /home/build/work/batch/include/linux/stat.h:5,
 from /home/build/work/batch/include/linux/module.h:10,
 from /home/build/work/batch/drivers/usb/gadget/function/f_ncm.c:19:
arch/arm64/include/asm/cpufeature.h:27:0: note: this is the location of the previous definition
 #define NCAPS 2
So add an ARM64 prefix to avoid such problems.
Reported-by: Olof's autobuilder <build@lixom.net>
Signed-off-by: Fabio Estevam <fabio.estevam@freescale.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
-
- 01 Dec, 2014 1 commit
-
-
Vladimir Murzin authored
Update handling of the cacheflush syscall with changes made in the arch/arm counterpart:
- return error to userspace when flushing syscall fails
- split user cache-flushing into interruptible chunks
- don't bother rounding to nearest vma
Signed-off-by: Vladimir Murzin <vladimir.murzin@arm.com>
[will: changed internal return value from -EINTR to 0 to match arch/arm/]
Signed-off-by: Will Deacon <will.deacon@arm.com>
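The interruptible-chunks idea can be sketched as standalone C (chunk size and names are illustrative; in the kernel, a pending-signal check between chunks would bound the latency):

    #define CHUNK_SIZE (1UL << 20)  /* illustrative 1 MiB chunk */

    /* Flush [start, end) in bounded pieces; a flush failure stops the
     * loop between chunks and is reported back to the caller. */
    int flush_range_chunked(unsigned long start, unsigned long end,
                            int (*flush_chunk)(unsigned long, unsigned long))
    {
        while (start < end) {
            unsigned long next =
                end - start > CHUNK_SIZE ? start + CHUNK_SIZE : end;
            int ret = flush_chunk(start, next);

            if (ret)
                return ret;
            start = next;
        }
        return 0;
    }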
-
- 28 Nov, 2014 4 commits
-
-
AKASHI Takahiro authored
secure_computing() is called first in syscall_trace_enter() so that a system call will be aborted quickly, without doing the succeeding syscall tracing, if seccomp rules want to deny that system call. For a compat task, syscall numbers for system calls allowed in seccomp mode 1 are different from those on normal tasks, and so the __NR_seccomp_xxx_32 macros need to be redefined.
Signed-off-by: AKASHI Takahiro <takahiro.akashi@linaro.org>
Signed-off-by: Will Deacon <will.deacon@arm.com>
-
AKASHI Takahiro authored
SIGSYS is primarily used in secure computing to notify a tracer of syscall events. This patch allows a signal handler on a compat task to get correct information, with SA_SIGINFO specified, when this signal is delivered.
Reviewed-by: Kees Cook <keescook@chromium.org>
Signed-off-by: AKASHI Takahiro <takahiro.akashi@linaro.org>
Signed-off-by: Will Deacon <will.deacon@arm.com>
-
AKASHI Takahiro authored
If a tracer modifies a syscall number to -1, the traced system call should be skipped with a return value specified in x0. This patch implements these semantics. Please note:
* syscall entry tracing and syscall exit tracing (ftrace tracepoint and audit) are always executed, if enabled, even when skipping a system call (that is, -1). In this way, we can avoid a potential bug where audit_syscall_entry() might be called without audit_syscall_exit() for the preceding system call, which would cause an OOPS in audit_syscall_entry().
Signed-off-by: AKASHI Takahiro <takahiro.akashi@linaro.org>
[will: fixed up conflict with blr rework]
Signed-off-by: Will Deacon <will.deacon@arm.com>
-
AKASHI Takahiro authored
This regset is intended to be used to get and set a system call number while tracing. There was some discussion about possible approaches to do so:
(1) modify the x8 register with ptrace(PTRACE_SETREGSET) indirectly, and update regs->syscallno later on in syscall_trace_enter(), or
(2) define a dedicated regset for this purpose, as on s390, or
(3) support ptrace(PTRACE_SET_SYSCALL), as on arch/arm
Given that user_pt_regs doesn't expose 'syscallno' to the tracer, and that secure_computing() expects a changed syscall number (especially the case of -1) to be visible before that function returns in syscall_trace_enter(), (1) doesn't work well. We will take (2) since it looks much cleaner.
Signed-off-by: AKASHI Takahiro <takahiro.akashi@linaro.org>
Signed-off-by: Will Deacon <will.deacon@arm.com>
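From the tracer's side, the dedicated regset is accessed like any other via PTRACE_GETREGSET/PTRACE_SETREGSET. A userspace sketch (error handling elided; the NT_ARM_SYSTEM_CALL note type is defined by newer UAPI headers, the fallback value here is an assumption):

    #include <elf.h>
    #include <sys/ptrace.h>
    #include <sys/types.h>
    #include <sys/uio.h>

    #ifndef NT_ARM_SYSTEM_CALL
    #define NT_ARM_SYSTEM_CALL 0x404
    #endif

    /* Read the syscall number of a stopped arm64 tracee; writing -1
     * back with PTRACE_SETREGSET would request the skip semantics
     * described in the previous commit. */
    int get_syscall_no(pid_t pid)
    {
        int scno = 0;
        struct iovec iov = { .iov_base = &scno, .iov_len = sizeof(scno) };

        ptrace(PTRACE_GETREGSET, pid, NT_ARM_SYSTEM_CALL, &iov);
        return scno;
    }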
-
- 26 Nov, 2014 2 commits
-
-
Laura Abbott authored
The head.text section is intended to be run at early bootup, before any of the regular kernel mappings have been set up. Parts of head.text may be freed back into the buddy allocator due to TEXT_OFFSET, so for security requirements this memory must not be executable. The suspend/resume/hotplug code path requires some of these head.S functions to run, however, which means they need to be executable. Support these conflicting requirements by moving the few head.text functions that need to be executable to the text section, which has the appropriate page table permissions.
Tested-by: Kees Cook <keescook@chromium.org>
Reviewed-by: Mark Rutland <mark.rutland@arm.com>
Tested-by: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Laura Abbott <lauraa@codeaurora.org>
Signed-off-by: Will Deacon <will.deacon@arm.com>
-
Mark Rutland authored
In the arm64 arch_static_branch implementation we place an A64 NOP into the instruction stream and log relevant details to a jump_entry in a __jump_table section. Later this may be replaced with an immediate branch without link to the code for the unlikely case. At init time, the core calls arch_jump_label_transform_static to initialise the NOPs. On x86 this involves inserting the optimal NOP for a given microarchitecture, but on arm64 we only use the architectural NOP, and hence replace each NOP with the exact same NOP. This is somewhat pointless. Additionally, at module load time we don't call jump_label_apply_nops to patch the optimal NOPs in, unlike other architectures, but get away with this because we only use the architectural NOP anyway. A later notifier will patch NOPs with branches as required. Similarly to x86 commit 11570da1 (x86/jump-label: Do not bother updating NOPs if they are correct), we can avoid patching NOPs with identical NOPs. Given that we only use a single NOP encoding, this means we can NOP-out the body of arch_jump_label_transform_static entirely. As the default __weak arch_jump_label_transform_static implementation performs a patch, we must use an empty function to achieve this.
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: Jiang Liu <liuj97@gmail.com>
Cc: Laura Abbott <lauraa@codeaurora.org>
Acked-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
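The change essentially boils down to providing a strong, empty definition that overrides the __weak default (kernel-context sketch mirroring the patch as described):

    void arch_jump_label_transform_static(struct jump_entry *entry,
                                          enum jump_label_type type)
    {
        /*
         * arm64 uses only the architectural A64 NOP, so every NOP is
         * already correct: an empty body avoids pointlessly patching
         * each NOP with an identical NOP.
         */
    }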
-
- 25 Nov, 2014 10 commits
-
-
Will Deacon authored
Consistently use the plural form for alternatives pr_fmt strings.
Signed-off-by: Will Deacon <will.deacon@arm.com>
-
Will Deacon authored
.exit.* sections may be subject to patching by the new alternatives framework and so shouldn't be discarded at link-time. Without this patch, such a section will result in the following linker error:
`.exit.text' referenced in section `.altinstructions' of drivers/built-in.o: defined in discarded section `.exit.text' of drivers/built-in.o
Signed-off-by: Will Deacon <will.deacon@arm.com>
-
Laura Abbott authored
The fixmap API was originally added for arm64 for early_ioremap purposes. It can be used for other purposes too, so move the initialization from ioremap to somewhere more generic. This makes it obvious where the fixmap is being set up and allows for a cleaner implementation of __set_fixmap.
Reviewed-by: Kees Cook <keescook@chromium.org>
Acked-by: Mark Rutland <mark.rutland@arm.com>
Tested-by: Mark Rutland <mark.rutland@arm.com>
Tested-by: Kees Cook <keescook@chromium.org>
Signed-off-by: Laura Abbott <lauraa@codeaurora.org>
Signed-off-by: Will Deacon <will.deacon@arm.com>
-
Laura Abbott authored
The function cpu_resume currently lives in the .data section. There's no reason for it to be there, since we can use relative instructions without a problem. Move a few cpu_resume data structures out of the assembly file so the .data annotation can be dropped completely and cpu_resume ends up in the read-only text section.
Reviewed-by: Kees Cook <keescook@chromium.org>
Reviewed-by: Mark Rutland <mark.rutland@arm.com>
Reviewed-by: Lorenzo Pieralisi <lorenzo.pieralisi@arm.com>
Tested-by: Mark Rutland <mark.rutland@arm.com>
Tested-by: Lorenzo Pieralisi <lorenzo.pieralisi@arm.com>
Tested-by: Kees Cook <keescook@chromium.org>
Acked-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Laura Abbott <lauraa@codeaurora.org>
Signed-off-by: Will Deacon <will.deacon@arm.com>
-
Laura Abbott authored
The hyp stub vectors are currently loaded using adr. This instruction has a +/- 1MB range for the loading address. If the alignment for sections is changed, the address may be more than 1MB away, resulting in relocation errors. Switch to using adrp for getting the address to ensure we aren't affected by the location of __hyp_stub_vectors.
Acked-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Acked-by: Marc Zyngier <marc.zyngier@arm.com>
Tested-by: Mark Rutland <mark.rutland@arm.com>
Tested-by: Kees Cook <keescook@chromium.org>
Signed-off-by: Laura Abbott <lauraa@codeaurora.org>
Signed-off-by: Will Deacon <will.deacon@arm.com>
-
Laura Abbott authored
handle_arch_irq isn't actually text; it's just a function pointer. It doesn't need to be stored in the text section, and doing so causes problems if we ever want to make the kernel text read only. Declare handle_arch_irq as a proper function pointer stored in the data section.
Reviewed-by: Kees Cook <keescook@chromium.org>
Reviewed-by: Mark Rutland <mark.rutland@arm.com>
Acked-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Tested-by: Mark Rutland <mark.rutland@arm.com>
Tested-by: Kees Cook <keescook@chromium.org>
Signed-off-by: Laura Abbott <lauraa@codeaurora.org>
Signed-off-by: Will Deacon <will.deacon@arm.com>
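In C, the declaration is simply a writable function pointer, which the toolchain naturally places in .data/.bss (hedged sketch; the registration helper below mirrors the usual set_handle_irq() shape rather than this exact patch):

    struct pt_regs;

    /* A function pointer is data, not text. */
    void (*handle_arch_irq)(struct pt_regs *regs);

    void set_handle_irq(void (*handle_irq)(struct pt_regs *))
    {
        handle_arch_irq = handle_irq;
    }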
-
Mark Rutland authored
While we currently expect self-hosted debug support to be identical across CPUs, we don't currently sanity check this. This patch adds logging of the ID_AA64DFR{0,1}_EL1 values and associated sanity checking code. It's not clear to me whether we need to check PMUVer, TraceVer, and DebugVer, as we don't currently rely on these fields at all.
Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Acked-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
-
Mark Rutland authored
A missing newline in the WARN_TAINT_ONCE string results in ugly and somewhat difficult to read output in the case of a sanity check failure, as the next print does not appear on a new line:
  Unsupported CPU feature variation.Modules linked in:
This patch adds the missing newline, fixing the output formatting.
Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Acked-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
-
Mark Rutland authored
It seems that Cortex-A53 r0p4 added support for AIFSR and ADFSR, and ID_MMFR0.AuxReg has been updated accordingly to report this fact. As Cortex-A53 could be paired with CPUs which do not implement these registers (e.g. all current revisions of Cortex-A57), this may trigger a sanity check failure at boot. The AuxReg value describes the availability of the ACTLR, AIFSR, and ADFSR registers, which are only of use to 32-bit guest OSs and have IMPLEMENTATION DEFINED contents. Given the nature of these registers, it is likely that KVM will need to trap accesses regardless of whether the CPUs are heterogeneous. This patch masks out the ID_MMFR0.AuxReg value from the sanity checks, preventing spurious warnings at boot time.
Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Reported-by: Andre Przywara <andre.przywara@arm.com>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Acked-by: Will Deacon <will.deacon@arm.com>
Cc: Marc Zyngier <marc.zyngier@arm.com>
Cc: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Will Deacon <will.deacon@arm.com>
-
Mark Brown authored
The only requirement the scheduler has on cluster IDs is that they must be unique. When enumerating the topology based on MPIDR information, the kernel currently generates cluster IDs by using the first level of affinity above the core ID (either level one or two, depending on whether the core has multiple threads); however, the ARMv8 architecture allows for up to three levels of affinity. This means that an ARMv8 system may contain cores whose MPIDRs are identical other than affinity level three, which with the current code will cause us to report multiple cores with the same identification to the scheduler, in violation of its uniqueness requirement. Ensure that we do not violate the scheduler requirements on systems that use all the affinity levels by incorporating both affinity levels two and three into the cluster ID when the cores are not threaded. While no currently known hardware uses multi-level clusters, it is better to program defensively: this will help ease bringup of systems that have them, and will ensure that things like distribution install media do not need to be respun to replace kernels in order to deploy such systems. In the worst case the system will work but perform suboptimally until a kernel modified to handle the new topology better is installed; in the best case this will be an adequate description of such topologies for the scheduler to perform well.
Signed-off-by: Mark Brown <broonie@linaro.org>
Signed-off-by: Will Deacon <will.deacon@arm.com>
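A standalone sketch of the packing for non-threaded cores. The field positions follow the architectural MPIDR_EL1 layout (Aff0[7:0], Aff1[15:8], Aff2[23:16], Aff3[39:32]); the exact shifts the kernel uses to combine the fields are an assumption here:

    #include <stdint.h>

    /* Extract one affinity field; level 3 lives at bits [39:32]. */
    #define MPIDR_AFF(mpidr, level) \
        ((uint32_t)(((mpidr) >> ((level) == 3 ? 32 : (level) * 8)) & 0xff))

    /* Non-threaded cores: core ID is Aff0; fold the remaining affinity
     * levels into the cluster ID so that cores differing only at
     * affinity level three still get unique cluster IDs. */
    uint32_t cluster_id(uint64_t mpidr)
    {
        return MPIDR_AFF(mpidr, 1) |
               (MPIDR_AFF(mpidr, 2) << 8) |
               (MPIDR_AFF(mpidr, 3) << 16);
    }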
-