Move IS_DEVICE_PATH_NODE into header to share it.
Contributed-under: TianoCore Contribution Agreement 1.0
Signed-off-by: Jun Nie <jun.nie@linaro.org>
Reviewed-by: Leif Lindholm <leif.lindholm@linaro.org>
This adds an implementation of the ResetSystemLib library class as
defined in MdeModulePkg. It is used as the platform glue by the generic
ResetSystemRuntimeDxe which lives in the same package.
This implementation is intended to replace the EfiResetSystemLib based
implementation that is deprecated now that we have decided that there is
no longer a reason to keep a different ResetSystem() implementation
under EmbeddedPkg.
Contributed-under: TianoCore Contribution Agreement 1.0
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Reviewed-by: Leif Lindholm <leif.lindholm@linaro.org>
The ARM ArmHvcLib looks like it was created from a copy of ArmSmcLib, which
in turn looks like it was created from a copy of the AArch64 version.
Both of these files include AsmMacroIoLibV8.h instead of
AsmMacroIoLib.h, although since they only use macros that are identical
between the two, there was no functional issue caused by this.
Contributed-under: TianoCore Contribution Agreement 1.0
Signed-off-by: Leif Lindholm <leif.lindholm@linaro.org>
Reviewed-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
In order to be able to produce meaningful diagnostic output when taking
synchronous exceptions that have been caused by corruption of the stack
pointer, prepare the EL0 stack pointer and switch to it when handling the
'Sync exception using SPx' exception class.
Contributed-under: TianoCore Contribution Agreement 1.0
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Reviewed-by: Leif Lindholm <leif.lindholm@linaro.org>
Currently, we only attempt to walk the call stack and print a backtrace
if the program counter refers to a location covered by a PE/COFF image.
However, regardless of the value of PC, the frame pointer may still have
a meaningful value, and so we can still produce the remainder of the
backtrace.
Contributed-under: TianoCore Contribution Agreement 1.0
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Reviewed-by: Leif Lindholm <leif.lindholm@linaro.org>
Add the gEfiDebugImageInfoTableGuid, which is referenced in the code,
to both .INF files describing this module.
Contributed-under: TianoCore Contribution Agreement 1.0
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Reviewed-by: Leif Lindholm <leif.lindholm@linaro.org>
Replace the duplicated and outdated code in QuietBoot.c with a reference
to BootLogoLib, which provides the same functionality. This also allows
us to drop all references to IntelFrameworkModulePkg in this module.
Contributed-under: TianoCore Contribution Agreement 1.0
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Reviewed-by: Leif Lindholm <leif.lindholm@linaro.org>
Instead of indirecting the reference to the Shell binary via a PCD
that is defined in IntelFrameworkModulePkg, and which invariably
gets set to the same value by all users of this library, refer to
the UEFI Shell application by its declared symbolic GUID.
Contributed-under: TianoCore Contribution Agreement 1.0
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Reviewed-by: Leif Lindholm <leif.lindholm@linaro.org>
Commit e7b24ec978 ("ArmPkg/UncachedMemoryAllocationLib: map uncached
allocations non-executable") adds code that manipulates the GCD memory
space attributes of a newly allocated uncached region without checking
whether the region exposes these attributes in its capabilities mask.
Given that the intent is to remove executable permissions from the region,
this is a fairly pointless exercise to begin with, regardless of whether
it is correct or not. The reason is that RO/XP memory attributes in the
GCD memory space map or the UEFI memory map are completely disconnected
from the actual mapping permissions used in the page tables.
So instead, invoke the CPU arch protocol directly, which applies the
non-exec attributes straight to the page tables.
Contributed-under: TianoCore Contribution Agreement 1.0
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Reviewed-by: Leif Lindholm <leif.lindholm@linaro.org>
Tested-by: Ryan Harkin <ryan.harkin@linaro.org>
modsi3.S references the symbol '__divsi3' as '___divsi3', which assumes
the underscore prefix is always required and supported. Use ASM_PFX() instead
to support all compilers.
Contributed-under: TianoCore Contribution Agreement 1.0
Signed-off-by: Marvin Haeuser <Marvin.Haeuser@outlook.com>
Some memory attributes are implied by the memory type, e.g., device memory
is always mapped non-executable and cached memory should have the inner
shareable attribute.
In order to prevent unnecessary memory attribute updates of mappings
created early on, make EfiAttributeToArmAttribute() return these implied
attributes in the same way as ArmMmuLib does already. This avoids false
positives when looking for differences between current and desired mapping
attributes.
Contributed-under: TianoCore Contribution Agreement 1.0
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Reviewed-by: Leif Lindholm <leif.lindholm@linaro.org>
The primary use case for UncachedMemoryAllocationLib is non-coherent DMA,
which implies that such regions are not used to fetch instructions from.
So let's map them as non-executable, to avoid creating a security hole
when the rest of the platform may be enforcing strict memory permissions
on ordinary allocations.
Contributed-under: TianoCore Contribution Agreement 1.0
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Reviewed-by: Leif Lindholm <leif.lindholm@linaro.org>
Uncached pool allocations are aligned to the data cache line length under
the assumption that this is sufficient to prevent cache maintenance from
corrupting adjacent allocations. However, the value to use in such cases
is architecturally called the Cache Writeback Granule (CWG), which is
essentially the maximum Dcache line length rather than the minimum.
Note that this is mostly a cosmetic fix, given that the pool allocation
is turned into a page allocation later, and rounded up accordingly.
Contributed-under: TianoCore Contribution Agreement 1.0
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Reviewed-by: Leif Lindholm <leif.lindholm@linaro.org>
In order to play nice with platforms that use strict memory permission
policies, restore the original mapping attributes when freeing uncached
allocations.
Contributed-under: TianoCore Contribution Agreement 1.0
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Reviewed-by: Leif Lindholm <leif.lindholm@linaro.org>
Now that we have the prerequisite functionality available in ArmMmuLib,
wire it up into ArmSetMemoryRegionNoExec, ArmClearMemoryRegionNoExec,
ArmSetMemoryRegionReadOnly and ArmClearMemoryRegionReadOnly. This is
used by the non-executable stack feature that is configured by DxeIpl.
NOTE: The current implementation will not combine RO and XP attributes,
i.e., setting/clearing a region no-exec will unconditionally
clear the read-only attribute, and vice versa. Currently, we
only use ArmSetMemoryRegionNoExec(), so for now, we should be
able to live with this.
Contributed-under: TianoCore Contribution Agreement 1.0
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Reviewed-by: Leif Lindholm <leif.lindholm@linaro.org>
We no longer make use of the ArmMmuLib 'feature' to create aliased
memory ranges with mismatched attributes, and in fact, it was only
wired up in the ARM version to begin with.
So remove the VirtualMask argument from ArmSetMemoryAttributes()'s
prototype, and remove the dead code that referred to it.
Contributed-under: TianoCore Contribution Agreement 1.0
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Reviewed-by: Leif Lindholm <leif.lindholm@linaro.org>
... where it belongs, since AARCH64 already keeps it there, and
non DXE users of ArmMmuLib (such as DxeIpl, for the non-executable
stack) may need its functionality as well.
While at it, rename SetMemoryAttributes to ArmSetMemoryAttributes,
and make any functions that are not exported STATIC. Also, replace
an explicit gBS->AllocatePages() call [which is DXE specific] with
MemoryAllocationLib::AllocatePages().
Contributed-under: TianoCore Contribution Agreement 1.0
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Reviewed-by: Leif Lindholm <leif.lindholm@linaro.org>
The routines ArmConfigureMmu(), SetMemoryAttributes() [*] and the
various set/clear read-only/no-exec routines are declared as returning
EFI_STATUS in the respective header files, so align the definitions with
that.
* SetMemoryAttributes() is declared in the wrong header (and defined in
ArmMmuLib for AARCH64 and in CpuDxe for ARM)
Contributed-under: TianoCore Contribution Agreement 1.0
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Reviewed-by: Leif Lindholm <leif.lindholm@linaro.org>
Enable the use of strict memory permissions on ARM by processing the
EFI_MEMORY_RO and EFI_MEMORY_XP attributes rather than ignoring them. As
before, calls to CpuArchProtocol::SetMemoryAttributes that only set RO/XP
bits will preserve the cacheability attributes. Permission attributes are
not preserved when setting the memory type only: the way the memory
permission attributes are defined does not allow for that, so this
situation does not deviate from other architectures.
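As an illustration (caller-side variable names are ours, not taken from the
patch), a driver can now strip execute permission from a region without
disturbing its cacheability:

  Status = Cpu->SetMemoryAttributes (Cpu, BaseAddress, Length, EFI_MEMORY_XP);
  ASSERT_EFI_ERROR (Status);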
Contributed-under: TianoCore Contribution Agreement 1.0
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Reviewed-by: Leif Lindholm <leif.lindholm@linaro.org>
Page and section entries in the page tables are updated using the
helper ArmUpdateTranslationTableEntry(), which cleans the page
table entry to the PoC, and invalidates the TLB entry covering
the page described by the entry being updated.
Since we may be updating section entries, we might be leaving stale
TLB entries at this point (for all pages in the section except the
first one), which will be invalidated wholesale at the end of
SetMemoryAttributes(). At that point, all caches are cleaned *and*
invalidated as well.
This cache maintenance is costly and unnecessary. The TLB maintenance
is only necessary if we updated any section entries, since any entries
that were updated page by page will have been invalidated individually
by ArmUpdateTranslationTableEntry().
So drop the clean/invalidate of the caches, and only perform the
full TLB flush if UpdateSectionEntries() was called, or if sections
were split by UpdatePageEntries(). Finally, make the cache maintenance
on the remapped regions themselves conditional on whether any memory
type attributes were modified.
Contributed-under: TianoCore Contribution Agreement 1.0
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Reviewed-by: Leif Lindholm <leif.lindholm@linaro.org>
Currently, any range passed to CpuArchProtocol::SetMemoryAttributes is
fully broken down into page mappings if the start or the size of the
region happens to be misaligned relative to the section size of 1 MB.
This is going to result in memory being wasted on second level page tables
when we enable strict memory permissions, given that we remap the entire
RAM space non-executable (modulo the code bits) when the CpuArchProtocol
is installed.
So refactor the code to iterate over the range in a way that ensures
that all naturally aligned section sized subregions are not broken up.
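A rough sketch of the iteration described above, with helper parameter
lists simplified and SECTION_SIZE standing in for the 1 MB section size:

  while (Length > 0) {
    if ((BaseAddress % SECTION_SIZE) == 0 && Length >= SECTION_SIZE) {
      // naturally aligned, section sized chunk: keep the section mapping
      ChunkLength = Length - Length % SECTION_SIZE;
      UpdateSectionEntries (BaseAddress, ChunkLength, Attributes);
    } else {
      // misaligned head or tail: fall back to page mappings up to the next
      // section boundary (or the end of the region)
      ChunkLength = MIN (Length, SECTION_SIZE - (BaseAddress % SECTION_SIZE));
      UpdatePageEntries (BaseAddress, ChunkLength, Attributes);
    }
    BaseAddress += ChunkLength;
    Length      -= ChunkLength;
  }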
Contributed-under: TianoCore Contribution Agreement 1.0
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Reviewed-by: Leif Lindholm <leif.lindholm@linaro.org>
To prevent the initial MMU->GCD memory space map synchronization from
stripping permissions attributes [which we cannot use in the GCD memory
space map, unfortunately], implement the same approach as x86, and ignore
SetMemoryAttributes() calls during the time SyncCacheConfig() is in
progress. This is a horrible hack, but is currently the only way we can
implement strict permissions on arbitrary memory regions [as opposed to
PE/COFF text/data sections only]
Contributed-under: TianoCore Contribution Agreement 1.0
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Reviewed-by: Jiewen Yao <jiewen.yao@intel.com>
Reviewed-by: Leif Lindholm <leif.lindholm@linaro.org>
This removes the PCD PcdArmUncachedMemoryMask from ArmPkg, along with
any remaining references to it in various platform .DSC files. It is
no longer used now that we removed the virtual uncached pages protocol
and the associated DebugUncachedMemoryAllocationLib library instance.
Contributed-under: TianoCore Contribution Agreement 1.0
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Reviewed-by: Laszlo Ersek <lersek@redhat.com>
Reviewed-by: Leif Lindholm <leif.lindholm@linaro.org>
Virtual uncached pages are simply pages that are aliased using mismatched
attributes, which is not allowed by the ARM architecture. So remove the
protocol and its implementation.
Contributed-under: TianoCore Contribution Agreement 1.0
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Reviewed-by: Leif Lindholm <leif.lindholm@linaro.org>
The debug implementation of the UncachedMemoryAllocationLib library
class relies on the creation of an uncached alias of a memory range,
while keeping the original cached mapping, but with read-only attributes
to trap inadvertent write accesses.
This is not a terribly good idea, given that the ARM architecture does
not allow mismatched attributes, and so creating them deliberately is
not something we should encourage by doing it in reference code.
So remove the library, and replace all references to it with a reference
to the non-debug version (unless the platform does not require a resolution
for it in the first place, in which case all UncachedMemoryAllocationLib
references can be removed altogether).
Contributed-under: TianoCore Contribution Agreement 1.0
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Acked-by: Laszlo Ersek <lersek@redhat.com>
Reviewed-by: Leif Lindholm <leif.lindholm@linaro.org>
Enable the hardware stack alignment check, as mandated by the UEFI spec.
This ensures that the stack pointer is 16 byte aligned at each instance
where it is used as the base address in a load/store operation.
Contributed-under: TianoCore Contribution Agreement 1.0
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Reviewed-by: Leif Lindholm <leif.lindholm@linaro.org>
In preparation for enabling stack alignment checking, which is mandated
by the UEFI spec for AARCH64, add the code to manage this bit to ArmLib.
Contributed-under: TianoCore Contribution Agreement 1.0
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Reviewed-by: Leif Lindholm <leif.lindholm@linaro.org>
Stack and unstack the frame pointer according to the AAPCS in
AArch64AllDataCachesOperation ().
Contributed-under: TianoCore Contribution Agreement 1.0
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Reviewed-by: Leif Lindholm <leif.lindholm@linaro.org>
Since the new DXE page protection for PE/COFF images may invoke
EFI_CPU_ARCH_PROTOCOL.SetMemoryAttributes() with only permission
attributes set, add support for this in the AARCH64 MMU code.
Move the EFI_MEMORY_CACHETYPE_MASK macro to a shared location between
CpuDxe and ArmMmuLib so we don't have to introduce yet another
definition.
Contributed-under: TianoCore Contribution Agreement 1.0
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Reviewed-by: Leif Lindholm <leif.lindholm@linaro.org>
Currently, we have not implemented support on 32-bit ARM for managing
permission bits in the page tables. Since the new DXE page protection
for PE/COFF images may invoke EFI_CPU_ARCH_PROTOCOL.SetMemoryAttributes()
with only permission attributes set, let's simply ignore those for now.
Contributed-under: TianoCore Contribution Agreement 1.0
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Reviewed-by: Leif Lindholm <leif.lindholm@linaro.org>
The single user of EfiAttributeToArmAttribute () is the protocol
method EFI_CPU_ARCH_PROTOCOL.SetMemoryAttributes(), which uses the
return value to compare against the ARM attributes of an existing mapping,
to infer whether it is actually necessary to change anything, or whether
the requested update is redundant. This saves some cache and TLB
maintenance on 32-bit ARM systems that use uncached translation tables.
However, EFI_CPU_ARCH_PROTOCOL.SetMemoryAttributes() may be invoked with
only permission bits set, in which case the implied requested action is to
update the permissions of the region without modifying the cacheability
attributes. This is currently not possible, because
EfiAttributeToArmAttribute () ASSERT()s [on AArch64] on Attributes arguments
that lack a cacheability bit.
So let's simply return TT_ATTR_INDX_MASK (AArch64) or
TT_DESCRIPTOR_SECTION_TYPE_FAULT (ARM) in these cases (or'ed with the
appropriate permission bits). This way, the return value is equally
suitable for checking whether the attributes need to be modified, but
in a way that accommodates the use without a cacheability bit set.
Contributed-under: TianoCore Contribution Agreement 1.0
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Reviewed-by: Leif Lindholm <leif.lindholm@linaro.org>
The current ARM CpuDxe driver uses EFI_MEMORY_WP for write protection.
According to the UEFI spec, we should use EFI_MEMORY_RO for write
protection; EFI_MEMORY_WP is a cacheability attribute, not a memory
protection attribute.
Contributed-under: TianoCore Contribution Agreement 1.0
Signed-off-by: Jiewen Yao <jiewen.yao@intel.com>
Reviewed-by: Leif Lindholm <leif.lindholm@linaro.org>
Reviewed-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
PcdGenericWatchdogControlBase & PcdGenericWatchdogRefreshBase
are declared as UINT32 values in ArmPkg.dec, but for platforms
with addresses in the memory range above 4 GB this causes a build error:
  F000: Too large PCD value for datum type [UINT32]
  of PCD gArmTokenSpaceGuid.PcdGenericWatchdogControlBase
Contributed-under: TianoCore Contribution Agreement 1.0
Signed-off-by: Alexei Fedorov <alexei.fedorov@arm.com>
Signed-off-by: Evan Lloyd <evan.lloyd@arm.com>
Ref: https://bugzilla.tianocore.org/show_bug.cgi?id=361
Reviewed-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
This reverts commit d32702d2c2.
Using a pool allocation for the root translation table seemed like
a good idea at the time, but as it turns out, such allocations are
handled in a way that makes them unsuitable for this purpose: they
are backed by HOBs that don't remain in the same place during the
various PI phase changes, which means the address programmed into
the TTBR register is no longer valid, and may refer to memory that
is reported as available to the OS.
So switch back to using a page based allocation.
Contributed-under: TianoCore Contribution Agreement 1.0
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Acked-by: Laszlo Ersek <lersek@redhat.com>
Reviewed-by: Leif Lindholm <leif.lindholm@linaro.org>
The generic timer support libraries call the actual system register
accessor function via a single pair of functions ArmArchTimerReadReg()
and ArmArchTimerWriteReg(), which take an enum argument to identify
the register, and return output values by pointer reference.
Since these functions are never called with a non-immediate argument,
we can simply replace each invocation with the underlying system register
accessor instead. This is mostly functionally equivalent, with the
exception of the bounds check for the enum (which is pointless given the
fact that we never pass a variable), the check for the presence of the
architected timer (which only makes sense for ARMv7, but is highly unlikely
to vary between platforms that are similar enough to run the same firmware
image), and a check for enum values that refer to the HYP view of the timer,
which we never referred to anywhere in the code in the first place.
So get rid of the middle man, and update the ArmGenericTimerPhyCounterLib
and ArmGenericTimerVirtCounterLib implementations to call the system
register accessors directly.
Contributed-under: TianoCore Contribution Agreement 1.0
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Reviewed-by: Leif Lindholm <leif.lindholm@linaro.org>
Tested-by: Ryan Harkin <ryan.harkin@linaro.org>
Commit 0a99a65d2c ("fix incorrect device address of double buffer")
retained an explicit cast on the variable "Buffer" which became
incorrect with the other changes, leading to compilation failures
with some toolchains. Drop the cast.
Contributed-under: TianoCore Contribution Agreement 1.0
Reported-by: Sudeep Holla <sudeep.holla@arm.com>
Signed-off-by: Leif Lindholm <leif.lindholm@linaro.org>
Tested-by: Sudeep Holla <sudeep.holla@arm.com>
Reviewed-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Some devices, such as the Raspberry Pi3, have a fixed offset between memory
addresses as seen by the host and as seen by the other bus masters. So add
a new PCD that allows this fixed offset to be recorded, and to be used when
returning device addresses from the DmaLib mapping routines.
Contributed-under: TianoCore Contribution Agreement 1.0
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Tested-by: Ryan Harkin <ryan.harkin@linaro.org>
Reviewed-by: Leif Lindholm <leif.lindholm@linaro.org>
In preparation for adding support to ArmDmaLib for DMA bus masters whose
view of memory is offset by a constant compared to the CPU's view, clean
up some abuse of the device address.
The device address is not defined in terms of the CPU's address space,
and so it should not be used in CopyMem () or cache maintenance operations
that require a valid mapping. This not only applies to the above use case,
but also to the DebugUncachedMemoryAllocationLib that unmaps the
primary, cached mapping of an allocation, and returns a host address
which is an uncached alias offset by a constant.
Since we should never access the device address from the CPU, there is
no need to record it in the MAPINFO struct. Instead, record the buffer
address in case of double buffering, since we do need to copy the contents
(in case of a bus master write) and free the buffer (in all cases) when
DmaUnmap() is called.
Contributed-under: TianoCore Contribution Agreement 1.0
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Tested-by: Ryan Harkin <ryan.harkin@linaro.org>
Reviewed-by: Leif Lindholm <leif.lindholm@linaro.org>
If double buffering is not required in DmaMap(), the returned device
address is passed through ConvertToPhysicalAddress () to convert the
host address (which in case of DebugUncachedMemoryAllocationLib is not
1:1 mapped) to a physical address, which is what a device expects in
order to perform DMA.
By the same reasoning, a double buffer allocated using DmaAllocateBuffer ()
should be converted in the same way, considering that the buffer is allocated
using UncachedAllocatePages (), to which the above equally applies.
So add the missing ConvertToPhysicalAddress () invocation.
Contributed-under: TianoCore Contribution Agreement 1.0
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Tested-by: Ryan Harkin <ryan.harkin@linaro.org>
Reviewed-by: Leif Lindholm <leif.lindholm@linaro.org>
Instead of depending on ArmLib to retrieve the CWG directly, use
the DMA buffer alignment exposed by the CPU arch protocol. This
removes our dependency on ArmLib, which makes the library a bit
more architecture independent.
While we're in there, rename gCpu to mCpu to better reflect its
local scope, and reflow some lines that we're modifying anyway.
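For illustration, the constructor only needs something along these lines
(variable names other than mCpu are ours):

  Status = gBS->LocateProtocol (&gEfiCpuArchProtocolGuid, NULL, (VOID **)&mCpu);
  ASSERT_EFI_ERROR (Status);

and later uses of the CWG become uses of the protocol field, e.g.

  AllocationSize = ALIGN_VALUE (NumberOfBytes, mCpu->DmaBufferAlignment);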
Contributed-under: TianoCore Contribution Agreement 1.0
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Tested-by: Ryan Harkin <ryan.harkin@linaro.org>
Reviewed-by: Leif Lindholm <leif.lindholm@linaro.org>
Translation table walks are always cache coherent on ARMv8-A, so cache
maintenance on page tables is never needed. Since there is a risk of
loss of coherency when using mismatched attributes, and given that memory
is mapped cacheable except for extraordinary cases (such as non-coherent
DMA), restrict the page table walker to performing cacheable accesses to
the translation tables.
For DEBUG builds, retain some of the logic so that we can double check
that the memory holding the root translation table is indeed located in
memory that is mapped cacheable.
Contributed-under: TianoCore Contribution Agreement 1.0
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Reviewed-by: Leif Lindholm <leif.lindholm@linaro.org>
The LinuxLoader application boots Linux in a way that prevents the OS
from accessing UEFI runtime services. Since we have better ways now
of invoking the kernel (via GRUB, or directly via the kernel's UEFI
stub), remove the obsolete LinuxLoader so that people will no longer
mistake it for a suitable reference of how to invoke the OS from UEFI.
Contributed-under: TianoCore Contribution Agreement 1.0
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Tested-by: Ryan Harkin <ryan.harkin@linaro.org>
Reviewed-by: Ryan Harkin <ryan.harkin@linaro.org>
The DmaBufferAlignment currently defaults to 4, which is dangerously
small and may result in lost data on platforms that perform non-coherent
DMA. So instead, take the CWG value from the cache info registers.
Contributed-under: TianoCore Contribution Agreement 1.0
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Reviewed-by: Leif Lindholm <leif.lindholm@linaro.org>
This is ancient cruft that is no longer used, so remove it.
Contributed-under: TianoCore Contribution Agreement 1.0
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Reviewed-by: Leif Lindholm <leif.lindholm@linaro.org>
The GCC ARM builds have access to ADRL/LDRL macros that emit relative
symbol references, i.e., references that do not require fixing up at
load time (or FV generation time for XIP modules)
Implement equivalent functionality for RVCT: note that this does not
use movw/movt pairs, but the more compatible add/add/add or add/add/ldr
sequences (which Clang does not support, unfortunately, hence the use
of movw/movt for the GCC toolchain family)
Contributed-under: TianoCore Contribution Agreement 1.0
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Reviewed-by: Leif Lindholm <leif.lindholm@linaro.org>
Define DISABLE_NEW_DEPRECATED_INTERFACES on the compiler command line by
default, to prevent deprecated interfaces from being used in core EDK2
code.
Bug: https://bugzilla.tianocore.org/show_bug.cgi?id=164
Contributed-under: TianoCore Contribution Agreement 1.0
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Acked-by: Laszlo Ersek <lersek@redhat.com>
Tested-by: Ryan Harkin <ryan.harkin@linaro.org>
Reviewed-by: Leif Lindholm <leif.lindholm@linaro.org>
Drop the include of AsmMacroIoLib.h, which contains GCC preprocessor macros
that RVCT does not use or require, given it has its own AsmMacroIoLib.inc
Contributed-under: TianoCore Contribution Agreement 1.0
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Tested-by: Ryan Harkin <ryan.harkin@linaro.org>
Reviewed-by: Leif Lindholm <leif.lindholm@linaro.org>
ArmPkg.dsc was a bit out of date, and some modules added over the past
years had not been added to its [Components] section yet.
Contributed-under: TianoCore Contribution Agreement 1.0
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Tested-by: Ryan Harkin <ryan.harkin@linaro.org>
Reviewed-by: Leif Lindholm <leif.lindholm@linaro.org>
This missing dependency has gone unnoticed until now, but it is breaking
the Omap35xxPkg.dsc build.
Contributed-under: TianoCore Contribution Agreement 1.0
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Reviewed-by: Leif Lindholm <leif.lindholm@linaro.org>
AsciiStrCat() is deprecated / disabled under the
DISABLE_NEW_DEPRECATED_INTERFACES feature test macro.
The caller of CpsrString() is required to pass in "ReturnStr" with 32
CHAR8 elements. (DefaultExceptionHandler() complies with this.) "Str" is
used to build "ReturnStr" gradually. Just before calling AsciiStrCat(),
"Str" points to the then-terminating NUL character in "ReturnStr".
The difference (Str - ReturnStr) gives the number of non-NUL characters
we've written thus far, hence (32 - (Str - ReturnStr)) yields the number
of remaining bytes in ReturnStr, including the ultimately terminating NUL
character.
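A minimal sketch of the resulting call (the Source argument name is
illustrative):

  // DestMax counts the space remaining in ReturnStr, including the final NUL
  AsciiStrCatS (Str, 32 - (Str - ReturnStr), Source);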
Cc: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Cc: Leif Lindholm <leif.lindholm@linaro.org>
Cc: Michael Zimmermann <sigmaepsilon92@gmail.com>
Ref: https://bugzilla.tianocore.org/show_bug.cgi?id=164
Ref: https://bugzilla.tianocore.org/show_bug.cgi?id=165
Contributed-under: TianoCore Contribution Agreement 1.0
Signed-off-by: Laszlo Ersek <lersek@redhat.com>
Reviewed-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Reviewed-by: Jordan Justen <jordan.l.justen@intel.com>
AsciiStrCat() is deprecated / disabled under the
DISABLE_NEW_DEPRECATED_INTERFACES feature test macro.
The "Str" variable serves no particular purpose in the MRegList() and
ThumbMRegList() functions; replace it with the pointed-to "mMregListStr" /
"mThumbMregListStr" global variable (as appropriate), so that the new
AsciiStrCatS() calls are as clear as possible.
Cc: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Cc: Leif Lindholm <leif.lindholm@linaro.org>
Cc: Michael Zimmermann <sigmaepsilon92@gmail.com>
Ref: https://bugzilla.tianocore.org/show_bug.cgi?id=164
Ref: https://bugzilla.tianocore.org/show_bug.cgi?id=165
Contributed-under: TianoCore Contribution Agreement 1.0
Signed-off-by: Laszlo Ersek <lersek@redhat.com>
Reviewed-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Reviewed-by: Jordan Justen <jordan.l.justen@intel.com>
All users have moved to the generic or accelerated versions in MdePkg,
so remove the obsolete BaseMemoryLibStm.
Contributed-under: TianoCore Contribution Agreement 1.0
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Reviewed-by: Leif Lindholm <leif.lindholm@linaro.org>
During MMU initialization in CpuDxe, any bits set in 'NextSectionAttributes'
for a page table are garbage: they were set from bits that are actually
part of the page table address. Clear them to zero so that
SyncCacheConfigPage() uses the page attributes instead of trying to
convert the (bogus) section attributes into page attributes.
Contributed-under: TianoCore Contribution Agreement 1.0
Signed-off-by: Kurt Kennett <kurt.kennett@microsoft.com>
Reviewed-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Without an explicit .align directive, the Clang assembler defaults to
no alignment, which may result in instructions appearing misaligned in
the final executable. So use word alignment in all cases.
Contributed-under: TianoCore Contribution Agreement 1.0
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
As reported by Eugene, the practice of sizing the address space in the
virtual memory system based on the maximum address in the table passed
to ArmConfigureMmu() is problematic, since it fails to take into account
the fact that the GCD memory space may be extended at a later time, both
for memory and for MMIO. So instead, choose the VA size identical to the
GCD memory map size, which is based on PcdPrePiCpuMemorySize on ARM
systems.
Reported-by: Eugene Cohen <eugene@hp.com>
Contributed-under: TianoCore Contribution Agreement 1.0
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Reviewed-by: Eugene Cohen <eugene@hp.com>
Reviewed-by: Leif Lindholm <leif.lindholm@linaro.org>
Currently, we allocate a full page for the root translation table, even
if the configured translation only requires two entries (16 bytes) for
the root level, which happens to be the case for a 40 bit VA. Likewise,
for a 36-bit VA space, the root table only needs 16 entries of 8 bytes
each, adding up to 128 bytes.
So switch to a pool allocation for the root table if we can, but take into
account that the architecture requires it to be naturally aligned to its
size, i.e., a 64 byte table requires 64 byte alignment, whereas pool
allocations in general are only guaranteed to be aligned to 8 bytes.
Contributed-under: TianoCore Contribution Agreement 1.0
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Reviewed-by: Leif Lindholm <leif.lindholm@linaro.org>
In commit 7d189f99d8 ("ArmPkg/Mmu: Fix bug of aligning new allocated
page table"), we fixed a flaw in the logic regarding alignment of newly
allocated translation table pages. However, we all failed to spot that
aligning page based allocations to page size is rather pointless to
begin with, so simply allocate a single page each time we add new pages
to the translation tables.
Also, drop the unnecessary cast.
Contributed-under: TianoCore Contribution Agreement 1.0
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Reviewed-by: Leif Lindholm <leif.lindholm@linaro.org>
The relations between T0SZ, the number of translation levels and the
size/alignment of the root table can be expressed in simple arithmetic
expressions, so get rid of the lookup table.
Note that this disregards the fact that the maximum value of T0SZ is
39, not 42 (as one would expect for the smallest VA size using 2 levels),
but since these values correspond to VA sizes of 32 MB and 4 MB,
respectively, neither of which is sufficient to run UEFI, we can safely
ignore the
distinction.
Contributed-under: TianoCore Contribution Agreement 1.0
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Reviewed-by: Leif Lindholm <leif.lindholm@linaro.org>
The ArmGicLib API function GicGetCpuRedistributorBase () declares
GicCpuRedistributorBase to iterate over the redistributors of all
CPUs, but then inadvertently advances GicRedistributorBase instead.
Reported-by: "Oliyil Kunnil, Vishal" <vishalo@qti.qualcomm.com>
Contributed-under: TianoCore Contribution Agreement 1.0
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Reviewed-by: Leif Lindholm <leif.lindholm@linaro.org>
As reported by Vishal, the new backtrace output would be more useful if
it did not contain the full absolute path of each module in the list.
So strip off everything up to the last forward slash or backslash in the
string.
Example output:
IRQ Exception at 0x000000005EF110E0
DxeCore.dll loaded at 0x000000005EEED000
called from DxeCore.dll (0x000000005EF121F0) loaded at 0x000000005EEED000
called from DxeCore.dll (0x000000005EF1289C) loaded at 0x000000005EEED000
called from DxeCore.dll (0x000000005EEFB6B4) loaded at 0x000000005EEED000
called from DxeCore.dll (0x000000005EEFAA44) loaded at 0x000000005EEED000
called from DxeCore.dll (0x000000005EEFB450) loaded at 0x000000005EEED000
called from DxeCore.dll (0x000000005EEF938C) loaded at 0x000000005EEED000
called from DxeCore.dll (0x000000005EEF8D04) loaded at 0x000000005EEED000
called from DxeCore.dll (0x000000005EEFA8E8) loaded at 0x000000005EEED000
called from DxeCore.dll (0x000000005EEF3C14) loaded at 0x000000005EEED000
called from DxeCore.dll (0x000000005EEF3E48) loaded at 0x000000005EEED000
called from DxeCore.dll (0x000000005EF0C838) loaded at 0x000000005EEED000
called from DxeCore.dll (0x000000005EEEF70C) loaded at 0x000000005EEED000
called from DxeCore.dll (0x000000005EEEE93C) loaded at 0x000000005EEED000
called from DxeCore.dll (0x000000005EEEE024) loaded at 0x000000005EEED000
Suggested-by: "Oliyil Kunnil, Vishal" <vishalo@qti.qualcomm.com>
Contributed-under: TianoCore Contribution Agreement 1.0
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Reviewed-by: Leif Lindholm <leif.lindholm@linaro.org>
For historical reasons, the files under ArmLib are split up into 'common'
files under Common/, containing common C files as well as AArch64 and Arm
specific asm files, and ArmV7 and AArch64 files under ArmV7/ and AArch64/,
respectively. This presumably dates back to the time when ArmLib supported
different revisions of the 32-bit architecture (i.e., pre-V7)
Since the PI spec requires V7 or later, we can simplify this to Arm/ and
AArch64, which aligns ArmLib with the majority of other modules that carry
ARM or AArch64 specific code.
So move the files around so that shared files live at the same level as
ArmBaseLib.inf, and ARM/AArch64 specific files live in Arm/ or AArch64/,
respectively.
Contributed-under: TianoCore Contribution Agreement 1.0
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Reviewed-by: Leif Lindholm <leif.lindholm@linaro.org>
The ArmBaseLib timer code does not depend on MemoryAllocationLib at
all, so remove the #includes referring to it.
Contributed-under: TianoCore Contribution Agreement 1.0
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Reviewed-by: Leif Lindholm <leif.lindholm@linaro.org>
This removes the following ArmLib implementations, which were, apart from
the fact that they targeted either ARM or AARCH64, fully identical:
ArmPkg/Library/ArmLib/AArch64/AArch64Lib.inf
ArmPkg/Library/ArmLib/AArch64/AArch64LibPei.inf
ArmPkg/Library/ArmLib/AArch64/AArch64LibPrePi.inf
ArmPkg/Library/ArmLib/AArch64/AArch64LibSec.inf
ArmPkg/Library/ArmLib/ArmV7/ArmV7Lib.inf
ArmPkg/Library/ArmLib/ArmV7/ArmV7LibPrePi.inf
ArmPkg/Library/ArmLib/ArmV7/ArmV7LibSec.inf
Only ArmBaseLib remains, which can fulfil the dependencies upon each of
the listed flavors.
Contributed-under: TianoCore Contribution Agreement 1.0
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Reviewed-by: Leif Lindholm <leif.lindholm@linaro.org>
Introduce a new ArmLib version ArmBaseLib, which encapsulates the ARM
version ArmV7Lib and the AArch64 version AArch64Lib.
Contributed-under: TianoCore Contribution Agreement 1.0
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Reviewed-by: Leif Lindholm <leif.lindholm@linaro.org>
Remove the NULL instance of ArmLib: it is not currently used, and its
usefulness is dubious.
Contributed-under: TianoCore Contribution Agreement 1.0
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Reviewed-by: Leif Lindholm <leif.lindholm@linaro.org>
According to the ACPI 6.0/6.1 spec, the physical base address of GICC,
GICD, GICR and GIC ITS is 64-bit. So change the type of the various GIC
base address PCDs to 64-bit, and fix up all users.
Contributed-under: TianoCore Contribution Agreement 1.0
Cc: Leif Lindholm <leif.lindholm@linaro.org>
Signed-off-by: Dennis Chen <dennis.chen@arm.com>
Reviewed-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
When dumping the CPU state after an unhandled fault, walk the stack
frames and decode the return addresses so we can show a minimal
backtrace. Unfortunately, we do not have sufficient information to
show the function names, but at least we can see the modules and the
return addresses inside the modules.
Contributed-under: TianoCore Contribution Agreement 1.0
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Reviewed-by: Leif Lindholm <leif.lindholm@linaro.org>
Tested-by: Leif Lindholm <leif.lindholm@linaro.org>
Clang does not like separate definitions for the __alias__ and the
__weak__ attributes, so merge the definitions into one.
Contributed-under: TianoCore Contribution Agreement 1.0
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Reviewed-by: Leif Lindholm <leif.lindholm@linaro.org>
After the recent update of CompilerIntrinsicsLib, our memset() is no
longer emitted as a weak symbol. On ARM, this may cause problems when
combining this library with another library that supplies memset() [e.g.,
CryptoPkg/IntrinsicLib], due to the fact that the object also supplies
the __aeabi_memXXX entry points, which can only be satisfied by this
object. So make our memset() weak again, to let the other implementation
take precedence.
Contributed-under: TianoCore Contribution Agreement 1.0
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Reviewed-by: Leif Lindholm <leif.lindholm@linaro.org>
BaseMemoryLib has recently been extended with an API function
IsZeroBuffer(), so copy the default implementation into BaseMemoryLibStm
as well.
Contributed-under: TianoCore Contribution Agreement 1.0
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Reviewed-by: Leif Lindholm <leif.lindholm@linaro.org>
BaseMemoryLib has recently been extended with an API function
IsZeroGuid(), so copy the default implementation into BaseMemoryLibStm
as well.
Contributed-under: TianoCore Contribution Agreement 1.0
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Reviewed-by: Leif Lindholm <leif.lindholm@linaro.org>
The BaseMemoryLibVstm implementation of BaseMemoryLib is ARM only, uses
the NEON register file despite the fact that the UEFI spec does not allow
it, and is currently not used anywhere. So remove it.
Contributed-under: TianoCore Contribution Agreement 1.0
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Reviewed-by: Leif Lindholm <leif.lindholm@linaro.org>
This replaces the various implementations of memset and memcpy,
including the ARM RTABI ones (__aeabi_mem[set|clr]_[|4|8]) with
a single C implementation for each. The ones we have are either not
very sophisticated (ARM), or they are too sophisticated (memcpy() on
AARCH64, which may perform unaligned accesses) or already coded in C
(memset on AArch64).
The Tianocore codebase mandates the explicit use of its SetMem() and
CopyMem() equivalents, of which various implementations exist for use
in different contexts (PEI, DXE). Few compiler generated references to
these functions should remain, and so our implementations in this BASE
library should be small and usable with the MMU off.
So replace them with a simple C implementation that builds correctly
on GCC/AARCH64, CLANG/AARCH64, GCC/ARM, CLANG/ARM and RVCT/ARM.
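As an indication of the intended level of sophistication, a memset () along
the following lines suffices (sketch only; the actual file also has to
provide the __aeabi_* entry points, and size_t is typedef'ed from
__SIZE_TYPE__ in a freestanding build):

  typedef __SIZE_TYPE__ size_t;

  void *
  memset (void *s, int c, size_t n)
  {
    unsigned char *p = s;

    while (n--) {
      *p++ = (unsigned char)c;
    }
    return s;
  }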
Contributed-under: TianoCore Contribution Agreement 1.0
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Reviewed-by: Leif Lindholm <leif.lindholm@linaro.org>
Annotate functions with ASM_FUNC() so that they are emitted into
separate sections.
Contributed-under: TianoCore Contribution Agreement 1.0
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Reviewed-by: Leif Lindholm <leif.lindholm@linaro.org>
Annotate functions with ASM_FUNC() so that they are emitted into
separate sections. Note that in some cases, various entry points
refer to different parts of the same routine, so in those cases,
the files have been left untouched.
Contributed-under: TianoCore Contribution Agreement 1.0
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Reviewed-by: Leif Lindholm <leif.lindholm@linaro.org>
Annotate functions with ASM_FUNC() so that they are emitted into
separate sections.
Contributed-under: TianoCore Contribution Agreement 1.0
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Reviewed-by: Leif Lindholm <leif.lindholm@linaro.org>
Annotate functions with ASM_FUNC() so that they are emitted into
separate sections.
Contributed-under: TianoCore Contribution Agreement 1.0
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Reviewed-by: Leif Lindholm <leif.lindholm@linaro.org>
Annotate functions with ASM_FUNC() so that they are emitted into
separate sections.
Contributed-under: TianoCore Contribution Agreement 1.0
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Reviewed-by: Leif Lindholm <leif.lindholm@linaro.org>
Annotate functions with ASM_FUNC() so that they are emitted into
separate sections.
Contributed-under: TianoCore Contribution Agreement 1.0
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Reviewed-by: Leif Lindholm <leif.lindholm@linaro.org>
Annotate functions with ASM_FUNC() so that they are emitted into
separate sections.
Contributed-under: TianoCore Contribution Agreement 1.0
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Reviewed-by: Leif Lindholm <leif.lindholm@linaro.org>
Annotate functions with ASM_FUNC() so that they are emitted into
separate sections.
Contributed-under: TianoCore Contribution Agreement 1.0
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Reviewed-by: Leif Lindholm <leif.lindholm@linaro.org>
Annotate functions with ASM_FUNC() so that they are emitted into
separate sections.
Contributed-under: TianoCore Contribution Agreement 1.0
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Reviewed-by: Leif Lindholm <leif.lindholm@linaro.org>
Annotate functions with ASM_FUNC() so that they are emitted into
separate sections.
Contributed-under: TianoCore Contribution Agreement 1.0
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Reviewed-by: Leif Lindholm <leif.lindholm@linaro.org>
The C language is powerful enough to implement a function that does
absolutely nothing, so there is no need to resort to implementations
in assembler for various toolchains/architectures.
Contributed-under: TianoCore Contribution Agreement 1.0
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Reviewed-by: Leif Lindholm <leif.lindholm@linaro.org>
This introduces the ASM_FUNC() macro to annotate function entry points
in assembler files. This allows us to add additional metadata that
marks a function entry point as a function, and allows us to emit
a .section directive for each function, which makes it possible for
the linker to drop unreferenced code.
In addition, introduce a couple of utility macros that we can use to
clean up the code.
Contributed-under: TianoCore Contribution Agreement 1.0
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Reviewed-by: Eugene Cohen <eugene@hp.com>
Reviewed-by: Leif Lindholm <leif.lindholm@linaro.org>
This removes the various Mmio ASM macros that are not used anywhere in
the code, and removes some variants of LoadConstant... () that are not
used anywhere either.
Note that these MmioXxx() implementations are unrelated to the C versions
defined in MdePkg. These are strictly intended for use in assembler, and
no such uses remain.
Contributed-under: TianoCore Contribution Agreement 1.0
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Reviewed-by: Leif Lindholm <leif.lindholm@linaro.org>
The function ArmReplaceLiveTranslationEntry() has been moved to
ArmMmuLib, so remove the old implementation from ArmLib.
Note that the new implementation was not exported from the object file,
and so references to it were satisfied by the old version residing in
ArmLib. Since we are removing that one, we need to export the new one
at the same time to prevent the linker from bailing with undefined
reference errors.
Contributed-under: TianoCore Contribution Agreement 1.0
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Reviewed-by: Leif Lindholm <leif.lindholm@linaro.org>
This commit fixes a bug in the GIC v2 and v3 drivers where the GICC_EOIR
(End Of Interrupt Register) is written twice for a single interrupt.
GicV(2|3)IrqInterruptHandler() calls the Interrupt Handler and then
GicV(2|3)EndOfInterrupt() on exit:
  InterruptHandler = gRegisteredInterruptHandlers[GicInterrupt];
  if (InterruptHandler != NULL) {
    // Call the registered interrupt handler.
    InterruptHandler (GicInterrupt, SystemContext);
  } else {
    DEBUG ((EFI_D_ERROR, "Spurious GIC interrupt: 0x%x\n", GicInterrupt));
  }
  GicV2EndOfInterrupt (&gHardwareInterruptV2Protocol, GicInterrupt);
although gInterrupt->EndOfInterrupt() can be expected to have already
been called by InterruptHandler() [which is the case for the primary
in-tree handler in TimerDxe]
The fix moves the EndOfInterrupt() call inside the else case for
unregistered/spurious interrupts. This removes a potential race
condition that might have lost interrupts.
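With the fix applied, the flow becomes roughly the following (GIC v2 shown;
the v3 path is analogous):

  InterruptHandler = gRegisteredInterruptHandlers[GicInterrupt];
  if (InterruptHandler != NULL) {
    // Call the registered interrupt handler; it is expected to signal
    // end-of-interrupt itself via gInterrupt->EndOfInterrupt().
    InterruptHandler (GicInterrupt, SystemContext);
  } else {
    DEBUG ((EFI_D_ERROR, "Spurious GIC interrupt: 0x%x\n", GicInterrupt));
    GicV2EndOfInterrupt (&gHardwareInterruptV2Protocol, GicInterrupt);
  }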
Contributed-under: TianoCore Contribution Agreement 1.0
Signed-off-by: Alexei Fedorov <alexei.fedorov@arm.com>
Signed-off-by: Evan Lloyd <evan.lloyd@arm.com>
Reviewed-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
The ARM compiler intrinsics library defines __aeabi_memset() and
memset() in the same object, which means that both will be pulled
in if either is referenced.
The IntrinsicLib in CryptoPkg defines its own, preferred memset(),
which may clash with our memset(). So make our version weak.
Contributed-under: TianoCore Contribution Agreement 1.0
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Reviewed-by: Leif Lindholm <leif.lindholm@linaro.org>
Building ArmSoftFloatLib with LTO results in errors like
.../bin/ld: softfloat.obj: plugin needed to handle lto object
.../bin/ld: __aeabi_dcmpge.obj: plugin needed to handle lto object
.../bin/ld: __aeabi_dcmplt.obj: plugin needed to handle lto object
.../bin/ld: internal error ../../ld/ldlang.c 6299
This library is only linked by OpensslLib at the moment, and only
marginally used at runtime, so just disable LTO for it.
Contributed-under: TianoCore Contribution Agreement 1.0
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Reviewed-by: Leif Lindholm <leif.lindholm@linaro.org>
GCC in LTO mode interoperates poorly with non-standard libraries that
provide implementations of compiler intrinsics such as memcpy/memset
or the stack protector entry points. Such libraries need to be built
in non-LTO mode, and then referenced explicitly on the linker command
line using a -plugin-opt=-pass-through=-lxxx linker option.
However, if these intrinsics are also referenced directly, the LTO
version of the code will be pulled in, and will happily satisfy all
other references to the same symbol.
So add a pair of glue libraries, for ARM and AARCH64, that reference
the known intrinsics. Since the binaries live under ArmPkg directly,
we can reference them in tools_def.txt. Under LD garbage collection,
the object itself will be pruned, and so will the intrinsics that end
up unused by the module.
Contributed-under: TianoCore Contribution Agreement 1.0
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Acked-by: Jordan Justen <jordan.l.justen@intel.com>
Reviewed-by: Leif Lindholm <leif.lindholm@linaro.org>
ArmLib defines a prototype for the ArmReadSctlr() function, but the
AArch64 implementation is missing. So add it.
Contributed-under: TianoCore Contribution Agreement 1.0
Signed-off-by: John Powell <john.powell@arm.com>
Signed-off-by: Supreeth Venkatesh <supreeth.venkatesh@arm.com>
[ardb: update commit log]
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Add the Cortex-A72 CPU type which is used in JunoR2.
Contributed-under: TianoCore Contribution Agreement 1.0
Signed-off-by: Jeremy Linton <jeremy.linton@arm.com>
Reviewed-by: Leif Lindholm <leif.lindholm@linaro.org>
Unlike SGIs and PPIs, which are private to the CPU and are managed at
the redistributor level (which is also a per-CPU construct), shared
interrupts (SPIs) are shared between all CPUs, and therefore managed at
the distributor level (just as on GICv2).
Reported-by: Narinder Dhillon <ndhillonv2@gmail.com>
Contributed-under: TianoCore Contribution Agreement 1.0
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Reviewed-by: Leif Lindholm <leif.lindholm@linaro.org>
Commit fafb7e9c11 ("ArmPkg: correct TTBR1_EL1 settings in TCR_EL1")
introduced a symbolic constant TCR_TG1_4KB which resolves to (2 << 30),
and ORs it into the value to be written into TCR_EL1 (if executing at
EL1). Since the constant is implicitly typed as signed int, and has the
sign bit set, the promotion that occurs when casting to UINT64 results
in a TCR value that has bits [63:32] all set, which includes mostly
RES0 bits but also the TBIn, AS and IPS fields.
So explicitly redefine all TCR related constants as 'unsigned long'
types, using the UL suffix. To avoid confusion in the future, the
inappropriately named VTCR_EL23_xxx constants have the leading V
removed, and the actual VTCR_EL2 related constants are dropped, given
that we never configure stage 2 translation in UEFI.
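For illustration, the promotion issue boils down to the following (assuming
the fixed definition keeps the original macro name):

  // Before: (2 << 30) is a signed int with the sign bit set, so assigning
  // it to a UINT64 sign-extends to 0xFFFFFFFF80000000.
  #define TCR_TG1_4KB   (2 << 30)

  // After: with the UL suffix the constant is unsigned long, and the value
  // extends to 0x0000000080000000 as intended.
  #define TCR_TG1_4KB   (2UL << 30)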
Reported-by: Vishal Oliyil Kunnil <vishalo@qti.qualcomm.com>
Contributed-under: TianoCore Contribution Agreement 1.0
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Reviewed-by: Leif Lindholm <leif.lindholm@linaro.org>
Acked-by: Mark Rutland <mark.rutland@arm.com>
This introduces a special version of ArmMmuLib for PEIMs that takes care
only to perform cache maintenance on the live entry replacement routine
if the module is not executing in place. Not only is such cache maintenance
unnecessary in that case, it may be actively harmful on some systems that
fail to tolerate cache maintenance operations on NOR flash regions.
Contributed-under: TianoCore Contribution Agreement 1.0
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Reviewed-by: Leif Lindholm <leif.lindholm@linaro.org>
Switch all users of ArmLib that depend on the MMU routines to the new,
separate ArmMmuLib. This needs to occur in one go, since the MMU
routines are removed from ArmLib build at the same time, to prevent
conflicting symbols.
Contributed-under: TianoCore Contribution Agreement 1.0
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Acked-by: Laszlo Ersek <lersek@redhat.com>
Reviewed-by: Star Zeng <star.zeng@intel.com>
This base library encapsulates the MMU manipulation routines that have been
factored out of ArmLib. The functionality covers initial creation of the 1:1
mapping in the page tables, and remapping regions to change permissions or
cacheability attributes.
Contributed-under: TianoCore Contribution Agreement 1.0
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Reviewed-by: Leif Lindholm <leif.lindholm@linaro.org>
Introduce the library class ArmMmuLib, which encapsulates the functionality
to set up and modify page table entries.
Contributed-under: TianoCore Contribution Agreement 1.0
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Reviewed-by: Leif Lindholm <leif.lindholm@linaro.org>
SErrors (formerly called asynchronous aborts) are a distinct class of
exceptions that are not closely tied to the currently executing
instruction. Since execution may be able to proceed in such a condition,
this class of exception is masked by default, and software needs to unmask
it explicitly if it is prepared to handle such exceptions.
On DEBUG builds, we are well equipped to report the CPU context to the user
and it makes sense to report an SError as soon as it occurs rather than to
wait for the OS to take it when it unmasks them, especially since the current
arm64/Linux implementation simply panics in that case. So unmask them when
ArmCpuDxe loads.
Contributed-under: TianoCore Contribution Agreement 1.0
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Acked-by: Mark Rutland <mark.rutland@arm.com>
Reviewed-by: Leif Lindholm <leif.lindholm@linaro.org>
Putting DEBUG () code after an ASSERT (FALSE) statement is not very
useful, since the code will be unreachable on DEBUG builds and compiled
out on RELEASE builds. So move the ASSERT () statement after it.
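In other words (a hypothetical instance of the pattern):

  // Before: on DEBUG builds the ASSERT fires first and the message is
  // never printed; on RELEASE builds both statements are compiled out.
  ASSERT (FALSE);
  DEBUG ((EFI_D_ERROR, "Unexpected interrupt source %d\n", Source));

  // After: report the diagnostic, then assert.
  DEBUG ((EFI_D_ERROR, "Unexpected interrupt source %d\n", Source));
  ASSERT (FALSE);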
Contributed-under: TianoCore Contribution Agreement 1.0
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Acked-by: Mark Rutland <mark.rutland@arm.com>
Reviewed-by: Leif Lindholm <leif.lindholm@linaro.org>
Reassign all interrupts to non-secure Group-1 if the GIC has its DS
(Disable Security) bit set. In this case, it is safe to assume that we
own the GIC, and that no other firmware has performed any configuration
yet, which means it is up to us to reconfigure the interrupts so they
can be taken by the non-secure firmware.
Contributed-under: TianoCore Contribution Agreement 1.0
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Reviewed-by: Leif Lindholm <leif.lindholm@linaro.org>
On some platforms, performing cache maintenance on regions that are backed
by NOR flash results in SErrors. Since cache maintenance is unnecessary in
that case, create a PEIM specific version that only performs said cache
maintenance in its constructor if the module is shadowed in RAM. To avoid
performing the cache maintenance if the MMU code is not used to begin with,
check that explicitly in the constructor.
Contributed-under: TianoCore Contribution Agreement 1.0
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Tested-by: Heyi Guo <heyi.guo@linaro.org>
Reviewed-by: Leif Lindholm <leif.lindholm@linaro.org>
This implements the platform glue for the new generic BDS implementation.
It is based on the ArmVirtQemu version, with the QEMU references removed.
Contributed-under: TianoCore Contribution Agreement 1.0
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Reviewed-by: Leif Lindholm <leif.lindholm@linaro.org>
Instead of cleaning the data cache to the PoU by virtual address and
subsequently invalidating the entire I-cache, invalidate only the
range that we just cleaned. This way, we don't invalidate other
cachelines unnecessarily.
Contributed-under: TianoCore Contribution Agreement 1.0
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Reviewed-by: Leif Lindholm <leif.lindholm@linaro.org>
When we split a block entry into a table entry, the UXN/PXN/XN permission
attributes are inherited both by the new table entry and by the new block
entries at the next level down. Unlike the NS bit, which only affects the
next level of lookup, the XN table bits supersede the permissions of the
final translation, and setting the permissions at multiple levels is not
only redundant, it also prevents us from lifting XN restrictions on a
subregion of the original block entry by simply clearing the appropriate
bits at the lowest level.
So drop the code that sets the UXN/PXN/XN bits on the table entries.
Reported-by: "Oliyil Kunnil, Vishal" <vishalo@qti.qualcomm.com>
Contributed-under: TianoCore Contribution Agreement 1.0
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Reviewed-by: Leif Lindholm <leif.lindholm@linaro.org>
DmaMap () only allows uncached mappings to be used for creating consistent
mappings with operation type MapOperationBusMasterCommonBuffer. However,
if the buffer passed to DmaMap () happens to be aligned to the CWG, there
is no need for a bounce buffer, and we perform the cache maintenance
directly without ever checking if the memory attributes of the buffer
adhere to the API.
So add some debug code that asserts that the operation type and the memory
attributes are consistent.
Contributed-under: TianoCore Contribution Agreement 1.0
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Tested-by: Ryan Harkin <ryan.harkin@linaro.org>
Reviewed-by: Leif Lindholm <leif.lindholm@linaro.org>
In the DmaMap () operation, if the region to be mapped happens to be
aligned to the Cache Writeback Granule (CWG) (whose value is typically
64 or 128 bytes, and at most 2 KB), we remap the memory as uncached.
Since remapping memory occurs at page granularity, while the buffer and the
CWG may be much smaller, there is no telling what other memory we affect
by doing this, especially since the operation is not reverted in DmaUnmap().
So remove the remapping call.
Contributed-under: TianoCore Contribution Agreement 1.0
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Reviewed-by: Leif Lindholm <leif.lindholm@linaro.org>
DmaMap () operations of type MapOperationBusMasterCommonBuffer should
return a mapping that is coherent between the CPU and the device. For
this reason, the API only allows DmaMap () to be called with this operation
type if the memory to be mapped was allocated by DmaAllocateBuffer (),
which in this implementation guarantees the coherency by using uncached
mappings on the CPU side.
This means that, if we encounter a cached mapping in DmaMap () with this
operation type, the code is either broken, or someone is violating the
API, but simply proceeding with a double buffer makes no sense at all,
and can only cause problems.
So instead, actively reject this operation type for cached memory mappings.
Contributed-under: TianoCore Contribution Agreement 1.0
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Reviewed-by: Leif Lindholm <leif.lindholm@linaro.org>
Comparing a GCD attribute field directly against EFI_MEMORY_UC and
EFI_MEMORY_WT is incorrect, since it may have other bits set as well
which are not related to the cacheability of the region. So instead,
test explicitly against the flags EFI_MEMORY_WB and EFI_MEMORY_WT,
which must be set if the region may be mapped with cacheable attributes.
Contributed-under: TianoCore Contribution Agreement 1.0
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Reviewed-by: Leif Lindholm <leif.lindholm@linaro.org>
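A minimal sketch of the corrected test, assuming a caller that holds the GCD
attribute word of the region; the helper name is hypothetical:

  #include <Uefi.h>

  //
  // Testing equality against EFI_MEMORY_UC would fail whenever unrelated
  // attribute bits (e.g. EFI_MEMORY_RUNTIME) happen to be set as well, so
  // test the cacheability bits explicitly.
  //
  STATIC
  BOOLEAN
  RegionMayBeMappedCacheable (
    IN UINT64  GcdAttributes
    )
  {
    return (GcdAttributes & (EFI_MEMORY_WB | EFI_MEMORY_WT)) != 0;
  }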
We manage to use both an AND operation with 'gCacheAlignment - 1' and a
modulo operation with 'gCacheAlignment' in the same compound if statement.
Since gCacheAlignment is a global of which the compiler cannot guarantee
that it is a power of two, simply use the AND version in both cases.
Contributed-under: TianoCore Contribution Agreement 1.0
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Reviewed-by: Leif Lindholm <leif.lindholm@linaro.org>
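A minimal sketch of the resulting check, with hypothetical names; since
gCacheAlignment is a runtime value, the compiler would otherwise have to emit
a real division for the modulo variant:

  #include <Uefi.h>

  extern UINTN  gCacheAlignment;   // runtime value, not provably a power of two

  STATIC
  BOOLEAN
  IsCacheAligned (
    IN UINTN  Address,
    IN UINTN  Length
    )
  {
    //
    // Use the same AND-based test for both operands instead of mixing
    // '& (gCacheAlignment - 1)' and '% gCacheAlignment'.
    //
    return ((Address & (gCacheAlignment - 1)) == 0) &&
           ((Length  & (gCacheAlignment - 1)) == 0);
  }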
The allocation function UncachedAllocatePages () may return NULL, in
which case our implementation of DmaAllocateBuffer () should return
EFI_OUT_OF_RESOURCES rather than silently ignoring the NULL value and
returning EFI_SUCCESS.
Contributed-under: TianoCore Contribution Agreement 1.0
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Reviewed-by: Leif Lindholm <leif.lindholm@linaro.org>
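A simplified sketch of the error path; the real DmaAllocateBuffer () takes
more parameters, and UncachedAllocatePages () is assumed to come from
EmbeddedPkg's UncachedMemoryAllocationLib:

  #include <Uefi.h>
  #include <Library/UncachedMemoryAllocationLib.h>

  EFI_STATUS
  EFIAPI
  AllocateDmaPages (
    IN  UINTN  Pages,
    OUT VOID   **HostAddress
    )
  {
    VOID  *Allocation;

    Allocation = UncachedAllocatePages (Pages);
    if (Allocation == NULL) {
      return EFI_OUT_OF_RESOURCES;   // do not silently report success
    }

    *HostAddress = Allocation;
    return EFI_SUCCESS;
  }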
This adds a partial stack dump (256 bytes at either side of the stack
pointer) to the CPU state dumping routine that is invoked when taking an
unexpected exception. Since dereferencing the stack pointer may itself
fault, ensure that we don't enter the dumping routine recursively.
Contributed-under: TianoCore Contribution Agreement 1.0
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Acked-by: Leif Lindholm <leif.lindholm@linaro.org>
The default exception handler, which is essentially the one that is invoked
for unexpected exceptions, ends with an ASSERT (FALSE), to ensure that
execution halts after dumping the CPU state. However, ASSERTs are compiled
out in RELEASE builds, and since we simply return to wherever the ELR is
pointing, we will not make any progress in case of synchronous aborts, and
the same exception will be taken again immediately, resulting in the string
'Exception at 0x....' being printed over and over again.
So use an explicit deadloop instead.
Contributed-under: TianoCore Contribution Agreement 1.0
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Acked-by: Leif Lindholm <leif.lindholm@linaro.org>
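A minimal sketch of the tail of such a handler (the function name is
hypothetical); CpuDeadLoop () from BaseLib is not compiled out on RELEASE
builds, unlike ASSERT (FALSE):

  #include <Base.h>
  #include <Library/BaseLib.h>

  STATIC
  VOID
  HaltAfterUnexpectedException (
    VOID
    )
  {
    //
    // The CPU state has already been dumped at this point; hang explicitly
    // so that a synchronous abort cannot be retaken over and over again.
    //
    CpuDeadLoop ();
  }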
The CpuIo2 protocol is required by the generic PciHostBridgeDxe driver,
which relies on it to back its own I/O and MMIO operations.
Since ARM has no native I/O port equivalent, such accesses can only
originate from PCI drivers, and the PCI I/O space is translated to MMIO
in this case.
So we can implement this protocol using MMIO operations only, and take
the PCI I/O translation offset into account when performing I/O port
accesses.
Contributed-under: TianoCore Contribution Agreement 1.0
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Reviewed-by: Leif Lindholm <leif.lindholm@linaro.org>
The PCI related PCDs are not platform specific, and architectural
protocols such as CpuIo2 are based on PCI provided MMIO to IO
translation, so these PCDs belong in ArmPkg not ArmPlatformPkg.
NOTE: this *WILL* break some out-of-tree platforms, the fix is changing
all consumers of gArmPlatformTokenSpaceGuid.PcdPci* to
gArmTokenSpaceGuid.PcdPci*
Contributed-under: TianoCore Contribution Agreement 1.0
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Reviewed-by: Leif Lindholm <leif.lindholm@linaro.org>
mGicNumInterrupts is the total number of interrupts, so an interrupt
ID equal to mGicNumInterrupts is also invalid.
Contributed-under: TianoCore Contribution Agreement 1.0
Signed-off-by: Heyi Guo <heyi.guo@linaro.org>
Reviewed-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
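A minimal sketch of the corrected bounds check, with hypothetical names:

  #include <Uefi.h>

  extern UINTN  mGicNumInterrupts;   // total number of interrupts

  STATIC
  EFI_STATUS
  ValidateInterruptSource (
    IN UINTN  Source
    )
  {
    //
    // Interrupt IDs are 0-based, so an ID equal to mGicNumInterrupts is out
    // of range; a '>' comparison would incorrectly accept it.
    //
    if (Source >= mGicNumInterrupts) {
      return EFI_UNSUPPORTED;
    }

    return EFI_SUCCESS;
  }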
On ARM, manipulating live page tables is cumbersome since the architecture
mandates the use of break-before-make, i.e., replacing a block entry with
a table entry requires an intermediate step via an invalid entry, or TLB
conflicts may occur.
Since it is not generally feasible to decide in the page table manipulation
routines whether such an invalid entry will cause those routines themselves
to become unavailable, use a function that is callable with
the MMU off (i.e., a leaf function that does not access the stack) to
perform the change of a block entry into a table entry.
Note that the opposite should never occur, i.e., table entries are never
coalesced into block entries.
Contributed-under: TianoCore Contribution Agreement 1.0
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Acked-by: Mark Rutland <mark.rutland@arm.com>
The XN attribute is now set automatically if the region is declared
as device memory. However, the function ArmMemoryAttributeToPageAttribute
returns attributes for block and page descriptors, not for table
descriptors, so the TT_TABLE_*XN attributes do not actually take effect.
Use TT_*XN_MASK instead.
Contributed-under: TianoCore Contribution Agreement 1.0
Signed-off-by: Heyi Guo <heyi.guo@linaro.org>
Reviewed-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Some minor typographical problems were noticed during previous commits.
This change corrects those, and contains no functional modifications.
The changes are in comments, and one diagnostic message.
Contributed-under: TianoCore Contribution Agreement 1.0
Signed-off-by: Evan Lloyd <evan.lloyd@arm.com>
Reviewed-by: Ryan Harkin <ryan.harkin@linaro.org>
Reviewed-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
The TimerFreq variable in the TimerConstructor() is unused in RELEASE
builds since ASSERTs are then disabled.
The only use of the variable (in the ASSERT) is replaced by a direct
invocation of the function previously used to set it.
NOTE: The build tools suppress warnings about this using compiler options,
e.g. -Wno-unused-but-set-variable for the GCC toolchain or
--diag_suppress=550 for the RVCT toolchain.
Contributed-under: TianoCore Contribution Agreement 1.0
Signed-off-by: Evan Lloyd <evan.lloyd@arm.com>
Reviewed-by: Ryan Harkin <ryan.harkin@linaro.org>
Reviewed-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
FirmwarePerformanceDxe.c utilizes the Timer Library function
GetTimeInNanoSecond() which was not implemented by the ArmArchTimerLib.
Contributed-under: TianoCore Contribution Agreement 1.0
Signed-off-by: Evan Lloyd <evan.lloyd@arm.com>
Reviewed-by: Ryan Harkin <ryan.harkin@linaro.org>
Reviewed-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
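A sketch of the kind of tick-to-nanosecond conversion GetTimeInNanoSecond ()
performs, assuming a constant timer frequency in Hz; the function name and
exact arithmetic here are illustrative, not the patch itself:

  #include <Base.h>
  #include <Library/BaseLib.h>

  UINT64
  TicksToNanoSeconds (
    IN UINT64  Ticks,
    IN UINT32  FrequencyHz
    )
  {
    UINT64  NanoSeconds;
    UINT32  Remainder;

    //
    // Split the division into quotient and remainder so that multiplying by
    // 1,000,000,000 cannot overflow the 64-bit intermediate result.
    //
    NanoSeconds = MultU64x32 (
                    DivU64x32Remainder (Ticks, FrequencyHz, &Remainder),
                    1000000000u
                    );
    NanoSeconds += DivU64x32 (MultU64x32 (Remainder, 1000000000u), FrequencyHz);

    return NanoSeconds;
  }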
This refactors some timer code to define MultU64xN as a preprocessor
symbol rather than a function pointer, and to factor out the code that
obtains the timer frequency into GetPlatformTimerFreq ().
Contributed-under: TianoCore Contribution Agreement 1.0
Signed-off-by: Evan Lloyd <evan.lloyd@arm.com>
Reviewed-by: Ryan Harkin <ryan.harkin@linaro.org>
Contributed-under: TianoCore Contribution Agreement 1.0
[ard.biesheuvel: split off from 'add GetTimeInNanoSecond() to ArmArchTimerLib']
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
The function ArmClearMemoryRegionReadOnly() was supposed to undo the
effect of ArmSetMemoryRegionReadOnly(), but instead, it sets the permissions
to EL0-no access, EL1-read-only. Since the EL0 bit should be 1 to align
with EL2/3 (where the bit is SBO), use TT_AP_RW_RW instead, which makes the
entry read-write for EL0 when executing at EL1, and read-write for all other
levels.
Contributed-under: TianoCore Contribution Agreement 1.0
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Reviewed-by: Leif Lindholm <leif.lindholm@linaro.org>
This replaces the somewhat opaque preprocessor based stack/unstack macros
with open coded ldp/stp sequences to preserve the interrupted context
before handing over to the exception handler in C.
This removes various arithmetic operations on the stack pointer, and
reduces the exception return critical section to its minimum size (i.e.,
the bare minimum required to populate the ELR and SPSR registers and invoke
the eret).
Contributed-under: TianoCore Contribution Agreement 1.0
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Reviewed-by: Leif Lindholm <leif.lindholm@linaro.org>
Reviewed-by: Eugene Cohen <eugene@hp.com>
If we are using the vector table in place, there is no need to make an
indirect call to the common handler routine from the vector table entries,
so just use a straight branch instruction in that case.
Contributed-under: TianoCore Contribution Agreement 1.0
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Reviewed-by: Leif Lindholm <leif.lindholm@linaro.org>
Reviewed-by: Eugene Cohen <eugene@hp.com>
The global gArmRelocateVectorTable is a build time constant, but due to
its external linkage and lack of constness, the compiler does not see that.
So turn it into a static boolean, and at the same time, make the function
CopyExceptionHandlers() (which is only called if gArmRelocateVectorTable is
set) static as well, so that the compiler can eliminate it completely if
we are using the vector table in place.
Contributed-under: TianoCore Contribution Agreement 1.0
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Reviewed-by: Leif Lindholm <leif.lindholm@linaro.org>
Reviewed-by: Eugene Cohen <eugene@hp.com>
ESR and FAR are populated by the hardware upon exception entry, and
describe the exception, not the interrupted context. So there is no point
in restoring their values before returning from the exception.
Contributed-under: TianoCore Contribution Agreement 1.0
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Reviewed-by: Leif Lindholm <leif.lindholm@linaro.org>
Reviewed-by: Eugene Cohen <eugene@hp.com>
We have three code paths to stack/unstack the exception context, one for
each of EL3, EL2 and EL1. However, they all access the same copy of FPSR
so move that access to the common path.
Contributed-under: TianoCore Contribution Agreement 1.0
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Reviewed-by: Leif Lindholm <leif.lindholm@linaro.org>
Reviewed-by: Eugene Cohen <eugene@hp.com>
Unlike the AArch32 vector table, which has room for a single instruction
for each exception type, the AArch64 exception table has 128 byte slots,
which can easily hold the shared prologues that are emitted out of line.
So refactor this code into a single macro, and expand it into each vector
table slot. Since the address of the common handler entry point is no
longer patched in by the C code, we can just emit the literal into each
vector entry directly.
Contributed-under: TianoCore Contribution Agreement 1.0
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Reviewed-by: Leif Lindholm <leif.lindholm@linaro.org>
Reviewed-by: Eugene Cohen <eugene@hp.com>
The macros EL1_OR_EL2() and EL1_OR_EL2_OR_EL3() allow conditional execution
of assembly sequences based on the current exception level, by jumping to
caller supplied labels 1f, 2f or 3f. However, the jump to 1f is actually
a fallthrough, which means the EL1 code needs to follow right after the
macro invocation, and the 1f label is ignored.
So let's fix this by making all jumps explicit.
Contributed-under: TianoCore Contribution Agreement 1.0
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Reviewed-by: Leif Lindholm <leif.lindholm@linaro.org>
Reviewed-by: Eugene Cohen <eugene@hp.com>
Use the new ARM/AArch64 implementation of the base
CpuExceptionHandlerLib library from CpuDxe to centralize
exception handling.
Contributed-under: TianoCore Contribution Agreement 1.0
Signed-off-by: Eugene Cohen <eugene@hp.com>
Reviewed-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Introduce ARM and AArch64 instances of the CpuExceptionHandlerLib which
provides exception handling and registration of handlers regardless of
execution phase.
Two variants of the ArmExceptionLib are provided: one where exception
handlers reside within the module (meeting appropriate architectural
alignment requirements for the vector table) and another one that will
relocate a copy of the exception handlers to an address specified by
PcdCpuVectorBaseAddress. The ArmRelocateExceptionLib is intended for use
in cases where ArmExceptionLib is too large for the application
(uncompressed XIP images) as driven by the vector table alignment padding.
The AArch64 build of this library supports execution at EL1, EL2, and EL3
exception levels.
Tested on ARM, and AArch64 with SEC, DXE Core, and CpuDxe modules.
Contributed-under: TianoCore Contribution Agreement 1.0
Signed-off-by: Eugene Cohen <eugene@hp.com>
Reviewed-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Add ArmReadHcr() to ArmLib to enable read-modify-write of the HCR system
register.
Contributed-under: TianoCore Contribution Agreement 1.0
Signed-off-by: Eugene Cohen <eugene@hp.com>
Reviewed-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Architecturally, the TTBCR register value is undefined at reset for
Non-Secure.
On some platforms the reset value for TTBCR is not zero and
this causes a data abort exception once the MMU is enabled.
This patch configures the TTBCR register to enable translation table
walk using TTBR0.
Contributed-under: TianoCore Contribution Agreement 1.0
Signed-off-by: Evan Lloyd <evan.lloyd@arm.com>
Update the CpuDxe driver to remove an assumption that it is the only
component modifying interrupt state since this can be done through BaseLib
as well. Instead of using a global variable for last interrupt state we
now check the current PSTATE value directly.
Contributed-under: TianoCore Contribution Agreement 1.0
Signed-off-by: Eugene Cohen <eugene@hp.com>
Reviewed-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
The AArch64 DAIF bits are different for reading (mrs) versus writing
(msr). The bitmask definitions assumed they were the same, causing
incorrect results when trying to determine the current interrupt
state through ArmGetInterruptState.
The logic for interpreting the DAIF read data using the csel instruction
was also incorrect and is fixed.
Contributed-under: TianoCore Contribution Agreement 1.0
Signed-off-by: Eugene Cohen <eugene@hp.com>
Reviewed-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Modify the DefaultExceptionHandler (uefi-variant) so it can be used by
DxeCore (via CpuExceptionHandlerLib) where the debug info table is not
yet published at library constructor time.
Contributed-under: TianoCore Contribution Agreement 1.0
Signed-off-by: Eugene Cohen <eugene@hp.com>
Tested-by: Ryan Harkin <ryan.harkin@linaro.org>
Reviewed-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Some updates to SCR can cause a problem which manifests as an undefined opcode exception.
This can happen when a speculative secure instruction fetch occurs after the NS bit is set.
An isb is required to make the register change take effect fully.
Contributed-under: TianoCore Contribution Agreement 1.0
Signed-off-by: Evan Lloyd <Evan.Lloyd@arm.com>
Reviewed-by: Sami Mujawar <Sami.Mujawar@arm.com>
Reviewed-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Reviewed-by: Leif Lindholm <leif.lindholm@linaro.org>
Problems have been encountered because some of the source files have
execute permission set. This can cause git to report them as changed
when they are checked out onto a file system with inherited permissions.
This has been seen using Cygwin, MinGW and PowerShell Git.
This patch makes no change to source file content, and only aims to
correct the file modes/permissions.
Contributed-under: TianoCore Contribution Agreement 1.0
Signed-off-by: Evan Lloyd <evan.lloyd@arm.com>
git-svn-id: https://svn.code.sf.net/p/edk2/code/trunk/edk2@19778 6f19259b-4bc3-4df7-8a09-765794883524
The RVCT compiler may emit calls to the various __aeabi_c?cmp??
functions, which return their results via the CPU condition flags
C and Z. According to ARM doc IHI 0043D 'Run-time ABI for the ARM
architecture':
The 3-way comparison functions c*cmple, c*cmpeq and c*rcmple return
their results in the CPSR Z and C flags. C is clear only if the operands
are ordered and the first operand is less than the second. Z is set only
when the operands are ordered and equal.
Add implementations for the double and float variants of the above.
Contributed-under: TianoCore Contribution Agreement 1.0
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Reviewed-by: Leif Lindholm <leif.lindholm@linaro.org>
git-svn-id: https://svn.code.sf.net/p/edk2/code/trunk/edk2@19327 6f19259b-4bc3-4df7-8a09-765794883524
Unfortunately, Clang does not support the use of symbol references in .org
directives, and bails with the following error message when it encounters
them:
<...>:error: expected assembly-time absolute expression
.org DebugAgentVectorTable + 0x000
So replace the .org arguments with absolute values, and move the whole
vector table into a subsection with the appropriate alignment,
starting at .org 0x0. This gives the same protection with respect to
entries that exceed 128 bytes, in a way that Clang supports as well.
Contributed-under: TianoCore Contribution Agreement 1.0
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Acked-by: Mark Rutland <mark.rutland@arm.com>
Reviewed-by: Leif Lindholm <leif.lindholm@linaro.org>
git-svn-id: https://svn.code.sf.net/p/edk2/code/trunk/edk2@19303 6f19259b-4bc3-4df7-8a09-765794883524
Commit SVN r18778 made all mappings of normal memory (inner) shareable,
even on hardware that implements shareability as uncached accesses.
The original concerns that prompted the change, regarding coherent DMA
and virt guests migrating between CPUs, do not apply to such hardware,
so revert to the original behavior in that case.
Contributed-under: TianoCore Contribution Agreement 1.0
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Reviewed-by: Leif Lindholm <leif.lindholm@linaro.org>
git-svn-id: https://svn.code.sf.net/p/edk2/code/trunk/edk2@19285 6f19259b-4bc3-4df7-8a09-765794883524
The -fno-tree-vrp option is not required for GCC 4.8 or later, and is not
supported by CLANG. So restrict its use to GCC 4.6 and 4.7, which are the
oldest versions we support for ARM.
Contributed-under: TianoCore Contribution Agreement 1.0
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Reviewed-by: Leif Lindholm <leif.lindholm@linaro.org>
git-svn-id: https://svn.code.sf.net/p/edk2/code/trunk/edk2@19283 6f19259b-4bc3-4df7-8a09-765794883524
The open coded access to co-processor #10 to set FPEXC is not supported
by the CLANG assembler, but the architecturally correct VMSR instruction
is not supported by older binutils. So keep the former unless __clang__
is defined.
Contributed-under: TianoCore Contribution Agreement 1.0
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Reviewed-by: Leif Lindholm <leif.lindholm@linaro.org>
git-svn-id: https://svn.code.sf.net/p/edk2/code/trunk/edk2@19282 6f19259b-4bc3-4df7-8a09-765794883524
CLANG for ARM may emit calls to __aeabi_memset(), which is subtly different
from the default memset() [arguments 2 and 3 are reversed]
Contributed-under: TianoCore Contribution Agreement 1.0
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Reviewed-by: Leif Lindholm <leif.lindholm@linaro.org>
git-svn-id: https://svn.code.sf.net/p/edk2/code/trunk/edk2@19281 6f19259b-4bc3-4df7-8a09-765794883524
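For reference, the argument-order difference can be expressed as a trivial C
shim (a sketch only; the in-tree CompilerIntrinsicsLib implementation may be
structured differently):

  #include <stddef.h>
  #include <string.h>

  // __aeabi_memset takes (dest, n, c), i.e. the last two arguments are
  // swapped relative to ISO C memset (dest, c, n).
  void
  __aeabi_memset (
    void    *dest,
    size_t  n,
    int     c
    )
  {
    memset (dest, c, n);
  }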
The CLANG assembler does not support the legacy, non-unified assembler syntax,
i.e., it does not support the reordering of the condition suffixes with the
increment/decrement before/after or byte/word suffixes, and it does not
recognize the 'empty descending' (ED) suffix at all. So move to the unified
syntax, and replace 'empty descending' with 'decrement after' or 'increment
before' as appropriate.
Contributed-under: TianoCore Contribution Agreement 1.0
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Reviewed-by: Leif Lindholm <leif.lindholm@linaro.org>
git-svn-id: https://svn.code.sf.net/p/edk2/code/trunk/edk2@19280 6f19259b-4bc3-4df7-8a09-765794883524
In the function ArmGicEnableDistributor (), the Affinity Routing Enable
(ARE) bit, which essentially defines whether the GIC runs in v2 or v3
mode, is inadvertently cleared when enabling the GIC distributor if it
is running in v3 mode. So fix that.
Reported-by: Supreeth Venkatesh <Supreeth.Venkatesh@arm.com>
Contributed-under: TianoCore Contribution Agreement 1.0
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Reviewed-by: Leif Lindholm <leif.lindholm@linaro.org>
git-svn-id: https://svn.code.sf.net/p/edk2/code/trunk/edk2@19274 6f19259b-4bc3-4df7-8a09-765794883524
Since we do not support anything below ARMv7, let's promote the ARMv6
exception handling code in CpuDxe to the only version we provide for
ARM. This means we can drop the unused ARMv4 version.
Contributed-under: TianoCore Contribution Agreement 1.0
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Reviewed-by: Leif Lindholm <leif.lindholm@linaro.org>
git-svn-id: https://svn.code.sf.net/p/edk2/code/trunk/edk2@19273 6f19259b-4bc3-4df7-8a09-765794883524
This patch updates the ArmPkg variant of InvalidateInstructionCacheRange to
flush the data cache only to the point of unification (PoU). This improves
performance and also allows invalidation in scenarios where it would be
inappropriate to flush to the point of coherency (like when executing code
from L2 configured as cache-as-ram).
Contributed-under: TianoCore Contribution Agreement 1.0
Signed-off-by: Eugene Cohen <eugene@hp.com>
Added AARCH64 and ARM/GCC implementations of the above.
Contributed-under: TianoCore Contribution Agreement 1.0
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Reviewed-by: Eugene Cohen <eugene@hp.com>
Reviewed-by: Leif Lindholm <leif.lindholm@linaro.org>
git-svn-id: https://svn.code.sf.net/p/edk2/code/trunk/edk2@19174 6f19259b-4bc3-4df7-8a09-765794883524
This has the effect of splitting assembly functions into their own sections
so the linker can remove unused ones to save space.
Contributed-under: TianoCore Contribution Agreement 1.0
Signed-off-by: Eugene Cohen <eugene@hp.com>
Reviewed-by: Ard Biesheuvel <ard.biesheuvel@gmail.com>
git-svn-id: https://svn.code.sf.net/p/edk2/code/trunk/edk2@19109 6f19259b-4bc3-4df7-8a09-765794883524
In response to Leif's request earlier, this adds a new RVCT assembler
macro to centralize the exporting of assembly functions including the
EXPORT directive (so the linker can see it), the AREA directive (so
it's in its own section for code size reasons) and the function label
itself.
Contributed-under: TianoCore Contribution Agreement 1.0
Signed-off-by: Eugene Cohen <eugene@hp.com>
Reviewed-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
git-svn-id: https://svn.code.sf.net/p/edk2/code/trunk/edk2@19098 6f19259b-4bc3-4df7-8a09-765794883524
In SVN 18756 ("disallow whole D-cache maintenance operations")
InvalidateInstructionCache was modified to remove the full data cache
clean but left the full instruction cache invalidate. The change was
made to address issues in the set/way clean methodology but the
resulting code could lead someone to a painful debug. If a component
called this function, the proper code would not be flushed to the PoU,
since the intent of this function is not only to invalidate the I-cache
but to provide coherency after code loading / modification. This change
simply places an ASSERT(FALSE) in this function to avoid this hazard.
Contributed-under: TianoCore Contribution Agreement 1.0
Signed-off-by: Eugene Cohen <eugene@hp.com>
Reviewed-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
git-svn-id: https://svn.code.sf.net/p/edk2/code/trunk/edk2@19084 6f19259b-4bc3-4df7-8a09-765794883524
The ARM softfloat library in ArmSoftfloatLib currently does not build
under RVCT, simply because the code includes system header files that
RVCT does not provide. However, nothing exported by those include files
is actually used by the library when built in SOFTFLOAT_FOR_GCC mode,
so we can just drop all of them.
Contributed-under: TianoCore Contribution Agreement 1.0
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Reviewed-by: Leif Lindholm <leif.lindholm@linaro.org>
git-svn-id: https://svn.code.sf.net/p/edk2/code/trunk/edk2@19031 6f19259b-4bc3-4df7-8a09-765794883524
In order to support software floating point in the context of
DXE drivers etc, this factors out the core ARM softfloat support
into a separate library.
Contributed-under: TianoCore Contribution Agreement 1.0
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Reviewed-by: Leif Lindholm <leif.lindholm@linaro.org>
git-svn-id: https://svn.code.sf.net/p/edk2/code/trunk/edk2@19030 6f19259b-4bc3-4df7-8a09-765794883524
The SetPrimaryStack and InitializePrimaryStack macros are no longer
used now that we removed support for ArmPlatformGlobalVariableLib.
So remove the various versions of them.
Contributed-under: TianoCore Contribution Agreement 1.0
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Reviewed-by: Leif Lindholm <leif.lindholm@linaro.org>
git-svn-id: https://svn.code.sf.net/p/edk2/code/trunk/edk2@19004 6f19259b-4bc3-4df7-8a09-765794883524
The BdsLib implementation under ArmPkg never references
gArmGlobalVariableGuid so it should not list it as a dependency.
Contributed-under: TianoCore Contribution Agreement 1.0
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Reviewed-by: Leif Lindholm <leif.lindholm@linaro.org>
git-svn-id: https://svn.code.sf.net/p/edk2/code/trunk/edk2@18997 6f19259b-4bc3-4df7-8a09-765794883524
ArmPkg does not depend on ArmPlatformGlobalVariableLib, and this library
is about to be removed, so remove all mention of it from ArmPkg.dsc.
Contributed-under: TianoCore Contribution Agreement 1.0
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Reviewed-by: Leif Lindholm <leif.lindholm@linaro.org>
git-svn-id: https://svn.code.sf.net/p/edk2/code/trunk/edk2@18983 6f19259b-4bc3-4df7-8a09-765794883524
As of SVN 15115, the PEI core needs a MigratePeiServicesTablePointer function.
Background: The ArmPkg variant of the PeiServicesTablePointerLib implements
the standard PEI Services table retrieval mechanism as defined in the
PI Specification Volume 1 section 5.4.4 using the TPIDRURW registers.
No special action is required on ARM to migrate the PEI Services table
pointer after main memory initialization but a function must be implemented
nonetheless.
Contributed-under: TianoCore Contribution Agreement 1.0
Signed-off-by: Eugene Cohen <eugene@hp.com>
Reviewed-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
git-svn-id: https://svn.code.sf.net/p/edk2/code/trunk/edk2@18953 6f19259b-4bc3-4df7-8a09-765794883524
RVCT (the proprietary 32-bit ARM compiler) warns about Node potentially being
used uninitialized, so initialize it to NULL explicitly.
Contributed-under: TianoCore Contribution Agreement 1.0
Signed-off-by: Eugene Cohen <eugene@hp.com>
Reviewed-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
git-svn-id: https://svn.code.sf.net/p/edk2/code/trunk/edk2@18952 6f19259b-4bc3-4df7-8a09-765794883524
The way the v7 MMU code is invoked by the Xen port is somewhat of
a pathological case, since it describes its physical memory space
using a single cacheable region that covers the entire addressable
range. When clipping this region to the part that is 1:1 addressable,
we end up with a region of exactly 4 GB in size, which just exceeds
the range of the UINT32 variable we use in FillTranslationTable() to
track our progress while populating the page tables. So promote it
to UINT64 instead.
Contributed-under: TianoCore Contribution Agreement 1.0
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Reviewed-by: Leif Lindholm <leif.lindholm@linaro.org>
git-svn-id: https://svn.code.sf.net/p/edk2/code/trunk/edk2@18930 6f19259b-4bc3-4df7-8a09-765794883524
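A hypothetical sketch of the truncation issue; the names are illustrative and
the loop assumes the region length is a multiple of 1 MB:

  #include <Base.h>

  VOID
  FillExample (
    IN UINT64  RegionLength    // may be exactly 4 GB (0x100000000)
    )
  {
    UINT64  RemainLength;      // as a UINT32, 0x100000000 would truncate to 0

    for (RemainLength = RegionLength; RemainLength > 0; RemainLength -= SIZE_1MB) {
      // populate one section entry (omitted)
    }
  }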
It is implied that the memory returned from UncachedMemoryAllocationLib
should have its cache entries invalidated. So invalidate the memory range
after changing the memory attributes to uncached.
Contributed-under: TianoCore Contribution Agreement 1.0
Signed-off-by: Heyi Guo <heyi.guo@linaro.org>
Reviewed-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
git-svn-id: https://svn.code.sf.net/p/edk2/code/trunk/edk2@18920 6f19259b-4bc3-4df7-8a09-765794883524
In ArmLib, there exists an alias for ArmDataSynchronizationBarrier,
named after one of several names for the pre-ARMv6 cp15 operation that
was formalised into the Data Synchronization Barrier in ARMv6.
This alias is also the one called from within ArmLib, in preference of
the correct name. Through the power of code reuse, this name slipped
into the AArch64 variant as well.
Expunge it from the codebase.
Contributed-under: TianoCore Contribution Agreement 1.0
Signed-off-by: Leif Lindholm <leif.lindholm@linaro.org>
Reviewed-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
git-svn-id: https://svn.code.sf.net/p/edk2/code/trunk/edk2@18915 6f19259b-4bc3-4df7-8a09-765794883524
We currently rely on .align directives to ensure that each exception
vector entry is the appropriate offset from the vector base address.
This is slightly fragile, as were an entry to become too large (greater
than 32 A64 instructions), all following entries would be silently
shifted until they meet the next alignment boundary. Thus we might
execute the wrong code in response to an exception.
To prevent this, introduce a new macro, VECTOR_ENTRY, that uses .org
directives to position each entry at the precise required offset from
the base of a vector. A vector entry which is too large will trigger a
build failure rather than a runtime failure which is difficult to debug.
For consistency, the base and end of each vector is similarly annotated,
with VECTOR_BASE and VECTOR_END, which provide the necessary alignment
and symbol exports. The now redundant directives and labels are removed.
Contributed-under: TianoCore Contribution Agreement 1.0
Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Reviewed-by: Leif Lindholm <leif.lindholm@linaro.org>
Reviewed-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
git-svn-id: https://svn.code.sf.net/p/edk2/code/trunk/edk2@18904 6f19259b-4bc3-4df7-8a09-765794883524
As EDK2 runs in an idmap, we do not use TTBR1_EL1, nor do we configure
it. TTBR1_EL1 may contain UNKNOWN values if it is not programmed since
reset.
Prior to enabling the MMU, we do not set TCR_EL1.EPD1, and hence the CPU
may make page table walks via TTBR1_EL1 at any time, potentially using
UNKNOWN values. This can result in a number of potential problems (e.g.
the CPU may load from MMIO registers as part of a page table walk).
Additionally, in the presence of Cortex-A57 erratum #822227, we must
program TCR_EL1.TG1 == 0b1x (e.g. 4KB granule) regardless of the value
of TCR_EL1.EPD1, to ensure that EDK2 can make forward progress under a
hypervisor which makes use of PAR_EL1.
This patch ensures that we program TCR_EL1.EPD1 and TCR_EL1.TG1 as above
to avoid these issues. TCR_EL1.TG1 is set to 4K for all targets, as any
CPU capable of running EDK2 must support this granule, and given
TCR_EL1.EPD1, programming the field is not detrimental in the absence of
the erratum.
Contributed-under: TianoCore Contribution Agreement 1.0
Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Reviewed-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
git-svn-id: https://svn.code.sf.net/p/edk2/code/trunk/edk2@18903 6f19259b-4bc3-4df7-8a09-765794883524
The ARM_MEMORY_REGION_DESCRIPTOR array provided by the platform may
contain entries that extend beyond the 4 GB boundary, above which
we can't map anything on 32-bit ARM. If this is the case, map only
the 1:1 addressable part.
Contributed-under: TianoCore Contribution Agreement 1.0
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Reviewed-by: Leif Lindholm <leif.lindholm@linaro.org>
git-svn-id: https://svn.code.sf.net/p/edk2/code/trunk/edk2@18900 6f19259b-4bc3-4df7-8a09-765794883524
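A minimal sketch of the clipping, with hypothetical names:

  #include <Base.h>

  #define MAX_ADDRESSABLE  0x100000000ULL   // 4 GB boundary on 32-bit ARM

  STATIC
  BOOLEAN
  ClipRegionToAddressable (
    IN     UINT64  Base,
    IN OUT UINT64  *Length
    )
  {
    if (Base >= MAX_ADDRESSABLE) {
      return FALSE;                          // nothing of this region is mappable
    }

    if ((Base + *Length) > MAX_ADDRESSABLE) {
      *Length = MAX_ADDRESSABLE - Base;      // keep only the 1:1 addressable part
    }

    return TRUE;
  }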
Bits 0 and 6 of the TTBRx system registers have different meanings
depending on whether a system implements the Multiprocessing
Extensions. So use separate memory attribute definitions for MP and
non-MP.
Contributed-under: TianoCore Contribution Agreement 1.0
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Reviewed-by: Leif Lindholm <leif.lindholm@linaro.org>
git-svn-id: https://svn.code.sf.net/p/edk2/code/trunk/edk2@18899 6f19259b-4bc3-4df7-8a09-765794883524
The definition of TTBR_NON_INNER_CACHEABLE should be bit 0 cleared, not
bit 0 set. Furthermore, the name is inconsistent with the other definitions
so rename it to TTBR_INNER_NON_CACHEABLE.
Contributed-under: TianoCore Contribution Agreement 1.0
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Reviewed-by: Leif Lindholm <leif.lindholm@linaro.org>
git-svn-id: https://svn.code.sf.net/p/edk2/code/trunk/edk2@18898 6f19259b-4bc3-4df7-8a09-765794883524
Even though mapping normal memory (inner) shareable is usually the
correct choice on coherent systems, it may be desirable in some cases
to use non-shareable mappings for normal memory, e.g., when hardware
managed coherency is not required and the memory system is not fully
configured yet. So introduce a PCD PcdNormalMemoryNonshareableOverride
that makes cacheable mappings of normal memory non-shareable.
Contributed-under: TianoCore Contribution Agreement 1.0
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Reviewed-by: Leif Lindholm <leif.lindholm@linaro.org>
git-svn-id: https://svn.code.sf.net/p/edk2/code/trunk/edk2@18897 6f19259b-4bc3-4df7-8a09-765794883524
To align with the way normal cacheable memory is mapped, set the
shareable bit for cached accesses performed by the page table walker.
Contributed-under: TianoCore Contribution Agreement 1.0
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Reviewed-by: Leif Lindholm <leif.lindholm@linaro.org>
git-svn-id: https://svn.code.sf.net/p/edk2/code/trunk/edk2@18896 6f19259b-4bc3-4df7-8a09-765794883524
Some MMU manipulation is dependent on the presence of the multiprocessing
extensions. So add a function that returns this information.
Contributed-under: TianoCore Contribution Agreement 1.0
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Reviewed-by: Leif Lindholm <leif.lindholm@linaro.org>
git-svn-id: https://svn.code.sf.net/p/edk2/code/trunk/edk2@18895 6f19259b-4bc3-4df7-8a09-765794883524
Implement an accessor function for the ID_MMFR0 system register, which
contains information about the VMSA implementation. We will need this
to access the number of shareability levels and the nature of their
implementations.
Contributed-under: TianoCore Contribution Agreement 1.0
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Reviewed-by: Leif Lindholm <leif.lindholm@linaro.org>
git-svn-id: https://svn.code.sf.net/p/edk2/code/trunk/edk2@18894 6f19259b-4bc3-4df7-8a09-765794883524
The definition TTBR_WRITE_THROUGH_NO_ALLOC makes little sense, since
a) its meaning is unclear in the context of TTBRx, since write through
always implies Read-Allocate and no Write-Allocate
b) its definition equals the definition of TTBR_WRITE_BACK_ALLOC
So instead, rename it to TTBR_WRITE_THROUGH and update the definition
to reflect the name.
Contributed-under: TianoCore Contribution Agreement 1.0
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Reviewed-by: Leif Lindholm <leif.lindholm@linaro.org>
git-svn-id: https://svn.code.sf.net/p/edk2/code/trunk/edk2@18893 6f19259b-4bc3-4df7-8a09-765794883524
To prevent speculative instruction fetches from MMIO ranges that may
have side effects on reads, the architecture requires device mappings
to be created with the XN or UXN/PXN bits set (for the ARM/EL2 and
EL1&0 translation regimes, respectively.)
Note that, in the ARM case, this involves moving all accesses to a
client domain since permission attributes like XN are ignored from
a manager domain. The use of a client domain is actually mandated
explicitly by the UEFI spec.
Reported-by: Heyi Guo <heyi.guo@linaro.org>
Contributed-under: TianoCore Contribution Agreement 1.0
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Reviewed-by: Leif Lindholm <leif.lindholm@linaro.org>
git-svn-id: https://svn.code.sf.net/p/edk2/code/trunk/edk2@18891 6f19259b-4bc3-4df7-8a09-765794883524
The function GcdAttributeToArmAttribute() is not used anywhere in the
code base, and is only defined for AARCH64 and not for ARM. It also
fails to set the bits for shareability and non-executability that we
require for correct operation. So remove it.
Contributed-under: TianoCore Contribution Agreement 1.0
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Reviewed-by: Leif Lindholm <leif.lindholm@linaro.org>
git-svn-id: https://svn.code.sf.net/p/edk2/code/trunk/edk2@18888 6f19259b-4bc3-4df7-8a09-765794883524
We force alignment to 2K after generating the DebugAgentVectorTable
symbol, and hence DebugAgentVectorTable itself may not be 2K-aligned,
and table entries may not be at the correct offset from the
DebugAgentVectorTable base address.
Fix this by forcing alignment before generating the
DebugAgentVectorTable symbol.
Contributed-under: TianoCore Contribution Agreement 1.0
Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Reviewed-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
git-svn-id: https://svn.code.sf.net/p/edk2/code/trunk/edk2@18865 6f19259b-4bc3-4df7-8a09-765794883524
Mark all cached memory mappings as shareable (or inner shareable on
AArch64) so that our view of memory is kept coherent by the hardware.
This is relevant for things like coherent DMA and virtualization (where
a guest may migrate to another core) but in general, since UEFI on ARM
is mostly used in a context where the secure firmware and possibly a
secure OS are already up and running, it is best to refrain from using
any non-shareable mappings.
Contributed-under: TianoCore Contribution Agreement 1.0
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Reviewed-by: Leif Lindholm <leif.lindholm@linaro.org>
git-svn-id: https://svn.code.sf.net/p/edk2/code/trunk/edk2@18778 6f19259b-4bc3-4df7-8a09-765794883524
When allocating memory to perform non-coherent DMA, use the cache
writeback granule rather than the data cache linesize for alignment.
This prevents the explicit cache maintenance from corrupting
unrelated adjacent data if the cache writeback granule exceeds
the cache linesize.
Reported-by: Mark Rutland <mark.rutland@arm.com>
Contributed-under: TianoCore Contribution Agreement 1.0
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Reviewed-by: Mark Rutland <mark.rutland@arm.com>
Reviewed-by: Leif Lindholm <leif.lindholm@linaro.org>
git-svn-id: https://svn.code.sf.net/p/edk2/code/trunk/edk2@18759 6f19259b-4bc3-4df7-8a09-765794883524
Add a function to ArmLib that provides access to the Cache Writeback
Granule (CWG) field in CTR_EL0. This information is required when
performing non-coherent DMA.
Contributed-under: TianoCore Contribution Agreement 1.0
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Reviewed-by: Mark Rutland <mark.rutland@arm.com>
Reviewed-by: Leif Lindholm <leif.lindholm@linaro.org>
git-svn-id: https://svn.code.sf.net/p/edk2/code/trunk/edk2@18758 6f19259b-4bc3-4df7-8a09-765794883524
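A sketch of how the granule can be derived from CTR_EL0 on AArch64;
ReadCtrEl0 () is a hypothetical accessor standing in for an 'mrs'-based
ArmLib helper:

  #include <Base.h>

  extern UINT64 ReadCtrEl0 (VOID);

  UINTN
  CacheWritebackGranule (
    VOID
    )
  {
    //
    // CTR_EL0.CWG lives in bits [27:24] and encodes log2 of the granule in
    // 4-byte words, so the granule in bytes is 4 << CWG.
    //
    return (UINTN)4 << ((ReadCtrEl0 () >> 24) & 0xF);
  }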
The ARM architecture provides no reliable way to clean or invalidate
the entire data cache at runtime. The reason is that such maintenance
requires the use of set/way maintenance operations, which are suitable
only for the kind of maintenance that is carried out when the cache is
taken offline entirely.
So ASSERT () when any of the CacheMaintenanceLib whole data cache routines
are invoked rather than pretending we can do anything meaningful here.
Contributed-under: TianoCore Contribution Agreement 1.0
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Reviewed-by: Leif Lindholm <leif.lindholm@linaro.org>
git-svn-id: https://svn.code.sf.net/p/edk2/code/trunk/edk2@18756 6f19259b-4bc3-4df7-8a09-765794883524
There is no need to issue a full data synchronization barrier and an
instruction synchronization barrier after each and every set/way or
MVA cache maintenance operation. For the set/way case, we can simply
remove them, since the set/way outer loop already issues the required
barriers after completing its traversal over all the cache levels.
For the MVA case, move the data synchronization barrier out of the
loop, and add the instruction synchronization barrier to the I-cache
invalidation by MVA routine.
Contributed-under: TianoCore Contribution Agreement 1.0
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Reviewed-by: Mark Rutland <mark.rutland@arm.com>
Reviewed-by: Leif Lindholm <leif.lindholm@linaro.org>
git-svn-id: https://svn.code.sf.net/p/edk2/code/trunk/edk2@18755 6f19259b-4bc3-4df7-8a09-765794883524
The stride used by the cache maintenance by MVA instructions should
be retrieved from CTR_EL0.DminLine and CTR_EL0.IminLine, whose values
reflect the actual geometry of the caches. Using CCSIDR for this purpose
violates the architecture.
Also, move the line length accessors to common code, since there is no
need to keep them separate between ARMv7 and AArch64.
Contributed-under: TianoCore Contribution Agreement 1.0
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Reviewed-by: Mark Rutland <mark.rutland@arm.com>
Reviewed-by: Leif Lindholm <leif.lindholm@linaro.org>
git-svn-id: https://svn.code.sf.net/p/edk2/code/trunk/edk2@18754 6f19259b-4bc3-4df7-8a09-765794883524
The ARM architecture does not allow the actual geometries of the caches
to be inferred from the CCSIDR cache info system register, since the
geometry it reports is intended for performing cache maintenance by
set/way and nothing else. Since the ArmLib cache info routines are
based solely on CCSIDR contents, they should not be used.
Contributed-under: TianoCore Contribution Agreement 1.0
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Reviewed-by: Leif Lindholm <leif.lindholm@linaro.org>
git-svn-id: https://svn.code.sf.net/p/edk2/code/trunk/edk2@18753 6f19259b-4bc3-4df7-8a09-765794883524