Currently, any range passed to CpuArchProtocol::SetMemoryAttributes is
fully broken down into page mappings if the start or the size of the
region happens to be misaligned relative to the 1 MB section size.
This results in memory being wasted on second-level page tables
when we enable strict memory permissions, given that we remap the entire
RAM space non-executable (modulo the code bits) when the CpuArchProtocol
is installed.
So refactor the code to iterate over the range in a way that ensures
that all naturally aligned, section-sized subregions are not broken up.
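A minimal sketch of the intended iteration, assuming helper routines for
section-level and page-level updates (the helper names and signatures below
are illustrative, not necessarily the ones used in CpuDxe):

  // Process naturally aligned, section-sized chunks with section mappings,
  // and fall back to page mappings only for the misaligned head/tail.
  while (Length > 0) {
    if ((BaseAddress % SIZE_1MB == 0) && (Length >= SIZE_1MB)) {
      ChunkLength = Length - (Length % SIZE_1MB);
      Status      = UpdateSectionEntries (BaseAddress, ChunkLength, Attributes);
    } else {
      ChunkLength = MIN (Length, SIZE_1MB - (BaseAddress % SIZE_1MB));
      Status      = UpdatePageEntries (BaseAddress, ChunkLength, Attributes);
    }
    if (EFI_ERROR (Status)) {
      break;
    }
    BaseAddress += ChunkLength;
    Length      -= ChunkLength;
  }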
Contributed-under: TianoCore Contribution Agreement 1.0
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Reviewed-by: Leif Lindholm <leif.lindholm@linaro.org>
To prevent the initial MMU->GCD memory space map synchronization from
stripping permission attributes [which we cannot use in the GCD memory
space map, unfortunately], implement the same approach as x86, and ignore
SetMemoryAttributes() calls during the time SyncCacheConfig() is in
progress. This is a horrible hack, but is currently the only way we can
implement strict permissions on arbitrary memory regions [as opposed to
PE/COFF text/data sections only].
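A sketch of what such a guard can look like (mIsFlushingGCD is the flag name
used by the x86 CpuDxe; the ARM-specific details are elided here):

  STATIC BOOLEAN  mIsFlushingGCD;

  // In CpuSetMemoryAttributes (), bail out early while the sync is running:
  if (mIsFlushingGCD) {
    return EFI_SUCCESS;
  }

  // In the driver entry point, around the GCD synchronization:
  mIsFlushingGCD = TRUE;
  SyncCacheConfig (&mCpu);
  mIsFlushingGCD = FALSE;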
Contributed-under: TianoCore Contribution Agreement 1.0
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Reviewed-by: Jiewen Yao <jiewen.yao@intel.com>
Reviewed-by: Leif Lindholm <leif.lindholm@linaro.org>
This removes the PCD PcdArmUncachedMemoryMask from ArmPkg, along with
any remaining references to it in various platform .DSC files. It is
no longer used now that we removed the virtual uncached pages protocol
and the associated DebugUncachedMemoryAllocationLib library instance.
Contributed-under: TianoCore Contribution Agreement 1.0
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Reviewed-by: Laszlo Ersek <lersek@redhat.com>
Reviewed-by: Leif Lindholm <leif.lindholm@linaro.org>
Virtual uncached pages are simply pages that are aliased using mismatched
attributes, which is not allowed by the ARM architecture. So remove the
protocol and its implementation.
Contributed-under: TianoCore Contribution Agreement 1.0
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Reviewed-by: Leif Lindholm <leif.lindholm@linaro.org>
The debug implementation of the UncachedMemoryAllocationLib library
class relies on the creation of an uncached alias of a memory range,
while keeping the original cached mapping, but with read-only attributes
to trap inadvertent write accesses.
This is not a terribly good idea, given that the ARM architecture does
not allow mismatched attributes, and so creating them deliberately is
not something we should encourage by doing it in reference code.
So remove the library, and replace all references to it with a reference
to the non-debug version (unless the platform does not require a resolution
for it in the first place, in which case all UncachedMemoryAllocationLib
references can be removed altogether).
Contributed-under: TianoCore Contribution Agreement 1.0
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Acked-by: Laszlo Ersek <lersek@redhat.com>
Reviewed-by: Leif Lindholm <leif.lindholm@linaro.org>
Enable the hardware stack alignment check, as mandated by the UEFI spec.
This ensures that the stack pointer is 16-byte aligned at each instance
where it is used as the base address in a load/store operation.
Contributed-under: TianoCore Contribution Agreement 1.0
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Reviewed-by: Leif Lindholm <leif.lindholm@linaro.org>
In preparation of enabling stack alignment checking, which is mandated
by the UEFI spec for AARCH64, add the code to manage this bit to ArmLib.
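For reference, the bit in question is SCTLR_ELx.SA (bit 3); a minimal sketch
of enabling it, assuming mrs/msr accessor helpers (the names below are
illustrative, not necessarily the ArmLib additions):

  #define SCTLR_SA  (1UL << 3)    // SP alignment check enable

  UINTN  Sctlr;

  // read-modify-write SCTLR_EL1 and synchronize the context
  Sctlr = ArmReadSctlr ();
  ArmWriteSctlr (Sctlr | SCTLR_SA);
  ArmInstructionSynchronizationBarrier ();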
Contributed-under: TianoCore Contribution Agreement 1.0
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Reviewed-by: Leif Lindholm <leif.lindholm@linaro.org>
Stack and unstack the frame pointer according to the AAPCS in
AArch64AllDataCachesOperation ().
Contributed-under: TianoCore Contribution Agreement 1.0
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Reviewed-by: Leif Lindholm <leif.lindholm@linaro.org>
Since the new DXE page protection for PE/COFF images may invoke
EFI_CPU_ARCH_PROTOCOL.SetMemoryAttributes() with only permission
attributes set, add support for this in the AARCH64 MMU code.
Move the EFI_MEMORY_CACHETYPE_MASK macro to a shared location between
CpuDxe and ArmMmuLib so we don't have to introduce yet another
definition.
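For reference, the shared macro is essentially the union of the UEFI
cacheability attribute bits:

  #define EFI_MEMORY_CACHETYPE_MASK  (EFI_MEMORY_UC  |  \
                                      EFI_MEMORY_WC  |  \
                                      EFI_MEMORY_WT  |  \
                                      EFI_MEMORY_WB  |  \
                                      EFI_MEMORY_UCE    \
                                      )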
Contributed-under: TianoCore Contribution Agreement 1.0
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Reviewed-by: Leif Lindholm <leif.lindholm@linaro.org>
Currently, we have not implemented support on 32-bit ARM for managing
permission bits in the page tables. Since the new DXE page protection
for PE/COFF images may invoke EFI_CPU_ARCH_PROTOCOL.SetMemoryAttributes()
with only permission attributes set, let's simply ignore those for now.
Contributed-under: TianoCore Contribution Agreement 1.0
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Reviewed-by: Leif Lindholm <leif.lindholm@linaro.org>
The single user of EfiAttributeToArmAttribute () is the protocol
method EFI_CPU_ARCH_PROTOCOL.SetMemoryAttributes(), which uses the
return value to compare against the ARM attributes of an existing mapping,
to infer whether it is actually necessary to change anything, or whether
the requested update is redundant. This saves some cache and TLB
maintenance on 32-bit ARM systems that use uncached translation tables.
However, EFI_CPU_ARCH_PROTOCOL.SetMemoryAttributes() may be invoked with
only permission bits set, in which case the implied requested action is to
update the permissions of the region without modifying the cacheability
attributes. This is currently not possible, because
EfiAttributeToArmAttribute () ASSERT()s [on AArch64] on Attributes arguments
that lack a cacheability bit.
So let's simply return TT_ATTR_INDX_MASK (AArch64) or
TT_DESCRIPTOR_SECTION_TYPE_FAULT (ARM) in these cases (or'ed with the
appropriate permission bits). This way, the return value is equally
suitable for checking whether the attributes need to be modified, but
in a way that accommodates the use without a cacheability bit set.
Contributed-under: TianoCore Contribution Agreement 1.0
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Reviewed-by: Leif Lindholm <leif.lindholm@linaro.org>
The ARM CpuDxe driver currently uses EFI_MEMORY_WP for write protection.
According to the UEFI spec, EFI_MEMORY_RO should be used for write
protection instead: EFI_MEMORY_WP is a cacheability attribute, not a
memory protection attribute.
Contributed-under: TianoCore Contribution Agreement 1.0
Signed-off-by: Jiewen Yao <jiewen.yao@intel.com>
Reviewed-by: Leif Lindholm <leif.lindholm@linaro.org>
Reviewed-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
PcdGenericWatchdogControlBase and PcdGenericWatchdogRefreshBase are
declared as UINT32 values in ArmPkg.dec, but for platforms with
addresses in the memory range above 4 GB this causes a build error:
  F000: Too large PCD value for datum type [UINT32]
  of PCD gArmTokenSpaceGuid.PcdGenericWatchdogControlBase
Contributed-under: TianoCore Contribution Agreement 1.0
Signed-off-by: Alexei Fedorov <alexei.fedorov@arm.com>
Signed-off-by: Evan Lloyd <evan.lloyd@arm.com>
Ref: https://bugzilla.tianocore.org/show_bug.cgi?id=361
Reviewed-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
This reverts commit d32702d2c2.
Using a pool allocation for the root translation table seemed like
a good idea at the time, but as it turns out, such allocations are
handled in a way that makes them unsuitable for this purpose: they
are backed by HOBs that don't remain in the same place during the
various PI phase changes, which means the address programmed into
the TTBR register is no longer valid, and may refer to memory that
is reported as available to the OS.
So switch back to using a page based allocation.
Contributed-under: TianoCore Contribution Agreement 1.0
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Acked-by: Laszlo Ersek <lersek@redhat.com>
Reviewed-by: Leif Lindholm <leif.lindholm@linaro.org>
The generic timer support libraries call the actual system register
accessor function via a single pair of functions ArmArchTimerReadReg()
and ArmArchTimerWriteReg(), which take an enum argument to identify
the register, and return output values by pointer reference.
Since these functions are never called with a non-immediate argument,
we can simply replace each invocation with the underlying system register
accessor instead. This is mostly functionally equivalent, with the
exception of the bounds check for the enum (which is pointless given the
fact that we never pass a variable), the check for the presence of the
architected timer (which only makes sense for ARMv7, but is highly unlikely
to vary between platforms that are similar enough to run the same firmware
image), and a check for enum values that refer to the HYP view of the timer,
which we never referred to anywhere in the code in the first place.
So get rid of the middle man, and update the ArmGenericTimerPhyCounterLib
and ArmGenericTimerVirtCounterLib implementations to call the system
register accessors directly.
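For example, reading the counter frequency changes roughly as follows
(ArmReadCntFrq () is the underlying ArmLib accessor; the before/after shapes
are illustrative):

  // before: dispatch through the enum-based wrapper
  ArmArchTimerReadReg (CntFrq, (VOID *)&TimerFreq);

  // after: call the system register accessor directly
  TimerFreq = ArmReadCntFrq ();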
Contributed-under: TianoCore Contribution Agreement 1.0
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Reviewed-by: Leif Lindholm <leif.lindholm@linaro.org>
Tested-by: Ryan Harkin <ryan.harkin@linaro.org>
Commit 0a99a65d2c ("fix incorrect device address of double buffer")
retained an explicit cast on the variable "Buffer" which became
incorrect with the other changes, leading to compilation failures
with some toolchains. Drop the cast.
Contributed-under: TianoCore Contribution Agreement 1.0
Reported-by: Sudeep Holla <sudeep.holla@arm.com>
Signed-off-by: Leif Lindholm <leif.lindholm@linaro.org>
Tested-by: Sudeep Holla <sudeep.holla@arm.com>
Reviewed-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Some devices, such as the Raspberry Pi3, have a fixed offset between memory
addresses as seen by the host and as seen by the other bus masters. So add
a new PCD that allows this fixed offset to be recorded, and to be used when
returning device addresses from the DmaLib mapping routines.
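In effect, the mapping routines apply a fixed translation when producing
device addresses; a minimal sketch (the PCD name below is illustrative):

  // Translate the CPU (host) address into the bus master's view.
  *DeviceAddress = (EFI_PHYSICAL_ADDRESS)(UINTN)HostAddress +
                   PcdGet64 (PcdDmaDeviceOffset);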
Contributed-under: TianoCore Contribution Agreement 1.0
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Tested-by: Ryan Harkin <ryan.harkin@linaro.org>
Reviewed-by: Leif Lindholm <leif.lindholm@linaro.org>
In preparation of adding support to ArmDmalib for DMA bus masters whose
view of memory is offset by a constant compared to the CPU's view, clean
up some abuse of the device address.
The device address is not defined in terms of the CPU's address space,
and so it should not be used in CopyMem () or cache maintenance operations
that require a valid mapping. This not only applies to the above use case,
but also to the DebugUncachedMemoryAllocationLib that unmaps the
primary, cached mapping of an allocation, and returns a host address
which is an uncached alias offset by a constant.
Since we should never access the device address from the CPU, there is
no need to record it in the MAPINFO struct. Instead, record the buffer
address in case of double buffering, since we do need to copy the contents
(in case of a bus master write) and free the buffer (in all cases) when
DmaUnmap() is called.
Contributed-under: TianoCore Contribution Agreement 1.0
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Tested-by: Ryan Harkin <ryan.harkin@linaro.org>
Reviewed-by: Leif Lindholm <leif.lindholm@linaro.org>
If double buffering is not required in DmaMap(), the returned device
address is passed through ConvertToPhysicalAddress () to convert the
host address (which in case of DebugUncachedMemoryAllocationLib is not
1:1 mapped) to a physical address, which is what a device would expect
to be able to perform DMA.
By the same reasoning, a double buffer allocated using DmaAllocateBuffer ()
should be converted in the same way, considering that the buffer is allocated
using UncachedAllocatePages (), to which the above equally applies.
So add the missing ConvertToPhysicalAddress () invocation.
Contributed-under: TianoCore Contribution Agreement 1.0
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Tested-by: Ryan Harkin <ryan.harkin@linaro.org>
Reviewed-by: Leif Lindholm <leif.lindholm@linaro.org>
Instead of depending on ArmLib to retrieve the CWG directly, use
the DMA buffer alignment exposed by the CPU arch protocol. This
removes our dependency on ArmLib, which makes the library a bit
more architecture independent.
While we're in there, rename gCpu to mCpu to better reflect its
local scope, and reflow some lines that we're modifying anyway.
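The substitution amounts to reading the alignment from the protocol instead
of querying ArmLib for the CWG (sketch, error handling abbreviated):

  // locate the CPU architectural protocol once, e.g. in the constructor
  Status = gBS->LocateProtocol (&gEfiCpuArchProtocolGuid, NULL, (VOID **)&mCpu);
  ASSERT_EFI_ERROR (Status);

  // use its DMA buffer alignment wherever the CWG was used before
  AllocSize = ALIGN_VALUE (*NumberOfBytes, mCpu->DmaBufferAlignment);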
Contributed-under: TianoCore Contribution Agreement 1.0
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Tested-by: Ryan Harkin <ryan.harkin@linaro.org>
Reviewed-by: Leif Lindholm <leif.lindholm@linaro.org>
Translation table walks are always cache coherent on ARMv8-A, so cache
maintenance on page tables is never needed. Since there is a risk of
loss of coherency when using mismatched attributes, and given that memory
is mapped cacheable except for extraordinary cases (such as non-coherent
DMA), restrict the page table walker to performing cacheable accesses to
the translation tables.
For DEBUG builds, retain some of the logic so that we can double check
that the memory holding the root translation table is indeed located in
memory that is mapped cacheable.
Contributed-under: TianoCore Contribution Agreement 1.0
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Reviewed-by: Leif Lindholm <leif.lindholm@linaro.org>
The LinuxLoader application boots Linux in a way that prevents the OS
from accessing UEFI runtime services. Since we have better ways now
of invoking the kernel (via GRUB, or directly via the kernel's UEFI
stub), remove the obsolete LinuxLoader so that people will no longer
mistake it for a suitable reference of how to invoke the OS from UEFI.
Contributed-under: TianoCore Contribution Agreement 1.0
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Tested-by: Ryan Harkin <ryan.harkin@linaro.org>
Reviewed-by: Ryan Harkin <ryan.harkin@linaro.org>
The DmaBufferAlignment currently defaults to 4, which is dangerously
small and may result in lost data on platforms that perform non-coherent
DMA. So instead, take the CWG value from the cache info registers.
Contributed-under: TianoCore Contribution Agreement 1.0
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Reviewed-by: Leif Lindholm <leif.lindholm@linaro.org>
This is ancient cruft that is no longer used, so remove it.
Contributed-under: TianoCore Contribution Agreement 1.0
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Reviewed-by: Leif Lindholm <leif.lindholm@linaro.org>
The GCC ARM builds have access to ADRL/LDRL macros that emit relative
symbol references, i.e., references that do not require fixing up at
load time (or FV generation time for XIP modules).
Implement equivalent functionality for RVCT: note that this does not
use movw/movt pairs, but the more compatible add/add/add or add/add/ldr
sequences (which Clang does not support, unfortunately, hence the use
of movw/movt for the GCC toolchain family).
Contributed-under: TianoCore Contribution Agreement 1.0
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Reviewed-by: Leif Lindholm <leif.lindholm@linaro.org>
Define DISABLE_NEW_DEPRECATED_INTERFACES on the compiler command line by
default, to prevent deprecated interfaces from being used in core EDK2
code.
Bug: https://bugzilla.tianocore.org/show_bug.cgi?id=164
Contributed-under: TianoCore Contribution Agreement 1.0
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Acked-by: Laszlo Ersek <lersek@redhat.com>
Tested-by: Ryan Harkin <ryan.harkin@linaro.org>
Reviewed-by: Leif Lindholm <leif.lindholm@linaro.org>
Drop the include of AsmMacroIoLib.h, which contains GCC preprocessor macros
that RVCT does not use or require, given that it has its own AsmMacroIoLib.inc.
Contributed-under: TianoCore Contribution Agreement 1.0
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Tested-by: Ryan Harkin <ryan.harkin@linaro.org>
Reviewed-by: Leif Lindholm <leif.lindholm@linaro.org>
ArmPkg.dsc was a bit out of date, and some modules added over the past
years had not been added to its [Components] section yet.
Contributed-under: TianoCore Contribution Agreement 1.0
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Tested-by: Ryan Harkin <ryan.harkin@linaro.org>
Reviewed-by: Leif Lindholm <leif.lindholm@linaro.org>
This missing dependency has gone unnoticed until now, but it is breaking
the Omap35xxPkg.dsc build.
Contributed-under: TianoCore Contribution Agreement 1.0
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Reviewed-by: Leif Lindholm <leif.lindholm@linaro.org>
AsciiStrCat() is deprecated / disabled under the
DISABLE_NEW_DEPRECATED_INTERFACES feature test macro.
The caller of CpsrString() is required to pass in "ReturnStr" with 32
CHAR8 elements. (DefaultExceptionHandler() complies with this.) "Str" is
used to build "ReturnStr" gradually. Just before calling AsciiStrCat(),
"Str" points to the then-terminating NUL character in "ReturnStr".
The difference (Str - ReturnStr) gives the number of non-NUL characters
we've written thus far, hence (32 - (Str - ReturnStr)) yields the number
of remaining bytes in ReturnStr, including the ultimately terminating NUL
character.
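Concretely, the conversion replaces the deprecated call with its
bounds-checked counterpart, passing the remaining space as DestMax
(ModeStr stands in for whichever source string is being appended):

  // before (deprecated):
  //   AsciiStrCat (Str, ModeStr);

  // after: 32 is the caller-provided size of ReturnStr, and
  // (Str - ReturnStr) is the number of characters already written
  AsciiStrCatS (Str, 32 - (Str - ReturnStr), ModeStr);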
Cc: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Cc: Leif Lindholm <leif.lindholm@linaro.org>
Cc: Michael Zimmermann <sigmaepsilon92@gmail.com>
Ref: https://bugzilla.tianocore.org/show_bug.cgi?id=164
Ref: https://bugzilla.tianocore.org/show_bug.cgi?id=165
Contributed-under: TianoCore Contribution Agreement 1.0
Signed-off-by: Laszlo Ersek <lersek@redhat.com>
Reviewed-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Reviewed-by: Jordan Justen <jordan.l.justen@intel.com>
AsciiStrCat() is deprecated / disabled under the
DISABLE_NEW_DEPRECATED_INTERFACES feature test macro.
The "Str" variable serves no particular purpose in the MRegList() and
ThumbMRegList() functions; replace it with the pointed-to "mMregListStr" /
"mThumbMregListStr" global variable (as appropriate), so that the new
AsciiStrCatS() calls are as clear as possible.
Cc: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Cc: Leif Lindholm <leif.lindholm@linaro.org>
Cc: Michael Zimmermann <sigmaepsilon92@gmail.com>
Ref: https://bugzilla.tianocore.org/show_bug.cgi?id=164
Ref: https://bugzilla.tianocore.org/show_bug.cgi?id=165
Contributed-under: TianoCore Contribution Agreement 1.0
Signed-off-by: Laszlo Ersek <lersek@redhat.com>
Reviewed-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Reviewed-by: Jordan Justen <jordan.l.justen@intel.com>
All users have moved to the generic or accelerated versions in MdePkg,
so remove the obsolete BaseMemoryLibStm.
Contributed-under: TianoCore Contribution Agreement 1.0
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Reviewed-by: Leif Lindholm <leif.lindholm@linaro.org>
During MMU initialization in CpuDxe, for a page table entry, any bits set
in 'NextSectionAttributes' are garbage: they were set from bits that are
actually part of the page table address. Clear them to zero so that
SyncCacheConfigPage () uses the page attributes instead of trying to
convert the (bogus) section attributes into page attributes.
Contributed-under: TianoCore Contribution Agreement 1.0
Signed-off-by: Kurt Kennett <kurt.kennett@microsoft.com>
Reviewed-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Without an explicit .align directive, the Clang assembler defaults to
no alignment, which may result in instructions appearing misaligned in
the final executable. So use word alignment in all cases.
Contributed-under: TianoCore Contribution Agreement 1.0
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
As reported by Eugene, the practice of sizing the address space in the
virtual memory system based on the maximum address in the table passed
to ArmConfigureMmu() is problematic, since it fails to take into account
the fact that the GCD memory space may be extended at a later time, both
for memory and for MMIO. So instead, choose the VA size identical to the
GCD memory map size, which is based on PcdPrePiCpuMemorySize on ARM
systems.
Reported-by: Eugene Cohen <eugene@hp.com>
Contributed-under: TianoCore Contribution Agreement 1.0
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Reviewed-by: Eugene Cohen <eugene@hp.com>
Reviewed-by: Leif Lindholm <leif.lindholm@linaro.org>
Currently, we allocate a full page for the root translation table, even
if the configured translation only requires two entries (16 bytes) for
the root level, which happens to be the case for a 40 bit VA. Likewise,
for a 36-bit VA space, the root table only needs 16 entries of 8 bytes
each, adding up to 128 bytes.
So switch to a pool allocation for the root table if we can, but take into
account that the architecture requires it to be naturally aligned to its
size, i.e., a 64 byte table requires 64 byte alignment, whereas pool
allocations in general are only guaranteed to be aligned to 8 bytes.
Contributed-under: TianoCore Contribution Agreement 1.0
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Reviewed-by: Leif Lindholm <leif.lindholm@linaro.org>
In commit 7d189f99d8 ("ArmPkg/Mmu: Fix bug of aligning new allocated
page table"), we fixed a flaw in the logic regarding alignment of newly
allocated translation table pages. However, we all failed to spot that
aligning page based allocations to page size is rather pointless to
begin with, so simply allocate a single page each time we add new pages
to the translation tables.
Also, drop the unnecessary cast.
Contributed-under: TianoCore Contribution Agreement 1.0
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Reviewed-by: Leif Lindholm <leif.lindholm@linaro.org>
The relations between T0SZ, the number of translation levels and the
size/alignment of the root table can be expressed in simple arithmetic
expressions, so get rid of the lookup table.
Note that this disregards the fact that the maximum value of T0SZ is
39 not 42 (as one would expect for the smallest VA size using 2 levels)
but since this corresponds to a VA size of 32 MB and 4 MB, respectively,
neither of which are sufficient to run UEFI, we can safely ignore the
distinction.
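The arithmetic in question, for the 4 KB granule (9 VA bits resolved per
level, TT_ENTRY_COUNT = 512 entries in a full table), looks roughly like
this sketch:

  #define MIN_T0SZ        16    // largest supported VA space: 48 bits
  #define BITS_PER_LEVEL  9

  // level at which the translation table walk starts
  RootTableLevel      = (T0SZ - MIN_T0SZ) / BITS_PER_LEVEL;

  // number of entries in the root table (and thus its size and alignment)
  RootTableEntryCount = TT_ENTRY_COUNT >> ((T0SZ - MIN_T0SZ) % BITS_PER_LEVEL);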
Contributed-under: TianoCore Contribution Agreement 1.0
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Reviewed-by: Leif Lindholm <leif.lindholm@linaro.org>
The ArmGicLib API function GicGetCpuRedistributorBase () declares
GicCpuRedistributorBase to iterate over the redistributors of all
CPUs, but then inadvertently advances GicRedistributorBase instead.
Reported-by: "Oliyil Kunnil, Vishal" <vishalo@qti.qualcomm.com>
Contributed-under: TianoCore Contribution Agreement 1.0
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Reviewed-by: Leif Lindholm <leif.lindholm@linaro.org>
As reported by Vishal, the new backtrace output would be more useful if
it did not contain the full absolute path of each module in the list.
So strip off everything up to the last forward slash or backslash in the
string.
Example output:
IRQ Exception at 0x000000005EF110E0
DxeCore.dll loaded at 0x000000005EEED000
called from DxeCore.dll (0x000000005EF121F0) loaded at 0x000000005EEED000
called from DxeCore.dll (0x000000005EF1289C) loaded at 0x000000005EEED000
called from DxeCore.dll (0x000000005EEFB6B4) loaded at 0x000000005EEED000
called from DxeCore.dll (0x000000005EEFAA44) loaded at 0x000000005EEED000
called from DxeCore.dll (0x000000005EEFB450) loaded at 0x000000005EEED000
called from DxeCore.dll (0x000000005EEF938C) loaded at 0x000000005EEED000
called from DxeCore.dll (0x000000005EEF8D04) loaded at 0x000000005EEED000
called from DxeCore.dll (0x000000005EEFA8E8) loaded at 0x000000005EEED000
called from DxeCore.dll (0x000000005EEF3C14) loaded at 0x000000005EEED000
called from DxeCore.dll (0x000000005EEF3E48) loaded at 0x000000005EEED000
called from DxeCore.dll (0x000000005EF0C838) loaded at 0x000000005EEED000
called from DxeCore.dll (0x000000005EEEF70C) loaded at 0x000000005EEED000
called from DxeCore.dll (0x000000005EEEE93C) loaded at 0x000000005EEED000
called from DxeCore.dll (0x000000005EEEE024) loaded at 0x000000005EEED000
Suggested-by: "Oliyil Kunnil, Vishal" <vishalo@qti.qualcomm.com>
Contributed-under: TianoCore Contribution Agreement 1.0
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Reviewed-by: Leif Lindholm <leif.lindholm@linaro.org>
For historical reasons, the files under ArmLib are split up into 'common'
files under Common/, containing common C files as well as AArch64 and Arm
specific asm files, and ArmV7 and AArch64 files under ArmV7/ and AArch64/,
respectively. This presumably dates back to the time when ArmLib supported
different revisions of the 32-bit architecture (i.e., pre-V7).
Since the PI spec requires V7 or later, we can simplify this to Arm/ and
AArch64/, which aligns ArmLib with the majority of other modules that carry
ARM or AArch64 specific code.
So move the files around so that shared files live at the same level as
ArmBaseLib.inf, and ARM/AArch64 specific files live in Arm/ or AArch64/,
respectively.
Contributed-under: TianoCore Contribution Agreement 1.0
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Reviewed-by: Leif Lindholm <leif.lindholm@linaro.org>
The ArmBaseLib timer code does not depend on MemoryAllocationLib at
all, so remove the #includes referring to it.
Contributed-under: TianoCore Contribution Agreement 1.0
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Reviewed-by: Leif Lindholm <leif.lindholm@linaro.org>
This removes the following ArmLib implementations, which were, apart from
the fact that they targeted either ARM or AARCH64, fully identical:
ArmPkg/Library/ArmLib/AArch64/AArch64Lib.inf
ArmPkg/Library/ArmLib/AArch64/AArch64LibPei.inf
ArmPkg/Library/ArmLib/AArch64/AArch64LibPrePi.inf
ArmPkg/Library/ArmLib/AArch64/AArch64LibSec.inf
ArmPkg/Library/ArmLib/ArmV7/ArmV7Lib.inf
ArmPkg/Library/ArmLib/ArmV7/ArmV7LibPrePi.inf
ArmPkg/Library/ArmLib/ArmV7/ArmV7LibSec.inf
Only ArmBaseLib remains, which can fulfil the dependencies upon each of
the listed flavors.
Contributed-under: TianoCore Contribution Agreement 1.0
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Reviewed-by: Leif Lindholm <leif.lindholm@linaro.org>
Introduce a new ArmLib version ArmBaseLib, which encapsulates the ARM
version ArmV7Lib and the AArch64 version AArch64Lib.
Contributed-under: TianoCore Contribution Agreement 1.0
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Reviewed-by: Leif Lindholm <leif.lindholm@linaro.org>
Remove the NULL instance of ArmLib: it is not currently used, and its
usefulness is dubious.
Contributed-under: TianoCore Contribution Agreement 1.0
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Reviewed-by: Leif Lindholm <leif.lindholm@linaro.org>
According to the ACPI 6.0/6.1 spec, the physical base address of GICC,
GICD, GICR and GIC ITS is 64-bit. So change the type of the various GIC
base address PCDs to 64-bit, and fix up all users.
Contributed-under: TianoCore Contribution Agreement 1.0
Cc: Leif Lindholm <leif.lindholm@linaro.org>
Signed-off-by: Dennis Chen <dennis.chen@arm.com>
Reviewed-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
When dumping the CPU state after an unhandled fault, walk the stack
frames and decode the return addresses so we can show a minimal
backtrace. Unfortunately, we do not have sufficient information to
show the function names, but at least we can see the modules and the
return addresses inside the modules.
Contributed-under: TianoCore Contribution Agreement 1.0
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Reviewed-by: Leif Lindholm <leif.lindholm@linaro.org>
Tested-by: Leif Lindholm <leif.lindholm@linaro.org>
Clang does not like separate definitions for the __alias__ and the
__weak__ attributes, so merge the definitions into one.
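The merged form is a single attribute list on one declaration (the function
names below are placeholders):

  // definition of the real implementation
  static void __my_handler (void) { }

  // separate __attribute__((__weak__)) and __attribute__((__alias__(...)))
  // declarations upset Clang; one combined list works for GCC and Clang
  void my_handler (void) __attribute__ ((__weak__, __alias__ ("__my_handler")));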
Contributed-under: TianoCore Contribution Agreement 1.0
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Reviewed-by: Leif Lindholm <leif.lindholm@linaro.org>
After the recent update of CompilerIntrinsicsLib, our memset() is no
longer emitted as a weak symbol. On ARM, this may cause problems when
combining this library with another library that supplies memset() [e.g.,
CryptoPkg/IntrinsicLib], due to the fact that the object also supplies
the __aeabi_memXXX entry points, which can only be satisfied by this
object. So make our memset() weak again, to let the other implementation
take precedence.
Contributed-under: TianoCore Contribution Agreement 1.0
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Reviewed-by: Leif Lindholm <leif.lindholm@linaro.org>
BaseMemoryLib has recently been extended with an API function
IsZeroBuffer(), so copy the default implementation into BaseMemoryLibStm
as well.
Contributed-under: TianoCore Contribution Agreement 1.0
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Reviewed-by: Leif Lindholm <leif.lindholm@linaro.org>
BaseMemoryLib has recently been extended with an API function
IsZeroGuid(), so copy the default implementation into BaseMemoryLibStm
as well.
Contributed-under: TianoCore Contribution Agreement 1.0
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Reviewed-by: Leif Lindholm <leif.lindholm@linaro.org>
The BaseMemoryLibVstm implementation of BaseMemoryLib is ARM only, uses
the NEON register file despite the fact that the UEFI spec does not allow
it, and is currently not used anywhere. So remove it.
Contributed-under: TianoCore Contribution Agreement 1.0
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Reviewed-by: Leif Lindholm <leif.lindholm@linaro.org>
This replaces the various implementations of memset and memcpy,
including the ARM RTABI ones (__aeabi_mem[set|clr]_[|4|8]) with
a single C implementation for each. The ones we have are either not
very sophisticated (ARM), or they are too sophisticated (memcpy() on
AARCH64, which may perform unaligned accesses) or already coded in C
(memset on AArch64).
The Tianocore codebase mandates the explicit use of its SetMem() and
CopyMem() equivalents, of which various implementations exist for use
in different contexts (PEI, DXE). Few compiler generated references to
these functions should remain, and so our implementations in this BASE
library should be small and usable with the MMU off.
So replace them with a simple C implementation that builds correctly
on GCC/AARCH64, CLANG/AARCH64, GCC/ARM, CLANG/ARM and RVCT/ARM.
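A sketch of the kind of plain C implementation meant here (the real code
also provides the __aeabi_* entry points and ASM_PFX name mangling, which
are omitted):

  typedef __SIZE_TYPE__ size_t;

  void *memset (void *s, int c, size_t n)
  {
    unsigned char *d = s;

    while (n--) {
      *d++ = (unsigned char)c;
    }
    return s;
  }

  void *memcpy (void *dest, const void *src, size_t n)
  {
    unsigned char       *d = dest;
    const unsigned char *s = src;

    while (n--) {
      *d++ = *s++;
    }
    return dest;
  }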
Contributed-under: TianoCore Contribution Agreement 1.0
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Reviewed-by: Leif Lindholm <leif.lindholm@linaro.org>
Annotate functions with ASM_FUNC() so that they are emitted into
separate sections.
Contributed-under: TianoCore Contribution Agreement 1.0
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Reviewed-by: Leif Lindholm <leif.lindholm@linaro.org>
Annotate functions with ASM_FUNC() so that they are emitted into
separate sections. Note that in some cases, various entry points
refer to different parts of the same routine, so in those cases,
the files have been left untouched.
Contributed-under: TianoCore Contribution Agreement 1.0
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Reviewed-by: Leif Lindholm <leif.lindholm@linaro.org>
Annotate functions with ASM_FUNC() so that they are emitted into
separate sections.
Contributed-under: TianoCore Contribution Agreement 1.0
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Reviewed-by: Leif Lindholm <leif.lindholm@linaro.org>
Annotate functions with ASM_FUNC() so that they are emitted into
separate sections.
Contributed-under: TianoCore Contribution Agreement 1.0
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Reviewed-by: Leif Lindholm <leif.lindholm@linaro.org>
Annotate functions with ASM_FUNC() so that they are emitted into
separate sections.
Contributed-under: TianoCore Contribution Agreement 1.0
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Reviewed-by: Leif Lindholm <leif.lindholm@linaro.org>
Annotate functions with ASM_FUNC() so that they are emitted into
separate sections.
Contributed-under: TianoCore Contribution Agreement 1.0
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Reviewed-by: Leif Lindholm <leif.lindholm@linaro.org>
Annotate functions with ASM_FUNC() so that they are emitted into
separate sections.
Contributed-under: TianoCore Contribution Agreement 1.0
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Reviewed-by: Leif Lindholm <leif.lindholm@linaro.org>
Annotate functions with ASM_FUNC() so that they are emitted into
separate sections.
Contributed-under: TianoCore Contribution Agreement 1.0
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Reviewed-by: Leif Lindholm <leif.lindholm@linaro.org>
Annotate functions with ASM_FUNC() so that they are emitted into
separate sections.
Contributed-under: TianoCore Contribution Agreement 1.0
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Reviewed-by: Leif Lindholm <leif.lindholm@linaro.org>
Annotate functions with ASM_FUNC() so that they are emitted into
separate sections.
Contributed-under: TianoCore Contribution Agreement 1.0
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Reviewed-by: Leif Lindholm <leif.lindholm@linaro.org>
The C language is powerful enough to implement a function that does
absolutely nothing, so there is no need to resort to implementations
in assembler for various toolchains/architectures.
Contributed-under: TianoCore Contribution Agreement 1.0
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Reviewed-by: Leif Lindholm <leif.lindholm@linaro.org>
This introduces the ASM_FUNC() macro to annotate function entry points
in assembler files. This allows us to add additional metadata that
marks a function entry point as a function, and allows us to emit
a .section directive for each function, which makes it possible for
the linker to drop unreferenced code.
In addition, introduce a couple of utility macros that we can use to
clean up the code.
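The general shape of such a macro for GNU as is sketched below (the actual
definitions in AsmMacroIoLib.h/AsmMacroIoLibV8.h differ in detail):

  #define _ASM_FUNC(Name, Section)    \
    .global   Name                  ; \
    .section  #Section, "ax"        ; \
    .type     Name, %function       ; \
    Name:

  #define ASM_FUNC(Name)  _ASM_FUNC(ASM_PFX(Name), .text. ## Name)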
Contributed-under: TianoCore Contribution Agreement 1.0
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Reviewed-by: Eugene Cohen <eugene@hp.com>
Reviewed-by: Leif Lindholm <leif.lindholm@linaro.org>
This removes the various Mmio ASM macros that are not used anywhere in
the code, and removes some variants of LoadConstant... () that are not
used anywhere either.
Note that these MmioXxx() implementations are unrelated to the C versions
defined in MdePkg. These are strictly intended for use in assembler, and
no such uses remain.
Contributed-under: TianoCore Contribution Agreement 1.0
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Reviewed-by: Leif Lindholm <leif.lindholm@linaro.org>
The function ArmReplaceLiveTranslationEntry() has been moved to
ArmMmuLib, so remove the old implementation from ArmLib.
Note that the new implementation was not exported from the object file,
and so references to it were satisfied by the old version residing in
ArmLib. Since we are removing that one, we need to export the new one
at the same time to prevent the linker from bailing with undefined
reference errors.
Contributed-under: TianoCore Contribution Agreement 1.0
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Reviewed-by: Leif Lindholm <leif.lindholm@linaro.org>
This commit fixes a bug in the GIC v2 and v3 drivers where the GICC_EOIR
(End Of Interrupt Register) is written twice for a single interrupt.
GicV(2|3)IrqInterruptHandler() calls the Interrupt Handler and then
GicV(2|3)EndOfInterrupt() on exit:
  InterruptHandler = gRegisteredInterruptHandlers[GicInterrupt];
  if (InterruptHandler != NULL) {
    // Call the registered interrupt handler.
    InterruptHandler (GicInterrupt, SystemContext);
  } else {
    DEBUG ((EFI_D_ERROR, "Spurious GIC interrupt: 0x%x\n", GicInterrupt));
  }
  GicV2EndOfInterrupt (&gHardwareInterruptV2Protocol, GicInterrupt);
although gInterrupt->EndOfInterrupt() can be expected to have already
been called by InterruptHandler() [which is the case for the primary
in-tree handler in TimerDxe].
The fix moves the EndOfInterrupt() call inside the else case for
unregistered/spurious interrupts. This removes a potential race
condition that might have lost interrupts.
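After the change, the explicit EOI write is only issued on the
spurious/unregistered path (sketch):

  if (InterruptHandler != NULL) {
    // The registered handler is expected to signal EOI itself,
    // as the in-tree TimerDxe handler already does.
    InterruptHandler (GicInterrupt, SystemContext);
  } else {
    DEBUG ((EFI_D_ERROR, "Spurious GIC interrupt: 0x%x\n", GicInterrupt));
    GicV2EndOfInterrupt (&gHardwareInterruptV2Protocol, GicInterrupt);
  }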
Contributed-under: TianoCore Contribution Agreement 1.0
Signed-off-by: Alexei Fedorov <alexei.fedorov@arm.com>
Signed-off-by: Evan Lloyd <evan.lloyd@arm.com>
Reviewed-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
The ARM compiler intrinsics library defines __aeabi_memset() and
memset() in the same object, which means that both will be pulled
in if either is referenced.
The IntrinsicLib in CryptoPkg defines its own, preferred memset(),
which may clash with our memset(). So make our version weak.
Contributed-under: TianoCore Contribution Agreement 1.0
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Reviewed-by: Leif Lindholm <leif.lindholm@linaro.org>
Building ArmSoftFloatLib with LTO results in errors like
.../bin/ld: softfloat.obj: plugin needed to handle lto object
.../bin/ld: __aeabi_dcmpge.obj: plugin needed to handle lto object
.../bin/ld: __aeabi_dcmplt.obj: plugin needed to handle lto object
.../bin/ld: internal error ../../ld/ldlang.c 6299
This library is only linked by OpensslLib at the moment, and only
marginally used at runtime, so just disable LTO for it.
Contributed-under: TianoCore Contribution Agreement 1.0
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Reviewed-by: Leif Lindholm <leif.lindholm@linaro.org>
GCC in LTO mode interoperates poorly with non-standard libraries that
provide implementations of compiler intrinsics such as memcpy/memset
or the stack protector entry points. Such libraries need to be built
in non-LTO mode, and then referenced explicitly on the linker command
line using a -plugin-opt=-pass-through=-lxxx linker option.
However, if these intrinsics are also referenced directly, the LTO
version of the code will be pulled in, and will happily satisfy all
other references to the same symbol.
So add a pair of glue libraries, for ARM and AARCH64, that reference
the known intrinsics. Since the binaries live under ArmPkg directly,
we can reference them in tools_def.txt. Under LD garbage collection,
the object itself will be pruned, and so will the intrinsics that end
up unused by the module.
Contributed-under: TianoCore Contribution Agreement 1.0
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Acked-by: Jordan Justen <jordan.l.justen@intel.com>
Reviewed-by: Leif Lindholm <leif.lindholm@linaro.org>
ArmLib defines a prototype for the ArmReadSctlr() function, but the
AArch64 implementation is missing. So add it.
Contributed-under: TianoCore Contribution Agreement 1.0
Signed-off-by: John Powell <john.powell@arm.com>
Signed-off-by: Supreeth Venkatesh <supreeth.venkatesh@arm.com>
[ardb: update commit log]
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Add the Cortex-A72 CPU type which is used in JunoR2.
Contributed-under: TianoCore Contribution Agreement 1.0
Signed-off-by: Jeremy Linton <jeremy.linton@arm.com>
Reviewed-by: Leif Lindholm <leif.lindholm@linaro.org>
Unlike SGIs and PPIs, which are private to the CPU and are managed at
the redistributor level (which is also a per-CPU construct), shared
interrupts (SPIs) are shared between all CPUs, and therefore managed at
the distributor level (just as on GICv2).
Reported-by: Narinder Dhillon <ndhillonv2@gmail.com>
Contributed-under: TianoCore Contribution Agreement 1.0
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Reviewed-by: Leif Lindholm <leif.lindholm@linaro.org>
Commit fafb7e9c11 ("ArmPkg: correct TTBR1_EL1 settings in TCR_EL1")
introduced a symbolic constant TCR_TG1_4KB which resolves to (2 << 30),
and ORs it into the value to be written into TCR_EL1 (if executing at
EL1). Since the constant is implicitly typed as signed int, and has the
sign bit set, the promotion that occurs when casting to UINT64 results
in a TCR value that has bits [63:32] all set, which includes mostly
RES0 bits but also the TBIn, AS and IPS fields.
So explicitly redefine all TCR related constants as 'unsigned long'
types, using the UL suffix. To avoid confusion in the future, the
inappropriately named VTCR_EL23_xxx constants have the leading V
removed, and the actual VTCR_EL2 related constants are dropped, given
that we never configure stage 2 translation in UEFI.
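The pitfall in isolation (constant names illustrative; TG1 occupies TCR_EL1
bits [31:30], so the value 2 sets bit 31):

  #define TCR_TG1_4KB_BAD   (2 << 30)     // int, sign bit set
  #define TCR_TG1_4KB_GOOD  (2UL << 30)   // unsigned long, no sign extension

  UINT64  Tcr;

  Tcr = TCR_TG1_4KB_BAD;    // 0xFFFFFFFF80000000: bits [63:31] all set
  Tcr = TCR_TG1_4KB_GOOD;   // 0x0000000080000000: only bit 31 set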
Reported-by: Vishal Oliyil Kunnil <vishalo@qti.qualcomm.com>
Contributed-under: TianoCore Contribution Agreement 1.0
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Reviewed-by: Leif Lindholm <leif.lindholm@linaro.org>
Acked-by: Mark Rutland <mark.rutland@arm.com>
This introduces a special version of ArmMmuLib for PEIMs that takes care
only to perform cache maintenance on the live entry replacement routine
if the module is not executing in place. Not only is such cache maintenance
unnecessary in that case, it may be actively harmful on some systems that
fail to tolerate cache maintenance operations on NOR flash regions.
Contributed-under: TianoCore Contribution Agreement 1.0
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Reviewed-by: Leif Lindholm <leif.lindholm@linaro.org>
Switch all users of ArmLib that depend on the MMU routines to the new,
separate ArmMmuLib. This needs to occur in one go, since the MMU
routines are removed from ArmLib build at the same time, to prevent
conflicting symbols.
Contributed-under: TianoCore Contribution Agreement 1.0
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Acked-by: Laszlo Ersek <lersek@redhat.com>
Reviewed-by: Star Zeng <star.zeng@intel.com>
This base library encapsulates the MMU manipulation routines that have been
factored out of ArmLib. The functionality covers initial creation of the 1:1
mapping in the page tables, and remapping regions to change permissions or
cacheability attributes.
Contributed-under: TianoCore Contribution Agreement 1.0
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Reviewed-by: Leif Lindholm <leif.lindholm@linaro.org>
Introduce the library class ArmMmuLib, which encapsulates the functionality
to set up and modify page table entries.
Contributed-under: TianoCore Contribution Agreement 1.0
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Reviewed-by: Leif Lindholm <leif.lindholm@linaro.org>
SErrors (formerly called asynchronous aborts) are a distinct class of
exceptions that are not closely tied to the currently executing
instruction. Since execution may be able to proceed in such a condition,
this class of exception is masked by default, and software needs to unmask
it explicitly if it is prepared to handle such exceptions.
On DEBUG builds, we are well equipped to report the CPU context to the user
and it makes sense to report an SError as soon as it occurs rather than to
wait for the OS to take it when it unmasks them, especially since the current
arm64/Linux implementation simply panics in that case. So unmask them when
ArmCpuDxe loads.
Contributed-under: TianoCore Contribution Agreement 1.0
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Acked-by: Mark Rutland <mark.rutland@arm.com>
Reviewed-by: Leif Lindholm <leif.lindholm@linaro.org>
Putting DEBUG () code after an ASSERT (FALSE) statement is not very
useful, since the code will be unreachable on DEBUG builds and compiled
out on RELEASE builds. So move the ASSERT () statement after it.
Contributed-under: TianoCore Contribution Agreement 1.0
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Acked-by: Mark Rutland <mark.rutland@arm.com>
Reviewed-by: Leif Lindholm <leif.lindholm@linaro.org>
Reassign all interrupts to non-secure Group-1 if the GIC has its DS
(Disable Security) bit set. In this case, it is safe to assume that we
own the GIC, and that no other firmware has performed any configuration
yet, which means it is up to us to reconfigure the interrupts so they
can be taken by the non-secure firmware.
Contributed-under: TianoCore Contribution Agreement 1.0
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Reviewed-by: Leif Lindholm <leif.lindholm@linaro.org>
On some platforms, performing cache maintenance on regions that are backed
by NOR flash result in SErrors. Since cache maintenance is unnecessary in
that case, create a PEIM specific version that only performs said cache
maintenance in its constructor if the module is shadowed in RAM. To avoid
performing the cache maintenance if the MMU code is not used to begin with,
check that explicitly in the constructor.
Contributed-under: TianoCore Contribution Agreement 1.0
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Tested-by: Heyi Guo <heyi.guo@linaro.org>
Reviewed-by: Leif Lindholm <leif.lindholm@linaro.org>
This implements the platform glue for the new generic BDS implementation.
It is based on the ArmVirtQemu version, with the QEMU references removed.
Contributed-under: TianoCore Contribution Agreement 1.0
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Reviewed-by: Leif Lindholm <leif.lindholm@linaro.org>
Instead of cleaning the data cache to the PoU by virtual address and
subsequently invalidating the entire I-cache, invalidate only the
range that we just cleaned. This way, we don't invalidate other
cachelines unnecessarily.
Contributed-under: TianoCore Contribution Agreement 1.0
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Reviewed-by: Leif Lindholm <leif.lindholm@linaro.org>
When we split a block entry into a table entry, the UXN/PXN/XN permission
attributes are inherited both by the new table entry and by the new block
entries at the next level down. Unlike the NS bit, which only affects the
next level of lookup, the XN table bits supersede the permissions of the
final translation, and setting the permissions at multiple levels is not
only redundant, it also prevents us from lifting XN restrictions on a
subregion of the original block entry by simply clearing the appropriate
bits at the lowest level.
So drop the code that sets the UXN/PXN/XN bits on the table entries.
Reported-by: "Oliyil Kunnil, Vishal" <vishalo@qti.qualcomm.com>
Contributed-under: TianoCore Contribution Agreement 1.0
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Reviewed-by: Leif Lindholm <leif.lindholm@linaro.org>
DmaMap () only allows uncached mappings to be used for creating consistent
mappings with operation type MapOperationBusMasterCommonBuffer. However,
if the buffer passed to DmaMap () happens to be aligned to the CWG, there
is no need for a bounce buffer, and we perform the cache maintenance
directly without ever checking if the memory attributes of the buffer
adhere to the API.
So add some debug code that asserts that the operation type and the memory
attributes are consistent.
Contributed-under: TianoCore Contribution Agreement 1.0
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Tested-by: Ryan Harkin <ryan.harkin@linaro.org>
Reviewed-by: Leif Lindholm <leif.lindholm@linaro.org>
In the DmaMap () operation, if the region to be mapped happens to be
aligned to the Cache Writeback Granule (CWG) (whose value is typically
64 or 128 bytes and 2 KB maximum), we remap the memory as uncached.
Since remapping memory occurs at page granularity, while the buffer and the
CWG may be much smaller, there is no telling what other memory we affect
by doing this, especially since the operation is not reverted in DmaUnmap().
So remove the remapping call.
Contributed-under: TianoCore Contribution Agreement 1.0
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Reviewed-by: Leif Lindholm <leif.lindholm@linaro.org>
DmaMap () operations of type MapOperationBusMasterCommonBuffer should
return a mapping that is coherent between the CPU and the device. For
this reason, the API only allows DmaMap () to be called with this operation
type if the memory to be mapped was allocated by DmaAllocateBuffer (),
which in this implementation guarantees the coherency by using uncached
mappings on the CPU side.
This means that, if we encounter a cached mapping in DmaMap () with this
operation type, the code is either broken, or someone is violating the
API, but simply proceeding with a double buffer makes no sense at all,
and can only cause problems.
So instead, actively reject this operation type for cached memory mappings.
Contributed-under: TianoCore Contribution Agreement 1.0
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Reviewed-by: Leif Lindholm <leif.lindholm@linaro.org>
Comparing a GCD attribute field directly against EFI_MEMORY_UC and
EFI_MEMORY_WT is incorrect, since it may have other bits set as well
which are not related to the cacheability of the region. So instead,
test explicitly against the flags EFI_MEMORY_WB and EFI_MEMORY_WT,
which must be set if the region may be mapped with cacheable attributes.
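In code, the check becomes a mask test rather than an equality comparison
(sketch):

  // check the cacheability flags explicitly, rather than comparing the
  // whole attribute word against EFI_MEMORY_UC / EFI_MEMORY_WT
  if ((GcdAttributes & (EFI_MEMORY_WB | EFI_MEMORY_WT)) != 0) {
    // the region may be mapped with cacheable attributes
  }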
Contributed-under: TianoCore Contribution Agreement 1.0
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Reviewed-by: Leif Lindholm <leif.lindholm@linaro.org>
We manage to use both an AND operation with 'gCacheAlignment - 1' and a
modulo operation with 'gCacheAlignment' in the same compound if statement.
Since gCacheAlignment is a global of which the compiler cannot guarantee
that it is a power of two, simply use the AND version in both cases.
Contributed-under: TianoCore Contribution Agreement 1.0
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Reviewed-by: Leif Lindholm <leif.lindholm@linaro.org>
The allocation function UncachedAllocatePages () may return NULL, in
which case our implementation of DmaAllocateBuffer () should return
EFI_OUT_OF_RESOURCES rather than silently ignoring the NULL value and
returning EFI_SUCCESS.
Contributed-under: TianoCore Contribution Agreement 1.0
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Reviewed-by: Leif Lindholm <leif.lindholm@linaro.org>
This adds a partial stack dump (256 bytes at either side of the stack
pointer) to the CPU state dumping routine that is invoked when taking an
unexpected exception. Since dereferencing the stack pointer may itself
fault, ensure that we don't enter the dumping routine recursively.
Contributed-under: TianoCore Contribution Agreement 1.0
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Acked-by: Leif Lindholm <leif.lindholm@linaro.org>
The default exception handler, which is essentially the one that is invoked
for unexpected exceptions, ends with an ASSERT (FALSE), to ensure that
execution halts after dumping the CPU state. However, ASSERTs are compiled
out in RELEASE builds, and since we simply return to wherever the ELR is
pointing, we will not make any progress in case of synchronous aborts, and
the same exception will be taken again immediately, resulting in the string
'Exception at 0x....' to be printed over and over again.
So use an explicit deadloop instead.
Contributed-under: TianoCore Contribution Agreement 1.0
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Acked-by: Leif Lindholm <leif.lindholm@linaro.org>
The CpuIo2 protocol is required by the generic PciHostBridgeDxe driver,
which relies on it to back its own I/O and MMIO operations.
Since ARM has no native I/O port equivalent, such accesses can only
originate from PCI drivers, and the PCI I/O space is translated to MMIO
in this case.
So we can implement this protocol using MMIO operations only, and take
the PCI I/O translation offset into account when performing I/O port
accesses.
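Conceptually, each I/O port accessor reduces to an MMIO access at a
translated address, e.g. (sketch; PcdPciIoTranslation is the
translation-offset PCD assumed here):

  // read a 32-bit value from PCI I/O port space
  Data = MmioRead32 ((UINTN)PcdGet64 (PcdPciIoTranslation) + Port);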
Contributed-under: TianoCore Contribution Agreement 1.0
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Reviewed-by: Leif Lindholm <leif.lindholm@linaro.org>
The PCI related PCDs are not platform specific, and architectural
protocols such as CpuIo2 are based on PCI provided MMIO to IO
translation, so these PCDs belong in ArmPkg not ArmPlatformPkg.
NOTE: this *WILL* break some out-of-tree platforms, the fix is changing
all consumers of gArmPlatformTokenSpaceGuid.PcdPci* to
gArmTokenSpaceGuid.PcdPci*
Contributed-under: TianoCore Contribution Agreement 1.0
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Reviewed-by: Leif Lindholm <leif.lindholm@linaro.org>
mGicNumInterrupts is the total number of interrupts, so the interrupt
ID equal to mGicNumInterrupts is also invalid.
Contributed-under: TianoCore Contribution Agreement 1.0
Signed-off-by: Heyi Guo <heyi.guo@linaro.org>
Reviewed-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
On ARM, manipulating live page tables is cumbersome since the architecture
mandates the use of break-before-make, i.e., replacing a block entry with
a table entry requires an intermediate step via an invalid entry, or TLB
conflicts may occur.
Since it is not generally feasible to decide in the page table manipulation
routines whether such an invalid entry will result in those routines
themselves to become unavailable, use a function that is callable with
the MMU off (i.e., a leaf function that does not access the stack) to
perform the change of a block entry into a table entry.
Note that the opposite should never occur, i.e., table entries are never
coalesced into block entries.
Contributed-under: TianoCore Contribution Agreement 1.0
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Acked-by: Mark Rutland <mark.rutland@arm.com>
The XN attribute is now set automatically if the region is declared as
device memory. However, the function ArmMemoryAttributeToPageAttribute
returns attributes for block and page descriptors, not for table
descriptors, so the TT_TABLE_*XN attributes do not actually take effect;
TT_*XN_MASK needs to be used instead.
Contributed-under: TianoCore Contribution Agreement 1.0
Signed-off-by: Heyi Guo <heyi.guo@linaro.org>
Reviewed-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Some minor typographical problems were noticed during previous commits.
This change corrects those, and contains no functional modifications.
The changes are in comments, and one diagnostic message.
Contributed-under: TianoCore Contribution Agreement 1.0
Signed-off-by: Evan Lloyd <evan.lloyd@arm.com>
Reviewed-by: Ryan Harkin <ryan.harkin@linaro.org>
Reviewed-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
The TimerFreq variable in the TimerConstructor() is unused in RELEASE
builds since ASSERTs are then disabled.
The only use of the variable (in the ASSERT) is replaced by a direct
invocation of the function previously used to set it.
NOTE: The build tools suppress warnings of this using compiler options
eg. -Wno-unused-but-set-variable for GCC toolchain or
--diag_suppress=550 for RVCT toolchain.
Contributed-under: TianoCore Contribution Agreement 1.0
Signed-off-by: Evan Lloyd <evan.lloyd@arm.com>
Reviewed-by: Ryan Harkin <ryan.harkin@linaro.org>
Reviewed-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
FirmwarePerformanceDxe.c utilizes the Timer Library function
GetTimeInNanoSecond() which was not implemented by the ArmArchTimerLib.
Contributed-under: TianoCore Contribution Agreement 1.0
Signed-off-by: Evan Lloyd <evan.lloyd@arm.com>
Reviewed-by: Ryan Harkin <ryan.harkin@linaro.org>
Reviewed-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
This refactors some timer code to define MultU64xN as a preprocessor
symbol rather than a function pointer, and to factor out the code that
obtains the timer frequency into GetPlatformTimerFreq ().
Contributed-under: TianoCore Contribution Agreement 1.0
Signed-off-by: Evan Lloyd <evan.lloyd@arm.com>
Reviewed-by: Ryan Harkin <ryan.harkin@linaro.org>
Contributed-under: TianoCore Contribution Agreement 1.0
[ard.biesheuvel: split off from 'add GetTimeInNanoSecond() to ArmArchTimerLib']
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
The function ArmClearMemoryRegionReadOnly() was supposed to undo the
effect of ArmSetMemoryRegionReadOnly(), but instead, it sets the permissions
to EL0-no access, EL1-read-only. Since the EL0 bit should be 1 to align
with EL2/3 (where the bit is SBO), use TT_AP_RW_RW instead, which makes the
entry read-write for EL0 when executing at EL1, and read-write for all other
levels.
Contributed-under: TianoCore Contribution Agreement 1.0
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Reviewed-by: Leif Lindholm <leif.lindholm@linaro.org>
This replaces the somewhat opaque preprocessor based stack/unstack macros
with open coded ldp/stp sequences to preserve the interrupted context
before handing over to the exception handler in C.
This removes various arithmetic operations on the stack pointer, and
reduces the exception return critical section to its minimum size (i.e.,
the bare minimum required to populate the ELR and SPSR registers and invoke
the eret).
Contributed-under: TianoCore Contribution Agreement 1.0
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Reviewed-by: Leif Lindholm <leif.lindholm@linaro.org>
Reviewed-by: Eugene Cohen <eugene@hp.com>
If we are using the vector table in place, there is no need to make an
indirect call to the common handler routine from the vector table entries,
so just use a straight branch instruction in that case.
Contributed-under: TianoCore Contribution Agreement 1.0
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Reviewed-by: Leif Lindholm <leif.lindholm@linaro.org>
Reviewed-by: Eugene Cohen <eugene@hp.com>
The global gArmRelocateVectorTable is a build time constant, but due to
its external linkage and lack of constness, the compiler does not see that.
So turn it into a static boolean, and at the same time, make the function
CopyExceptionHandlers() (which is only called if gArmRelocateVectorTable is
set) static as well, so that the compiler can eliminate it completely if
we are using the vector table in place.
Contributed-under: TianoCore Contribution Agreement 1.0
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Reviewed-by: Leif Lindholm <leif.lindholm@linaro.org>
Reviewed-by: Eugene Cohen <eugene@hp.com>
ESR and FAR are populated by the hardware upon exception entry, and
describe the exception, not the interrupted context. So there is no point
in restoring their values before returning from the exception.
Contributed-under: TianoCore Contribution Agreement 1.0
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Reviewed-by: Leif Lindholm <leif.lindholm@linaro.org>
Reviewed-by: Eugene Cohen <eugene@hp.com>
We have three code paths to stack/unstack the exception context, one for
each of EL3, EL2 and EL1. However, they all access the same copy of FPSR,
so move that access to the common path.
Contributed-under: TianoCore Contribution Agreement 1.0
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Reviewed-by: Leif Lindholm <leif.lindholm@linaro.org>
Reviewed-by: Eugene Cohen <eugene@hp.com>
Unlike the AArch32 vector table, which has room for a single instruction
for each exception type, the AArch64 exception table has 128-byte slots,
which can easily hold the shared prologues that are emitted out of line.
So refactor this code into a single macro, and expand it into each vector
table slot. Since the address of the common handler entry point is no
longer patched in by the C code, we can just emit the literal into each
vector entry directly.
Contributed-under: TianoCore Contribution Agreement 1.0
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Reviewed-by: Leif Lindholm <leif.lindholm@linaro.org>
Reviewed-by: Eugene Cohen <eugene@hp.com>
The macros EL1_OR_EL2() and EL1_OR_EL2_OR_EL3() allow conditional
execution of assembly sequences based on the current exception level, by
jumping to caller-supplied labels 1f, 2f or 3f. However, the jump to 1f is
actually a fall-through, which means the EL1 code has to follow right
after the macro invocation, and the 1f label is ignored.
So let's fix this by making all jumps explicit.
Contributed-under: TianoCore Contribution Agreement 1.0
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Reviewed-by: Leif Lindholm <leif.lindholm@linaro.org>
Reviewed-by: Eugene Cohen <eugene@hp.com>
Use the new ARM/AArch64 implementation of the base
CpuExceptionHandlerLib library from CpuDxe to centralize
exception handling.
Contributed-under: TianoCore Contribution Agreement 1.0
Signed-off-by: Eugene Cohen <eugene@hp.com>
Reviewed-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Introduce ARM and AArch64 instances of the CpuExceptionHandlerLib which
provides exception handling and registration of handlers regardless of
execution phase.
Two variants of the ArmExceptionLib are provided: one where the exception
handlers reside within the module (meeting the architectural alignment
requirements for the vector table), and another that relocates a copy of
the exception handlers to the address specified by
PcdCpuVectorBaseAddress. The ArmRelocateExceptionLib is intended for cases
where the vector table alignment padding would make ArmExceptionLib too
large for the image (e.g. uncompressed XIP images).
The AArch64 build of this library supports execution at EL1, EL2, and EL3
exception levels.
Tested on ARM and AArch64 with the SEC, DXE Core, and CpuDxe modules.
Contributed-under: TianoCore Contribution Agreement 1.0
Signed-off-by: Eugene Cohen <eugene@hp.com>
Reviewed-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Add ArmReadHcr() to ArmLib to enable read-modify-write of the HCR system
register.
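A minimal usage sketch; the HCR bit name is an assumption used for
illustration only:

  UINTN  Hcr;

  // Read-modify-write: set a single routing bit (e.g. HCR.IMO) without
  // disturbing the rest of the register.
  Hcr  = ArmReadHcr ();
  Hcr |= ARM_HCR_IMO;
  ArmWriteHcr (Hcr);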
Contributed-under: TianoCore Contribution Agreement 1.0
Signed-off-by: Eugene Cohen <eugene@hp.com>
Reviewed-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Architecturally, the reset value of the TTBCR register is undefined in
the Non-Secure world. On some platforms the reset value of TTBCR is not
zero, and this causes a data abort exception once the MMU is enabled.
This patch configures the TTBCR register to enable translation table
walks using TTBR0.
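A minimal sketch, assuming an ArmLib accessor along the lines of
ArmSetTTBCR () (name assumed here); programming TTBCR.N = 0 directs all
translation table walks to TTBR0:

  // Give TTBCR a known value so that table walks use TTBR0 only.
  ArmSetTTBCR (0);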
Contributed-under: TianoCore Contribution Agreement 1.0
Signed-off-by: Evan Lloyd <evan.lloyd@arm.com>
Update the CpuDxe driver to remove an assumption that it is the only
component modifying interrupt state since this can be done through BaseLib
as well. Instead of using a global variable for last interrupt state we
now check the current PSTATE value directly.
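A minimal sketch of the idea, assuming the ArmLib accessor that reads the
interrupt mask bits from PSTATE/CPSR:

  // Inside the CPU Architecture Protocol GetInterruptState() handler:
  // derive the state from the CPU itself rather than from a cached
  // module-global that other components cannot keep up to date.
  *State = ArmGetInterruptState ();
  return EFI_SUCCESS;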
Contributed-under: TianoCore Contribution Agreement 1.0
Signed-off-by: Eugene Cohen <eugene@hp.com>
Reviewed-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
The AArch64 DAIF bits are different for reading (mrs) versus writing
(msr). The bitmask definitions assumed they were the same, causing
incorrect results when trying to determine the current interrupt state
through ArmGetInterruptState.
The logic for interpreting the DAIF read data using the csel instruction
was also incorrect, and is fixed as well.
Contributed-under: TianoCore Contribution Agreement 1.0
Signed-off-by: Eugene Cohen <eugene@hp.com>
Reviewed-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Modify the DefaultExceptionHandler (uefi-variant) so it can be used by
DxeCore (via CpuExceptionHandlerLib) where the debug info table is not
yet published at library constructor time.
Contributed-under: TianoCore Contribution Agreement 1.0
Signed-off-by: Eugene Cohen <eugene@hp.com>
Tested-by: Ryan Harkin <ryan.harkin@linaro.org>
Reviewed-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Some updates to SCR can cause a problem that manifests as an undefined
opcode exception. This can occur when a speculative secure instruction
fetch happens after the NS bit has been set. An ISB is required to make
the register change take full effect.
Contributed-under: TianoCore Contribution Agreement 1.0
Signed-off-by: Evan Lloyd <Evan.Lloyd@arm.com>
Reviewed-by: Sami Mujawar <Sami.Mujawar@arm.com>
Reviewed-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Reviewed-by: Leif Lindholm <leif.lindholm@linaro.org>
Problems have been encountered because some of the source files have
execute permission set. This can cause git to report them as changed
when they are checked out onto a file system with inherited permissions.
This has been seen using Cygwin, MinGW and PowerShell Git.
This patch makes no change to source file content, and only aims to
correct the file modes/permissions.
Contributed-under: TianoCore Contribution Agreement 1.0
Signed-off-by: Evan Lloyd <evan.lloyd@arm.com>
git-svn-id: https://svn.code.sf.net/p/edk2/code/trunk/edk2@19778 6f19259b-4bc3-4df7-8a09-765794883524
The RVCT compiler may emit calls to the various __aeabi_c?cmp??
functions, which return their results via the CPU condition flags
C and Z. According to ARM doc IHI 0043D 'Run-time ABI for the ARM
architecture':
The 3-way comparison functions c*cmple, c*cmpeq and c*rcmple return
their results in the CPSR Z and C flags. C is clear only if the operands
are ordered and the first operand is less than the second. Z is set only
when the operands are ordered and equal.
Add implementations for the double and float variants of the above.
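A hypothetical C model of the required semantics (the real routines must
return these results in the CPSR flags, which needs an assembly shim; this
helper only illustrates the truth table):

  typedef struct {
    BOOLEAN  CarrySet;   // CPSR C flag
    BOOLEAN  ZeroSet;    // CPSR Z flag
  } AEABI_CMP_FLAGS;

  STATIC
  AEABI_CMP_FLAGS
  ModelCdcmple (
    IN double  A,
    IN double  B
    )
  {
    AEABI_CMP_FLAGS  Flags;
    BOOLEAN          Ordered;

    Ordered        = (BOOLEAN)!(A != A || B != B);    // neither operand is NaN
    Flags.CarrySet = (BOOLEAN)!(Ordered && (A < B));  // C clear only if ordered and A < B
    Flags.ZeroSet  = (BOOLEAN)(Ordered && (A == B));  // Z set only if ordered and equal
    return Flags;
  }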
Contributed-under: TianoCore Contribution Agreement 1.0
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Reviewed-by: Leif Lindholm <leif.lindholm@linaro.org>
git-svn-id: https://svn.code.sf.net/p/edk2/code/trunk/edk2@19327 6f19259b-4bc3-4df7-8a09-765794883524
Unfortunately, Clang does not support the use of symbol references in .org
directives, and bails with the following error message when it encounters
them:
<...>:error: expected assembly-time absolute expression
.org DebugAgentVectorTable + 0x000
So replace the .org arguments with absolute values, and move the whole
vector table into a subsection with the appropriate alignment, and
starting at .org 0x0. This gives the same protection with respect to
entries that exceed 128 bytes, in a way that Clang supports as well.
Contributed-under: TianoCore Contribution Agreement 1.0
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Acked-by: Mark Rutland <mark.rutland@arm.com>
Reviewed-by: Leif Lindholm <leif.lindholm@linaro.org>
git-svn-id: https://svn.code.sf.net/p/edk2/code/trunk/edk2@19303 6f19259b-4bc3-4df7-8a09-765794883524
Commit SVN r18778 made all mappings of normal memory (inner) shareable,
even on hardware that implements shareability as uncached accesses.
The original concerns that prompted the change, regarding coherent DMA
and virt guests migrating between CPUs, do not apply to such hardware,
so revert to the original behavior in that case.
Contributed-under: TianoCore Contribution Agreement 1.0
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Reviewed-by: Leif Lindholm <leif.lindholm@linaro.org>
git-svn-id: https://svn.code.sf.net/p/edk2/code/trunk/edk2@19285 6f19259b-4bc3-4df7-8a09-765794883524
The -fno-tree-vrp option is not required for GCC 4.8 or later, and is not
supported by CLANG. So restrict its use to GCC 4.6 and 4.7, which are the
oldest versions we support for ARM.
Contributed-under: TianoCore Contribution Agreement 1.0
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Reviewed-by: Leif Lindholm <leif.lindholm@linaro.org>
git-svn-id: https://svn.code.sf.net/p/edk2/code/trunk/edk2@19283 6f19259b-4bc3-4df7-8a09-765794883524
The open coded access to co-processor #10 to set FPEXC is not supported
by the CLANG assembler, but the architecturally correct VMSR instruction
is not supported by older binutils. So keep the former unless __clang__
is defined.
Contributed-under: TianoCore Contribution Agreement 1.0
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Reviewed-by: Leif Lindholm <leif.lindholm@linaro.org>
git-svn-id: https://svn.code.sf.net/p/edk2/code/trunk/edk2@19282 6f19259b-4bc3-4df7-8a09-765794883524
CLANG for ARM may emit calls to __aeabi_memset(), which is subtly
different from the default memset() [arguments 2 and 3 are reversed].
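A minimal sketch of the shim, assuming the AEABI prototype (destination,
size, fill value) and forwarding to the BaseMemoryLib SetMem ():

  // AEABI argument order: the size comes second and the fill value third,
  // the reverse of the standard memset(dest, value, size).
  void
  __aeabi_memset (
    void   *Dest,
    UINTN  Size,
    int    Value
    )
  {
    SetMem (Dest, Size, (UINT8)Value);
  }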
Contributed-under: TianoCore Contribution Agreement 1.0
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Reviewed-by: Leif Lindholm <leif.lindholm@linaro.org>
git-svn-id: https://svn.code.sf.net/p/edk2/code/trunk/edk2@19281 6f19259b-4bc3-4df7-8a09-765794883524
The CLANG assembler does not support the legacy, non-unified assembler syntax,
i.e., it does not support the reordering of the condition suffixes with the
increment/decrement before/after or byte/word suffixes, and it does not
recognize the 'empty descending' (ED) suffix at all. So move to the unified
syntax, and replace 'empty descending' with 'decrement after' or 'increment
before' as appropriate.
Contributed-under: TianoCore Contribution Agreement 1.0
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Reviewed-by: Leif Lindholm <leif.lindholm@linaro.org>
git-svn-id: https://svn.code.sf.net/p/edk2/code/trunk/edk2@19280 6f19259b-4bc3-4df7-8a09-765794883524
In the function ArmGicEnableDistributor (), the Affinity Routing Enable
(ARE) bit, which essentially defines whether the GIC runs in v2 or v3
mode, is inadvertently cleared when enabling the GIC distributor if it
is running in v3 mode. So fix that.
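A minimal sketch of the fix, assuming the IoLib accessors and the
ARM_GIC_ICDDCR (GICD_CTLR) offset used elsewhere in ArmGicLib; the point
is the read-modify-write:

  // Enable the distributor without clearing bits (such as ARE) that were
  // programmed earlier: OR in the enable bit instead of overwriting the
  // whole register.
  MmioOr32 (GicDistributorBase + ARM_GIC_ICDDCR, 0x1);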
Reported-by: Supreeth Venkatesh <Supreeth.Venkatesh@arm.com>
Contributed-under: TianoCore Contribution Agreement 1.0
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Reviewed-by: Leif Lindholm <leif.lindholm@linaro.org>
git-svn-id: https://svn.code.sf.net/p/edk2/code/trunk/edk2@19274 6f19259b-4bc3-4df7-8a09-765794883524
Since we do not support anything below ARMv7, let's promote the ARMv6
exception handling code in CpuDxe to the only version we provide for
ARM. This means we can drop the unused ARMv4 version.
Contributed-under: TianoCore Contribution Agreement 1.0
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Reviewed-by: Leif Lindholm <leif.lindholm@linaro.org>
git-svn-id: https://svn.code.sf.net/p/edk2/code/trunk/edk2@19273 6f19259b-4bc3-4df7-8a09-765794883524
This patch updates the ArmPkg variant of InvalidateInstructionCacheRange to
flush the data cache only to the point of unification (PoU). This improves
performance and also allows invalidation in scenarios where it would be
inappropriate to flush to the point of coherency (like when executing code
from L2 configured as cache-as-ram).
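A minimal sketch of the resulting flow, assuming per-cache-line ArmLib
helpers (the helper names below are assumptions for illustration):

  VOID *
  EFIAPI
  InvalidateInstructionCacheRange (
    IN VOID   *Address,
    IN UINTN  Length
    )
  {
    UINTN  LineLength;
    UINTN  Start;
    UINTN  End;

    LineLength = ArmDataCacheLineLength ();
    Start      = (UINTN)Address & ~(LineLength - 1);
    End        = (UINTN)Address + Length;

    // Clean the D-cache only to the PoU, then invalidate the corresponding
    // I-cache lines, so freshly written code becomes visible to fetches.
    while (Start < End) {
      ArmCleanDataCacheEntryToPoUByMVA (Start);
      ArmInvalidateInstructionCacheEntryToPoUByMVA (Start);
      Start += LineLength;
    }

    ArmDataSynchronizationBarrier ();
    ArmInstructionSynchronizationBarrier ();
    return Address;
  }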
Contributed-under: TianoCore Contribution Agreement 1.0
Signed-off-by: Eugene Cohen <eugene@hp.com>
Added AARCH64 and ARM/GCC implementations of the above.
Contributed-under: TianoCore Contribution Agreement 1.0
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Reviewed-by: Eugene Cohen <eugene@hp.com>
Reviewed-by: Leif Lindholm <leif.lindholm@linaro.org>
git-svn-id: https://svn.code.sf.net/p/edk2/code/trunk/edk2@19174 6f19259b-4bc3-4df7-8a09-765794883524
This has the effect of splitting assembly functions into their own sections
so the linker can remove unused ones to save space.
Contributed-under: TianoCore Contribution Agreement 1.0
Signed-off-by: Eugene Cohen <eugene@hp.com>
Reviewed-by: Ard Biesheuvel <ard.biesheuvel@gmail.com>
git-svn-id: https://svn.code.sf.net/p/edk2/code/trunk/edk2@19109 6f19259b-4bc3-4df7-8a09-765794883524
In response to Leif's earlier request, this adds a new RVCT assembler
macro to centralize the exporting of assembly functions, combining the
EXPORT directive (so the linker can see the symbol), the AREA directive
(so each function lives in its own section for code size reasons) and the
function label itself.
Contributed-under: TianoCore Contribution Agreement 1.0
Signed-off-by: Eugene Cohen <eugene@hp.com>
Reviewed-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
git-svn-id: https://svn.code.sf.net/p/edk2/code/trunk/edk2@19098 6f19259b-4bc3-4df7-8a09-765794883524
In SVN r18756 ("disallow whole D-cache maintenance operations"),
InvalidateInstructionCache was modified to remove the full data cache
clean but left the full instruction cache invalidate in place. The change
was made to address issues with the set/way clean methodology, but the
resulting code could lead someone into a painful debugging session. If a
component called this function, the relevant code would not be flushed to
the PoU, even though the intent of this function is not only to
invalidate the I-cache but to provide coherency after code loading or
modification. This change simply places an ASSERT(FALSE) in this function
to avoid that hazard.
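The change is essentially the following (sketch):

  VOID
  EFIAPI
  InvalidateInstructionCache (
    VOID
    )
  {
    // A whole-I-cache invalidate without a matching D-cache clean to the
    // PoU cannot provide coherency after code loading, so trap any callers.
    ASSERT (FALSE);
  }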
Contributed-under: TianoCore Contribution Agreement 1.0
Signed-off-by: Eugene Cohen <eugene@hp.com>
Reviewed-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
git-svn-id: https://svn.code.sf.net/p/edk2/code/trunk/edk2@19084 6f19259b-4bc3-4df7-8a09-765794883524
The ARM softfloat library in ArmSoftfloatLib currently does not build
under RVCT, simply because the code includes system header files that
RVCT does not provide. However, nothing exported by those include files
is actually used by the library when built in SOFTFLOAT_FOR_GCC mode,
so we can just drop all of them.
Contributed-under: TianoCore Contribution Agreement 1.0
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Reviewed-by: Leif Lindholm <leif.lindholm@linaro.org>
git-svn-id: https://svn.code.sf.net/p/edk2/code/trunk/edk2@19031 6f19259b-4bc3-4df7-8a09-765794883524
In order to support software floating point in the context of
DXE drivers etc, this factors out the core ARM softfloat support
into a separate library.
Contributed-under: TianoCore Contribution Agreement 1.0
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Reviewed-by: Leif Lindholm <leif.lindholm@linaro.org>
git-svn-id: https://svn.code.sf.net/p/edk2/code/trunk/edk2@19030 6f19259b-4bc3-4df7-8a09-765794883524