ARM ArmHvcLib looks like it was created from a copy of ArmSmcLib, which
in turn looks like it was created from a copy of the AArch64 version.
Both of these files include AsmMacroIoLibV8.h instead of
AsmMacroIoLib.h, although since they only use macros that are identical
between the two, there was no functional issue caused by this.
Contributed-under: TianoCore Contribution Agreement 1.0
Signed-off-by: Leif Lindholm <leif.lindholm@linaro.org>
Reviewed-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
In order to be able to produce meaningful diagnostic output when taking
synchronous exceptions that have been caused by corruption of the stack
pointer, prepare the EL0 stack pointer and switch to it when handling the
'Sync exception using SPx' exception class.
Contributed-under: TianoCore Contribution Agreement 1.0
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Reviewed-by: Leif Lindholm <leif.lindholm@linaro.org>
Currently, we only attempt to walk the call stack and print a backtrace
if the program counter refers to a location covered by a PE/COFF image.
However, regardless of the value of PC, the frame pointer may still have
a meaningful value, and so we can still produce the remainder of the
backtrace.
Contributed-under: TianoCore Contribution Agreement 1.0
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Reviewed-by: Leif Lindholm <leif.lindholm@linaro.org>
Add the gEfiDebugImageInfoTableGuid, which is referenced in the code,
to both .INF files describing this module.
Contributed-under: TianoCore Contribution Agreement 1.0
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Reviewed-by: Leif Lindholm <leif.lindholm@linaro.org>
Replace the duplicated and outdated code in QuietBoot.c with a reference
to BootLogoLib, which provides the same functionality. This also allows
us to drop all references to IntelFrameworkModulePkg in this module.
Contributed-under: TianoCore Contribution Agreement 1.0
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Reviewed-by: Leif Lindholm <leif.lindholm@linaro.org>
Instead of indirecting the reference to the Shell binary via a PCD
that is defined in IntelFrameworkModulePkg, and which invariably
gets set to the same value by all users of this library, refer to
the UEFI Shell application by its declared symbolic GUID.
Contributed-under: TianoCore Contribution Agreement 1.0
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Reviewed-by: Leif Lindholm <leif.lindholm@linaro.org>
Commit e7b24ec978 ("ArmPkg/UncachedMemoryAllocationLib: map uncached
allocations non-executable") adds code that manipulates the GCD memory
space attributes of a newly allocated uncached region without checking
whether this region exposes these attributes in its capabilities mask.
Given that the intent is to remove executable permissions from the region,
this is a fairly pointless exercise to begin with, regardless of whether
it is correct or not. The reason is that RO/XP memory attributes in the
GCD memory space map or the UEFI memory map are completely disconnected
from the actual mapping permissions used in the page tables.
So instead, invoke the CPU arch protocol directly, and set the non-exec
attributes in the page tables themselves.
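As a rough illustration (a sketch, not the literal change), the operation
amounts to a single call through the CPU arch protocol; the protocol pointer
and region parameters below are assumed:

  // Sketch: mark an uncached allocation non-executable via the CPU arch
  // protocol, so the permission change lands in the page tables rather than
  // in GCD bookkeeping. 'Cpu' is assumed to have been located via
  // gEfiCpuArchProtocolGuid.
  STATIC
  EFI_STATUS
  SetRegionNonExec (
    IN EFI_CPU_ARCH_PROTOCOL  *Cpu,
    IN EFI_PHYSICAL_ADDRESS   BaseAddress,
    IN UINT64                 Length
    )
  {
    return Cpu->SetMemoryAttributes (Cpu, BaseAddress, Length, EFI_MEMORY_XP);
  }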
Contributed-under: TianoCore Contribution Agreement 1.0
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Reviewed-by: Leif Lindholm <leif.lindholm@linaro.org>
Tested-by: Ryan Harkin <ryan.harkin@linaro.org>
modsi3.S references the symbol '__divsi3' as '___divsi3', which assumes
the underscore prefix is always required and supported. Use ASM_PFX() instead
to support all compilers.
Contributed-under: TianoCore Contribution Agreement 1.0
Signed-off-by: Marvin Haeuser <Marvin.Haeuser@outlook.com>
The primary use case for UncachedMemoryAllocationLib is non-coherent DMA,
which implies that such regions are not used to fetch instructions from.
So let's map them as non-executable, to avoid creating a security hole
when the rest of the platform may be enforcing strict memory permissions
on ordinary allocations.
Contributed-under: TianoCore Contribution Agreement 1.0
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Reviewed-by: Leif Lindholm <leif.lindholm@linaro.org>
Uncached pool allocations are aligned to the data cache line length under
the assumption that this is sufficient to prevent cache maintenance from
corrupting adjacent allocations. However, the value to use in such cases
is architecturally called the Cache Writeback Granule (CWG), which is
essentially the maximum Dcache line length rather than the minimum.
Note that this is mostly a cosmetic fix, given that the pool allocation
is turned into a page allocation later, and rounded up accordingly.
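For reference, the CWG can be read out of CTR_EL0 (CTR on AArch32); a minimal
sketch, assuming a helper that returns the raw register value (the accessor
name is hypothetical):

  // Sketch: CTR_EL0.CWG (bits [27:24]) is the log2 of the granule size in
  // 4-byte words; a value of 0 means the CPU does not report it.
  UINTN
  CacheWritebackGranuleBytes (
    VOID
    )
  {
    UINTN  Cwg;

    Cwg = (ArmReadCtr () >> 24) & 0xF;  // ArmReadCtr () is a hypothetical
                                        // accessor for CTR_EL0 / CTR
    return (UINTN)4 << Cwg;
  }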
Contributed-under: TianoCore Contribution Agreement 1.0
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Reviewed-by: Leif Lindholm <leif.lindholm@linaro.org>
In order to play nice with platforms that use strict memory permission
policies, restore the original mapping attributes when freeing uncached
allocations.
Contributed-under: TianoCore Contribution Agreement 1.0
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Reviewed-by: Leif Lindholm <leif.lindholm@linaro.org>
Now that we have the prerequisite functionality available in ArmMmuLib,
wire it up into ArmSetMemoryRegionNoExec, ArmClearMemoryRegionNoExec,
ArmSetMemoryRegionReadOnly and ArmClearMemoryRegionReadOnly. This is
used by the non-executable stack feature that is configured by DxeIpl.
NOTE: The current implementation will not combine RO and XP attributes,
i.e., setting/clearing a region no-exec will unconditionally
clear the read-only attribute, and vice versa. Currently, we
only use ArmSetMemoryRegionNoExec(), so for now, we should be
able to live with this.
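For instance, marking a stack region non-executable (as DxeIpl does) boils
down to the following sketch, with placeholder base/size values:

  // Sketch (fragment): remove executable permissions from the stack region.
  // Per the note above, this also clears any read-only attribute on it.
  EFI_STATUS  Status;

  Status = ArmSetMemoryRegionNoExec (StackBase, StackSize);
  ASSERT_EFI_ERROR (Status);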
Contributed-under: TianoCore Contribution Agreement 1.0
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Reviewed-by: Leif Lindholm <leif.lindholm@linaro.org>
We no longer make use of the ArmMmuLib 'feature' to create aliased
memory ranges with mismatched attributes, and in fact, it was only
wired up in the ARM version to begin with.
So remove the VirtualMask argument from ArmSetMemoryAttributes()'s
prototype, and remove the dead code that referred to it.
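After the change, the prototype has the following shape (a sketch of the
intent rather than a verbatim copy of the header):

  EFI_STATUS
  ArmSetMemoryAttributes (
    IN EFI_PHYSICAL_ADDRESS  BaseAddress,
    IN UINT64                Length,
    IN UINT64                Attributes
    );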
Contributed-under: TianoCore Contribution Agreement 1.0
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Reviewed-by: Leif Lindholm <leif.lindholm@linaro.org>
... where it belongs, since AARCH64 already keeps it there, and
non-DXE users of ArmMmuLib (such as DxeIpl, for the non-executable
stack) may need its functionality as well.
While at it, rename SetMemoryAttributes to ArmSetMemoryAttributes,
and make any functions that are not exported STATIC. Also, replace
an explicit gBS->AllocatePages() call [which is DXE specific] with
MemoryAllocationLib::AllocatePages().
Contributed-under: TianoCore Contribution Agreement 1.0
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Reviewed-by: Leif Lindholm <leif.lindholm@linaro.org>
The routines ArmConfigureMmu(), SetMemoryAttributes() [*] and the
various set/clear read-only/no-exec routines are declared as returning
EFI_STATUS in the respective header files, so align the definitions with
that.
* SetMemoryAttributes() is declared in the wrong header (and defined in
ArmMmuLib for AARCH64 and in CpuDxe for ARM)
Contributed-under: TianoCore Contribution Agreement 1.0
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Reviewed-by: Leif Lindholm <leif.lindholm@linaro.org>
The debug implementation of the UncachedMemoryAllocationLib library
class relies on the creation of an uncached alias of a memory range,
while keeping the original cached mapping, but with read-only attributes
to trap inadvertent write accesses.
This is not a terribly good idea, given that the ARM architecture does
not allow mismatched attributes, and so creating them deliberately is
not something we should encourage by doing it in reference code.
So remove the library, and replace all references to it with a reference
to the non-debug version (unless the platform does not require a resolution
for it in the first place, in which case all UncachedMemoryAllocationLib
references can be removed altogether).
Contributed-under: TianoCore Contribution Agreement 1.0
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Acked-by: Laszlo Ersek <lersek@redhat.com>
Reviewed-by: Leif Lindholm <leif.lindholm@linaro.org>
Enable the hardware stack alignment check, as mandated by the UEFI spec.
This ensures that the stack pointer is 16 byte aligned at each instance
where it is used as the base address in a load/store operation.
Contributed-under: TianoCore Contribution Agreement 1.0
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Reviewed-by: Leif Lindholm <leif.lindholm@linaro.org>
In preparation for enabling stack alignment checking, which is mandated
by the UEFI spec for AARCH64, add the code to manage this bit to ArmLib.
Contributed-under: TianoCore Contribution Agreement 1.0
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Reviewed-by: Leif Lindholm <leif.lindholm@linaro.org>
Stack and unstack the frame pointer according to the AAPCS in
AArch64AllDataCachesOperation ().
Contributed-under: TianoCore Contribution Agreement 1.0
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Reviewed-by: Leif Lindholm <leif.lindholm@linaro.org>
Since the new DXE page protection for PE/COFF images may invoke
EFI_CPU_ARCH_PROTOCOL.SetMemoryAttributes() with only permission
attributes set, add support for this in the AARCH64 MMU code.
Move the EFI_MEMORY_CACHETYPE_MASK macro to a shared location between
CpuDxe and ArmMmuLib so we don't have to introduce yet another
definition.
Contributed-under: TianoCore Contribution Agreement 1.0
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Reviewed-by: Leif Lindholm <leif.lindholm@linaro.org>
The current ARM CpuDxe driver uses EFI_MEMORY_WP for write protection;
according to the UEFI spec, we should use EFI_MEMORY_RO instead.
EFI_MEMORY_WP is a cacheability attribute, not a memory protection attribute.
Contributed-under: TianoCore Contribution Agreement 1.0
Signed-off-by: Jiewen Yao <jiewen.yao@intel.com>
Reviewed-by: Leif Lindholm <leif.lindholm@linaro.org>
Reviewed-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
This reverts commit d32702d2c2.
Using a pool allocation for the root translation table seemed like
a good idea at the time, but as it turns out, such allocations are
handled in a way that makes them unsuitable for this purpose: they
are backed by HOBs that don't remain in the same place during the
various PI phase changes, which means the address programmed into
the TTBR register is no longer valid, and may refer to memory that
is reported as available to the OS.
So switch back to using a page based allocation.
Contributed-under: TianoCore Contribution Agreement 1.0
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Acked-by: Laszlo Ersek <lersek@redhat.com>
Reviewed-by: Leif Lindholm <leif.lindholm@linaro.org>
The generic timer support libraries call the actual system register
accessor function via a single pair of functions ArmArchTimerReadReg()
and ArmArchTimerWriteReg(), which take an enum argument to identify
the register, and return output values by pointer reference.
Since these functions are never called with a non-immediate argument,
we can simply replace each invocation with the underlying system register
accessor instead. This is mostly functionally equivalent, with the
exception of the bounds check for the enum (which is pointless given the
fact that we never pass a variable), the check for the presence of the
architected timer (which only makes sense for ARMv7, but is highly unlikely
to vary between platforms that are similar enough to run the same firmware
image), and a check for enum values that refer to the HYP view of the timer,
which we never referred to anywhere in the code in the first place.
So get rid of the middle man, and update the ArmGenericTimerPhyCounterLib
and ArmGenericTimerVirtCounterLib implementations to call the system
register accessors directly.
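In practice each call site changes along these lines (a sketch; the enum value
and accessor names are assumed for illustration):

  // Before: indirect dispatch through the enum-based wrapper.
  UINTN  Freq;
  ArmArchTimerReadReg (CntFrq, (VOID *)&Freq);

  // After: call the system register accessor directly.
  Freq = ArmReadCntFrq ();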
Contributed-under: TianoCore Contribution Agreement 1.0
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Reviewed-by: Leif Lindholm <leif.lindholm@linaro.org>
Tested-by: Ryan Harkin <ryan.harkin@linaro.org>
Commit 0a99a65d2c ("fix incorrect device address of double buffer")
retained an explicit cast on the variable "Buffer" which became
incorrect with the other changes, leading to compilation failures
with some toolchains. Drop the cast.
Contributed-under: TianoCore Contribution Agreement 1.0
Reported-by: Sudeep Holla <sudeep.holla@arm.com>
Signed-off-by: Leif Lindholm <leif.lindholm@linaro.org>
Tested-by: Sudeep Holla <sudeep.holla@arm.com>
Reviewed-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Some devices, such as the Raspberry Pi3, have a fixed offset between memory
addresses as seen by the host and as seen by the other bus masters. So add
a new PCD that allows this fixed offset to be recorded, and to be used when
returning device addresses from the DmaLib mapping routines.
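Returning a device address from DmaMap () then becomes a fixed-offset
addition, roughly as sketched below (the PCD name is illustrative, not
necessarily the one introduced):

  // Sketch: translate the CPU (host) address into the bus master's view by
  // applying the platform's fixed offset.
  *DeviceAddress = (EFI_PHYSICAL_ADDRESS)(UINTN)HostAddress +
                   PcdGet64 (PcdDmaDeviceOffset);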
Contributed-under: TianoCore Contribution Agreement 1.0
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Tested-by: Ryan Harkin <ryan.harkin@linaro.org>
Reviewed-by: Leif Lindholm <leif.lindholm@linaro.org>
In preparation for adding support to ArmDmaLib for DMA bus masters whose
view of memory is offset by a constant compared to the CPU's view, clean
up some abuse of the device address.
The device address is not defined in terms of the CPU's address space,
and so it should not be used in CopyMem () or cache maintenance operations
that require a valid mapping. This not only applies to the above use case,
but also to the DebugUncachedMemoryAllocationLib that unmaps the
primary, cached mapping of an allocation, and returns a host address
which is an uncached alias offset by a constant.
Since we should never access the device address from the CPU, there is
no need to record it in the MAPINFO struct. Instead, record the buffer
address in case of double buffering, since we do need to copy the contents
(in case of a bus master write) and free the buffer (in all cases) when
DmaUnmap() is called.
Contributed-under: TianoCore Contribution Agreement 1.0
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Tested-by: Ryan Harkin <ryan.harkin@linaro.org>
Reviewed-by: Leif Lindholm <leif.lindholm@linaro.org>
If double buffering is not required in DmaMap(), the returned device
address is passed through ConvertToPhysicalAddress () to convert the
host address (which, in the case of DebugUncachedMemoryAllocationLib, is not
1:1 mapped) to a physical address, which is what a device needs in order to
perform DMA.
By the same reasoning, a double buffer allocated using DmaAllocateBuffer ()
should be converted in the same way, considering that the buffer is allocated
using UncachedAllocatePages (), to which the above equally applies.
So add the missing ConvertToPhysicalAddress () invocation.
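In other words, the double-buffering path should end up looking roughly like
this sketch (simplified, with error handling elided; ConvertToPhysicalAddress ()
is the library's internal helper mentioned above):

  // Sketch: allocate the bounce buffer, then hand the device the physical
  // address of that buffer rather than the (possibly aliased) host address.
  Status = DmaAllocateBuffer (EfiBootServicesData,
             EFI_SIZE_TO_PAGES (*NumberOfBytes), &Buffer);
  if (!EFI_ERROR (Status)) {
    *DeviceAddress = ConvertToPhysicalAddress ((UINTN)Buffer);
  }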
Contributed-under: TianoCore Contribution Agreement 1.0
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Tested-by: Ryan Harkin <ryan.harkin@linaro.org>
Reviewed-by: Leif Lindholm <leif.lindholm@linaro.org>
Instead of depending on ArmLib to retrieve the CWG directly, use
the DMA buffer alignment exposed by the CPU arch protocol. This
removes our dependency on ArmLib, which makes the library a bit
more architecture independent.
While we're in there, rename gCpu to mCpu to better reflect its
local scope, and reflow some lines that we're modifying anyway.
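The lookup itself is a one-time LocateProtocol call; schematically:

  STATIC EFI_CPU_ARCH_PROTOCOL  *mCpu;

  // Sketch: locate the CPU arch protocol once, then use its DmaBufferAlignment
  // field instead of querying the cache geometry through ArmLib.
  Status = gBS->LocateProtocol (&gEfiCpuArchProtocolGuid, NULL, (VOID **)&mCpu);
  ASSERT_EFI_ERROR (Status);

  Alignment = mCpu->DmaBufferAlignment;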
Contributed-under: TianoCore Contribution Agreement 1.0
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Tested-by: Ryan Harkin <ryan.harkin@linaro.org>
Reviewed-by: Leif Lindholm <leif.lindholm@linaro.org>
Translation table walks are always cache coherent on ARMv8-A, so cache
maintenance on page tables is never needed. Since there is a risk of
loss of coherency when using mismatched attributes, and given that memory
is mapped cacheable except for extraordinary cases (such as non-coherent
DMA), restrict the page table walker to performing cacheable accesses to
the translation tables.
For DEBUG builds, retain some of the logic so that we can double check
that the memory holding the root translation table is indeed located in
memory that is mapped cacheable.
Contributed-under: TianoCore Contribution Agreement 1.0
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Reviewed-by: Leif Lindholm <leif.lindholm@linaro.org>
This missing dependency has gone unnoticed until now, but it is breaking
the Omap35xxPkg.dsc build.
Contributed-under: TianoCore Contribution Agreement 1.0
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Reviewed-by: Leif Lindholm <leif.lindholm@linaro.org>
AsciiStrCat() is deprecated / disabled under the
DISABLE_NEW_DEPRECATED_INTERFACES feature test macro.
The caller of CpsrString() is required to pass in "ReturnStr" with 32
CHAR8 elements. (DefaultExceptionHandler() complies with this.) "Str" is
used to build "ReturnStr" gradually. Just before calling AsciiStrCat(),
"Str" points to the then-terminating NUL character in "ReturnStr".
The difference (Str - ReturnStr) gives the number of non-NUL characters
we've written thus far, hence (32 - (Str - ReturnStr)) yields the number
of remaining bytes in ReturnStr, including the ultimately terminating NUL
character.
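So each converted call takes the form sketched below (the appended string is a
placeholder):

  // Sketch: append to ReturnStr via its tail pointer 'Str', passing the number
  // of CHAR8 elements still available (including the terminating NUL).
  AsciiStrCatS (Str, 32 - (Str - ReturnStr), " ");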
Cc: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Cc: Leif Lindholm <leif.lindholm@linaro.org>
Cc: Michael Zimmermann <sigmaepsilon92@gmail.com>
Ref: https://bugzilla.tianocore.org/show_bug.cgi?id=164
Ref: https://bugzilla.tianocore.org/show_bug.cgi?id=165
Contributed-under: TianoCore Contribution Agreement 1.0
Signed-off-by: Laszlo Ersek <lersek@redhat.com>
Reviewed-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Reviewed-by: Jordan Justen <jordan.l.justen@intel.com>
AsciiStrCat() is deprecated / disabled under the
DISABLE_NEW_DEPRECATED_INTERFACES feature test macro.
The "Str" variable serves no particular purpose in the MRegList() and
ThumbMRegList() functions; replace it with the pointed-to "mMregListStr" /
"mThumbMregListStr" global variable (as appropriate), so that the new
AsciiStrCatS() calls are as clear as possible.
Cc: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Cc: Leif Lindholm <leif.lindholm@linaro.org>
Cc: Michael Zimmermann <sigmaepsilon92@gmail.com>
Ref: https://bugzilla.tianocore.org/show_bug.cgi?id=164
Ref: https://bugzilla.tianocore.org/show_bug.cgi?id=165
Contributed-under: TianoCore Contribution Agreement 1.0
Signed-off-by: Laszlo Ersek <lersek@redhat.com>
Reviewed-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Reviewed-by: Jordan Justen <jordan.l.justen@intel.com>
All users have moved to the generic or accelerated versions in MdePkg,
so remove the obsolete BaseMemoryLibStm.
Contributed-under: TianoCore Contribution Agreement 1.0
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Reviewed-by: Leif Lindholm <leif.lindholm@linaro.org>
As reported by Eugene, the practice of sizing the address space in the
virtual memory system based on the maximum address in the table passed
to ArmConfigureMmu() is problematic, since it fails to take into account
the fact that the GCD memory space may be extended at a later time, both
for memory and for MMIO. So instead, choose the VA size identical to the
GCD memory map size, which is based on PcdPrePiCpuMemorySize on ARM
systems.
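In other words, the MMU code now derives its maximum address from the same
PCD, roughly as follows (sketch; local variable names are illustrative):

  // Sketch: size the VA space from the PCD that also determines the GCD
  // memory space size, instead of from the contents of the memory table.
  MaxAddressBits = PcdGet8 (PcdPrePiCpuMemorySize);
  MaxAddress     = LShiftU64 (1, MaxAddressBits) - 1;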
Reported-by: Eugene Cohen <eugene@hp.com>
Contributed-under: TianoCore Contribution Agreement 1.0
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Reviewed-by: Eugene Cohen <eugene@hp.com>
Reviewed-by: Leif Lindholm <leif.lindholm@linaro.org>
Currently, we allocate a full page for the root translation table, even
if the configured translation only requires two entries (16 bytes) for
the root level, which happens to be the case for a 40 bit VA. Likewise,
for a 36-bit VA space, the root table only needs 16 entries of 8 bytes
each, adding up to 128 bytes.
So switch to a pool allocation for the root table if we can, but take into
account that the architecture requires it to be naturally aligned to its
size, i.e., a 64 byte table requires 64 byte alignment, whereas pool
allocations in general are only guaranteed to be aligned to 8 bytes.
Contributed-under: TianoCore Contribution Agreement 1.0
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Reviewed-by: Leif Lindholm <leif.lindholm@linaro.org>
In commit 7d189f99d8 ("ArmPkg/Mmu: Fix bug of aligning new allocated
page table"), we fixed a flaw in the logic regarding alignment of newly
allocated translation table pages. However, we all failed to spot that
aligning page based allocations to page size is rather pointless to
begin with, so simply allocate a single page each time we add new pages
to the translation tables.
Also, drop the unnecessary cast.
Contributed-under: TianoCore Contribution Agreement 1.0
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Reviewed-by: Leif Lindholm <leif.lindholm@linaro.org>
The relations between T0SZ, the number of translation levels and the
size/alignment of the root table can be expressed in simple arithmetic
expressions, so get rid of the lookup table.
Note that this disregards the fact that the maximum value of T0SZ is
39 not 42 (as one would expect for the smallest VA size using 2 levels)
but since this corresponds to a VA size of 32 MB and 4 MB, respectively,
neither of which are sufficient to run UEFI, we can safely ignore the
distinction.
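The arithmetic boils down to something like the following sketch (macro names
assumed; 4 KB granule, 9 VA bits resolved per level, minimum T0SZ of 16):

  #define MIN_T0SZ        16
  #define BITS_PER_LEVEL  9
  #define TT_ENTRY_COUNT  512

  // Sketch: derive the starting lookup level and the root table geometry
  // directly from T0SZ instead of consulting a lookup table.
  RootLevel     = (T0SZ - MIN_T0SZ) / BITS_PER_LEVEL;
  RootEntries   = TT_ENTRY_COUNT >> ((T0SZ - MIN_T0SZ) % BITS_PER_LEVEL);
  RootTableSize = RootEntries * sizeof (UINT64); // also its required alignment

  // e.g. T0SZ == 24 (40-bit VA): level 0, 512 >> 8 == 2 entries, 16 bytes.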
Contributed-under: TianoCore Contribution Agreement 1.0
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Reviewed-by: Leif Lindholm <leif.lindholm@linaro.org>
As reported by Vishal, the new backtrace output would be more useful if
it did not contain the full absolute path of each module in the list.
So strip off everything up to the last forward slash or backslash in the
string.
Example output:
IRQ Exception at 0x000000005EF110E0
DxeCore.dll loaded at 0x000000005EEED000
called from DxeCore.dll (0x000000005EF121F0) loaded at 0x000000005EEED000
called from DxeCore.dll (0x000000005EF1289C) loaded at 0x000000005EEED000
called from DxeCore.dll (0x000000005EEFB6B4) loaded at 0x000000005EEED000
called from DxeCore.dll (0x000000005EEFAA44) loaded at 0x000000005EEED000
called from DxeCore.dll (0x000000005EEFB450) loaded at 0x000000005EEED000
called from DxeCore.dll (0x000000005EEF938C) loaded at 0x000000005EEED000
called from DxeCore.dll (0x000000005EEF8D04) loaded at 0x000000005EEED000
called from DxeCore.dll (0x000000005EEFA8E8) loaded at 0x000000005EEED000
called from DxeCore.dll (0x000000005EEF3C14) loaded at 0x000000005EEED000
called from DxeCore.dll (0x000000005EEF3E48) loaded at 0x000000005EEED000
called from DxeCore.dll (0x000000005EF0C838) loaded at 0x000000005EEED000
called from DxeCore.dll (0x000000005EEEF70C) loaded at 0x000000005EEED000
called from DxeCore.dll (0x000000005EEEE93C) loaded at 0x000000005EEED000
called from DxeCore.dll (0x000000005EEEE024) loaded at 0x000000005EEED000
Suggested-by: "Oliyil Kunnil, Vishal" <vishalo@qti.qualcomm.com>
Contributed-under: TianoCore Contribution Agreement 1.0
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Reviewed-by: Leif Lindholm <leif.lindholm@linaro.org>
For historical reasons, the files under ArmLib are split up into 'common'
files under Common/, containing common C files as well as AArch64 and Arm
specific asm files, and ArmV7 and AArch64 files under ArmV7/ and AArch64/,
respectively. This presumably dates back to the time when ArmLib supported
different revisions of the 32-bit architecture (i.e., pre-V7).
Since the PI spec requires V7 or later, we can simplify this to Arm/ and
AArch64/, which aligns ArmLib with the majority of other modules that carry
ARM or AArch64 specific code.
So move the files around so that shared files live at the same level as
ArmBaseLib.inf, and ARM/AArch64 specific files live in Arm/ or AArch64/,
respectively.
Contributed-under: TianoCore Contribution Agreement 1.0
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Reviewed-by: Leif Lindholm <leif.lindholm@linaro.org>
The ArmBaseLib timer code does not depend on MemoryAllocationLib at
all, so remove the #includes referring to it.
Contributed-under: TianoCore Contribution Agreement 1.0
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Reviewed-by: Leif Lindholm <leif.lindholm@linaro.org>
This removes the following ArmLib implementations, which were, apart from
the fact that they targeted either ARM or AARCH64, fully identical:
ArmPkg/Library/ArmLib/AArch64/AArch64Lib.inf
ArmPkg/Library/ArmLib/AArch64/AArch64LibPei.inf
ArmPkg/Library/ArmLib/AArch64/AArch64LibPrePi.inf
ArmPkg/Library/ArmLib/AArch64/AArch64LibSec.inf
ArmPkg/Library/ArmLib/ArmV7/ArmV7Lib.inf
ArmPkg/Library/ArmLib/ArmV7/ArmV7LibPrePi.inf
ArmPkg/Library/ArmLib/ArmV7/ArmV7LibSec.inf
Only ArmBaseLib remains, which can fulfil the dependencies upon each of
the listed flavors.
Contributed-under: TianoCore Contribution Agreement 1.0
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Reviewed-by: Leif Lindholm <leif.lindholm@linaro.org>
Introduce a new ArmLib version ArmBaseLib, which encapsulates the ARM
version ArmV7Lib and the AArch64 version AArch64Lib.
Contributed-under: TianoCore Contribution Agreement 1.0
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Reviewed-by: Leif Lindholm <leif.lindholm@linaro.org>
Remove the NULL instance of ArmLib: it is not currently used, and its
usefulness is dubious.
Contributed-under: TianoCore Contribution Agreement 1.0
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Reviewed-by: Leif Lindholm <leif.lindholm@linaro.org>
When dumping the CPU state after an unhandled fault, walk the stack
frames and decode the return addresses so we can show a minimal
backtrace. Unfortunately, we do not have sufficient information to
show the function names, but at least we can see the modules and the
return addresses inside the modules.
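The walk itself is the usual AAPCS64 frame-record chase; schematically (a
sketch, with no validation of the frame pointer shown):

  // Sketch: each AAPCS64 frame record is { previous FP, return address }.
  // Starting from the FP captured in the exception context, print the return
  // address of every frame until the chain ends.
  UINT64  *Fp;

  Fp = (UINT64 *)SystemContext.SystemContextAArch64->FP;
  while (Fp != NULL) {
    UINT64  ReturnAddress = Fp[1];

    // ... resolve ReturnAddress to a loaded PE/COFF image and print it ...
    Fp = (UINT64 *)Fp[0];
  }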
Contributed-under: TianoCore Contribution Agreement 1.0
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Reviewed-by: Leif Lindholm <leif.lindholm@linaro.org>
Tested-by: Leif Lindholm <leif.lindholm@linaro.org>
Clang does not like separate definitions for the __alias__ and the
__weak__ attributes, so merge the definitions into one.
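The merged form looks roughly like this (illustrative only; the internal
__memset symbol follows the pattern used in this library rather than being a
verbatim copy):

  // Sketch: one declaration carrying both attributes instead of separate
  // __weak__ and __alias__ declarations.
  void *memset (void *dest, int c, __SIZE_TYPE__ n)
    __attribute__ ((__weak__, __alias__ ("__memset")));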
Contributed-under: TianoCore Contribution Agreement 1.0
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Reviewed-by: Leif Lindholm <leif.lindholm@linaro.org>
After the recent update of CompilerIntrinsicsLib, our memset() is no
longer emitted as a weak symbol. On ARM, this may cause problems when
combining this library with another library that supplies memset() [e.g.,
CryptoPkg/IntrinsicLib], due to the fact that the object also supplies
the __aeabi_memXXX entry points, which can only be satisfied by this
object. So make our memset() weak again, to let the other implementation
take precedence.
Contributed-under: TianoCore Contribution Agreement 1.0
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Reviewed-by: Leif Lindholm <leif.lindholm@linaro.org>