Clang does not like separate definitions for the __alias__ and the
__weak__ attributes of the same symbol, so merge the two definitions
into one.
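For illustration, a hedged sketch of the merged form (the __memset
alias target and the prototypes are illustrative, not the exact code):

  typedef __SIZE_TYPE__ size_t;

  static void *__memset (void *s, int c, size_t n)
  {
    unsigned char  *d = s;

    while (n--) {
      *d++ = (unsigned char)c;
    }
    return s;
  }

  //
  // A single declaration carrying both attributes; Clang accepts this,
  // whereas a separate __weak__ declaration next to an __alias__ one
  // is rejected.
  //
  void *memset (void *dest, int c, size_t n)
    __attribute__ ((__weak__, __alias__ ("__memset")));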
Contributed-under: TianoCore Contribution Agreement 1.0
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Reviewed-by: Leif Lindholm <leif.lindholm@linaro.org>
After the recent update of CompilerIntrinsicsLib, our memset() is no
longer emitted as a weak symbol. On ARM, this may cause problems when
combining this library with another library that supplies memset()
(e.g., CryptoPkg/IntrinsicLib), since our object also supplies the
__aeabi_memXXX entry points, which can only be satisfied by this
object. So make our memset() weak again, to let the other implementation
take precedence.
Contributed-under: TianoCore Contribution Agreement 1.0
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Reviewed-by: Leif Lindholm <leif.lindholm@linaro.org>
BaseMemoryLib has recently been extended with an API function
IsZeroBuffer(), so copy the default implementation into BaseMemoryLibStm
as well.
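For reference, a hedged, byte-wise sketch of the semantics of this API
(the copied default follows the MdePkg original, which is implemented
differently):

  #include <Base.h>

  BOOLEAN
  EFIAPI
  IsZeroBuffer (
    IN CONST VOID  *Buffer,
    IN UINTN       Length
    )
  {
    CONST UINT8  *Bytes;

    Bytes = Buffer;
    while (Length-- != 0) {
      if (*Bytes++ != 0) {
        return FALSE;
      }
    }
    return TRUE;
  }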
Contributed-under: TianoCore Contribution Agreement 1.0
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Reviewed-by: Leif Lindholm <leif.lindholm@linaro.org>
BaseMemoryLib has recently been extended with an API function
IsZeroGuid(), so copy the default implementation into BaseMemoryLibStm
as well.
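Likewise a hedged sketch, following the common MdePkg pattern of
comparing the two unaligned 64-bit halves of the GUID (the actual copy
may differ in detail):

  #include <Base.h>
  #include <Library/BaseLib.h>

  BOOLEAN
  EFIAPI
  IsZeroGuid (
    IN CONST GUID  *Guid
    )
  {
    UINT64  LowPartOfGuid;
    UINT64  HighPartOfGuid;

    LowPartOfGuid  = ReadUnaligned64 ((CONST UINT64 *)Guid);
    HighPartOfGuid = ReadUnaligned64 ((CONST UINT64 *)Guid + 1);

    return (BOOLEAN)(LowPartOfGuid == 0 && HighPartOfGuid == 0);
  }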
Contributed-under: TianoCore Contribution Agreement 1.0
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Reviewed-by: Leif Lindholm <leif.lindholm@linaro.org>
The BaseMemoryLibVstm implementation of BaseMemoryLib is ARM-only, uses
the NEON register file even though the UEFI spec does not allow it, and
is currently not used anywhere. So remove it.
Contributed-under: TianoCore Contribution Agreement 1.0
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Reviewed-by: Leif Lindholm <leif.lindholm@linaro.org>
This replaces the various implementations of memset and memcpy,
including the ARM RTABI ones (__aeabi_mem[set|clr]_[|4|8]), with
a single C implementation for each. The ones we have are either not
very sophisticated (ARM), too sophisticated (memcpy() on AArch64,
which may perform unaligned accesses), or already coded in C
(memset() on AArch64).
The TianoCore codebase mandates the explicit use of its SetMem() and
CopyMem() equivalents, of which various implementations exist for use
in different contexts (PEI, DXE). Only a few compiler-generated
references to these functions should remain, so our implementations
in this BASE library should be small and usable with the MMU off.
So replace them with a simple C implementation that builds correctly
on GCC/AARCH64, CLANG/AARCH64, GCC/ARM, CLANG/ARM and RVCT/ARM.
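A hedged sketch of the style of replacement meant here (simplified;
the actual code also provides memset() and the __aeabi_* entry points):

  typedef __SIZE_TYPE__ size_t;

  //
  // Byte-wise copy: no unaligned accesses, so it is safe to run with
  // the MMU off
  //
  void *memcpy (void *dest, const void *src, size_t n)
  {
    unsigned char        *d = dest;
    const unsigned char  *s = src;

    while (n--) {
      *d++ = *s++;
    }
    return dest;
  }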
Contributed-under: TianoCore Contribution Agreement 1.0
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Reviewed-by: Leif Lindholm <leif.lindholm@linaro.org>
Annotate functions with ASM_FUNC() so that they are emitted into
separate sections.
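Emitting each function into its own .text.<name> section lets the
linker garbage-collect the unused ones. As a hedged sketch, ASM_FUNC()
expands to roughly the following (simplified from ArmPkg's
AsmMacroIoLib.h; the real macro also handles symbol prefixing):

  #define _ASM_FUNC(Name, Section)  \
    .global Name                  ; \
    .section #Section, "ax"       ; \
    .type Name, %function         ; \
    Name:

  #define ASM_FUNC(Name)  _ASM_FUNC(Name, .text. ## Name)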
Contributed-under: TianoCore Contribution Agreement 1.0
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Reviewed-by: Leif Lindholm <leif.lindholm@linaro.org>
Annotate functions with ASM_FUNC() so that they are emitted into
separate sections. Note that in some cases, various entry points
refer to different parts of the same routine, so in those cases,
the files have been left untouched.
Contributed-under: TianoCore Contribution Agreement 1.0
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Reviewed-by: Leif Lindholm <leif.lindholm@linaro.org>
Annotate functions with ASM_FUNC() so that they are emitted into
separate sections.
Contributed-under: TianoCore Contribution Agreement 1.0
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Reviewed-by: Leif Lindholm <leif.lindholm@linaro.org>
Annotate functions with ASM_FUNC() so that they are emitted into
separate sections.
Contributed-under: TianoCore Contribution Agreement 1.0
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Reviewed-by: Leif Lindholm <leif.lindholm@linaro.org>
Annotate functions with ASM_FUNC() so that they are emitted into
separate sections.
Contributed-under: TianoCore Contribution Agreement 1.0
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Reviewed-by: Leif Lindholm <leif.lindholm@linaro.org>
Annotate functions with ASM_FUNC() so that they are emitted into
separate sections.
Contributed-under: TianoCore Contribution Agreement 1.0
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Reviewed-by: Leif Lindholm <leif.lindholm@linaro.org>
Annotate functions with ASM_FUNC() so that they are emitted into
separate sections.
Contributed-under: TianoCore Contribution Agreement 1.0
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Reviewed-by: Leif Lindholm <leif.lindholm@linaro.org>
Annotate functions with ASM_FUNC() so that they are emitted into
separate sections.
Contributed-under: TianoCore Contribution Agreement 1.0
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Reviewed-by: Leif Lindholm <leif.lindholm@linaro.org>
The C language is powerful enough to implement a function that does
absolutely nothing, so there is no need to resort to implementations
in assembler for various toolchains/architectures.
Contributed-under: TianoCore Contribution Agreement 1.0
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Reviewed-by: Leif Lindholm <leif.lindholm@linaro.org>
The function ArmReplaceLiveTranslationEntry() has been moved to
ArmMmuLib, so remove the old implementation from ArmLib.
Note that the new implementation was not exported from the object file,
and so references to it were satisfied by the old version residing in
ArmLib. Since we are removing that one, we need to export the new one
at the same time to prevent the linker from bailing with undefined
reference errors.
Contributed-under: TianoCore Contribution Agreement 1.0
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Reviewed-by: Leif Lindholm <leif.lindholm@linaro.org>
The ARM compiler intrinsics library defines __aeabi_memset() and
memset() in the same object, which means that both will be pulled
in if either is referenced.
The IntrinsicLib in CryptoPkg defines its own, preferred memset(),
which may clash with our memset(). So make our version weak.
Contributed-under: TianoCore Contribution Agreement 1.0
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Reviewed-by: Leif Lindholm <leif.lindholm@linaro.org>
Building ArmSoftFloatLib with LTO results in errors like
  .../bin/ld: softfloat.obj: plugin needed to handle lto object
  .../bin/ld: __aeabi_dcmpge.obj: plugin needed to handle lto object
  .../bin/ld: __aeabi_dcmplt.obj: plugin needed to handle lto object
  .../bin/ld: internal error ../../ld/ldlang.c 6299
This library is only linked by OpensslLib at the moment, and only
marginally used at runtime, so just disable LTO for it.
Contributed-under: TianoCore Contribution Agreement 1.0
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Reviewed-by: Leif Lindholm <leif.lindholm@linaro.org>
GCC in LTO mode interoperates poorly with non-standard libraries that
provide implementations of compiler intrinsics such as memcpy/memset
or the stack protector entry points. Such libraries need to be built
in non-LTO mode, and then referenced explicitly on the linker command
line using a -plugin-opt=-pass-through=-lxxx linker option.
However, if these intrinsics are also referenced directly, the LTO
version of the code will be pulled in, and will happily satisfy all
other references to the same symbol.
So add a pair of glue libraries, for ARM and AARCH64, that reference
the known intrinsics. Since the binaries live under ArmPkg directly,
we can reference them in tools_def.txt. Under LD garbage collection,
the object itself will be pruned, and so will the intrinsics that end
up unused by the module.
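As a hedged sketch (in C, with an illustrative symbol set), such a glue
object does nothing except carry relocations against the intrinsics, so
the linker resolves them from the non-LTO pass-through library, and
garbage collection discards whatever remains unused:

  void *memcpy (void *, const void *, unsigned long);
  void *memset (void *, int, unsigned long);
  int   memcmp (const void *, const void *, unsigned long);

  //
  // _ReferenceIntrinsics is a hypothetical name; the array only exists
  // to generate references to the intrinsics
  //
  void *const _ReferenceIntrinsics[] = {
    (void *)memcpy,
    (void *)memset,
    (void *)memcmp,
  };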
Contributed-under: TianoCore Contribution Agreement 1.0
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Acked-by: Jordan Justen <jordan.l.justen@intel.com>
Reviewed-by: Leif Lindholm <leif.lindholm@linaro.org>
ArmLib defines a prototype for the ArmReadSctlr() function, but the
AArch64 implementation is missing. So add it.
Contributed-under: TianoCore Contribution Agreement 1.0
Signed-off-by: John Powell <john.powell@arm.com>
Signed-off-by: Supreeth Venkatesh <supreeth.venkatesh@arm.com>
[ardb: update commit log]
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
This introduces a special version of ArmMmuLib for PEIMs that takes care
to perform cache maintenance on the live entry replacement routine only
if the module is not executing in place. Not only is such cache maintenance
unnecessary in that case, it may be actively harmful on some systems that
fail to tolerate cache maintenance operations on NOR flash regions.
Contributed-under: TianoCore Contribution Agreement 1.0
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Reviewed-by: Leif Lindholm <leif.lindholm@linaro.org>
Switch all users of ArmLib that depend on the MMU routines to the new,
separate ArmMmuLib. This needs to occur in one go, since the MMU
routines are removed from the ArmLib build at the same time, to prevent
conflicting symbols.
Contributed-under: TianoCore Contribution Agreement 1.0
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Acked-by: Laszlo Ersek <lersek@redhat.com>
Reviewed-by: Star Zeng <star.zeng@intel.com>
This base library encapsulates the MMU manipulation routines that have been
factored out of ArmLib. The functionality covers initial creation of the 1:1
mapping in the page tables, and remapping regions to change permissions or
cacheability attributes.
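A hedged sketch of the shape of the interface (prototypes paraphrased
from the ArmPkg headers; the exact function set may differ):

  //
  // Create the initial 1:1 mapping described by the platform memory map
  //
  EFI_STATUS
  EFIAPI
  ArmConfigureMmu (
    IN  ARM_MEMORY_REGION_DESCRIPTOR  *MemoryTable,
    OUT VOID                          **TranslationTableBase OPTIONAL,
    OUT UINTN                         *TranslationTableSize  OPTIONAL
    );

  //
  // Remap an existing region, e.g., to revoke execute permissions
  //
  EFI_STATUS
  ArmSetMemoryRegionNoExec (
    IN  EFI_PHYSICAL_ADDRESS  BaseAddress,
    IN  UINT64                Length
    );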
Contributed-under: TianoCore Contribution Agreement 1.0
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Reviewed-by: Leif Lindholm <leif.lindholm@linaro.org>
Putting DEBUG () code after an ASSERT (FALSE) statement is not very
useful, since the code will be unreachable in DEBUG builds and compiled
out in RELEASE builds. So move the ASSERT () statement after it.
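For instance (hedged; the diagnostic text is illustrative):

  //
  // Before: never reached in DEBUG builds, compiled out in RELEASE ones
  //
  ASSERT (FALSE);
  DEBUG ((DEBUG_ERROR, "Unsupported memory attributes 0x%lx\n", Attributes));

  //
  // After: the diagnostic is printed before execution halts
  //
  DEBUG ((DEBUG_ERROR, "Unsupported memory attributes 0x%lx\n", Attributes));
  ASSERT (FALSE);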
Contributed-under: TianoCore Contribution Agreement 1.0
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Acked-by: Mark Rutland <mark.rutland@arm.com>
Reviewed-by: Leif Lindholm <leif.lindholm@linaro.org>
On some platforms, performing cache maintenance on regions that are backed
by NOR flash results in SErrors. Since cache maintenance is unnecessary in
that case, create a PEIM specific version that only performs said cache
maintenance in its constructor if the module is shadowed in RAM. To avoid
performing the cache maintenance if the MMU code is not used to begin with,
check that explicitly in the constructor.
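A hedged sketch of the constructor logic (library calls are standard
PEI/ArmLib/CacheMaintenanceLib ones; the end marker, the bounds check,
and the MMU-in-use test are illustrative and may differ from the real
code):

  #include <PiPei.h>
  #include <Library/ArmLib.h>
  #include <Library/CacheMaintenanceLib.h>
  #include <Library/DebugLib.h>

  extern CHAR8  ArmReplaceLiveTranslationEntry[];
  extern CHAR8  ArmReplaceLiveTranslationEntryEnd[];  // hypothetical marker

  EFI_STATUS
  EFIAPI
  ArmMmuPeiLibConstructor (
    IN EFI_PEI_FILE_HANDLE     FileHandle,
    IN CONST EFI_PEI_SERVICES  **PeiServices
    )
  {
    EFI_FV_FILE_INFO  FileInfo;
    EFI_STATUS        Status;
    UINTN             Base;
    UINTN             Size;

    if (!ArmMmuEnabled ()) {
      //
      // Proxy for "the MMU code is not used to begin with": nothing to
      // maintain in that case
      //
      return EFI_SUCCESS;
    }

    Status = (*PeiServices)->FfsGetFileInfo (FileHandle, &FileInfo);
    ASSERT_EFI_ERROR (Status);

    Base = (UINTN)ArmReplaceLiveTranslationEntry;
    Size = (UINTN)ArmReplaceLiveTranslationEntryEnd - Base;

    //
    // Skip the maintenance if the routine resides inside the FFS file
    // we are executing from, i.e., the module was not shadowed to RAM
    //
    if ((Base >= (UINTN)FileInfo.Buffer) &&
        (Base + Size <= (UINTN)FileInfo.Buffer + FileInfo.BufferSize)) {
      return EFI_SUCCESS;
    }

    WriteBackDataCacheRange ((VOID *)Base, Size);
    InvalidateInstructionCacheRange ((VOID *)Base, Size);
    return EFI_SUCCESS;
  }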
Contributed-under: TianoCore Contribution Agreement 1.0
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Tested-by: Heyi Guo <heyi.guo@linaro.org>
Reviewed-by: Leif Lindholm <leif.lindholm@linaro.org>
This implements the platform glue for the new generic BDS implementation.
It is based on the ArmVirtQemu version, with the QEMU references removed.
Contributed-under: TianoCore Contribution Agreement 1.0
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Reviewed-by: Leif Lindholm <leif.lindholm@linaro.org>
Instead of cleaning the data cache to the PoU by virtual address and
subsequently invalidating the entire I-cache, invalidate only the
range that we just cleaned. This way, we don't invalidate other
cachelines unnecessarily.
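In terms of the ArmLib primitives, the change is roughly the following
(hedged sketch in comments; the helper names are paraphrased and may
differ):

  //
  // Before: clean each D-cache line in the range to the PoU by VA,
  // then invalidate the entire I-cache
  //
  //   ArmCleanDataCacheEntryToPoUByMVA (Line);   // per line
  //   ArmInvalidateInstructionCache ();          // whole I-cache
  //
  // After: invalidate the I-cache by VA over the same range only
  //
  //   ArmCleanDataCacheEntryToPoUByMVA (Line);              // per line
  //   ArmInvalidateInstructionCacheEntryToPoUByMVA (Line);  // per line
  //   ArmInstructionSynchronizationBarrier ();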
Contributed-under: TianoCore Contribution Agreement 1.0
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Reviewed-by: Leif Lindholm <leif.lindholm@linaro.org>
When we split a block entry into a table entry, the UXN/PXN/XN permission
attributes are inherited both by the new table entry and by the new block
entries at the next level down. Unlike the NS bit, which only affects the
next level of lookup, the XN table bits supersede the permissions of the
final translation, and setting the permissions at multiple levels is not
only redundant, it also prevents us from lifting XN restrictions on a
subregion of the original block entry by simply clearing the appropriate
bits at the lowest level.
So drop the code that sets the UXN/PXN/XN bits on the table entries.
Reported-by: "Oliyil Kunnil, Vishal" <vishalo@qti.qualcomm.com>
Contributed-under: TianoCore Contribution Agreement 1.0
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Reviewed-by: Leif Lindholm <leif.lindholm@linaro.org>
DmaMap () only allows uncached mappings to be used for creating consistent
mappings with operation type MapOperationBusMasterCommonBuffer. However,
if the buffer passed to DmaMap () happens to be aligned to the CWG, there
is no need for a bounce buffer, and we perform the cache maintenance
directly without ever checking whether the memory attributes of the buffer
adhere to the API.
So add some debug code that asserts that the operation type and the memory
attributes are consistent.
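A hedged sketch of such a debug check, placed in DmaMap () (DEBUG_CODE
and the GCD services are standard; the exact predicate is illustrative):

  #include <Library/DebugLib.h>
  #include <Library/DxeServicesTableLib.h>

  DEBUG_CODE_BEGIN ();
  if (Operation == MapOperationBusMasterCommonBuffer) {
    EFI_GCD_MEMORY_SPACE_DESCRIPTOR  GcdDescriptor;
    EFI_STATUS                       Status;

    Status = gDS->GetMemorySpaceDescriptor ((UINTN)HostAddress,
                    &GcdDescriptor);
    ASSERT_EFI_ERROR (Status);

    //
    // A common buffer must not be mapped cacheable on the CPU side
    //
    ASSERT ((GcdDescriptor.Attributes &
             (EFI_MEMORY_WB | EFI_MEMORY_WT)) == 0);
  }
  DEBUG_CODE_END ();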
Contributed-under: TianoCore Contribution Agreement 1.0
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Tested-by: Ryan Harkin <ryan.harkin@linaro.org>
Reviewed-by: Leif Lindholm <leif.lindholm@linaro.org>
In the DmaMap () operation, if the region to be mapped happens to be
aligned to the Cache Writeback Granule (CWG), whose value is typically
64 or 128 bytes and at most 2 KB, we remap the memory as uncached.
Since remapping memory occurs at page granularity, while the buffer and the
CWG may be much smaller, there is no telling what other memory we affect
by doing this, especially since the operation is not reverted in DmaUnmap().
So remove the remapping call.
Contributed-under: TianoCore Contribution Agreement 1.0
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Reviewed-by: Leif Lindholm <leif.lindholm@linaro.org>
DmaMap () operations of type MapOperationBusMasterCommonBuffer should
return a mapping that is coherent between the CPU and the device. For
this reason, the API only allows DmaMap () to be called with this operation
type if the memory to be mapped was allocated by DmaAllocateBuffer (),
which in this implementation guarantees the coherency by using uncached
mappings on the CPU side.
This means that, if we encounter a cached mapping in DmaMap () with this
operation type, the code is either broken, or someone is violating the
API, but simply proceeding with a double buffer makes no sense at all,
and can only cause problems.
So instead, actively reject this operation type for cached memory mappings.
Contributed-under: TianoCore Contribution Agreement 1.0
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Reviewed-by: Leif Lindholm <leif.lindholm@linaro.org>
Comparing a GCD attribute field directly against EFI_MEMORY_UC and
EFI_MEMORY_WT is incorrect, since it may have other bits set as well
which are not related to the cacheability of the region. So instead,
test explicitly against the flags EFI_MEMORY_WB and EFI_MEMORY_WT,
which must be set if the region may be mapped with cacheable attributes.
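Concretely (hedged before/after in comments; the descriptor variable
and the polarity of the test are illustrative):

  //
  // Before: a direct equality test, defeated by any unrelated bits
  //
  //   if (GcdDescriptor.Attributes == EFI_MEMORY_UC) { ... }
  //
  // After: test the cacheability flags explicitly
  //
  //   if ((GcdDescriptor.Attributes &
  //        (EFI_MEMORY_WB | EFI_MEMORY_WT)) != 0) {
  //     // the region may be mapped with cacheable attributes
  //   }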
Contributed-under: TianoCore Contribution Agreement 1.0
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Reviewed-by: Leif Lindholm <leif.lindholm@linaro.org>
We manage to use both an AND operation with 'gCacheAlignment - 1' and a
modulo operation with 'gCacheAlignment' in the same compound if statement.
Since gCacheAlignment is a global whose value the compiler cannot prove
to be a power of two, simply use the AND version in both cases.
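After the change, both tests use the mask form (hedged sketch; the
surrounding condition is paraphrased):

  //
  // gCacheAlignment is not provably a power of two to the compiler, so
  // the reduction from % to & has to be done by hand
  //
  if ((((UINTN)HostAddress & (gCacheAlignment - 1)) != 0) ||
      ((*NumberOfBytes & (gCacheAlignment - 1)) != 0)) {
    //
    // Misaligned: fall back to a bounce buffer
    //
  }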
Contributed-under: TianoCore Contribution Agreement 1.0
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Reviewed-by: Leif Lindholm <leif.lindholm@linaro.org>
The allocation function UncachedAllocatePages () may return NULL, in
which case our implementation of DmaAllocateBuffer () should return
EFI_OUT_OF_RESOURCES rather than silently ignoring the NULL value and
returning EFI_SUCCESS.
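A hedged sketch of the added check (the allocation call is the one this
library already uses):

  *HostAddress = UncachedAllocatePages (Pages);
  if (*HostAddress == NULL) {
    return EFI_OUT_OF_RESOURCES;
  }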
Contributed-under: TianoCore Contribution Agreement 1.0
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Reviewed-by: Leif Lindholm <leif.lindholm@linaro.org>
This adds a partial stack dump (256 bytes on either side of the stack
pointer) to the CPU state dumping routine that is invoked when taking an
unexpected exception. Since dereferencing the stack pointer may itself
fault, ensure that we don't enter the dumping routine recursively.
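A hedged sketch of the recursion guard (variable name hypothetical):

  STATIC BOOLEAN  mRecursiveException = FALSE;

  //
  // If dumping the stack faults, we will end up right back here; halt
  // instead of recursing into the dump code again
  //
  if (mRecursiveException) {
    CpuDeadLoop ();
  }
  mRecursiveException = TRUE;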
Contributed-under: TianoCore Contribution Agreement 1.0
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Acked-by: Leif Lindholm <leif.lindholm@linaro.org>
The default exception handler, which is essentially the one that is invoked
for unexpected exceptions, ends with an ASSERT (FALSE), to ensure that
execution halts after dumping the CPU state. However, ASSERTs are compiled
out in RELEASE builds, and since we simply return to wherever the ELR is
pointing, we will not make any progress in case of synchronous aborts, and
the same exception will be taken again immediately, resulting in the string
'Exception at 0x....' being printed over and over again.
So use an explicit deadloop instead.
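In code, the change amounts to (hedged):

  // Before:  ASSERT (FALSE);   // compiled out on RELEASE builds
  // After:   CpuDeadLoop ();   // halts regardless of build type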
Contributed-under: TianoCore Contribution Agreement 1.0
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Acked-by: Leif Lindholm <leif.lindholm@linaro.org>
On ARM, manipulating live page tables is cumbersome since the architecture
mandates the use of break-before-make, i.e., replacing a block entry with
a table entry requires an intermediate step via an invalid entry, or TLB
conflicts may occur.
Since it is not generally feasible to decide in the page table manipulation
routines whether such an invalid entry will result in those routines
themselves becoming unavailable, use a function that is callable with
the MMU off (i.e., a leaf function that does not access the stack) to
perform the change of a block entry into a table entry.
Note that the opposite should never occur, i.e., table entries are never
coalesced into block entries.
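Conceptually, the MMU-off helper performs the break-before-make steps
below (hedged sketch in comments; the actual routine is hand-written
assembly, and the prototype is paraphrased):

  //
  // VOID ArmReplaceLiveTranslationEntry (IN UINT64 *Entry, IN UINT64 Value);
  //
  //   1. mask interrupts
  //   2. write an invalid descriptor:     *Entry = 0
  //   3. DSB, then invalidate the TLB entry covering the region
  //   4. write the new table descriptor:  *Entry = Value
  //   5. DSB + ISB, restore the interrupt mask
  //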
Contributed-under: TianoCore Contribution Agreement 1.0
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Acked-by: Mark Rutland <mark.rutland@arm.com>
The XN attribute is now set automatically if the region is declared as
device memory. However, the function ArmMemoryAttributeToPageAttribute ()
returns attributes for block and page descriptors, not for table
descriptors, so the TT_TABLE_*XN attributes do not actually take effect.
Use TT_*XN_MASK instead.
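In other words (hedged before/after in comments; the attribute variable
is illustrative, the constants are the ones named above):

  //
  // Before: table-descriptor bits, ignored on block/page descriptors
  //
  //   PageAttributes |= TT_TABLE_UXN | TT_TABLE_PXN;
  //
  // After: the block/page-level XN bits, which do take effect
  //
  //   PageAttributes |= TT_UXN_MASK | TT_PXN_MASK;
  //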
Contributed-under: TianoCore Contribution Agreement 1.0
Signed-off-by: Heyi Guo <heyi.guo@linaro.org>
Reviewed-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Some minor typographical problems were noticed during previous commits.
This change corrects those and contains no functional modifications.
The changes are confined to comments and one diagnostic message.
Contributed-under: TianoCore Contribution Agreement 1.0
Signed-off-by: Evan Lloyd <evan.lloyd@arm.com>
Reviewed-by: Ryan Harkin <ryan.harkin@linaro.org>
Reviewed-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
The TimerFreq variable in the TimerConstructor() is unused in RELEASE
builds since ASSERTs are then disabled.
The only use of the variable (in the ASSERT) is replaced by a direct
invocation of the function previously used to set it.
NOTE: The build tools suppress warnings about this using compiler options,
e.g., -Wno-unused-but-set-variable for the GCC toolchain or
--diag_suppress=550 for the RVCT toolchain.
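For illustration (hedged; the frequency accessor name is assumed):

  //
  // Before: TimerFreq is only consumed by the ASSERT, so RELEASE
  // builds warn about a set-but-unused variable
  //
  //   TimerFreq = ArmGenericTimerGetTimerFreq ();
  //   ASSERT (TimerFreq != 0);
  //
  // After: invoke the function directly inside the ASSERT
  //
  //   ASSERT (ArmGenericTimerGetTimerFreq () != 0);
  //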
Contributed-under: TianoCore Contribution Agreement 1.0
Signed-off-by: Evan Lloyd <evan.lloyd@arm.com>
Reviewed-by: Ryan Harkin <ryan.harkin@linaro.org>
Reviewed-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
FirmwarePerformanceDxe.c utilizes the Timer Library function
GetTimeInNanoSecond(), which was not implemented by ArmArchTimerLib.
So add the missing implementation.
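A hedged sketch of the standard conversion such an implementation
performs (BaseLib helpers; the frequency accessor is assumed to be the
one the library already uses):

  #include <Base.h>
  #include <Library/BaseLib.h>
  #include <Library/ArmGenericTimerCounterLib.h>

  UINT64
  EFIAPI
  GetTimeInNanoSecond (
    IN UINT64  Ticks
    )
  {
    UINT64  NanoSeconds;
    UINT32  Remainder;
    UINT32  Freq;

    Freq = (UINT32)ArmGenericTimerGetTimerFreq ();

    //
    // Ticks * 1,000,000,000 / Freq, split into a quotient part and a
    // remainder part to avoid 64-bit overflow
    //
    NanoSeconds = MultU64x32 (
                    DivU64x32Remainder (Ticks, Freq, &Remainder),
                    1000000000u
                    );
    NanoSeconds += DivU64x32 (
                     MultU64x32 (Remainder, 1000000000u),
                     Freq
                     );
    return NanoSeconds;
  }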
Contributed-under: TianoCore Contribution Agreement 1.0
Signed-off-by: Evan Lloyd <evan.lloyd@arm.com>
Reviewed-by: Ryan Harkin <ryan.harkin@linaro.org>
Reviewed-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
This refactors some timer code to define MultU64xN as a preprocessor
symbol rather than a function pointer, and to factor out the code that
obtains the timer frequency into GetPlatformTimerFreq ().
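The preprocessor symbol amounts to the following (a hedged sketch that
mirrors the described pattern, using BaseLib's multiply helpers):

  #ifdef MDE_CPU_ARM
    #define MultU64xN  MultU64x32
  #else
    #define MultU64xN  MultU64x64
  #endif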
Contributed-under: TianoCore Contribution Agreement 1.0
Signed-off-by: Evan Lloyd <evan.lloyd@arm.com>
Reviewed-by: Ryan Harkin <ryan.harkin@linaro.org>
Contributed-under: TianoCore Contribution Agreement 1.0
[ard.biesheuvel: split off from 'add GetTimeInNanoSecond() to ArmArchTimerLib']
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
The function ArmClearMemoryRegionReadOnly() was supposed to undo the
effect of ArmSetMemoryRegionReadOnly(), but instead, it sets the permissions
to EL0-no access, EL1-read-only. Since the EL0 bit should be 1 to align
with EL2/3 (where the bit is SBO), use TT_AP_RW_RW instead, which makes the
entry read-write for EL0 when executing at EL1, and read-write for all other
levels.
Contributed-under: TianoCore Contribution Agreement 1.0
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Reviewed-by: Leif Lindholm <leif.lindholm@linaro.org>
This replaces the somewhat opaque preprocessor-based stack/unstack macros
with open coded ldp/stp sequences to preserve the interrupted context
before handing over to the exception handler in C.
This removes various arithmetic operations on the stack pointer, and
reduces the exception return critical section to its minimum size (i.e.,
the bare minimum required to populate the ELR and SPSR registers and invoke
the eret).
Contributed-under: TianoCore Contribution Agreement 1.0
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Reviewed-by: Leif Lindholm <leif.lindholm@linaro.org>
Reviewed-by: Eugene Cohen <eugene@hp.com>
If we are using the vector table in place, there is no need to make an
indirect call to the common handler routine from the vector table entries,
so just use a straight branch instruction in that case.
Contributed-under: TianoCore Contribution Agreement 1.0
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Reviewed-by: Leif Lindholm <leif.lindholm@linaro.org>
Reviewed-by: Eugene Cohen <eugene@hp.com>
The global gArmRelocateVectorTable is a build-time constant, but due to
its external linkage and lack of constness, the compiler does not see that.
So turn it into a static boolean, and at the same time, make the function
CopyExceptionHandlers() (which is only called if gArmRelocateVectorTable is
set) static as well, so that the compiler can eliminate it completely if
we are using the vector table in place.
Contributed-under: TianoCore Contribution Agreement 1.0
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Reviewed-by: Leif Lindholm <leif.lindholm@linaro.org>
Reviewed-by: Eugene Cohen <eugene@hp.com>
ESR and FAR are populated by the hardware upon exception entry, and
describe the exception, not the interrupted context. So there is no point
in restoring their values before returning from the exception.
Contributed-under: TianoCore Contribution Agreement 1.0
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Reviewed-by: Leif Lindholm <leif.lindholm@linaro.org>
Reviewed-by: Eugene Cohen <eugene@hp.com>
We have three code paths to stack/unstack the exception context, one for
each of EL3, EL2 and EL1. However, they all access the same copy of FPSR,
so move that access to the common path.
Contributed-under: TianoCore Contribution Agreement 1.0
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Reviewed-by: Leif Lindholm <leif.lindholm@linaro.org>
Reviewed-by: Eugene Cohen <eugene@hp.com>