This patch fixes the following Ecc reported error:
There should be no initialization of a variable as
part of its declaration
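For illustration, a minimal before/after sketch of the pattern this rule
flags (identifiers are hypothetical):

  // Flagged by Ecc:
  UINT32  Index = 0;

  // Compliant: declare first, then initialize
  UINT32  Index;

  Index = 0;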
Signed-off-by: Pierre Gondois <Pierre.Gondois@arm.com>
Reviewed-by: Ard Biesheuvel <ard.biesheuvel@arm.com>
This patch fixes the following Ecc reported error:
The body of a function should be contained by open
and close braces that must be in the first column
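For illustration, a hypothetical function showing the brace placement Ecc
expects:

  // Flagged by Ecc: the open brace is not in the first column
  VOID
  EFIAPI
  SomeFunction (
    VOID
    ) {
  }

  // Compliant: both braces in the first column
  VOID
  EFIAPI
  SomeFunction (
    VOID
    )
  {
  }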
Signed-off-by: Pierre Gondois <Pierre.Gondois@arm.com>
Reviewed-by: Ard Biesheuvel <ard.biesheuvel@arm.com>
This patch fixes the following Ecc reported error:
Non-Boolean comparisons should use a compare operator
(==, !=, >, <, >=, <=)
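For illustration, a minimal sketch (names are hypothetical):

  // Flagged by Ecc:
  if (Length) {
    return;
  }

  // Compliant: compare the non-Boolean value explicitly
  if (Length != 0) {
    return;
  }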
Signed-off-by: Pierre Gondois <Pierre.Gondois@arm.com>
Reviewed-by: Ard Biesheuvel <ard.biesheuvel@arm.com>
The local #define TT_ATTR_INDX_INVALID is used as a local error code
in the AArch64 implementation, but is misleadingly named to match the
definitions in ArmPkg/Include/Chipset/AArch64Mmu.h.
Rename it to INVALID_ENTRY to reduce confusion and improve readability.
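A sketch of the rename; the out-of-band value shown is an assumption:

  // Before: reads like the TT_ATTR_INDX_* translation table definitions
  #define TT_ATTR_INDX_INVALID  ((UINT32)~0)

  // After: clearly a local error code rather than an attribute index
  #define INVALID_ENTRY         ((UINT32)~0)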
Signed-off-by: Leif Lindholm <leif@nuviainc.com>
Reviewed-by: Ard Biesheuvel <ard.biesheuvel@arm.com>
The routine PageAttributeToGcdAttribute() is exported by ArmMmuLib
but only ever used in the implementation of CpuDxe. So let's move
the function there and make it STATIC.
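A sketch of the relocated declaration (parameter and return types are
assumptions):

  // now private to CpuDxe
  STATIC
  UINT64
  PageAttributeToGcdAttribute (
    IN UINT64  PageAttributes
    );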
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@arm.com>
Reviewed-by: Leif Lindholm <leif@nuviainc.com>
Before getting rid of GetRootTranslationTableInfo() and the related
LookupAddresstoRootTable() in AARCH64's version of ArmMmuLib, add a
version of the former to CpuDxe, which will be its only remaining
user. While at it, simplify it a bit, since in the CpuDxe cases,
both OUT arguments are always provided.
Note that this removes the declaration of GetRootTranslationTableInfo()
as well, but this is a declaration that is private to CpuDxe, and it
really doesn't belong here in the first place. Since ArmMmuLib's version
of GetRootTranslationTableInfo() is going to be replaced shortly anyway,
don't bother moving this .h declaration elsewhere.
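A sketch of the simplified CpuDxe-local version; the parameter names are
assumptions:

  STATIC
  VOID
  GetRootTranslationTableInfo (
    IN  UINTN  T0SZ,
    OUT UINTN  *RootTableLevel,       // always provided by CpuDxe callers
    OUT UINTN  *RootTableEntryCount   // always provided by CpuDxe callers
    );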
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@arm.com>
Reviewed-by: Leif Lindholm <leif@nuviainc.com>
GetMemoryRegion() is used to obtain the attributes of an existing
mapping, to permit permission attribute changes to be optimized
away if the attributes don't actually change.
The current ARM code assumes that a section mapping or a page mapping
exists for any region passed into GetMemoryRegion(), but the region
may be unmapped entirely, in which case the code will crash. So check
if a section mapping exists before dereferencing it.
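A minimal sketch of the added check, assuming the short-descriptor macros
from ArmV7Mmu.h (the surrounding lookup code is elided):

  // index the first-level table by the 1 MB section number
  Descriptor = TranslationTable[BaseAddress >> 20];
  if ((Descriptor & TT_DESCRIPTOR_SECTION_TYPE_MASK) ==
      TT_DESCRIPTOR_SECTION_TYPE_FAULT) {
    // the region is not mapped at all; bail out instead of dereferencing
    return EFI_NOT_FOUND;
  }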
Contributed-under: TianoCore Contribution Agreement 1.1
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Reviewed-by: Leif Lindholm <leif.lindholm@linaro.org>
Reviewed-by: Philippe Mathieu-Daudé <philmd@redhat.com>
Commit 61a7b0ec63 ("ArmPkg/Gic: force GIC driver to run before CPU arch
protocol driver", 2018-02-06) explains why CpuDxe should be dispatched
after ArmGicDxe.
To implement the ordering, we should use a regular protocol depex rather
than the less flexible AFTER opcode. ArmGicDxe installs
gHardwareInterruptProtocolGuid and gHardwareInterrupt2ProtocolGuid among
the last actions in its entry point; either of those is OK for CpuDxe to
wait for.
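In INF terms, the change amounts to something along these lines (a sketch,
not the verbatim diff):

  [Depex]
    gHardwareInterruptProtocolGuid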
Cc: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Cc: Leif Lindholm <leif.lindholm@linaro.org>
Cc: Steve Capper <steve.capper@linaro.org>
Cc: Supreeth Venkatesh <Supreeth.Venkatesh@arm.com>
Contributed-under: TianoCore Contribution Agreement 1.1
Signed-off-by: Laszlo Ersek <lersek@redhat.com>
Reviewed-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Tested-by: Steve Capper <steve.capper@linaro.org>
Reviewed-by: Leif Lindholm <leif.lindholm@linaro.org>
Currently, the GIC driver has a static dependency on the CPU arch protocol
driver, so it can register its IRQ handler at init time. This means there
is a window between dispatch of the CPU driver and dispatch of the GIC
driver where any unexpected GIC state may trigger an interrupt which we
are not set up to handle yet. Note that this is even the case if we enter
UEFI with interrupts disabled at the CPU, given that any TPL manipulation
involving TPL_HIGH_LEVEL will unconditionally enable IRQs at the CPU side
regardless of whether they were enabled to begin with (but only as soon as
the CPU arch protocol is actually installed).
So let's reorder the GIC driver with the CPU driver, and let it run its
initialization that puts the GIC into a known state before enabling
interrupts. Move its installation of its IRQ handler to a protocol notify
callback on the CPU arch protocol so that it runs as soon as it becomes
available.
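A sketch of the registration, using UefiLib's EfiCreateProtocolNotifyEvent();
the callback name is illustrative:

  STATIC VOID  *mCpuArchProtocolNotifyEventRegistration;
  EFI_EVENT    CpuArchEvent;

  // CpuArchEventProtocolNotify() registers the IRQ handler with the CPU
  // arch protocol as soon as gEfiCpuArchProtocolGuid appears
  CpuArchEvent = EfiCreateProtocolNotifyEvent (
                   &gEfiCpuArchProtocolGuid,
                   TPL_CALLBACK,
                   CpuArchEventProtocolNotify,
                   NULL,
                   &mCpuArchProtocolNotifyEventRegistration
                   );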
Contributed-under: TianoCore Contribution Agreement 1.1
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Reviewed-by: Leif Lindholm <leif.lindholm@linaro.org>
Tested-by: Marc Zyngier <marc.zyngier@arm.com>
gEfiDebugSupportPeriodicCallbackProtocolGuid and
PcdCpuDxeProduceDebugSupport are referred to from CpuDxe, but never used.
Delete the references from the .inf and .h.
Contributed-under: TianoCore Contribution Agreement 1.1
Signed-off-by: Leif Lindholm <leif.lindholm@linaro.org>
Reviewed-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Some memory attributes are implied by the memory type, e.g., device memory
is always mapped non-executable and cached memory should have the inner
shareable attribute.
In order to prevent unnecessary memory attribute updates of mappings
created early on, make EfiAttributeToArmAttribute() return these implied
attributes in the same way as ArmMmuLib does already. This avoids false
positives when looking for differences between current and desired mapping
attributes.
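A sketch of the idea, borrowing macro spellings from AArch64Mmu.h (the
exact bit choices are assumptions):

  switch (EfiAttributes & EFI_MEMORY_CACHETYPE_MASK) {
  case EFI_MEMORY_UC:
    // device memory is always mapped non-executable
    ArmAttributes = TT_ATTR_INDX_DEVICE_MEMORY | TT_UXN_MASK | TT_PXN_MASK;
    break;
  case EFI_MEMORY_WB:
    // cached memory always carries the inner shareable attribute
    ArmAttributes = TT_ATTR_INDX_MEMORY_WRITE_BACK | TT_SH_INNER_SHAREABLE;
    break;
  default:
    ArmAttributes = TT_ATTR_INDX_MEMORY_NON_CACHEABLE;
    break;
  }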
Contributed-under: TianoCore Contribution Agreement 1.0
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Reviewed-by: Leif Lindholm <leif.lindholm@linaro.org>
We no longer make use of the ArmMmuLib 'feature' to create aliased
memory ranges with mismatched attributes, and in fact, it was only
wired up in the ARM version to begin with.
So remove the VirtualMask argument from ArmSetMemoryAttributes()'s
prototype, and remove the dead code that referred to it.
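The resulting prototype looks along these lines:

  EFI_STATUS
  ArmSetMemoryAttributes (
    IN EFI_PHYSICAL_ADDRESS  BaseAddress,
    IN UINT64                Length,
    IN UINT64                Attributes   // VirtualMask parameter dropped
    );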
Contributed-under: TianoCore Contribution Agreement 1.0
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Reviewed-by: Leif Lindholm <leif.lindholm@linaro.org>
... where it belongs, since AARCH64 already keeps it there, and
non-DXE users of ArmMmuLib (such as DxeIpl, for the non-executable
stack) may need its functionality as well.
While at it, rename SetMemoryAttributes to ArmSetMemoryAttributes,
and make any functions that are not exported STATIC. Also, replace
an explicit gBS->AllocatePages() call [which is DXE specific] with
MemoryAllocationLib::AllocatePages().
Contributed-under: TianoCore Contribution Agreement 1.0
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Reviewed-by: Leif Lindholm <leif.lindholm@linaro.org>
Enable the use of strict memory permissions on ARM by processing the
EFI_MEMORY_RO and EFI_MEMORY_XP rather than ignoring them. As before,
calls to CpuArchProtocol::SetMemoryAttributes that only set RO/XP
bits will preserve the cacheability attributes. Permission attributes
are not preserved when setting the memory type only: the way the memory
permission attributes are defined does not allow for that, and so this
situation does not deviate from other architectures.
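A sketch of the resulting dispatch; the helper names are assumptions:

  if ((Attributes & EFI_MEMORY_CACHETYPE_MASK) == 0) {
    // RO/XP-only request: update permissions, keep cacheability intact
    Status = UpdatePermissions (BaseAddress, Length, Attributes);
  } else {
    // memory type request: permissions are not preserved
    Status = UpdateMemoryType (BaseAddress, Length, Attributes);
  }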
Contributed-under: TianoCore Contribution Agreement 1.0
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Reviewed-by: Leif Lindholm <leif.lindholm@linaro.org>
Page and section entries in the page tables are updated using the
helper ArmUpdateTranslationTableEntry(), which cleans the page
table entry to the PoC, and invalidates the TLB entry covering
the page described by the entry being updated.
Since we may be updating section entries, we might be leaving stale
TLB entries at this point (for all pages in the section except the
first one), which will be invalidated wholesale at the end of
SetMemoryAttributes(). At that point, all caches are cleaned *and*
invalidated as well.
This cache maintenance is costly and unnecessary. The TLB maintenance
is only necessary if we updated any section entries, since any page
by page entries that have been updated will have been invalidated
individually by ArmUpdateTranslationTableEntry().
So drop the clean/invalidate of the caches, and only perform the
full TLB flush if UpdateSectionEntries() was called, or if sections
were split by UpdatePageEntries(). Finally, make the cache maintenance
on the remapped regions themselves conditional on whether any memory
type attributes were modified.
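A sketch of the resulting TLB maintenance; the flush primitive name is
assumed:

  // FlushTlbs is set by UpdateSectionEntries(), and by UpdatePageEntries()
  // whenever it had to split a section, since both may leave stale entries
  if (FlushTlbs) {
    ArmInvalidateTlb ();
  }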
Contributed-under: TianoCore Contribution Agreement 1.0
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Reviewed-by: Leif Lindholm <leif.lindholm@linaro.org>
Currently, any range passed to CpuArchProtocol::SetMemoryAttributes is
fully broken down into page mappings if the start or the size of the
region happens to be misaligned relative to the section size of 1 MB.
This is going to result in memory being wasted on second level page tables
when we enable strict memory permissions, given that we remap the entire
RAM space non-executable (modulo the code bits) when the CpuArchProtocol
is installed.
So refactor the code to iterate over the range in a way that ensures
that all naturally aligned section sized subregions are not broken up.
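A sketch of the iteration, using the 1 MB TT_DESCRIPTOR_SECTION_SIZE;
status handling is elided:

  while (Length > 0) {
    if ((BaseAddress % TT_DESCRIPTOR_SECTION_SIZE == 0) &&
        (Length >= TT_DESCRIPTOR_SECTION_SIZE)) {
      // naturally aligned, section-sized run: no second level table needed
      ChunkLength = Length - Length % TT_DESCRIPTOR_SECTION_SIZE;
      Status = UpdateSectionEntries (BaseAddress, ChunkLength, Attributes);
    } else {
      // misaligned head or tail: fall back to page mappings
      ChunkLength = MIN (Length,
                      TT_DESCRIPTOR_SECTION_SIZE -
                      (BaseAddress % TT_DESCRIPTOR_SECTION_SIZE));
      Status = UpdatePageEntries (BaseAddress, ChunkLength, Attributes);
    }
    BaseAddress += ChunkLength;
    Length      -= ChunkLength;
  }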
Contributed-under: TianoCore Contribution Agreement 1.0
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Reviewed-by: Leif Lindholm <leif.lindholm@linaro.org>
To prevent the initial MMU->GCD memory space map synchronization from
stripping permissions attributes [which we cannot use in the GCD memory
space map, unfortunately], implement the same approach as x86, and ignore
SetMemoryAttributes() calls during the time SyncCacheConfig() is in
progress. This is a horrible hack, but is currently the only way we can
implement strict permissions on arbitrary memory regions [as opposed to
PE/COFF text/data sections only].
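A sketch of the gate, mirroring the x86 CpuDxe approach (which uses a
module-global named mIsFlushingGCD):

  STATIC BOOLEAN  mIsFlushingGCD;

  EFI_STATUS
  EFIAPI
  CpuSetMemoryAttributes (
    IN EFI_CPU_ARCH_PROTOCOL  *This,
    IN EFI_PHYSICAL_ADDRESS   BaseAddress,
    IN UINT64                 Length,
    IN UINT64                 Attributes
    )
  {
    // ignore calls made while SyncCacheConfig() populates the GCD map, so
    // permissions applied earlier are not stripped again
    if (mIsFlushingGCD) {
      return EFI_SUCCESS;
    }
    return ArmSetMemoryAttributes (BaseAddress, Length, Attributes);
  }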
Contributed-under: TianoCore Contribution Agreement 1.0
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Reviewed-by: Jiewen Yao <jiewen.yao@intel.com>
Reviewed-by: Leif Lindholm <leif.lindholm@linaro.org>
Virtual uncached pages are simply pages that are aliased using mismatched
attributes, which is not allowed by the ARM architecture. So remove the
protocol and its implementation.
Contributed-under: TianoCore Contribution Agreement 1.0
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Reviewed-by: Leif Lindholm <leif.lindholm@linaro.org>
Since the new DXE page protection for PE/COFF images may invoke
EFI_CPU_ARCH_PROTOCOL.SetMemoryAttributes() with only permission
attributes set, add support for this in the AARCH64 MMU code.
Move the EFI_MEMORY_CACHETYPE_MASK macro to a shared location between
CpuDxe and ArmMmuLib so we don't have to introduce yet another
definition.
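The macro in question expands to the five UEFI cacheability bits:

  #define EFI_MEMORY_CACHETYPE_MASK  (EFI_MEMORY_UC | \
                                      EFI_MEMORY_WC | \
                                      EFI_MEMORY_WT | \
                                      EFI_MEMORY_WB | \
                                      EFI_MEMORY_UCE)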
Contributed-under: TianoCore Contribution Agreement 1.0
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Reviewed-by: Leif Lindholm <leif.lindholm@linaro.org>
Currently, we have not implemented support on 32-bit ARM for managing
permission bits in the page tables. Since the new DXE page protection
for PE/COFF images may invoke EFI_CPU_ARCH_PROTOCOL.SetMemoryAttributes()
with only permission attributes set, let's simply ignore those for now.
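A minimal sketch of the early-out:

  if ((Attributes & EFI_MEMORY_CACHETYPE_MASK) == 0) {
    // permission-only request: not yet supported by the 32-bit ARM page
    // table code, so simply ignore it for now
    return EFI_SUCCESS;
  }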
Contributed-under: TianoCore Contribution Agreement 1.0
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Reviewed-by: Leif Lindholm <leif.lindholm@linaro.org>
The single user of EfiAttributeToArmAttribute () is the protocol
method EFI_CPU_ARCH_PROTOCOL.SetMemoryAttributes(), which uses the
return value to compare against the ARM attributes of an existing mapping,
to infer whether it is actually necessary to change anything, or whether
the requested update is redundant. This saves some cache and TLB
maintenance on 32-bit ARM systems that use uncached translation tables.
However, EFI_CPU_ARCH_PROTOCOL.SetMemoryAttributes() may be invoked with
only permission bits set, in which case the implied requested action is to
update the permissions of the region without modifying the cacheability
attributes. This is currently not possible, because
EfiAttributeToArmAttribute () ASSERT()s [on AArch64] on Attributes arguments
that lack a cacheability bit.
So let's simply return TT_ATTR_INDX_MASK (AArch64) or
TT_DESCRIPTOR_SECTION_TYPE_FAULT (ARM) in these cases (or'ed with the
appropriate permission bits). This way, the return value is equally
suitable for checking whether the attributes need to be modified, but
in a way that accommodates the use without a cacheability bit set.
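A sketch of the AArch64 side; the ARM side returns
TT_DESCRIPTOR_SECTION_TYPE_FAULT analogously:

  if ((EfiAttributes & EFI_MEMORY_CACHETYPE_MASK) == 0) {
    // no cacheability bit set: return a memory type field value that can
    // never match a live mapping, or'ed below with the permission bits
    ArmAttributes = TT_ATTR_INDX_MASK;
  }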
Contributed-under: TianoCore Contribution Agreement 1.0
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Reviewed-by: Leif Lindholm <leif.lindholm@linaro.org>
The current ARM CpuDxe driver uses EFI_MEMORY_WP for write protection;
according to the UEFI spec, we should use EFI_MEMORY_RO for write
protection. EFI_MEMORY_WP is a cacheability attribute, not a memory
protection attribute.
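A sketch of the substitution; the TT_* permission macro is an assumption:

  // Before: tested EFI_MEMORY_WP, which the UEFI spec defines as a
  // cacheability attribute. After:
  if ((Attributes & EFI_MEMORY_RO) != 0) {
    PageAttributes |= TT_AP_RO_RO;   // map the region read-only
  }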
Contributed-under: TianoCore Contribution Agreement 1.0
Signed-off-by: Jiewen Yao <jiewen.yao@intel.com>
Reviewed-by: Leif Lindholm <leif.lindholm@linaro.org>
Reviewed-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
The DmaBufferAlignment currently defaults to 4, which is dangerously
small and may result in lost data on platforms that perform non-coherent
DMA. So instead, take the CWG value from the cache info registers.
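A hedged sketch: ReadCtr() stands in for whatever primitive returns CTR_EL0
(AArch64) or CTR (ARMv7); the helper name is an assumption:

  Ctr          = ReadCtr ();
  CwgLog2Words = (Ctr >> 24) & 0xF;   // CWG field: log2 of granule in words
  if (CwgLog2Words != 0) {
    // CWG is expressed in 4-byte words; a value of 0 means "not provided"
    mCpu.DmaBufferAlignment = 4 << CwgLog2Words;
  }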
Contributed-under: TianoCore Contribution Agreement 1.0
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Reviewed-by: Leif Lindholm <leif.lindholm@linaro.org>
During MMU initialization in CpuDxe, for a first-level entry that
describes a page table, any bits set in 'NextSectionAttributes' are
garbage: they were set from bits that are actually part of the page
table address. Clear them to zero so that SyncCacheConfigPage() uses
the page attributes instead of trying to convert the (bogus) section
attributes into page attributes.
Contributed-under: TianoCore Contribution Agreement 1.0
Signed-off-by: Kurt Kennett <kurt.kennett@microsoft.com>
Reviewed-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Switch all users of ArmLib that depend on the MMU routines to the new,
separate ArmMmuLib. This needs to occur in one go, since the MMU
routines are removed from ArmLib build at the same time, to prevent
conflicting symbols.
Contributed-under: TianoCore Contribution Agreement 1.0
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Acked-by: Laszlo Ersek <lersek@redhat.com>
Reviewed-by: Star Zeng <star.zeng@intel.com>
SErrors (formerly called asynchronous aborts) are a distinct class of
exceptions that are not closely tied to the currently executing
instruction. Since execution may be able to proceed in such a condition,
this class of exception is masked by default, and software needs to unmask
it explicitly if it is prepared to handle such exceptions.
On DEBUG builds, we are well equipped to report the CPU context to the user
and it makes sense to report SErrors as soon as they occur rather than to
wait for the OS to take them once it unmasks them, especially since the
current arm64/Linux implementation simply panics in that case. So unmask
them when
ArmCpuDxe loads.
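A sketch of the DEBUG-only unmasking; ArmEnableAsynchronousAbort() is
assumed here as an ArmLib-style helper name:

  DEBUG_CODE (
    // clear the SError ('A') mask bit so DEBUG builds take the exception
    // as soon as it is delivered
    ArmEnableAsynchronousAbort ();
  );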
Contributed-under: TianoCore Contribution Agreement 1.0
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Acked-by: Mark Rutland <mark.rutland@arm.com>
Reviewed-by: Leif Lindholm <leif.lindholm@linaro.org>
Use the new ARM/AArch64 implementation of the base
CpuExceptionHandlerLib library from CpuDxe to centralize
exception handling.
Contributed-under: TianoCore Contribution Agreement 1.0
Signed-off-by: Eugene Cohen <eugene@hp.com>
Reviewed-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Update the CpuDxe driver to remove an assumption that it is the only
component modifying interrupt state since this can be done through BaseLib
as well. Instead of using a global variable for last interrupt state we
now check the current PSTATE value directly.
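A sketch of the resulting protocol member; ArmGetInterruptState() reads the
current mask bits from PSTATE/CPSR:

  EFI_STATUS
  EFIAPI
  CpuGetInterruptState (
    IN  EFI_CPU_ARCH_PROTOCOL  *This,
    OUT BOOLEAN                *State
    )
  {
    if (State == NULL) {
      return EFI_INVALID_PARAMETER;
    }
    // query the hardware instead of a cached global, so state changed via
    // BaseLib is reported correctly
    *State = ArmGetInterruptState ();
    return EFI_SUCCESS;
  }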
Contributed-under: TianoCore Contribution Agreement 1.0
Signed-off-by: Eugene Cohen <eugene@hp.com>
Reviewed-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
The CLANG assembler does not support the legacy, non-unified assembler syntax,
i.e., it does not support the reordering of the condition suffixes with the
increment/decrement before/after or byte/word suffixes, and it does not
recognize the 'empty descending' (ED) suffix at all. So move to the unified
syntax, and replace 'empty descending' with 'decrement after' or 'increment
before' as appropriate.
Contributed-under: TianoCore Contribution Agreement 1.0
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Reviewed-by: Leif Lindholm <leif.lindholm@linaro.org>
git-svn-id: https://svn.code.sf.net/p/edk2/code/trunk/edk2@19280 6f19259b-4bc3-4df7-8a09-765794883524
Since we do not support anything below ARMv7, let's promote the ARMv6
exception handling code in CpuDxe to the only version we provide for
ARM. This means we can drop the unused ARMv4 version.
Contributed-under: TianoCore Contribution Agreement 1.0
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Reviewed-by: Leif Lindholm <leif.lindholm@linaro.org>
git-svn-id: https://svn.code.sf.net/p/edk2/code/trunk/edk2@19273 6f19259b-4bc3-4df7-8a09-765794883524
We currently rely on .align directives to ensure that each exception
vector entry is the appropriate offset from the vector base address.
This is slightly fragile: were an entry to become too large (greater
than 32 A64 instructions), all following entries would be silently
shifted until they meet the next alignment boundary. Thus we might
execute the wrong code in response to an exception.
To prevent this, introduce a new macro, VECTOR_ENTRY, that uses .org
directives to position each entry at the precise required offset from
the base of a vector. A vector entry which is too large will trigger a
build failure rather than a runtime failure which is difficult to debug.
For consistency, the base and end of each vector is similarly annotated,
with VECTOR_BASE and VECTOR_END, which provide the necessary alignment
and symbol exports. The now redundant directives and labels are removed.
Contributed-under: TianoCore Contribution Agreement 1.0
Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Reviewed-by: Leif Lindholm <leif.lindholm@linaro.org>
Reviewed-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
git-svn-id: https://svn.code.sf.net/p/edk2/code/trunk/edk2@18904 6f19259b-4bc3-4df7-8a09-765794883524
Interrupts must be disabled before we store ELR and the other system
registers, or else ELR will be overwritten by interrupt reentrance.
This bug is critical, as we may get an occasional exception or dead
loop when an interrupt comes in during either of these windows:
- after incrementing SP but before popping out the registers, or
- after restoring ELR.
The first window could also be closed by reordering the SP operations
(popping the registers out before adding SP back), but the second can
only be closed by disabling interrupts.
Contributed-under: TianoCore Contribution Agreement 1.0
Signed-off-by: Heyi Guo <heyi.guo@linaro.org>
Reviewed-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
git-svn-id: https://svn.code.sf.net/p/edk2/code/trunk/edk2@18538 6f19259b-4bc3-4df7-8a09-765794883524
When the function that determines the size of a contiguous region
returned from scanning a sub-level table, it forgot to move on to the
next entry of its own level table.
Contributed-under: TianoCore Contribution Agreement 1.0
Signed-off-by: Olivier Martin <Olivier.Martin@arm.com>
Reviewed-by: Ronald Cron <Ronald.Cron@arm.com>
git-svn-id: https://svn.code.sf.net/p/edk2/code/trunk/edk2@17832 6f19259b-4bc3-4df7-8a09-765794883524
The AArch64 Vector Table must be aligned on a 2K boundary.
The FDF specification does not support 2K alignment, but does support 4K.
A clear comment has been added to help integrators understand why the
assertion fails when porting to a new AArch64 platform.
Contributed-under: TianoCore Contribution Agreement 1.0
Signed-off-by: Olivier Martin <olivier.martin@arm.com>
git-svn-id: https://svn.code.sf.net/p/edk2/code/trunk/edk2@15659 6f19259b-4bc3-4df7-8a09-765794883524
The function should detect and return the error in non-DEBUG builds,
where the ASSERT does nothing.
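A sketch of the pattern; the names are hypothetical:

  if ((VectorBase & ARM_VECTOR_TABLE_ALIGNMENT) != 0) {
    ASSERT (FALSE);
    // the ASSERT compiles away in RELEASE builds, so fail explicitly too
    return EFI_INVALID_PARAMETER;
  }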
Contributed-under: TianoCore Contribution Agreement 1.0
Signed-off-by: Olivier Martin <olivier.martin@arm.com>
git-svn-id: https://svn.code.sf.net/p/edk2/code/trunk/edk2@15606 6f19259b-4bc3-4df7-8a09-765794883524
If the first entry of the memory map is at the third level (the case when
the region at 0x0 is smaller than 4KB), then its descriptor type is
TT_TYPE_BLOCK_ENTRY_LEVEL3 (=0x3), which has the same value as
TT_TYPE_TABLE_ENTRY (=0x3).
The first condition in GetFirstPageAttribute() needs to take the table
level into account to avoid mixing up these two descriptor types.
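A sketch of the fixed condition; the pointer derivation is elided:

  // TT_TYPE_TABLE_ENTRY and TT_TYPE_BLOCK_ENTRY_LEVEL3 share the encoding
  // 0x3, so only the table level can disambiguate them
  if ((TableLevel != 3) &&
      ((*TableEntry & TT_TYPE_MASK) == TT_TYPE_TABLE_ENTRY)) {
    // a genuine table descriptor: descend into the next-level table
    return GetFirstPageAttribute (NextLevelTable, TableLevel + 1);
  }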
Contributed-under: TianoCore Contribution Agreement 1.0
Signed-off-by: Olivier Martin <olivier.martin@arm.com>
git-svn-id: https://svn.code.sf.net/p/edk2/code/trunk/edk2@15526 6f19259b-4bc3-4df7-8a09-765794883524
The AARCH64 LDR and STR instructions only support signed offsets for post- and
pre-indexed addressing. For normal signed offset addressing, the mnemonic is
STUR. GNU as automatically assembles STR with a signed offset as STUR, but Clang's
integrated assembler doesn't.
Contributed-under: TianoCore Contribution Agreement 1.0
Signed-off-by: Brendan Jackman <brendan.jackman@arm.com>
Reviewed-by: Olivier Martin <olivier.martin@arm.com>
git-svn-id: https://svn.code.sf.net/p/edk2/code/trunk/edk2@15508 6f19259b-4bc3-4df7-8a09-765794883524
GNU as assembles instructions without the '#' before immediates. Clang doesn't.
Contributed-under: TianoCore Contribution Agreement 1.0
Signed-off-by: Brendan Jackman <brendan.jackman@arm.com>
Reviewed-by: Olivier Martin <olivier.martin@arm.com>
git-svn-id: https://svn.code.sf.net/p/edk2/code/trunk/edk2@15507 6f19259b-4bc3-4df7-8a09-765794883524
This ensures the .type directive is used to mark them as function symbols.
Contributed-under: TianoCore Contribution Agreement 1.0
Signed-off-by: Brendan Jackman <brendan.jackman@arm.com>
Reviewed-by: Olivier Martin <olivier.martin@arm.com>
git-svn-id: https://svn.code.sf.net/p/edk2/code/trunk/edk2@15506 6f19259b-4bc3-4df7-8a09-765794883524
Some non-GCC toolchains might support the GNU assembly language.
Contributed-under: TianoCore Contribution Agreement 1.0
Signed-off-by: Brendan Jackman <brendan.jackman@arm.com>
Reviewed-by: Olivier Martin <olivier.martin@arm.com>
git-svn-id: https://svn.code.sf.net/p/edk2/code/trunk/edk2@15504 6f19259b-4bc3-4df7-8a09-765794883524
Current EDK2 source code does actually trigger nested interrupts (even
though the PI spec says interrupts should not be nested).
This issue highlighted that the ELR_EL2/ELR_EL1 register was not being
restored.
Contributed-under: TianoCore Contribution Agreement 1.0
Signed-off-by: Vijayakumar Subbu <vsubbu@nvidia.com>
Signed-off-by: Olivier Martin <olivier.martin@arm.com>
git-svn-id: https://svn.code.sf.net/p/edk2/code/trunk/edk2@15481 6f19259b-4bc3-4df7-8a09-765794883524