ArmPkg/ArmMmuLib AARCH64: cache-invalidate initial page table entries

In the AARCH64 version of ArmMmuLib, we are currently relying on
set/way invalidation to ensure that the caches are in a consistent
state with respect to main memory once we turn the MMU on. Even if
set/way operations were the appropriate method to achieve this, doing
an invalidate-all first and then populating the page table entries
creates a window where page table entries could be loaded speculatively
into the caches before we modify them, and shadow the new values that
we write there.

So let's get rid of the blanket clean/invalidate operations, and
instead, update ArmUpdateTranslationTableEntry () to invalidate each
page table entry *after* it is written if the MMU is still disabled
at this point.
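
To make the intent concrete, here is a minimal C sketch of that per-entry
sequence, assuming edk2's Base.h types and GCC-style inline assembly. The
helper name is hypothetical; the actual patch implements this logic in the
ArmUpdateTranslationTableEntry () assembly shown in the diff below.

  #include <Base.h>    // edk2 base types (UINT64, BOOLEAN, STATIC, ...)

  //
  // Illustrative helper only: write one translation table entry and, if the
  // MMU is still off, invalidate the cache line covering it so that no stale
  // line shadows the new value once the MMU and caches are enabled.
  //
  STATIC
  VOID
  WriteAndInvalidateEntry (
    IN  UINT64   *Entry,
    IN  UINT64   Value,
    IN  BOOLEAN  MmuEnabled
    )
  {
    *Entry = Value;                 // with the MMU off, this store goes straight to memory

    if (!MmuEnabled) {
      // Invalidate the entry's cache line by VA, then wait for completion,
      // so a speculatively allocated stale line cannot mask the new entry.
      __asm__ volatile ("dc ivac, %0" : : "r" (Entry) : "memory");
      __asm__ volatile ("dsb nsh"     : : : "memory");
    }
  }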

On ARMv8, it is guaranteed that memory accesses done by the page table
walker are cache coherent, and so we can ignore the case where the
MMU is on.

Since the MMU and D-cache are already off when we reach this point, we
can drop the MMU and D-cache disables as well. Maintenance of the I-cache
is unnecessary, since we are not modifying any code, and the installed
mapping is guaranteed to be 1:1. This means we can also leave it enabled
while the page table population code is running.

Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Reviewed-by: Leif Lindholm <leif@nuviainc.com>
commit 3391e20ffa (parent 02d7797d1a)
Author:    Ard Biesheuvel
Date:      2020-02-26 09:40:33 +01:00
Committer: mergify[bot]

2 changed files with 8 additions and 10 deletions

@@ -13,6 +13,8 @@
 .set DAIF_RD_FIQ_BIT,   (1 << 6)
 .set DAIF_RD_IRQ_BIT,   (1 << 7)
 
+.set SCTLR_ELx_M_BIT_POS, (0)
+
 ASM_FUNC(ArmReadMidr)
   mrs     x0, midr_el1        // Read from Main ID Register (MIDR)
   ret
@@ -122,11 +124,16 @@ ASM_FUNC(ArmUpdateTranslationTableEntry)
   lsr     x1, x1, #12
 
   EL1_OR_EL2_OR_EL3(x0)
 1: tlbi    vaae1, x1             // TLB Invalidate VA , EL1
+   mrs     x2, sctlr_el1
    b       4f
 2: tlbi    vae2, x1              // TLB Invalidate VA , EL2
+   mrs     x2, sctlr_el2
    b       4f
 3: tlbi    vae3, x1              // TLB Invalidate VA , EL3
-4: dsb     nsh
+   mrs     x2, sctlr_el3
+4: tbnz    x2, SCTLR_ELx_M_BIT_POS, 5f
+   dc      ivac, x0              // invalidate in Dcache if MMU is still off
+5: dsb     nsh
    isb
    ret
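
For readers less familiar with the system register layout: SCTLR_ELx bit 0 is
the MMU-enable (M) bit, which is what the new SCTLR_ELx_M_BIT_POS symbol
refers to, so the tbnz skips the "dc ivac" when the MMU is already on. A rough
C equivalent of that check, assuming execution at EL1 (the assembly dispatches
between EL1/EL2/EL3 via EL1_OR_EL2_OR_EL3()) and using the same Base.h types
as the sketch above, might look like this; the function name and macro are
illustrative only.

  #define SCTLR_ELx_M  (1ULL << 0)   // MMU enable (M) bit, bit position 0

  //
  // Illustrative only: read the current exception level's SCTLR and test M.
  // This sketch hard-codes EL1; EL2/EL3 would read sctlr_el2/sctlr_el3.
  //
  STATIC
  BOOLEAN
  MmuIsEnabled (
    VOID
    )
  {
    UINT64  Sctlr;

    __asm__ volatile ("mrs %0, sctlr_el1" : "=r" (Sctlr));
    return (Sctlr & SCTLR_ELx_M) != 0;
  }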

@@ -699,15 +699,6 @@ ArmConfigureMmu (
 
   ZeroMem (TranslationTable, RootTableEntryCount * sizeof(UINT64));
 
-  // Disable MMU and caches. ArmDisableMmu() also invalidates the TLBs
-  ArmDisableMmu ();
-  ArmDisableDataCache ();
-  ArmDisableInstructionCache ();
-
-  // Make sure nothing sneaked into the cache
-  ArmCleanInvalidateDataCache ();
-  ArmInvalidateInstructionCache ();
-
   TranslationTableAttribute = TT_ATTR_INDX_INVALID;
   while (MemoryTable->Length != 0) {