There are two libraries: MdePkg/CpuLib and UefiCpuPkg/UefiCpuLib.
UefiCpuPkg/UefiCpuLib will be merged into MdePkg/CpuLib. To avoid build
failures, add a CpuLib dependency to all modules that depend on
UefiCpuLib.
Cc: Eric Dong <eric.dong@intel.com>
Cc: Ray Ni <ray.ni@intel.com>
Cc: Rahul Kumar <rahul1.kumar@intel.com>
Signed-off-by: Yu Pu <yu.pu@intel.com>
Reviewed-by: Ray Ni <ray.ni@intel.com>
REF: https://bugzilla.tianocore.org/show_bug.cgi?id=3815
This patch defines a new protocol with the new service
SmmWaitForAllProcessor(), which SMI handlers can use to optionally wait
for other APs to complete SMM rendezvous in relaxed AP mode.
A new library, SmmCpuRendezvousLib, is provided to abstract the service
into a library API and simplify SMI handler code.
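Below is a minimal, illustrative sketch of how an SMI handler might
consume the library API; the exact header name and the single
BlockingMode argument are assumptions based on this description, not a
definitive interface.

  #include <Library/SmmCpuRendezvousLib.h>

  EFI_STATUS
  EFIAPI
  ExampleSmiHandler (
    IN     EFI_HANDLE  DispatchHandle,
    IN     CONST VOID  *Context        OPTIONAL,
    IN OUT VOID        *CommBuffer     OPTIONAL,
    IN OUT UINTN       *CommBufferSize OPTIONAL
    )
  {
    EFI_STATUS  Status;

    //
    // Illustrative only: optionally wait until all APs have completed
    // SMM rendezvous in relaxed AP mode before touching shared state.
    //
    Status = SmmWaitForAllProcessor (TRUE);
    if (EFI_ERROR (Status)) {
      return Status;
    }

    // ... work that requires every processor to be inside SMM ...

    return EFI_SUCCESS;
  }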
Cc: Eric Dong <eric.dong@intel.com>
Reviewed-by: Ray Ni <ray.ni@intel.com>
Cc: Rahul Kumar <rahul1.kumar@intel.com>
Cc: Siyuan Fu <siyuan.fu@intel.com>
Cc: Zhihao Li <zhihao.li@intel.com>
Signed-off-by: Zhihao Li <zhihao.li@intel.com>
REF: https://bugzilla.tianocore.org/show_bug.cgi?id=3790
Replace Opcode with the corresponding instructions.
The code changes have been verified with CompareBuild.py tool, which
can be used to compare the results of two different EDK II builds to
determine if they generate the same binaries.
(tool link: https://github.com/mdkinney/edk2/tree/sandbox/CompareBuild)
Signed-off-by: Jason Lou <yun.lou@intel.com>
Reviewed-by: Ray Ni <ray.ni@intel.com>
Cc: Eric Dong <eric.dong@intel.com>
Cc: Laszlo Ersek <lersek@redhat.com>
Cc: Rahul Kumar <rahul1.kumar@intel.com>
REF: https://bugzilla.tianocore.org/show_bug.cgi?id=3737
Apply uncrustify changes to .c/.h files in the UefiCpuPkg package
Cc: Andrew Fish <afish@apple.com>
Cc: Leif Lindholm <leif@nuviainc.com>
Cc: Michael D Kinney <michael.d.kinney@intel.com>
Signed-off-by: Michael Kubacki <michael.kubacki@microsoft.com>
Reviewed-by: Ray Ni <ray.ni@intel.com>
REF: https://bugzilla.tianocore.org/show_bug.cgi?id=3767
Update use of DEBUG_CODE(Expression) if Expression is a complex code
block with if/while/for/case statements that use {}.
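As an illustration, the kind of update described above could look like
the following before/after sketch (variable names are hypothetical),
assuming the replacement is the DEBUG_CODE_BEGIN()/DEBUG_CODE_END() pair
from DebugLib:

  // Before: a complex block passed as a single macro argument.
  DEBUG_CODE (
    for (Index = 0; Index < Count; Index++) {
      DEBUG ((DEBUG_INFO, "Entry[%d] = 0x%x\n", Index, Table[Index]));
    }
  );

  // After: bracket the block explicitly.
  DEBUG_CODE_BEGIN ();
  for (Index = 0; Index < Count; Index++) {
    DEBUG ((DEBUG_INFO, "Entry[%d] = 0x%x\n", Index, Table[Index]));
  }
  DEBUG_CODE_END ();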
Cc: Andrew Fish <afish@apple.com>
Cc: Leif Lindholm <leif@nuviainc.com>
Cc: Michael Kubacki <michael.kubacki@microsoft.com>
Signed-off-by: Michael D Kinney <michael.d.kinney@intel.com>
Reviewed-by: Ray Ni <ray.ni@intel.com>
REF: https://bugzilla.tianocore.org/show_bug.cgi?id=3760
Update all use of ', OPTIONAL' to ' OPTIONAL,' for function params.
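A small before/after sketch of the style change (the function and
parameter names are hypothetical):

  // Before
  EFI_STATUS
  EFIAPI
  ExampleFunction (
    IN  EFI_HANDLE  Handle,
    IN  VOID        *Buffer, OPTIONAL
    OUT UINTN       *Size
    );

  // After
  EFI_STATUS
  EFIAPI
  ExampleFunction (
    IN  EFI_HANDLE  Handle,
    IN  VOID        *Buffer  OPTIONAL,
    OUT UINTN       *Size
    );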
Cc: Andrew Fish <afish@apple.com>
Cc: Leif Lindholm <leif@nuviainc.com>
Cc: Michael Kubacki <michael.kubacki@microsoft.com>
Signed-off-by: Michael D Kinney <michael.d.kinney@intel.com>
Reviewed-by: Ray Ni <ray.ni@intel.com>
REF: https://bugzilla.tianocore.org/show_bug.cgi?id=3739
Update all use of EFI_D_* defines in DEBUG() macros to DEBUG_* defines.
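A small before/after sketch of the mechanical replacement (the message
text is hypothetical):

  // Before
  DEBUG ((EFI_D_INFO, "CPU count: %d\n", CpuCount));
  DEBUG ((EFI_D_ERROR, "Failed to start AP\n"));

  // After
  DEBUG ((DEBUG_INFO, "CPU count: %d\n", CpuCount));
  DEBUG ((DEBUG_ERROR, "Failed to start AP\n"));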
Cc: Andrew Fish <afish@apple.com>
Cc: Leif Lindholm <leif@nuviainc.com>
Cc: Michael Kubacki <michael.kubacki@microsoft.com>
Signed-off-by: Michael D Kinney <michael.d.kinney@intel.com>
Reviewed-by: Ray Ni <ray.ni@intel.com>
When the CET shadow stack feature is enabled, IST needs to be used for
the exceptions, and the interrupt shadow stack is used for the stack
switch. The shadow stack should be 32-byte aligned.
Check the IST field when clearing the shadow stack token busy bit for
the RETF return.
REF: https://bugzilla.tianocore.org/show_bug.cgi?id=3728
Signed-off-by: Sheng Wei <w.sheng@intel.com>
Cc: Eric Dong <eric.dong@intel.com>
Cc: Ray Ni <ray.ni@intel.com>
Cc: Rahul Kumar <rahul1.kumar@intel.com>
Reviewed-by: Ray Ni <ray.ni@intel.com>
REF: https://bugzilla.tianocore.org/show_bug.cgi?id=3621
REF: https://bugzilla.tianocore.org/show_bug.cgi?id=3631
Current CPU feature initialization design:
During normal boot, CpuFeaturesPei module (inside FSP) initializes the
CPU features. During S3 boot, CpuFeaturesPei module does nothing, and
CpuSmm driver (in SMRAM) initializes CPU features instead.
This code change prevents CpuSmm driver from re-initializing CPU
features during S3 resume if CpuFeaturesPei module has done the same
initialization.
In addition, EDK2 contains the DxeIpl PEIM, which calls the
S3RestoreConfig2 PPI during S3 boot, and this PPI eventually calls the
CpuSmm driver (in SMRAM) to initialize the CPU features, so "EDK2 + FSP"
does not have the CPU feature initialization issue during S3 boot. But
"coreboot" does not contain the DxeIpl PEIM, so the issue appears there
unless "PcdCpuFeaturesInitOnS3Resume" is set to TRUE.
Signed-off-by: Jason Lou <yun.lou@intel.com>
Reviewed-by: Ray Ni <ray.ni@intel.com>
Cc: Eric Dong <eric.dong@intel.com>
Cc: Laszlo Ersek <lersek@redhat.com>
Cc: Rahul Kumar <rahul1.kumar@intel.com>
REF: https://bugzilla.tianocore.org/show_bug.cgi?id=3621
REF: https://bugzilla.tianocore.org/show_bug.cgi?id=3631
Refactor initialization of CPU features during S3 resume.
In addition, the macro ACPI_CPU_DATA_STRUCTURE_UPDATE is used to fix an
incompatibility issue caused by the ACPI_CPU_DATA structure update. It
will be removed after all platform code uses the new ACPI_CPU_DATA
structure.
Signed-off-by: Jason Lou <yun.lou@intel.com>
Reviewed-by: Ray Ni <ray.ni@intel.com>
Cc: Eric Dong <eric.dong@intel.com>
Cc: Laszlo Ersek <lersek@redhat.com>
Cc: Rahul Kumar <rahul1.kumar@intel.com>
REF: https://bugzilla.tianocore.org/show_bug.cgi?id=2956
In the functions ReadSaveStateRegisterByIndex and WriteSaveStateRegister:
* check width > 4 instead of >= 4 when accessing the upper 32 bits.
- This improves the code but does not affect functionality.
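A minimal sketch of the intent, assuming the register is handled as two
32-bit halves (all variable names here are hypothetical fragments, not
the actual patch code):

  // Copy at most the lower 32 bits of the register.
  CopyMem (Buffer, &RegisterValueLo, MIN (4, Width));

  //
  // Only touch the upper 32 bits when more than 4 bytes were requested;
  // a 4-byte access must not spill into the upper half.
  //
  if (Width > 4) {
    CopyMem ((UINT8 *)Buffer + 4, &RegisterValueHi, Width - 4);
  }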
Cc: Eric Dong <eric.dong@intel.com>
Reviewed-by: Ray Ni <ray.ni@intel.com>
Signed-off-by: Mark Wilson <Mark.Wilson@amd.com>
REF: https://bugzilla.tianocore.org/show_bug.cgi?id=3584
Function AsmCpuid should first check the maximum input value for Basic
CPUID Information. The fix is to update the mPatchCetSupported judgment
statement.
Signed-off-by: Wenxing Hou <wenxing.hou@intel.com>
Reviewed-by: Ray Ni <ray.ni@intel.com>
Cc: Eric Dong <eric.dong@intel.com>
Cc: Ray Ni <ray.ni@intel.com>
Cc: Rahul Kumar <rahul1.kumar@intel.com>
Cc: Sheng W <w.sheng@intel.com>
Cc: Yao Jiewen <jiewen.yao@intel.com>
When entering an SMM exception, there is a stack switch only if the IST
field of the interrupt gate is set. When the CET shadow stack feature is
enabled, if there is a stack switch between the SMM exception and SMM,
the shadow stack token busy bit needs to be cleared when returning from
the SMM exception to SMM. In UEFI BIOS, only the page fault exception
does the stack switch, and only when the SMM stack guard feature is
enabled. So the condition for clearing the shadow stack token busy bit
should be: SMM stack guard enabled, CET shadow stack feature enabled,
and the page fault exception.
The shadow stack token should be initialized as a UINT64.
REF: https://bugzilla.tianocore.org/show_bug.cgi?id=3462
Signed-off-by: Sheng Wei <w.sheng@intel.com>
Cc: Eric Dong <eric.dong@intel.com>
Cc: Ray Ni <ray.ni@intel.com>
Cc: Laszlo Ersek <lersek@redhat.com>
Cc: Rahul Kumar <rahul1.kumar@intel.com>
Cc: Jiewen Yao <jiewen.yao@intel.com>
Cc: Qihua Zhuang <qihua.zhuang@intel.com>
Cc: Daquan Dong <daquan.dong@intel.com>
Cc: Justin Tong <justin.tong@intel.com>
Cc: Tom Xu <tom.xu@intel.com>
Reviewed-by: Eric Dong <eric.dong@intel.com>
5-level paging can be enabled on a CPU that supports up to a 52-bit
physical address size. But when the feature was enabled, the 48-bit
address size limit was not removed, and the 5-level paging testing
didn't access addresses >= 2^48. So the issue wasn't detected until
recently, when an address >= 2^48 was accessed.
Signed-off-by: Ray Ni <ray.ni@intel.com>
Reviewed-by: Eric Dong <eric.dong@intel.com>
Cc: Laszlo Ersek <lersek@redhat.com>
Cc: Rahul Kumar <rahul1.kumar@intel.com>
REF: https://bugzilla.tianocore.org/show_bug.cgi?id=3300
The current implementation of the SetStaticPageTable routine in the
PiSmmCpuDxeSmm driver checks a global variable, mPhysicalAddressBits,
and caps any value larger than 39 at 39.
This global variable is used in ConvertMemoryPageAttributes, which backs
SmmSetMemoryAttributes and SmmClearMemoryAttributes. Thus, for a
processor that supports an address width of more than 39 bits, trying to
mark page table regions above 39 bits will always return EFI_UNSUPPORTED.
This change updates the interface of the SetStaticPageTable function to
take PhysicalAddressBits as an input parameter, in order to avoid
changing/accessing the global variable.
Cc: Eric Dong <eric.dong@intel.com>
Reviewed-by: Ray Ni <ray.ni@intel.com>
Reviewed-by: Laszlo Ersek <lersek@redhat.com>
Cc: Rahul Kumar <rahul1.kumar@intel.com>
Fixes: 4eee0cc7cc
Signed-off-by: Kun Qin <kuqin12@gmail.com>
REF: https://bugzilla.tianocore.org/show_bug.cgi?id=3283
The current SMM save state routine does not check the number of bytes to
be read when it comes to reading IO_INFO, before casting the incoming
buffer to EFI_SMM_SAVE_STATE_IO_INFO. This could potentially cause
memory corruption, because extra bytes would be written outside the
buffer boundary.
This change adds a width check before copying IoInfo into the output
buffer.
Cc: Eric Dong <eric.dong@intel.com>
Cc: Ray Ni <ray.ni@intel.com>
Cc: Laszlo Ersek <lersek@redhat.com>
Cc: Rahul Kumar <rahul1.kumar@intel.com>
Signed-off-by: Kun Qin <kuqin12@gmail.com>
Reviewed-by: Ray Ni <ray.ni@intel.com>
Reviewed-by: Laszlo Ersek <lersek@redhat.com>
Message-Id: <20210406195254.1018-2-kuqin12@gmail.com>
REF: https://bugzilla.tianocore.org/show_bug.cgi?id=3199
When Token points to mSmmStartupThisApToken, this routine is called
from SmmStartupThisAp() in non-blocking mode due to
PcdCpuSmmBlockStartupThisAp == FALSE.
In this case, the caller wants to start the AP procedure in non-blocking
mode and cannot get the completion status from the Token, because there
is no way to return the Token to the caller from SmmStartupThisAp().
The caller needs to use its own specific way to query the completion
status.
There is no need to allocate a token in such a case, so the following 3
overheads can be avoided:
1. Call AllocateTokenBuffer() when there is no free token.
2. Get a free token from the token buffer.
3. Call ReleaseToken() in APHandler().
Signed-off-by: Ray Ni <ray.ni@intel.com>
Reviewed-by: Eric Dong <eric.dong@intel.com>
Reviewed-by: Laszlo Ersek <lersek@redhat.com>
Cc: Rahul Kumar <rahul1.kumar@intel.com>
In the functions InitGdt(), SmiPFHandler() and Gen4GPageTable(),
CpuIndex * mSmmStackSize is used to get the SMM stack address offset for
each processor. This misses the SMM shadow stack size: each processor
actually uses mSmmStackSize + mSmmShadowStackSize bytes of memory.
The code should use CpuIndex * (mSmmStackSize + mSmmShadowStackSize) to
get this SMM stack address offset. If mSmmShadowStackSize > 0 and
multiple processors are enabled, the current code gets the wrong offset
value.
The CET shadow stack feature sets the value of mSmmShadowStackSize.
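A hedged sketch of the corrected per-CPU offset computation (StackOffset
is a hypothetical local; the other names follow this message):

  // Wrong: ignores the per-CPU shadow stack region.
  StackOffset = CpuIndex * mSmmStackSize;

  // Correct: each CPU consumes mSmmStackSize + mSmmShadowStackSize bytes.
  StackOffset = CpuIndex * (mSmmStackSize + mSmmShadowStackSize);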
REF: https://bugzilla.tianocore.org/show_bug.cgi?id=3237
Signed-off-by: Sheng Wei <w.sheng@intel.com>
Cc: Eric Dong <eric.dong@intel.com>
Cc: Ray Ni <ray.ni@intel.com>
Cc: Laszlo Ersek <lersek@redhat.com>
Cc: Rahul Kumar <rahul1.kumar@intel.com>
Cc: Jiewen Yao <jiewen.yao@intel.com>
Cc: Roger Feng <roger.feng@intel.com>
Reviewed-by: Jiewen Yao <jiewen.yao@intel.com>
Reviewed-by: Ray Ni <ray.ni@intel.com>
If the CET shadow stack feature is enabled in SMM and stack switching is
enabled, then when execution transfers from the SMM handler to the SMM
exception, the CPU checks whether the SMM exception shadow stack token
busy bit is cleared.
If it is set, a #DF exception is triggered.
If it is not set, the CPU sets the busy bit when entering the SMM
exception.
So the busy bit should be cleared when returning from the SMM exception
back to the SMM handler. Otherwise, leaving the busy bit at 1 will
trigger a #DF exception the next time the SMM exception is entered.
Therefore, the SAVEPREVSSP, CLRSSBSY and RSTORSSP instructions are used
to clear the shadow stack token busy bit before the RETF instruction in
the SMM exception handler.
REF: https://bugzilla.tianocore.org/show_bug.cgi?id=3192
Signed-off-by: Sheng Wei <w.sheng@intel.com>
Cc: Eric Dong <eric.dong@intel.com>
Cc: Ray Ni <ray.ni@intel.com>
Cc: Laszlo Ersek <lersek@redhat.com>
Cc: Rahul Kumar <rahul1.kumar@intel.com>
Cc: Jiewen Yao <jiewen.yao@intel.com>
Cc: Roger Feng <roger.feng@intel.com>
Reviewed-by: Jiewen Yao <jiewen.yao@intel.com>
Reviewed-by: Ray Ni <ray.ni@intel.com>
This patch makes two refinements to reduce SMRAM consumption in CpuS3.c.
1. Only do CopyRegisterTable() when the register table is not empty.
IsRegisterTableEmpty() is created to check whether the register table
is empty or not.
Taking an empty PreSmmInitRegisterTable as an example, about 24K of
SMRAM consumption can be saved when mAcpiCpuData.NumberOfCpus = 1024.
sizeof (CPU_REGISTER_TABLE) = 24
mAcpiCpuData.NumberOfCpus = 1024 = 1K
mAcpiCpuData.NumberOfCpus * sizeof (CPU_REGISTER_TABLE) = 24K
2. Only copy the table entries buffer instead of the whole buffer.
AllocatedSize in SourceRegisterTableList is the whole buffer size.
Actually, only the table entries buffer needs to be copied, and its size
is TableLength * sizeof (CPU_REGISTER_TABLE_ENTRY).
Taking AllocatedSize = 0x1000 = 4096, TableLength = 100 and
NumberOfCpus = 1024 as an example, about 1696K of SMRAM consumption can
be saved.
sizeof (CPU_REGISTER_TABLE_ENTRY) = 24
TableLength = 100
TableLength * sizeof (CPU_REGISTER_TABLE_ENTRY) = 2400
AllocatedSize = 0x1000 = 4096
AllocatedSize - TableLength * sizeof (CPU_REGISTER_TABLE_ENTRY) = 4096 - 2400 = 1696
NumberOfCpus = 1024 = 1K
NumberOfCpus * (AllocatedSize - TableLength * sizeof (CPU_REGISTER_TABLE_ENTRY)) = 1696K
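A rough, illustrative fragment of the second refinement (local variable
declarations omitted; the helper and field names are assumptions, not
the exact patch code):

  CopySize           = SourceRegisterTable->TableLength *
                       sizeof (CPU_REGISTER_TABLE_ENTRY);
  RegisterTableEntry = AllocateCopyPool (
                         CopySize,
                         (VOID *)(UINTN)SourceRegisterTable->RegisterTableEntry
                         );
  ASSERT (RegisterTableEntry != NULL);

  //
  // Copy only the populated entries; do not carry over the original
  // AllocatedSize, which may be much larger than what is actually used.
  //
  DestinationRegisterTable->RegisterTableEntry =
    (EFI_PHYSICAL_ADDRESS)(UINTN)RegisterTableEntry;
  DestinationRegisterTable->AllocatedSize = CopySize;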
This patch also corrects the CopyRegisterTable() function description.
Signed-off-by: Star Zeng <star.zeng@intel.com>
Reviewed-by: Ray Ni <ray.ni@intel.com>
Reviewed-by: Laszlo Ersek <lersek@redhat.com>
Cc: Ray Ni <ray.ni@intel.com>
Cc: Eric Dong <eric.dong@intel.com>
Cc: Laszlo Ersek <lersek@redhat.com>
Message-Id: <20210111015419.28368-1-star.zeng@intel.com>
Today's code assumes every core contains the same number of threads.
That is not always true for certain models.
Such an assumption causes a system hang when the thread count per core
differs and there is a core or package dependency between CPU features
(using CPU_FEATURE_CORE_BEFORE/AFTER, CPU_FEATURE_PACKAGE_BEFORE/AFTER).
The change removes that assumption by calculating the actual thread
count per package and per core.
Signed-off-by: Ray Ni <ray.ni@intel.com>
Reviewed-by: Eric Dong <eric.dong@intel.com>
Cc: Yun Lou <yun.lou@intel.com>
Acked-by: Laszlo Ersek <lersek@redhat.com>
When trying to get the page table base: if mInternalCr3 is zero, the
page table from CR3 is used, and the page table depth is reflected by
the CR4 LA57 bit. If mInternalCr3 is non-zero, the page table from
mInternalCr3 is used, and the page table depth of mInternalCr3 is
reflected at the same time.
In the case of X64, m5LevelPagingNeeded is used to reflect the depth of
the page table. In the case of IA32, the page table depth information is
not needed.
This patch is a bug fix for enabling the CET feature with 5-level paging.
The SMM page tables are allocated / initialized in PiCpuSmmEntry(). When
CET is enabled, PiCpuSmmEntry() must further modify the attributes of
the shadow stack pages. This page table is not yet set in CR3 in
PiCpuSmmEntry(), so the page table base address is set to mInternalCr3
for modifying the page table attributes. The CR4 LA57 bit cannot be used
to reflect the page table depth for mInternalCr3.
So we create an architecture-specific implementation, GetPageTable(),
with 2 output parameters. One parameter outputs the page table address;
the other reflects whether 5-level paging is used or not.
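A hedged, X64-flavored sketch of such a GetPageTable() helper (the exact
prototype and masks in the actual patch may differ):

  VOID
  GetPageTable (
    OUT UINTN    *Base,
    OUT BOOLEAN  *FiveLevels OPTIONAL
    )
  {
    if (mInternalCr3 == 0) {
      //
      // Use the page table pointed to by CR3; CR4.LA57 (bit 12) tells
      // whether 5-level paging is active.
      //
      *Base = (UINTN)(AsmReadCr3 () & ~(UINTN)0xFFF);
      if (FiveLevels != NULL) {
        *FiveLevels = (BOOLEAN)((AsmReadCr4 () & BIT12) != 0);
      }
      return;
    }

    //
    // Use the page table being prepared for SMM; its depth is tracked
    // by m5LevelPagingNeeded rather than by CR4.LA57.
    //
    *Base = mInternalCr3;
    if (FiveLevels != NULL) {
      *FiveLevels = m5LevelPagingNeeded;
    }
  }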
REF: https://bugzilla.tianocore.org/show_bug.cgi?id=3015
Signed-off-by: Sheng Wei <w.sheng@intel.com>
Cc: Eric Dong <eric.dong@intel.com>
Cc: Ray Ni <ray.ni@intel.com>
Cc: Laszlo Ersek <lersek@redhat.com>
Cc: Rahul Kumar <rahul1.kumar@intel.com>
Cc: Jiewen Yao <jiewen.yao@intel.com>
Reviewed-by: Laszlo Ersek <lersek@redhat.com>
Reviewed-by: Eric Dong <eric.dong@intel.com>
Change the variable name from mInternalGr3 to mInternalCr3.
REF: https://bugzilla.tianocore.org/show_bug.cgi?id=3015
Signed-off-by: Sheng Wei <w.sheng@intel.com>
Cc: Eric Dong <eric.dong@intel.com>
Cc: Ray Ni <ray.ni@intel.com>
Cc: Laszlo Ersek <lersek@redhat.com>
Cc: Rahul Kumar <rahul1.kumar@intel.com>
Cc: Jiewen Yao <jiewen.yao@intel.com>
Reviewed-by: Laszlo Ersek <lersek@redhat.com>
Reviewed-by: Ray Ni <ray.ni@intel.com>
Reviewed-by: Eric Dong <eric.dong@intel.com>
BZ: https://bugzilla.tianocore.org/show_bug.cgi?id=2198
Typically, an AP is booted using the INIT-SIPI-SIPI sequence. This
sequence is intercepted by the hypervisor, which sets the AP's registers
to the values requested by the sequence. At that point, the hypervisor can
start the AP, which will then begin execution at the appropriate location.
Under SEV-ES, AP booting presents some challenges since the hypervisor is
not allowed to alter the AP's register state. In this situation, we have
to distinguish between the AP's first boot and AP's subsequent boots.
First boot:
Once the AP's register state has been defined (which is before the guest
is first booted) it cannot be altered. Should the hypervisor attempt to
alter the register state, the change would be detected by the hardware
and the VMRUN instruction would fail. Given this, the first boot for the
AP is required to begin execution with this initial register state, which
is typically the reset vector. This prevents the BSP from directing the
AP startup location through the INIT-SIPI-SIPI sequence.
To work around this, the firmware will provide a build time reserved area
that can be used as the initial IP value. The hypervisor can extract this
location value by checking for the SEV-ES reset block GUID that must be
located 48 bytes from the end of the firmware. The format of the SEV-ES
reset block area is:
0x00 - 0x01 - SEV-ES Reset IP
0x02 - 0x03 - SEV-ES Reset CS Segment Base[31:16]
0x04 - 0x05 - Size of the SEV-ES reset block
0x06 - 0x15 - SEV-ES Reset Block GUID
(00f771de-1a7e-4fcb-890e-68c77e2fb44e)
The total size is 22 bytes. Any expansion to this block must be done
by adding new values before existing values.
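For illustration only, the layout above could be expressed as a packed C
structure along these lines (the type and field names are hypothetical):

  #pragma pack (1)
  typedef struct {
    UINT16    ResetIp;       // 0x00 - 0x01: SEV-ES Reset IP
    UINT16    ResetCsBase;   // 0x02 - 0x03: CS Segment Base[31:16]
    UINT16    Size;          // 0x04 - 0x05: size of the reset block
    EFI_GUID  Guid;          // 0x06 - 0x15: SEV-ES Reset Block GUID
  } SEV_ES_RESET_BLOCK;      // 22 bytes total
  #pragma pack ()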
The hypervisor will use the IP and CS values obtained from the SEV-ES
reset block to set as the AP's initial values. The CS Segment Base
represents the upper 16 bits of the CS segment base and must be left
shifted by 16 bits to form the complete CS segment base value.
Before booting the AP for the first time, the BSP must initialize the
SEV-ES reset area. This consists of programming a FAR JMP instruction
to the contents of a memory location that is also located in the SEV-ES
reset area. The BSP must program the IP and CS values for the FAR JMP
based on values derived from the INIT-SIPI-SIPI sequence.
Subsequent boots:
Again, the hypervisor cannot alter the AP register state, so a method is
required to take the AP out of halt state and redirect it to the desired
IP location. If it is determined that the AP is running in an SEV-ES
guest, then instead of calling CpuSleep(), a VMGEXIT is issued with the
AP Reset Hold exit code (0x80000004). The hypervisor will put the AP in
a halt state, waiting for an INIT-SIPI-SIPI sequence. Once the sequence
is recognized, the hypervisor will resume the AP. At this point the AP
must transition from the current 64-bit long mode down to 16-bit real
mode and begin executing at the derived location from the INIT-SIPI-SIPI
sequence.
Another change is around the area of obtaining the (x2)APIC ID during AP
startup. During AP startup, the AP can't take a #VC exception before the
AP has established a stack. However, the AP stack is set by using the
(x2)APIC ID, which is obtained through CPUID instructions. A CPUID
instruction will cause a #VC, so a different method must be used. The
GHCB protocol supports a method to obtain CPUID information from the
hypervisor through the GHCB MSR. This method does not require a stack,
so it is used to obtain the necessary CPUID information to determine the
(x2)APIC ID.
A new 16-bit protected mode GDT entry is used in order to transition
from 64-bit long mode down to 16-bit real mode.
A new assembler routine is created that takes the AP from 64-bit long mode
to 16-bit real mode. This is located under 1MB in memory and transitions
from 64-bit long mode to 32-bit compatibility mode to 16-bit protected
mode and finally 16-bit real mode.
Cc: Eric Dong <eric.dong@intel.com>
Cc: Ray Ni <ray.ni@intel.com>
Cc: Laszlo Ersek <lersek@redhat.com>
Reviewed-by: Eric Dong <eric.dong@intel.com>
Signed-off-by: Tom Lendacky <thomas.lendacky@amd.com>
Regression-tested-by: Laszlo Ersek <lersek@redhat.com>
Most busy waits (spinlocks) in "UefiCpuPkg/PiSmmCpuDxeSmm/MpService.c"
already call CpuPause() in their loop bodies; see SmmWaitForApArrival(),
APHandler(), and SmiRendezvous(). However, the "main wait" within
APHandler():
> //
> // Wait for something to happen
> //
> WaitForSemaphore (mSmmMpSyncData->CpuData[CpuIndex].Run);
doesn't do so, as WaitForSemaphore() keeps trying to acquire the semaphore
without pausing.
The performance impact is especially notable in QEMU/KVM + OVMF
virtualization with CPU overcommit (that is, when the guest has
significantly more VCPUs than the host has physical CPUs). The guest BSP
is working heavily in:
BSPHandler() [MpService.c]
PerformRemainingTasks() [PiSmmCpuDxeSmm.c]
SetUefiMemMapAttributes() [SmmCpuMemoryManagement.c]
while the many guest APs are spinning in the "Wait for something to
happen" semaphore acquisition, in APHandler(). The guest APs are
generating useless memory traffic and saturating host CPUs, hindering the
guest BSP's progress in SetUefiMemMapAttributes().
Rework the loop in WaitForSemaphore(): call CpuPause() in every iteration
after the first check fails. Due to Pause Loop Exiting (known as Pause
Filter on AMD), the host scheduler can favor the guest BSP over the guest
APs.
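The reworked loop could look roughly like the sketch below (a paraphrase
of the patch, not the verbatim code):

  UINT32
  WaitForSemaphore (
    IN OUT volatile UINT32  *Sem
    )
  {
    UINT32  Value;

    for ( ; ;) {
      Value = *Sem;
      if ((Value != 0) &&
          (InterlockedCompareExchange32 (
             (UINT32 *)Sem,
             Value,
             Value - 1
             ) == Value))
      {
        break;
      }

      //
      // Pause before retrying so that PLE / Pause Filter lets the host
      // scheduler favor VCPUs doing useful work (e.g. the BSP).
      //
      CpuPause ();
    }

    return Value - 1;
  }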
Running a 16 GB RAM + 512 VCPU guest on a 448 PCPU host, this patch
reduces OVMF boot time (counted until reaching grub) from 20-30 minutes to
less than 4 minutes.
The patch should benefit physical machines as well -- according to the
Intel SDM, PAUSE "Improves the performance of spin-wait loops". Adding
PAUSE to the generic WaitForSemaphore() function is considered a general
improvement.
Cc: Eric Dong <eric.dong@intel.com>
Cc: Philippe Mathieu-Daudé <philmd@redhat.com>
Cc: Rahul Kumar <rahul1.kumar@intel.com>
Cc: Ray Ni <ray.ni@intel.com>
Ref: https://bugzilla.redhat.com/show_bug.cgi?id=1861718
Signed-off-by: Laszlo Ersek <lersek@redhat.com>
Message-Id: <20200729185217.10084-1-lersek@redhat.com>
Reviewed-by: Eric Dong <eric.dong@intel.com>
AMD does not support MSR_IA32_MISC_ENABLE. Accessing that register
causes an exception on AMD processors. If Execution Disable is
supported, but the processor is an AMD processor, skip manipulating the
MSR_IA32_MISC_ENABLE[34] XD Disable bit.
Cc: Eric Dong <eric.dong@intel.com>
Cc: Ray Ni <ray.ni@intel.com>
Cc: Laszlo Ersek <lersek@redhat.com>
Signed-off-by: Garrett Kirkendall <garrett.kirkendall@amd.com>
Message-Id: <20200622131825.1352-5-Garrett.Kirkendall@amd.com>
Reviewed-by: Laszlo Ersek <lersek@redhat.com>
Tested-by: Laszlo Ersek <lersek@redhat.com>
Reviewed-by: Eric Dong <eric.dong@intel.com>
REF: https://bugzilla.tianocore.org/show_bug.cgi?id=2388
After removing the Used parameter, the code below in ResetTokens can
also be removed:
1. The RunningApCount parameter will be reset in GetFreeToken.
2. ReleaseSpinLock should be called in the ReleaseToken function; the
code in this function looks like a later fix for the case where
ReleaseToken does not release it. We should remove the code here and
fix the real issue if it exists.
Signed-off-by: Eric Dong <eric.dong@intel.com>
Reviewed-by: Ray Ni <ray.ni@intel.com>
Cc: Star Zeng <star.zeng@intel.com>
Cc: Laszlo Ersek <lersek@redhat.com>
REF: https://bugzilla.tianocore.org/show_bug.cgi?id=2388
After patch "UefiCpuPkg/PiSmmCpuDxeSmm: Improve the
performance of GetFreeToken()" which adds new parameter
FirstFreeToken, it's not need to use Uses parameter.
This patch used to remove this parameter.
Signed-off-by: Eric Dong <eric.dong@intel.com>
Reviewed-by: Ray Ni <ray.ni@intel.com>
Cc: Star Zeng <star.zeng@intel.com>
Cc: Laszlo Ersek <lersek@redhat.com>
Today's GetFreeToken() runs at the algorithm complexity of O(n)
where n is the size of the token list.
The change introduces a new global variable FirstFreeToken and it
always points to the first free token. So the algorithm complexity
of GetFreeToken() decreases from O(n) to O(1).
The improvement matters when some SMI code uses the StartupThisAP()
service for each AP, such that the overall complexity becomes
O(n) * O(m), where m is the AP count.
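A hedged sketch of an O(1) GetFreeToken() built around such a
FirstFreeToken pointer (the struct, helper and field names are
assumptions based on this description, not the verbatim patch):

  PROCEDURE_TOKEN *
  GetFreeToken (
    IN UINT32  RunningApCount
    )
  {
    PROCEDURE_TOKEN  *NewToken;

    //
    // FirstFreeToken always points at the first unused entry, so no
    // list walk is needed. Enlarge the list only when it is exhausted.
    //
    if (gSmmCpuPrivate->FirstFreeToken == &gSmmCpuPrivate->TokenList) {
      gSmmCpuPrivate->FirstFreeToken = AllocateTokenBuffer ();
    }

    NewToken = PROCEDURE_TOKEN_FROM_LINK (gSmmCpuPrivate->FirstFreeToken);
    gSmmCpuPrivate->FirstFreeToken = GetNextNode (
                                       &gSmmCpuPrivate->TokenList,
                                       gSmmCpuPrivate->FirstFreeToken
                                       );

    NewToken->Used           = TRUE;
    NewToken->RunningApCount = RunningApCount;
    AcquireSpinLock (NewToken->SpinLock);

    return NewToken;
  }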
As next steps,
1. PROCEDURE_TOKEN.Used field can be optimized out because
all tokens before FirstFreeToken should have "Used" set while all
after FirstFreeToken should have "Used" cleared.
2. ResetTokens() can be optimized to only reset tokens before
FirstFreeToken.
v2: add missing line in InitializeDataForMmMp.
v3: update copyright year to 2020.
Signed-off-by: Ray Ni <ray.ni@intel.com>
Reviewed-by: Eric Dong <eric.dong@intel.com>
Reviewed-by: Star Zeng <star.zeng@intel.com>
The "ACPI_CPU_DATA.NumberOfCpus" field is specified as follows, in
"UefiCpuPkg/Include/AcpiCpuData.h" (rewrapped for this commit message):
//
// The number of CPUs. If a platform does not support hot plug CPUs,
// then this is the number of CPUs detected when the platform is booted,
// regardless of being enabled or disabled. If a platform does support
// hot plug CPUs, then this is the maximum number of CPUs that the
// platform supports.
//
The InitializeCpuBeforeRebase() and InitializeCpuAfterRebase() functions
in "UefiCpuPkg/PiSmmCpuDxeSmm/CpuS3.c" try to restore CPU configuration on
the S3 Resume path for *all* CPUs accounted for in
"ACPI_CPU_DATA.NumberOfCpus". This is wrong, as with CPU hotplug, not all
of the possible CPUs may be present at the time of S3 Suspend / Resume.
The symptom is an infinite wait.
Instead, the "mNumberOfCpus" variable should be used, which is properly
maintained through the EFI_SMM_CPU_SERVICE_PROTOCOL implementation (see
SmmAddProcessor(), SmmRemoveProcessor(), SmmCpuUpdate() in
"UefiCpuPkg/PiSmmCpuDxeSmm/CpuService.c").
When CPU hotplug is disabled, "mNumberOfCpus" is constant, and equals
"ACPI_CPU_DATA.NumberOfCpus" at all times.
Cc: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Cc: Eric Dong <eric.dong@intel.com>
Cc: Igor Mammedov <imammedo@redhat.com>
Cc: Jiewen Yao <jiewen.yao@intel.com>
Cc: Jordan Justen <jordan.l.justen@intel.com>
Cc: Michael Kinney <michael.d.kinney@intel.com>
Cc: Philippe Mathieu-Daudé <philmd@redhat.com>
Cc: Ray Ni <ray.ni@intel.com>
Ref: https://bugzilla.tianocore.org/show_bug.cgi?id=1512
Signed-off-by: Laszlo Ersek <lersek@redhat.com>
Message-Id: <20200226221156.29589-3-lersek@redhat.com>
Acked-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Reviewed-by: Eric Dong <eric.dong@intel.com>
Tested-by: Boris Ostrovsky <boris.ostrovsky@oracle.com>
[lersek@redhat.com: shut up UINTN->UINT32 warning from Windows VS2019 PR]
Fix various typos in comments and documentation.
Cc: Eric Dong <eric.dong@intel.com>
Cc: Ray Ni <ray.ni@intel.com>
Cc: Laszlo Ersek <lersek@redhat.com>
Signed-off-by: Antoine Coeur <coeur@gmx.fr>
Reviewed-by: Philippe Mathieu-Daude <philmd@redhat.com>
Reviewed-by: Laszlo Ersek <lersek@redhat.com>
Reviewed-by: Eric Dong <eric.dong@intel.com>
Signed-off-by: Philippe Mathieu-Daude <philmd@redhat.com>
Message-Id: <20200207010831.9046-78-philmd@redhat.com>
In commit 4eee0cc7cc ("UefiCpuPkg/PiSmmCpu: Enable 5 level paging when
CPU supports", 2019-07-12), the Page Directory Entry setting was regressed
(corrupted) when splitting a 2MB page to 512 4KB pages, in the
InitPaging() function.
Consider the following hunk, displayed with
$ git show --function-context --ignore-space-change 4eee0cc7cc
> //
> // If it is 2M page, check IsAddressSplit()
> //
> if (((*Pd & IA32_PG_PS) != 0) && IsAddressSplit (Address)) {
> //
> // Based on current page table, create 4KB page table for split area.
> //
> ASSERT (Address == (*Pd & PHYSICAL_ADDRESS_MASK));
>
> Pt = AllocatePageTableMemory (1);
> ASSERT (Pt != NULL);
>
> + *Pd = (UINTN) Pt | IA32_PG_RW | IA32_PG_P;
> +
> // Split it
> - for (PtIndex = 0; PtIndex < SIZE_4KB / sizeof(*Pt); PtIndex++) {
> - Pt[PtIndex] = Address + ((PtIndex << 12) | mAddressEncMask | PAGE_ATTRIBUTE_BITS);
> + for (PtIndex = 0; PtIndex < SIZE_4KB / sizeof(*Pt); PtIndex++, Pt++) {
> + *Pt = Address + ((PtIndex << 12) | mAddressEncMask | PAGE_ATTRIBUTE_BITS);
> } // end for PT
> *Pd = (UINT64)(UINTN)Pt | mAddressEncMask | PAGE_ATTRIBUTE_BITS;
> } // end if IsAddressSplit
> } // end for PD
First, the new assignment to the Page Directory Entry (*Pd) is
superfluous. That's because (a) we set (*Pd) after the Page Table Entry
loop anyway, and (b) here we do not attempt to access the memory starting
at "Address" (which is mapped by the original value of the Page Directory
Entry).
Second, appending "Pt++" to the incrementing expression of the PTE loop is
a bug. It causes "Pt" to point *right past* the just-allocated Page Table,
once we finish the loop. But the PDE assignment that immediately follows
the loop assumes that "Pt" still points to the *start* of the new Page
Table.
The result is that the originally mapped 2MB page disappears from the
processor's view. The PDE now points to a "Page Table" that is filled with
garbage. The random entries in that "Page Table" will cause some virtual
addresses in the original 2MB area to fault. Other virtual addresses in
the same range will no longer have a 1:1 physical mapping, but be
scattered over random physical page frames.
The second phase of the InitPaging() function ("Go through page table and
set several page table entries to absent or execute-disable") already
manipulates entries in wrong Page Tables, for such PDEs that got split in
the first phase.
This issue has been caught as follows:
- OVMF is started with 2001 MB of guest RAM.
- This places the main SMRAM window at 0x7C10_1000.
- The SMRAM management in the SMM Core links this SMRAM window into
"mSmmMemoryMap", with a FREE_PAGE_LIST record placed at the start of the
area.
- At "SMM Ready To Lock" time, PiSmmCpuDxeSmm calls InitPaging(). The
first phase (quoted above) decides to split the 2MB page at 0x7C00_0000
into 512 4KB pages, and corrupts the PDE. The new Page Table is
allocated at 0x7CE0_D000, but the PDE is set to 0x7CE0_E000 (plus
attributes 0x67).
- Due to the corrupted PDE, the second phase of InitPaging() already looks
up the PTE for Address=0x7C10_1000 in the wrong place. The second phase
goes on to mark bogus PTEs as "NX".
- PiSmmCpuDxeSmm calls SetMemMapAttributes(). Address 0x7C10_1000 is at
the base of the SMRAM window, therefore it happens to be listed in the
SMRAM map as an EfiConventionalMemory region. SetMemMapAttributes()
calls SmmSetMemoryAttributes() to mark the region as XP. However,
GetPageTableEntry() in ConvertMemoryPageAttributes() fails -- address
0x7C10_1000 is no longer mapped by anything! -- and so the attribute
setting fails with RETURN_UNSUPPORTED. This error goes unnoticed, as
SetMemMapAttributes() ignores the return value of
SmmSetMemoryAttributes().
- When SetMemMapAttributes() reaches another entry in the SMRAM map,
ConvertMemoryPageAttributes() decides it needs to split a 2MB page, and
calls SplitPage().
- SplitPage() calls AllocatePageTableMemory() for the new Page Table,
which takes us to InternalAllocMaxAddress() in the SMM Core.
- The SMM core attempts to read the FREE_PAGE_LIST record at 0x7C10_1000.
Because this virtual address is no longer mapped, the firmware crashes
in InternalAllocMaxAddress(), when accessing (Pages->NumberOfPages).
Remove the useless assignment to (*Pd) from before the loop. Revert the
loop incrementing and the PTE assignment to the known good version.
Cc: Eric Dong <eric.dong@intel.com>
Cc: Ray Ni <ray.ni@intel.com>
Ref: https://bugzilla.redhat.com/show_bug.cgi?id=1789335
Fixes: 4eee0cc7cc
Signed-off-by: Laszlo Ersek <lersek@redhat.com>
Reviewed-by: Philippe Mathieu-Daude <philmd@redhat.com>
Reviewed-by: Ray Ni <ray.ni@intel.com>
REF: https://bugzilla.tianocore.org/show_bug.cgi?id=2434
The current code uses the loop style below to enumerate the CPUs:
for (Index = mMaxNumberOfCpus; Index-- > 0;) {
It has no functional issue, but it is not easy for developers to read.
Update the above code to the style below:
for (Index = 0; Index < mMaxNumberOfCpus; Index++) {
This makes the code easier to read and consistent with other similar
cases in this driver.
Reviewed-by: Laszlo Ersek <lersek@redhat.com>
Cc: Ray Ni <ray.ni@intel.com>
Signed-off-by: Eric Dong <eric.dong@intel.com>
This reverts commit 123b720eeb.
The commit message for commit 123b720eeb is not correct.
Cc: Ray Ni <ray.ni@intel.com>
Reviewed-by: Laszlo Ersek <lersek@redhat.com>
Signed-off-by: Eric Dong <eric.dong@intel.com>
REF: https://bugzilla.tianocore.org/show_bug.cgi?id=2388
Token is newly introduced by the MM MP Protocol. The current logic
allocates a Token every time one is needed, which raises the SMI latency
very high. Update the logic to allocate a Token buffer at the driver's
entry point and then use tokens from that allocated buffer; only when
the whole buffer has been used does a new buffer need to be allocated.
The former change (9caaa79dd7) missed the PROCEDURE_TOKEN part; this
change covers it.
Reviewed-by: Ray Ni <ray.ni@intel.com>
Cc: Laszlo Ersek <lersek@redhat.com>
Signed-off-by: Eric Dong <eric.dong@intel.com>
The valid index range for the mSmmMpSyncData->CpuData[] array is 0 ~
mMaxNumberOfCpus - 1, but the current code may access
mSmmMpSyncData->CpuData[mMaxNumberOfCpus].
This patch fixes the issue.
Reviewed-by: Ray Ni <ray.ni@intel.com>
Cc: Laszlo Ersek <lersek@redhat.com>
Signed-off-by: Eric Dong <eric.dong@intel.com>
REF: https://bugzilla.tianocore.org/show_bug.cgi?id=2268
In the current implementation, when checking whether APs were called by
StartUpAllAPs or StartUpThisAp, the code checks the Token values used by
other APs. Also, an AP updates its own Token value when its task is
finished. In this case, a potential race condition on the tokens occurs.
Because of this, the system may trigger an ASSERT during cycling tests.
This change enhances the code logic and adds new attributes to the token
to remove the references to tokens belonging to other APs.
Reviewed-by: Ray Ni <ray.ni@intel.com>
Cc: Laszlo Ersek <lersek@redhat.com>
Signed-off-by: Eric Dong <eric.dong@intel.com>
REF: https://bugzilla.tianocore.org/show_bug.cgi?id=2388
Token is newly introduced by the MM MP Protocol. The current logic
allocates a Token every time one is needed, which raises the SMI latency
very high. Update the logic to allocate a Token buffer at the driver's
entry point and then use tokens from that allocated buffer; only when
the whole buffer has been used does a new buffer need to be allocated.
Reviewed-by: Ray Ni <ray.ni@intel.com>
Signed-off-by: Eric Dong <eric.dong@intel.com>
Acked-by: Laszlo Ersek <lersek@redhat.com>
In MpLib.c, remove the white space on a new line.
In PageTbl.c and PiSmmCpuDxeSmm.h, update the comment style.
Cc: Eric Dong <eric.dong@intel.com>
Cc: Ray Ni <ray.ni@intel.com>
Cc: Laszlo Ersek <lersek@redhat.com>
Signed-off-by: Shenglei Zhang <shenglei.zhang@intel.com>
Reviewed-by: Eric Dong <eric.dong@intel.com>
Reviewed-by: Laszlo Ersek <lersek@redhat.com>
To tackle a deficiency of the AP API, it is necessary to ensure that,
in non-blocking mode, the previously executed AP command has finished
before a new one is started.
To remedy the above:
1) Execute AcquireSpinLock instead of AcquireSpinLockOrFail - this
closes the time "window" and eliminates a potential race condition
between the BSP and AP spinlock release in non-blocking mode.
It also eliminates the possibility of starting a new AP function
before the last one has finished.
2) Remove the EFI_NOT_READY return status - in the new scenario the
returned status is not needed by the caller.
Signed-off-by: Damian Nikodem <damian.nikodem@intel.com>
Reviewed-by: Eric Dong <eric.dong@intel.com>
Reviewed-by: Ray Ni <ray.ni@intel.com>
Cc: Benjamin You <benjamin.you@intel.com>
Cc: Laszlo Ersek <lersek@redhat.com>
Cc: Krzysztof Rusocki <krzysztof.rusocki@intel.com>
Today's behavior is to enable 5-level paging when the CPU supports it
(CPUID[7,0].ECX.BIT[16] is set).
The patch changes the behavior to enable 5-level paging only when two
conditions are both met:
1. The CPU supports it;
2. The max physical address bits value is bigger than 48.
Because 4-level paging can address physical addresses up to 2^48 - 1,
there is no need to enable 5-level paging when the max physical address
bits value is <= 48.
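A hedged sketch of such a combined check (register and leaf handling
simplified; not the verbatim patch code):

  UINT32   RegEax;
  UINT32   RegEcx;
  UINT8    PhysicalAddressBits;
  BOOLEAN  Enable5LevelPaging;

  AsmCpuid (0x80000000, &RegEax, NULL, NULL, NULL);
  if (RegEax >= 0x80000008) {
    AsmCpuid (0x80000008, &RegEax, NULL, NULL, NULL);
    PhysicalAddressBits = (UINT8)RegEax;
  } else {
    PhysicalAddressBits = 36;
  }

  //
  // CPUID.(EAX=07H,ECX=0):ECX[16] reports 5-level paging capability.
  //
  AsmCpuidEx (7, 0, NULL, NULL, &RegEcx, NULL);

  //
  // 4-level paging already covers up to 2^48 - 1, so only enable
  // 5-level paging when more than 48 physical address bits are reported.
  //
  Enable5LevelPaging = (BOOLEAN)(((RegEcx & BIT16) != 0) &&
                                 (PhysicalAddressBits > 48));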
Signed-off-by: Ray Ni <ray.ni@intel.com>
Cc: Liming Gao <liming.gao@intel.com>
Reviewed-by: Eric Dong <eric.dong@intel.com>
Cc: Laszlo Ersek <lersek@redhat.com>
Today's behavior is to always restrict access to non-SMRAM regardless
of the value of PcdCpuSmmRestrictedMemoryAccess.
Because RAS components require access to all non-SMRAM memory, the
patch changes the code logic to honor PcdCpuSmmRestrictedMemoryAccess,
so that only when the PCD is TRUE does the restriction take effect and
the page table memory also get protected.
Because the IA32 build doesn't reference this PCD, the restriction
always takes effect in the IA32 build.
Signed-off-by: Ray Ni <ray.ni@intel.com>
Reviewed-by: Eric Dong <eric.dong@intel.com>
Cc: Jiewen Yao <jiewen.yao@intel.com>
Reviewed-by: Laszlo Ersek <lersek@redhat.com>
The patch changes the PiSmmCpu driver to consume the PCD
PcdCpuSmmRestrictedMemoryAccess.
Because the behavior controlled by PcdCpuSmmStaticPageTable in the
original code is not changed after switching to
PcdCpuSmmRestrictedMemoryAccess, the functionality is not impacted by
this patch.
Signed-off-by: Ray Ni <ray.ni@intel.com>
Reviewed-by: Eric Dong <eric.dong@intel.com>
Cc: Jiewen Yao <jiewen.yao@intel.com>
Reviewed-by: Laszlo Ersek <lersek@redhat.com>
REF: https://bugzilla.tianocore.org/show_bug.cgi?id=2040
Support new logic which tests the current value before writing the new
value. Only write the new value when the current value is not the same
as the new value.
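A hedged sketch of the test-before-write pattern for an MSR-backed
register table entry (MsrIndex, StartBit, EndBit and Value are
hypothetical inputs, not the exact patch code):

  UINT64  CurrentValue;
  UINT64  NewValue;

  //
  // Read the current value first and only issue the write when the
  // requested bit field actually changes it.
  //
  CurrentValue = AsmReadMsr64 (MsrIndex);
  NewValue     = BitFieldWrite64 (CurrentValue, StartBit, EndBit, Value);
  if (CurrentValue != NewValue) {
    AsmWriteMsr64 (MsrIndex, NewValue);
  }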
Signed-off-by: Eric Dong <eric.dong@intel.com>
Reviewed-by: Ray Ni <ray.ni@intel.com>
Reviewed-by: Laszlo Ersek <lersek@redhat.com>
Reclaim may free page table pages that are required to handle the
current page fault. This causes a page leak, and, after a sufficient
number of such page fault + reclaim pairs, we run out of reclaimable
pages and hit:
ASSERT (MinAcc != (UINT64)-1);
To remedy this, prevent pages essential to handling the current page
fault:
(1) from being considered as reclaim candidates (first reclaim phase)
(2) from being freed as part of "branch cleanup" (second reclaim phase)
Signed-off-by: Damian Nikodem <damian.nikodem@intel.com>
Cc: Eric Dong <eric.dong@intel.com>
Reviewed-by: Ray Ni <ray.ni@intel.com>
Reviewed-by: Jian J Wang <jian.j.wang@intel.com>
Cc: Laszlo Ersek <lersek@redhat.com>
Cc: Krzysztof Rusocki <krzysztof.rusocki@intel.com>
1. Update CPUStatus to CpuStatus in comments to align comments
with code.
2. Add "OUT" attribute for "ProcedureArguments" parameter in function
header.
Cc: Eric Dong <eric.dong@intel.com>
Cc: Ray Ni <ray.ni@intel.com>
Cc: Laszlo Ersek <lersek@redhat.com>
Signed-off-by: Shenglei Zhang <shenglei.zhang@intel.com>
Reviewed-by: Eric Dong <eric.dong@intel.com>