BZ: https://bugzilla.tianocore.org/show_bug.cgi?id=4654
When running under an SVSM, the VMPL level of the APs that are started
must match the VMPL level provided by the SVSM. Additionally, each AP
must have a Calling Area for use with the SVSM protocol. Update the AP
creation to properly support running under an SVSM.
Cc: Gerd Hoffmann <kraxel@redhat.com>
Cc: Laszlo Ersek <lersek@redhat.com>
Cc: Rahul Kumar <rahul1.kumar@intel.com>
Cc: Ray Ni <ray.ni@intel.com>
Acked-by: Ray Ni <ray.ni@intel.com>
Acked-by: Gerd Hoffmann <kraxel@redhat.com>
Signed-off-by: Tom Lendacky <thomas.lendacky@amd.com>
BZ: https://bugzilla.tianocore.org/show_bug.cgi?id=4654
The RMPADJUST instruction is used to change the VMSA attribute of a page,
but the VMSA attribute can only be changed when running at VMPL0. To
prepare for running at a less privileged VMPL, use the AmdSvsmLib library
API to perform the RMPADJUST. The AmdSvsmLib library will perform the
proper operation on behalf of the caller.
Cc: Gerd Hoffmann <kraxel@redhat.com>
Cc: Laszlo Ersek <lersek@redhat.com>
Cc: Rahul Kumar <rahul1.kumar@intel.com>
Cc: Ray Ni <ray.ni@intel.com>
Acked-by: Gerd Hoffmann <kraxel@redhat.com>
Signed-off-by: Tom Lendacky <thomas.lendacky@amd.com>
Acked-by: Ray Ni <ray.ni@intel.com>
BZ: https://bugzilla.tianocore.org/show_bug.cgi?id=4654
In order to support an SEV-SNP guest running under an SVSM at VMPL1 or
lower, a new library must be created.
This library includes an interface to detect if running under an SVSM, an
interface to return the current VMPL, an interface to perform memory
validation and an interface to set or clear the attribute that allows a
page to be used as a VMSA.
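As a rough illustration of the shape of these interfaces, the prototypes
below are a minimal sketch only; the names and signatures here are
hypothetical paraphrases of the four interfaces listed above, not the
library's actual API:

  #include <Uefi/UefiBaseType.h>

  // Detect whether the guest is running under an SVSM.
  BOOLEAN
  EFIAPI
  SvsmIsPresentExample (
    VOID
    );

  // Return the VMPL at which the guest firmware is currently running.
  UINT8
  EFIAPI
  SvsmGetVmplExample (
    VOID
    );

  // Validate or invalidate a page of memory (PVALIDATE), going through
  // the SVSM protocol when not running at VMPL0.
  VOID
  EFIAPI
  SvsmPvalidatePageExample (
    IN UINT64   PageAddress,
    IN BOOLEAN  Validate
    );

  // Set or clear the VMSA attribute of a page (RMPADJUST), going through
  // the SVSM protocol when not running at VMPL0.
  EFI_STATUS
  EFIAPI
  SvsmVmsaRmpAdjustExample (
    IN VOID     *VmsaPage,
    IN UINT32   ApicId,
    IN BOOLEAN  SetVmsa
    );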
Cc: Gerd Hoffmann <kraxel@redhat.com>
Cc: Laszlo Ersek <lersek@redhat.com>
Cc: Rahul Kumar <rahul1.kumar@intel.com>
Cc: Ray Ni <ray.ni@intel.com>
Acked-by: Gerd Hoffmann <kraxel@redhat.com>
Signed-off-by: Tom Lendacky <thomas.lendacky@amd.com>
Acked-by: Ray Ni <ray.ni@intel.com>
BZ: https://bugzilla.tianocore.org/show_bug.cgi?id=4654
Currently, the first time an AP is started for an SEV-SNP guest, it relies
on the VMSA as set by the hypervisor. If the list of APIC IDs has been
retrieved, this is not necessary. The list of APIC IDs will be identified
by a GUIDed HOB. If the GUIDed HOB is present, use the SEV-SNP AP Create
protocol to start the AP for the first time and each time thereafter.
Cc: Gerd Hoffmann <kraxel@redhat.com>
Cc: Laszlo Ersek <lersek@redhat.com>
Cc: Rahul Kumar <rahul1.kumar@intel.com>
Cc: Ray Ni <ray.ni@intel.com>
Reviewed-by: Gerd Hoffmann <kraxel@redhat.com>
Signed-off-by: Tom Lendacky <thomas.lendacky@amd.com>
Acked-by: Ray Ni <ray.ni@intel.com>
LoongArch64 requires CpuMmio2Dxe; add it to the LoongArch64 section.
Cc: Ray Ni <ray.ni@intel.com>
Cc: Rahul Kumar <rahul1.kumar@intel.com>
Cc: Gerd Hoffmann <kraxel@redhat.com>
Signed-off-by: Chao Li <lichao@loongson.cn>
Reviewed-by: Ray Ni <ray.ni@intel.com>
On a multi-processor system, if the BSP does not know how many APs are
online or cannot wake up the APs via broadcast, it can collect AP
resources before waking up the APs and add a new HOB to save the
processor resources.
Cc: Ray Ni <ray.ni@intel.com>
Cc: Rahul Kumar <rahul1.kumar@intel.com>
Cc: Gerd Hoffmann <kraxel@redhat.com>
Signed-off-by: Chao Li <lichao@loongson.cn>
Reviewed-by: Ray Ni <ray.ni@intel.com>
Add a new base library named CpuMmuLib and add a LoongArch64 instance
within the library.
BZ: https://bugzilla.tianocore.org/show_bug.cgi?id=4734
Cc: Ray Ni <ray.ni@intel.com>
Cc: Rahul Kumar <rahul1.kumar@intel.com>
Cc: Gerd Hoffmann <kraxel@redhat.com>
Signed-off-by: Chao Li <lichao@loongson.cn>
Co-authored-by: Baoqi Zhang <zhangbaoqi@loongson.cn>
Co-authored-by: Dongyan Qian <qiandongyan@loongson.cn>
Co-authored-by: Xianglai Li <lixianglai@loongson.cn>
Co-authored-by: Bibo Mao <maobibo@loongson.cn>
Acked-by: Gerd Hoffmann <kraxel@redhat.com>
Acked-by: Ray Ni <ray.ni@intel.com>
Add PcdLoongArchExceptionVectorBaseAddress, used for storing the CPU
exception vector base address. This PCD can be populated at build time
or changed at runtime, and is used only by LoongArch.
BZ: https://bugzilla.tianocore.org/show_bug.cgi?id=4734
Cc: Ray Ni <ray.ni@intel.com>
Cc: Rahul Kumar <rahul1.kumar@intel.com>
Cc: Gerd Hoffmann <kraxel@redhat.com>
Signed-off-by: Chao Li <lichao@loongson.cn>
Acked-by: Gerd Hoffmann <kraxel@redhat.com>
Reviewed-by: Ray Ni <ray.ni@intel.com>
Add a new header file, CpuMmuLib.h, which is derived from
ArmPkg/Include/Library/ArmMmuLib.h. Currently, only support for
LoongArch64 is added, and more architectures can be accommodated in the
future.
BZ: https://bugzilla.tianocore.org/show_bug.cgi?id=4734
Cc: Ray Ni <ray.ni@intel.com>
Cc: Rahul Kumar <rahul1.kumar@intel.com>
Cc: Gerd Hoffmann <kraxel@redhat.com>
Cc: Leif Lindholm <quic_llindhol@quicinc.com>
Cc: Ard Biesheuvel <ardb+tianocore@kernel.org>
Cc: Sami Mujawar <sami.mujawar@arm.com>
Cc: Sunil V L <sunilvl@ventanamicro.com>
Cc: Andrei Warkentin <andrei.warkentin@intel.com>
Signed-off-by: Chao Li <lichao@loongson.cn>
Acked-by: Gerd Hoffmann <kraxel@redhat.com>
Reviewed-by: Ray Ni <ray.ni@intel.com>
Add the LoongArch64 CPU Timer instance to CpuTimerLib, using CPUCFG 0x4
and 0x5 for Stable Counter frequency.
BZ: https://bugzilla.tianocore.org/show_bug.cgi?id=4734
Cc: Ray Ni <ray.ni@intel.com>
Cc: Rahul Kumar <rahul1.kumar@intel.com>
Cc: Gerd Hoffmann <kraxel@redhat.com>
Signed-off-by: Chao Li <lichao@loongson.cn>
Acked-by: Gerd Hoffmann <kraxel@redhat.com>
Reviewed-by: Ray Ni <ray.ni@intel.com>
Some of the entries are not in alphabetical order; reorder them.
BZ: https://bugzilla.tianocore.org/show_bug.cgi?id=4726
Cc: Ray Ni <ray.ni@intel.com>
Cc: Rahul Kumar <rahul1.kumar@intel.com>
Cc: Gerd Hoffmann <kraxel@redhat.com>
Signed-off-by: Chao Li <lichao@loongson.cn>
Acked-by: Gerd Hoffmann <kraxel@redhat.com>
Reviewed-by: Ray Ni <ray.ni@intel.com>
Some of the entries are not in alphabetical order; reorder them.
BZ: https://bugzilla.tianocore.org/show_bug.cgi?id=4726
Cc: Ray Ni <ray.ni@intel.com>
Cc: Rahul Kumar <rahul1.kumar@intel.com>
Cc: Gerd Hoffmann <kraxel@redhat.com>
Signed-off-by: Chao Li <lichao@loongson.cn>
Acked-by: Gerd Hoffmann <kraxel@redhat.com>
Reviewed-by: Ray Ni <ray.ni@intel.com>
Some of the entries are not in alphabetical order; reorder them.
BZ: https://bugzilla.tianocore.org/show_bug.cgi?id=4726
Cc: Ray Ni <ray.ni@intel.com>
Cc: Rahul Kumar <rahul1.kumar@intel.com>
Cc: Gerd Hoffmann <kraxel@redhat.com>
Signed-off-by: Chao Li <lichao@loongson.cn>
Acked-by: Gerd Hoffmann <kraxel@redhat.com>
Reviewed-by: Ray Ni <ray.ni@intel.com>
Some of the entries are not in alphabetical order; reorder them.
BZ: https://bugzilla.tianocore.org/show_bug.cgi?id=4726
Cc: Ray Ni <ray.ni@intel.com>
Cc: Rahul Kumar <rahul1.kumar@intel.com>
Cc: Gerd Hoffmann <kraxel@redhat.com>
Signed-off-by: Chao Li <lichao@loongson.cn>
Acked-by: Gerd Hoffmann <kraxel@redhat.com>
Reviewed-by: Ray Ni <ray.ni@intel.com>
The GCD EFI_MEMORY_UC and EFI_MEMORY_WC memory attributes will be
supported when the Svpbmt extension is available.
Cc: Gerd Hoffmann <kraxel@redhat.com>
Cc: Laszlo Ersek <lersek@redhat.com>
Cc: Rahul Kumar <rahul1.kumar@intel.com>
Cc: Ray Ni <ray.ni@intel.com>
Signed-off-by: Tuan Phan <tphan@ventanamicro.com>
Reviewed-by: Sunil V L <sunilvl@ventanamicro.com>
While UINTN on 64-bit RISC-V is defined as UINT64, explicitly use
UINT64 for those variables that clearly are UINT64.
Cc: Gerd Hoffmann <kraxel@redhat.com>
Cc: Laszlo Ersek <lersek@redhat.com>
Cc: Rahul Kumar <rahul1.kumar@intel.com>
Cc: Ray Ni <ray.ni@intel.com>
Signed-off-by: Tuan Phan <tphan@ventanamicro.com>
Reviewed-by: Sunil V L <sunilvl@ventanamicro.com>
Add the volatile qualifier to page-table-related variables to prevent
the compiler from optimizing away accesses to them, which may lead to
unexpected results.
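A minimal sketch of the effect (hypothetical helper, not from the patch):

  #include <Uefi/UefiBaseType.h>

  //
  // Without the volatile qualifier the compiler could cache an earlier read
  // of *Entry in a register and miss a concurrent update made by hardware
  // (for example the CPU setting the Accessed/Dirty bits).
  //
  BOOLEAN
  IsPagePresentExample (
    IN volatile UINT64  *Entry
    )
  {
    return (BOOLEAN)((*Entry & BIT0) != 0);
  }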
Signed-off-by: Zhou Jianfeng <jianfeng.zhou@intel.com>
Cc: Ray Ni <ray.ni@intel.com>
Cc: Laszlo Ersek <lersek@redhat.com>
Cc: Rahul Kumar <rahul1.kumar@intel.com>
Cc: Gerd Hoffmann <kraxel@redhat.com>
Cc: Pedro Falcato <pedro.falcato@gmail.com>
Cc: Zhang Di <di.zhang@intel.com>
Cc: Tan Dun <dun.tan@intel.com>
Cc: Michael Brown <mcb30@ipxe.org>
Message-Id: <20240301025447.41170-1-jianfeng.zhou@intel.com>
Reviewed-by: Michael Brown <mcb30@ipxe.org>
Reviewed-by: Laszlo Ersek <lersek@redhat.com>
[lersek@redhat.com: reconstruct commit manually, from corrupt patch email
on-list]
Some IN OUT parameters in CpuPageTableMap.c were mistakenly marked as IN.
"IN" replaced with "IN OUT" in the following interfaces:
PageTableLibSetPte4K(): Pte4K
PageTableLibSetPleB(): PleB
PageTableLibSetPle(): Ple
PageTableLibSetPnle(): Pnle
Reviewed-by: Ray Ni <ray.ni@intel.com>
Signed-off-by: Zhou Jianfeng <jianfeng.zhou@intel.com>
Cc: Ray Ni <ray.ni@intel.com>
Cc: Laszlo Ersek <lersek@redhat.com>
Cc: Rahul Kumar <rahul1.kumar@intel.com>
Cc: Gerd Hoffmann <kraxel@redhat.com>
Message-Id: <20240222023922.29275-1-jianfeng.zhou@intel.com>
Reviewed-by: Laszlo Ersek <lersek@redhat.com>
Move the WaitLoopExecutionMode and StartupSignalValue fields to a
separate HOB with the new struct.
WaitLoopExecutionMode and StartupSignalValue are independent of
processor index ranges; they are global to MpInitLib (i.e., the entire
system). Therefore they shouldn't be repeated in every MpHandOff GUID
HOB.
Signed-off-by: Gerd Hoffmann <kraxel@redhat.com>
Message-Id: <20240228114855.1615788-1-kraxel@redhat.com>
Reviewed-by: Laszlo Ersek <lersek@redhat.com>
Cc: Rahul Kumar <rahul1.kumar@intel.com>
Cc: Ray Ni <ray.ni@intel.com>
Cc: Laszlo Ersek <lersek@redhat.com>
Cc: Oliver Steffen <osteffen@redhat.com>
Cc: Gerd Hoffmann <kraxel@redhat.com>
[lersek@redhat.com: turn the "Cc:" message headers from Gerd's on-list
posting into "Cc:" tags in the commit message, in order to pacify
"PatchCheck.py"]
After finding the BSP number, return the result instead of continuing
to loop over the remaining processors.
Suggested-by: Laszlo Ersek <lersek@redhat.com>
Signed-off-by: Gerd Hoffmann <kraxel@redhat.com>
Message-Id: <20240222160106.686484-7-kraxel@redhat.com>
Reviewed-by: Ray Ni <ray.ni@intel.com>
Reviewed-by: Laszlo Ersek <lersek@redhat.com>
[lersek@redhat.com: s/ASSERT (FALSE)/ASSERT_EFI_ERROR (EFI_NOT_FOUND)/ [Ray]]
Add support for splitting Hand-Off data into multiple HOBs.
This is required for VMs with thousands of CPUs.
Signed-off-by: Gerd Hoffmann <kraxel@redhat.com>
Message-Id: <20240222160106.686484-6-kraxel@redhat.com>
Reviewed-by: Ray Ni <ray.ni@intel.com>
Reviewed-by: Laszlo Ersek <lersek@redhat.com>
[lersek@redhat.com: define one local variable per line [Ray]]
Loop over all MP_HAND_OFF HOBs instead of expecting a single HOB
covering all CPUs in the system.
Add a new FirstMpHandOff variable, which caches the first HOB body for
faster lookups. It is also used to check whether MP_HAND_OFF HOBs are
present. Using the MpHandOff pointer for that does not work anymore
because the variable will be NULL at the end of HOB loops.
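A minimal sketch of the resulting caller pattern, assuming the
GetNextMpHandOffHob() accessor and the MP_HAND_OFF structure declared in
the library's MpLib.h; the per-chunk processing below is a hypothetical
placeholder:

  VOID
  WalkMpHandOffHobsExample (
    VOID
    )
  {
    MP_HAND_OFF  *FirstMpHandOff;
    MP_HAND_OFF  *MpHandOff;

    //
    // Cache the first HOB body; a non-NULL value also tells us that
    // MP_HAND_OFF HOBs are present at all.
    //
    FirstMpHandOff = GetNextMpHandOffHob (NULL);
    if (FirstMpHandOff == NULL) {
      return;
    }

    //
    // Walk every MP_HAND_OFF chunk instead of assuming that one HOB
    // covers all CPUs in the system.
    //
    for (MpHandOff = FirstMpHandOff;
         MpHandOff != NULL;
         MpHandOff = GetNextMpHandOffHob (MpHandOff))
    {
      ProcessMpHandOffChunkExample (MpHandOff);   // hypothetical helper
    }
  }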
Signed-off-by: Gerd Hoffmann <kraxel@redhat.com>
Reviewed-by: Ray Ni <ray.ni@intel.com>
Message-Id: <20240222160106.686484-5-kraxel@redhat.com>
Reviewed-by: Laszlo Ersek <lersek@redhat.com>
Rename the MpHandOff parameter to FirstMpHandOff. Add loops so the
function inspects all HOBs present in the system.
Signed-off-by: Gerd Hoffmann <kraxel@redhat.com>
Reviewed-by: Ray Ni <ray.ni@intel.com>
Reviewed-by: Laszlo Ersek <lersek@redhat.com>
Message-Id: <20240222160106.686484-4-kraxel@redhat.com>
Rename the MpHandOff parameter to FirstMpHandOff. Add a loop so the
function inspects all HOBs present in the system.
Signed-off-by: Gerd Hoffmann <kraxel@redhat.com>
Reviewed-by: Ray Ni <ray.ni@intel.com>
Reviewed-by: Laszlo Ersek <lersek@redhat.com>
Message-Id: <20240222160106.686484-3-kraxel@redhat.com>
Rename the function to GetNextMpHandOffHob() and add an MP_HAND_OFF
parameter. When called with a NULL pointer, return the body of the first
HOB; otherwise, return the next one in the chain.
Also add the function prototype to the MpLib.h header file.
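A minimal sketch of the described behavior, not the verbatim patch; the
MP_HAND_OFF structure is assumed from MpLib.h and the GUID variable below
is a hypothetical stand-in:

  #include <PiPei.h>
  #include <Library/HobLib.h>

  extern EFI_GUID  mMpHandOffGuidExample;   // hypothetical GUID variable

  MP_HAND_OFF *
  GetNextMpHandOffHobExample (
    IN MP_HAND_OFF  *MpHandOff
    )
  {
    EFI_PEI_HOB_POINTERS  Hob;

    if (MpHandOff == NULL) {
      //
      // NULL input: return the body of the first MP_HAND_OFF GUID HOB.
      //
      Hob.Raw = (UINT8 *)GetFirstGuidHob (&mMpHandOffGuidExample);
    } else {
      //
      // Otherwise, step back to the GUID HOB header that wraps this body,
      // skip past that HOB, and search for the next one in the chain.
      //
      Hob.Raw = (UINT8 *)MpHandOff - sizeof (EFI_HOB_GUID_TYPE);
      Hob.Raw = (UINT8 *)GetNextGuidHob (&mMpHandOffGuidExample, GET_NEXT_HOB (Hob));
    }

    if (Hob.Raw == NULL) {
      return NULL;
    }

    return (MP_HAND_OFF *)GET_GUID_HOB_DATA (Hob.Guid);
  }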
Signed-off-by: Gerd Hoffmann <kraxel@redhat.com>
Message-Id: <20240222160106.686484-2-kraxel@redhat.com>
Reviewed-by: Ray Ni <ray.ni@intel.com>
Reviewed-by: Laszlo Ersek <lersek@redhat.com>
REF: https://bugzilla.tianocore.org/show_bug.cgi?id=4614
Regarding IsModified, the current function does not consider that
hardware may also change the page table. The issue is that on the first
call of the internal function PageTableLibMapInLevel, the function
assumes the page table has not changed and adds an ASSERT to check this.
But hardware may change the page table, which causes the ASSERT to
trigger.
Fix the issue by adding an additional condition so that the page table
is only checked for changes when the software wants to modify it.
Also, add more comments to explain this behavior.
Reviewed-by: Ray Ni <ray.ni@intel.com>
Reviewed-by: Laszlo Ersek <lersek@redhat.com>
Cc: Rahul Kumar <rahul1.kumar@intel.com>
Cc: Gerd Hoffmann <kraxel@redhat.com>
Cc: Crystal Lee <CrystalLee@ami.com.tw>
Cc: Pedro Falcato <pedro.falcato@gmail.com>
Signed-off-by: Zhiguang Liu <zhiguang.liu@intel.com>
The purpose of writing CR3 in ConvertMemoryPageToNotPresent is just
to flush the TLB, because CR3 is not changed by
ConvertMemoryPageToNotPresent.
After ConvertMemoryPageToNotPresent, a TLB flush is always performed.
Also, because ConvertMemoryPageToNotPresent is called in a loop, there
is no need to flush the TLB inside ConvertMemoryPageToNotPresent; to
improve performance, flushing the TLB once after the loop is enough.
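A minimal sketch of the resulting pattern; the one-argument signature of
ConvertMemoryPageToNotPresent() is simplified here purely for illustration:

  #include <Uefi/UefiBaseType.h>
  #include <Library/CpuLib.h>

  VOID
  ConvertMemoryPageToNotPresent (
    IN UINT64  Address          // simplified, illustrative prototype
    );

  VOID
  MarkPagesNotPresentExample (
    IN UINT64  BaseAddress,
    IN UINTN   PageCount
    )
  {
    UINTN  Index;

    for (Index = 0; Index < PageCount; Index++) {
      //
      // No CR3 write (and thus no TLB flush) happens per page anymore.
      //
      ConvertMemoryPageToNotPresent (BaseAddress + (Index * SIZE_4KB));
    }

    //
    // CR3 never changed above, so one TLB flush after the loop is enough.
    //
    CpuFlushTlb ();
  }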
Reviewed-by: Ray Ni <ray.ni@intel.com>
Reviewed-by: Laszlo Ersek <lersek@redhat.com>
Cc: Rahul Kumar <rahul1.kumar@intel.com>
Cc: Gerd Hoffmann <kraxel@redhat.com>
Signed-off-by: Zhiguang Liu <zhiguang.liu@intel.com>
PageTableMap() only modifies the PageTable root pointer when creating the
page table from scratch. Explicitly explain this in the function header.
Reviewed-by: Ray Ni <ray.ni@intel.com>
Reviewed-by: Laszlo Ersek <lersek@redhat.com>
Cc: Rahul Kumar <rahul1.kumar@intel.com>
Cc: Gerd Hoffmann <kraxel@redhat.com>
Signed-off-by: Zhiguang Liu <zhiguang.liu@intel.com>
This patch checks BspIndex first, before the lock cmpxchg operation.
If BspIndex has not been set yet, then do the lock cmpxchg; otherwise,
the APs don't need to lock-cmpxchg the BspIndex value since the BSP
election has already been done. This optimization lowers the resource
contention caused by the atomic compare-exchange operation, so as to
improve the SMI performance for BSP election.
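A minimal sketch of the fast path (the module-global and function names are
hypothetical; MAX_UINT32 means "no BSP elected yet"):

  #include <Uefi/UefiBaseType.h>
  #include <Library/SynchronizationLib.h>

  volatile UINT32  mBspIndexExample = MAX_UINT32;

  VOID
  ElectBspExample (
    IN UINT32  CpuIndex
    )
  {
    //
    // Only attempt the atomic compare-exchange while no BSP has been
    // elected; once the index is set, APs skip the contended LOCK CMPXCHG.
    //
    if (mBspIndexExample == MAX_UINT32) {
      InterlockedCompareExchange32 (
        &mBspIndexExample,
        MAX_UINT32,    // expected value: still unclaimed
        CpuIndex       // this processor claims the BSP role
        );
    }
  }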
Cc: Ray Ni <ray.ni@intel.com>
Cc: Laszlo Ersek <lersek@redhat.com>
Cc: Eric Dong <eric.dong@intel.com>
Cc: Zeng Star <star.zeng@intel.com>
Cc: Gerd Hoffmann <kraxel@redhat.com>
Cc: Rahul Kumar <rahul1.kumar@intel.com>
Cc: Kinney Michael D <michael.d.kinney@intel.com>
Signed-off-by: Jiaxin Wu <jiaxin.wu@intel.com>
Reviewed-by: Ray Ni <ray.ni@intel.com>
Use MAX_UINT32 directly instead of typecasting from signed
to unsigned value.
Cc: Ray Ni <ray.ni@intel.com>
Cc: Laszlo Ersek <lersek@redhat.com>
Cc: Eric Dong <eric.dong@intel.com>
Cc: Zeng Star <star.zeng@intel.com>
Cc: Gerd Hoffmann <kraxel@redhat.com>
Cc: Rahul Kumar <rahul1.kumar@intel.com>
Cc: Kinney Michael D <michael.d.kinney@intel.com>
Signed-off-by: Jiaxin Wu <jiaxin.wu@intel.com>
Reviewed-by: Ray Ni <ray.ni@intel.com>
Reviewed-by: Laszlo Ersek <lersek@redhat.com>
Ref: https://bugzilla.tianocore.org/show_bug.cgi?id=4682
Fixes: 725acd0b9c
Before commit 725acd0b9c ("UefiCpuPkg: Avoid assuming only one
smmbasehob", 2023-12-12), PiCpuSmmEntry() used to look up
"gSmmBaseHobGuid", and allocate "mCpuHotPlugData.SmBase" regardless of the
GUID's presence:
> - mCpuHotPlugData.SmBase = (UINTN *)AllocatePool (sizeof (UINTN) * mMaxNumberOfCpus);
> - ASSERT (mCpuHotPlugData.SmBase != NULL);
After commit 725acd0b9c, PiCpuSmmEntry() -> GetSmBase() would allocate
"mCpuHotPlugData.SmBase" only on the success path, and no allocation would
be performed on *any* of the error paths.
This caused a problem: if "mCpuHotPlugData.SmBase" was left NULL because
the GUID HOB was missing, PiCpuSmmEntry() would still be supposed to
allocate "mCpuHotPlugData.SmBase", just like earlier. However, because
commit 725acd0b9c conflated the two possible error modes (out of SMRAM,
and no GUID HOB), PiCpuSmmEntry() could not decide whether it should
allocate "mCpuHotPlugData.SmBase", or not. Currently, we never allocate if
GetSmBase() fails -- for any reason --, which means that on platforms that
don't produce the GUID HOB, "mCpuHotPlugData.SmBase" is left NULL, leading
to null pointer dereferences later, in PiCpuSmmEntry().
Now that a prior patch in the series distinguishes the two error modes
from each other, we can tell exactly when the GUID HOB is not found, and
reinstate the earlier "mCpuHotPlugData.SmBase" allocation for that case.
(With an actual error check thrown in, in addition to the original
"assertion".)
Cc: Dun Tan <dun.tan@intel.com>
Cc: Gerd Hoffmann <kraxel@redhat.com>
Cc: Rahul Kumar <rahul1.kumar@intel.com>
Cc: Ray Ni <ray.ni@intel.com>
Reported-by: Gerd Hoffmann <kraxel@redhat.com>
Signed-off-by: Laszlo Ersek <lersek@redhat.com>
Reviewed-by: Michael D Kinney <michael.d.kinney@intel.com>
Reviewed-by: Leif Lindholm <quic_llindhol@quicinc.com>
Reviewed-by: Rahul Kumar <rahul1.kumar@intel.com>
Reviewed-by: Gerd Hoffmann <kraxel@redhat.com>
Tested-by: Gerd Hoffmann <kraxel@redhat.com>
Ref: https://bugzilla.tianocore.org/show_bug.cgi?id=4682
Commit 725acd0b9c ("UefiCpuPkg: Avoid assuming only one smmbasehob",
2023-12-12) introduced a helper function called GetSmBase(), replacing
the lookup of the first and only "gSmmBaseHobGuid" GUID HOB and
unconditional "mCpuHotPlugData.SmBase" allocation, with iterated lookups
plus conditional memory allocation.
This introduced a new failure mode for setting "mCpuHotPlugData.SmBase".
Namely, before commit 725acd0b9c, "mCpuHotPlugData.SmBase" would be
allocated regardless of the GUID HOB being absent. After the commit,
"mCpuHotPlugData.SmBase" could remain NULL if the GUID HOB was absent,
*or* one of the memory allocations inside GetSmBase() failed; and in the
former case, we'd even proceed to the rest of PiCpuSmmEntry().
In relation to this conflation of distinct failure modes, commit
725acd0b9c actually introduced a NULL pointer dereference. Namely, a
NULL "mCpuHotPlugData.SmBase" is not handled properly at all now. We're
going to fix that NULL pointer dereference in a subsequent patch; however,
as a pre-requisite for that we need to tell apart the failure modes of
GetSmBase().
For memory allocation failures, return EFI_OUT_OF_RESOURCES. Move the
"assertion" that SMRAM cannot be exhausted happen out to the caller
(PiCpuSmmEntry()). Strengthen the assertion by adding an explicit
CpuDeadLoop() call. (Note: GetSmBase() *already* calls CpuDeadLoop() if
(NumberOfProcessors != MaxNumberOfCpus).)
For the absence of the GUID HOB, return EFI_NOT_FOUND.
For good measure, make GetSmBase() STATIC (it should have been STATIC from
the start).
This is just a refactoring, no behavioral difference is intended (beyond
the explicit CpuDeadLoop() upon SMRAM exhaustion).
Cc: Dun Tan <dun.tan@intel.com>
Cc: Gerd Hoffmann <kraxel@redhat.com>
Cc: Rahul Kumar <rahul1.kumar@intel.com>
Cc: Ray Ni <ray.ni@intel.com>
Signed-off-by: Laszlo Ersek <lersek@redhat.com>
Reviewed-by: Michael D Kinney <michael.d.kinney@intel.com>
Reviewed-by: Leif Lindholm <quic_llindhol@quicinc.com>
Reviewed-by: Rahul Kumar <rahul1.kumar@intel.com>
Reviewed-by: Gerd Hoffmann <kraxel@redhat.com>
Tested-by: Gerd Hoffmann <kraxel@redhat.com>
CpuIo2Dxe only supports port I/O to access CPU I/O. Some architectures
that do not implement CPU I/O ports require MMIO to access PCI I/O, and
they typically put the I/O devices under the LPC bus, which is usually
under the PCIe/PCI bus. CpuMmio2Dxe was added to meet these needs.
CpuMmio2Dxe depends on PcdPciIoTranslation. The code is copied from
ArmPkg.
BZ: https://bugzilla.tianocore.org/show_bug.cgi?id=4584
Cc: Ray Ni <ray.ni@intel.com>
Cc: Laszlo Ersek <lersek@redhat.com>
Cc: Rahul Kumar <rahul1.kumar@intel.com>
Cc: Gerd Hoffmann <kraxel@redhat.com>
Cc: Leif Lindholm <quic_llindhol@quicinc.com>
Cc: Ard Biesheuvel <ardb+tianocore@kernel.org>
Cc: Sami Mujawar <sami.mujawar@arm.com>
Signed-off-by: Chao Li <lichao@loongson.cn>
Reviewed-by: Ray Ni <ray.ni@intel.com>
This patch maps SMRAM in 4K page granularity during SMM page table
initialization (SmmInitPageTable), so as to avoid changing the SMRAM
paging-structure layout when an SMI happens (PerformRemainingTasks).
The reason is to avoid the impact of paging-structure changes on the
multiple processors. Refer to SDM sections 4.10.4 and 4.10.5.
Currently, the SMM BSP needs to update the paging attributes of the
SMRAM range in the SMM page table according to the
SmmMemoryAttributesTable when SMM ready-to-lock happens. If the SMRAM
range is not 4K-mapped in the page table, the page table update process
may split 1G/2M paging entries into 4K ones. Meanwhile, all APs are
still running in SMI and might access the affected linear-address range
between the time of modification and the time of invalidation. That is
a potential problem that can lead to an exception.
Signed-off-by: Dun Tan <dun.tan@intel.com>
Reviewed-by: Ray Ni <ray.ni@intel.com>
Reviewed-by: Jiaxin Wu <jiaxin.wu@intel.com>
Cc: Laszlo Ersek <lersek@redhat.com>
Cc: Rahul Kumar <rahul1.kumar@intel.com>
Cc: Gerd Hoffmann <kraxel@redhat.com>
Add more paging mode enumerations in CpuPageTableLib to support
forcibly mapping a range in 4K page granularity.
Signed-off-by: Dun Tan <dun.tan@intel.com>
Reviewed-by: Ray Ni <ray.ni@intel.com>
Reviewed-by: Jiaxin Wu <jiaxin.wu@intel.com>
Cc: Laszlo Ersek <lersek@redhat.com>
Cc: Rahul Kumar <rahul1.kumar@intel.com>
Cc: Gerd Hoffmann <kraxel@redhat.com>
This commit reduces and optimizes accesses to page table attributes in
CpuPageTableLib. Unreasonable writes to page table attributes may lead
to exceptions.
The assembly code for the C code Pnle->Bits.Present =
Attribute->Bits.Present looks like:
  and dword [rcx], 0xfffffffe
  and eax, 0x1
  or [rcx], eax
In case Pnle->Bits.Present and Attribute->Bits.Present are both 1,
Pnle->Bits.Present will be set to 0 for a short time (2 instructions),
which is unexpected. If some other core is accessing the page at that
moment, it may lead to an exception.
This change reduces and optimizes accesses to the page table
attributes: an attribute of the page table is set only when it needs to
be changed.
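A minimal sketch of the "write only when it changes" pattern, shown on a
raw 64-bit entry rather than the library's bit-field types (the helper name
is hypothetical):

  #include <Uefi/UefiBaseType.h>

  VOID
  SetPresentBitIfNeededExample (
    IN OUT volatile UINT64  *Entry,
    IN BOOLEAN              Present
    )
  {
    //
    // Skip the read-modify-write entirely when the Present bit already has
    // the requested value, so other cores never observe a transient clear.
    //
    if ((BOOLEAN)((*Entry & BIT0) != 0) != Present) {
      if (Present) {
        *Entry |= BIT0;
      } else {
        *Entry &= ~(UINT64)BIT0;
      }
    }
  }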
Signed-off-by: Zhou Jianfeng <jianfeng.zhou@intel.com>
Reviewed-by: Ray Ni <ray.ni@intel.com>
Reviewed-by: Jiaxin Wu <jiaxin.wu@intel.com>
Cc: Laszlo Ersek <lersek@redhat.com>
Cc: Rahul Kumar <rahul1.kumar@intel.com>
Cc: Gerd Hoffmann <kraxel@redhat.com>
With CMO operations available on RISC-V, utilize them in the CPU
Architecture protocol.
Signed-off-by: Dhaval Sharma <dhaval@rivosinc.com>
Cc: Gerd Hoffmann <kraxel@redhat.com>
Cc: Laszlo Ersek <lersek@redhat.com>
Cc: Rahul Kumar <rahul1.kumar@intel.com>
Cc: Ray Ni <ray.ni@intel.com>
Cc: Sunil VL <sunilvl@ventanamicro.com>
Cc: Andrei Warkentin <andrei.warkentin@intel.com>
Reviewed-by: Sunil V L <sunilvl@ventanamicro.com>
This patch adds support for AMD's new extended topology.
If the processor supports the CPUID 80000026 leaf, then obtain the
topology information using the new method.
Algorithm:
  if the CPU vendor is AMD:
    check for AMD's extended CPU topology leaf.
    if present:
      extract the CPU topology as described in the AMD programmer's
      manual.
    else:
      fall back to the existing topology function.
    endif
  endif
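A minimal sketch of the detection step only (the leaf constant is named
locally for illustration; the caller is assumed to have already established
that the CPU vendor is AMD):

  #include <Base.h>
  #include <Library/BaseLib.h>

  #define AMD_EXT_TOPOLOGY_LEAF_EXAMPLE  0x80000026

  BOOLEAN
  HasAmdExtendedTopologyExample (
    VOID
    )
  {
    UINT32  MaxExtendedLeaf;

    //
    // CPUID 80000000h reports the highest supported extended leaf; the new
    // topology leaf can only be used when it is within that range.
    //
    AsmCpuid (0x80000000, &MaxExtendedLeaf, NULL, NULL, NULL);
    return (BOOLEAN)(MaxExtendedLeaf >= AMD_EXT_TOPOLOGY_LEAF_EXAMPLE);
  }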
Cc: Michael D Kinney <michael.d.kinney@intel.com>
Cc: Liming Gao <gaoliming@byosoft.com.cn>
Cc: Zhiguang Liu <zhiguang.liu@intel.com>
Cc: Ray Ni <ray.ni@intel.com>
Cc: Rahul Kumar <rahul1.kumar@intel.com>
Cc: Gerd Hoffmann <kraxel@redhat.com>
Signed-off-by: Abdul Lateef Attar <AbdulLateef.Attar@amd.com>
Message-Id: <d93822d37fd25dafd32795758cf47263b432e102.1705549445.git.AbdulLateef.Attar@amd.com>
Acked-by: Ray Ni <ray.ni@intel.com>
Acked-by: Tom Lendacky <thomas.lendacky@amd.com>
Change the name of gMpInformationHobGuid2 to gMpInformation2HobGuid,
to align with the file name MpInformation2.h and the structure name
MP_INFORMATION2_HOB_DATA.
Signed-off-by: Dun Tan <dun.tan@intel.com>
Reviewed-by: Ray Ni <ray.ni@intel.com>
Reviewed-by: Laszlo Ersek <lersek@redhat.com>
Cc: Rahul Kumar <rahul1.kumar@intel.com>
Cc: Gerd Hoffmann <kraxel@redhat.com>
When creating the SMM page table, limit the maximum supported physical
address bits returned by CalculateMaximumSupportAddress() to 47 if
5-level paging is disabled.
This commit avoids the issue that physical addresses above 47 bits are
requested in the SMM page table when 5-level paging is disabled.
4-level paging supports translating 48-bit linear addresses to 52-bit
physical addresses. Since linear addresses are sign-extended, the
linear-address space of 4-level paging is:
[0, 2^47-1] and
[0xffff8000_00000000, 0xffffffff_ffffffff].
So only the [0, 2^47-1] linear-address range maps to the identical
physical-address range when 5-level paging is disabled.
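A minimal sketch of the clamp (hypothetical helper name):

  #include <Uefi/UefiBaseType.h>

  UINT8
  ClampSmmPhysicalAddressBitsExample (
    IN UINT8    PhysicalAddressBits,
    IN BOOLEAN  Enable5LevelPaging
    )
  {
    if (!Enable5LevelPaging && (PhysicalAddressBits > 47)) {
      //
      // With 4-level paging only the lower canonical half, [0, 2^47-1],
      // identity-maps onto the physical address space.
      //
      return 47;
    }

    return PhysicalAddressBits;
  }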
Signed-off-by: Dun Tan <dun.tan@intel.com>
Reviewed-by: Ray Ni <ray.ni@intel.com>
Reviewed-by: Gerd Hoffmann <kraxel@redhat.com>
Cc: Laszlo Ersek <lersek@redhat.com>
Cc: Rahul Kumar <rahul1.kumar@intel.com>
PatchSmmSaveStateMap patches the SMM entry (code) and SmmSaveState
region (data) for each core, which can be improved to flush TLB once
after all the memory entries have been patched.
FlushTlbForAll flushes TLB for each core in serial, which can be
improved to flush TLB in parallel.
Reviewed-by: Ray Ni <ray.ni@intel.com>
Cc: Laszlo Ersek <lersek@redhat.com>
Cc: Rahul Kumar <rahul1.kumar@intel.com>
Cc: Gerd Hoffmann <kraxel@redhat.com>
Cc: Jiaxin Wu <jiaxin.wu@intel.com>
Signed-off-by: Zhi Jin <zhi.jin@intel.com>
The Sstc extension allows programming the timer and receiving the
interrupt without using an SBI call. This reduces the latency of
generating the timer interrupt. So, detect whether the Sstc extension
is supported and use the stimecmp register directly to program the
timer interrupt.
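A minimal sketch of the programming decision; the CSR-write and SBI helpers
below are hypothetical stand-ins for the platform's primitives, and 0x14D is
the stimecmp CSR number defined by the Sstc extension:

  #include <Base.h>

  #define STIMECMP_CSR_EXAMPLE  0x14D

  VOID RiscVCsrWriteExample (IN UINT32 CsrNumber, IN UINT64 Value);   // hypothetical
  VOID SbiSetTimerExample (IN UINT64 CompareValue);                   // hypothetical

  VOID
  ProgramTimerCompareExample (
    IN UINT64   CompareValue,
    IN BOOLEAN  SstcSupported
    )
  {
    if (SstcSupported) {
      //
      // Sstc: write the compare value straight into stimecmp, no SBI call.
      //
      RiscVCsrWriteExample (STIMECMP_CSR_EXAMPLE, CompareValue);
    } else {
      //
      // Fallback: ask the SBI implementation to program the timer.
      //
      SbiSetTimerExample (CompareValue);
    }
  }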
Cc: Gerd Hoffmann <kraxel@redhat.com>
Cc: Rahul Kumar <rahul1.kumar@intel.com>
Cc: Laszlo Ersek <lersek@redhat.com>
Cc: Ray Ni <ray.ni@intel.com>
Cc: Andrei Warkentin <andrei.warkentin@intel.com>
Signed-off-by: Sunil V L <sunilvl@ventanamicro.com>
Reviewed-by: Laszlo Ersek <lersek@redhat.com>
Reviewed-by: Andrei Warkentin <andrei.warkentin@intel.com>
Reviewed-by: Dhaval Sharma <dhaval@rivosinc.com>
Check the lower 24 bits of ProcessorNumber instead of the full value
of ProcessorNumber in the MpInitLibGetProcessorInfo() API of the
MpInitLibUp instance.
The lower 24 bits of ProcessorNumber contain the actual processor
number. BIT24 of the input ProcessorNumber might be set to indicate
that the EXTENDED_PROCESSOR_INFORMATION should be retrieved.
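A minimal sketch of the check (status values and helper name are
illustrative only):

  #include <Uefi.h>

  EFI_STATUS
  CheckUpProcessorNumberExample (
    IN UINTN  ProcessorNumber
    )
  {
    //
    // The low 24 bits carry the processor index; only index 0 exists in the
    // UP library. BIT24 merely requests EXTENDED_PROCESSOR_INFORMATION and
    // must not make the lookup fail.
    //
    if ((ProcessorNumber & 0xFFFFFF) != 0) {
      return EFI_NOT_FOUND;
    }

    return EFI_SUCCESS;
  }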
Signed-off-by: Dun Tan <dun.tan@intel.com>
Cc: Ray Ni <ray.ni@intel.com>
Cc: Laszlo Ersek <lersek@redhat.com>
Cc: Rahul Kumar <rahul1.kumar@intel.com>
Cc: Gerd Hoffmann <kraxel@redhat.com>
Cc: Min Xu <min.m.xu@intel.com>
Message-Id: <20240108050804.1718-3-dun.tan@intel.com>
Reviewed-by: Laszlo Ersek <lersek@redhat.com>
Reviewed-by: Ray Ni <ray.ni@intel.com>
Set EXTENDED_PROCESSOR_INFORMATION to 0 in the
MpInitLibGetProcessorInfo() API of MpInitLibUp. This commit uses
ZeroMem() to set all fields of the output EFI_PROCESSOR_INFORMATION to
0 before the StatusFlag field is reassigned.
Previously, EXTENDED_PROCESSOR_INFORMATION was ignored in the
MpInitLibGetProcessorInfo() API of MpInitLibUp. In the PEI/DXE
MpInitLib, EXTENDED_PROCESSOR_INFORMATION is retrieved when BIT24 of
the input ProcessorNumber is set. This commit avoids garbage in the
output structure of MpInitLibGetProcessorInfo() in MpInitLibUp.
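A minimal sketch of the resulting behavior (the helper name and exact flag
values are illustrative):

  #include <Uefi.h>
  #include <Library/BaseMemoryLib.h>
  #include <Protocol/MpService.h>

  VOID
  FillUpProcessorInfoExample (
    OUT EFI_PROCESSOR_INFORMATION  *ProcessorInfoBuffer
    )
  {
    //
    // Clear the whole structure first, including the embedded
    // EXTENDED_PROCESSOR_INFORMATION, so no field is left as garbage.
    //
    ZeroMem (ProcessorInfoBuffer, sizeof (*ProcessorInfoBuffer));

    //
    // Then fill in what applies to the single (BSP) processor.
    //
    ProcessorInfoBuffer->StatusFlag = PROCESSOR_AS_BSP_BIT |
                                      PROCESSOR_ENABLED_BIT |
                                      PROCESSOR_HEALTH_STATUS_BIT;
  }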
Signed-off-by: Dun Tan <dun.tan@intel.com>
Cc: Ray Ni <ray.ni@intel.com>
Cc: Laszlo Ersek <lersek@redhat.com>
Cc: Rahul Kumar <rahul1.kumar@intel.com>
Cc: Gerd Hoffmann <kraxel@redhat.com>
Cc: Min Xu <min.m.xu@intel.com>
Message-Id: <20240108050804.1718-2-dun.tan@intel.com>
Reviewed-by: Laszlo Ersek <lersek@redhat.com>
Reviewed-by: Ray Ni <ray.ni@intel.com>
Run the function GetStackBase in parallel on all APs for better
performance.
Reviewed-by: Ray Ni <ray.ni@intel.com>
Cc: Laszlo Ersek <lersek@redhat.com>
Cc: Rahul Kumar <rahul1.kumar@intel.com>
Cc: Gerd Hoffmann <kraxel@redhat.com>
Cc: Star Zeng <star.zeng@intel.com>
Cc: Daoxiang Li <daoxiang.li@intel.com>
Signed-off-by: Zhiguang Liu <zhiguang.liu@intel.com>
After the BSP returns from SmmCoreEntry, there are several rounds of BSP
and AP synchronization in the BSP handler:
1. ReleaseAllAPs();         /// Notify all APs to exit.
if (SmmCpuFeaturesNeedConfigureMtrrs()) {
2. SmmCpuSyncWaitForAPs();  /// Wait for all APs to program MTRRs.
3. ReleaseAllAPs();         /// Signal APs to restore MTRRs.
}
4. SmmCpuSyncWaitForAPs();  /// Wait for all APs to complete pending
                            /// tasks, including MTRRs.
5. ReleaseAllAPs();         /// Signal APs to reset states.
6. SmmCpuSyncWaitForAPs();  /// Gather APs to exit SMM synchronously.
After step 5 and before step 6, the BSP performs the items below:
A. InitializeDebugAgent()   /// Stop source-level debug.
B. SmmCpuUpdate()           /// Perform pending operations for hot-plug.
C. Present = FALSE;         /// Clear the Present flag of the BSP.
For InitializeDebugAgent(), the BSP needs to wait for all APs to
complete their pending tasks and then notify all APs to stop
source-level debug. So, steps 4 and 5 above are required for
InitializeDebugAgent().
For SmmCpuUpdate(), it performs the pending operations so that
hot-plugged CPUs take effect in the next SMI. Existing APs in the SMI
do not rely on the CPU switch, hot-add, or hot-remove operations, so
steps 4 and 5 are not needed as an additional round of BSP and AP sync
for this. Step 6 makes sure all APs are ready to exit SMM, and then the
hot-plug operation can take effect in the next SMI.
For the BSP "Present" flag, the APs do not rely on it, so no additional
round of BSP and AP sync (steps 4 and 5) is needed for it either.
Based on the above analysis, steps 4 and 5 are only required when MTRRs
need to be configured or SMM source-level debug is supported. So, we
can reduce one round of BSP and AP sync when neither is the case. With
this change, SMI performance can be improved.
Cc: Laszlo Ersek <lersek@redhat.com>
Cc: Eric Dong <eric.dong@intel.com>
Cc: Ray Ni <ray.ni@intel.com>
Cc: Zeng Star <star.zeng@intel.com>
Cc: Gerd Hoffmann <kraxel@redhat.com>
Cc: Rahul Kumar <rahul1.kumar@intel.com>
Signed-off-by: Jiaxin Wu <jiaxin.wu@intel.com>
Reviewed-by: Ray Ni <ray.ni@Intel.com>