Extern mSmmShadowStackSize in PiSmmCpuDxeSmm.h and remove the
extern declarations of mSmmShadowStackSize from the C files to
simplify the code.
Signed-off-by: Dun Tan <dun.tan@intel.com>
Cc: Eric Dong <eric.dong@intel.com>
Reviewed-by: Ray Ni <ray.ni@intel.com>
Cc: Rahul Kumar <rahul1.kumar@intel.com>
Cc: Gerd Hoffmann <kraxel@redhat.com>
Clear CR0.WP before modifying the SMM page table. Currently, there is
an assumption that the SMM page table is always RW before ReadyToLock.
However, when AMD SEV is enabled, the FvbServicesSmm driver calls
MemEncryptSevClearMmioPageEncMask to clear the AddressEncMask bit
in the SMM page table for this range:
[PcdOvmfFdBaseAddress, PcdOvmfFdBaseAddress + PcdOvmfFirmwareFdSize]
If a page split happens in this process, new memory for the SMM page
table is allocated. The newly allocated page table memory is then
marked as RO in the SMM page table by this FvbServicesSmm driver,
which may lead to a page fault at ReadyToLock if the SMM code doesn't
clear CR0.WP before modifying the SMM page table.
Signed-off-by: Dun Tan <dun.tan@intel.com>
Cc: Eric Dong <eric.dong@intel.com>
Reviewed-by: Ray Ni <ray.ni@intel.com>
Cc: Rahul Kumar <rahul1.kumar@intel.com>
Cc: Gerd Hoffmann <kraxel@redhat.com>
Add two functions to disable/enable CR0.WP. These two functions
will also be used in later commits. This commit doesn't change any
functionality.
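Below is a minimal sketch of what such helpers can look like, built on
BaseLib's AsmReadCr0()/AsmWriteCr0(); the function names here are
illustrative and may not match the actual PiSmmCpuDxeSmm helpers:

  #define CR0_WP  BIT16

  //
  // Clear CR0.WP so read-only pages (e.g. the SMM page table itself)
  // can be written. Returns the previous WP state for the caller.
  //
  BOOLEAN
  DisableCr0Wp (
    VOID
    )
  {
    UINTN  Cr0;

    Cr0 = AsmReadCr0 ();
    if ((Cr0 & CR0_WP) == 0) {
      return FALSE;
    }
    AsmWriteCr0 (Cr0 & ~(UINTN)CR0_WP);
    return TRUE;
  }

  //
  // Restore CR0.WP if it was set before DisableCr0Wp().
  //
  VOID
  EnableCr0Wp (
    IN BOOLEAN  WpWasEnabled
    )
  {
    if (WpWasEnabled) {
      AsmWriteCr0 (AsmReadCr0 () | CR0_WP);
    }
  }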
Signed-off-by: Dun Tan <dun.tan@intel.com>
Cc: Eric Dong <eric.dong@intel.com>
Reviewed-by: Ray Ni <ray.ni@intel.com>
Cc: Rahul Kumar <rahul1.kumar@intel.com>
Cc: Gerd Hoffmann <kraxel@redhat.com>
In the PiSmmCpuDxeSmm code, SetMemMapAttributes() marks memory ranges
in SmmMemoryAttributesTable as RO/NX. Non-present ranges may exist
within these memory ranges, and setting other attributes for a
non-present range is not permitted by CpuPageTableLib. So add code to
handle this case: only map the present ranges in
SmmMemoryAttributesTable as RO or NX.
Signed-off-by: Dun Tan <dun.tan@intel.com>
Cc: Eric Dong <eric.dong@intel.com>
Reviewed-by: Ray Ni <ray.ni@intel.com>
Cc: Rahul Kumar <rahul1.kumar@intel.com>
Cc: Gerd Hoffmann <kraxel@redhat.com>
In the ConvertMemoryPageAttributes() function, clearing RP for a
specific range [BaseAddress, BaseAddress + Length] means setting the
present bit to 1 and assigning default values to the other attributes
in the page table. The default attributes for the input range are NX
disabled and ReadOnly. If there is an existing present range inside
[BaseAddress, BaseAddress + Length] whose attributes are not NX
disabled or not ReadOnly, output a DEBUG message to indicate that the
NX and ReadOnly attributes of the existing present range are modified
by the function.
Signed-off-by: Dun Tan <dun.tan@intel.com>
Cc: Eric Dong <eric.dong@intel.com>
Reviewed-by: Ray Ni <ray.ni@intel.com>
Cc: Rahul Kumar <rahul1.kumar@intel.com>
Cc: Gerd Hoffmann <kraxel@redhat.com>
Simplify the ConvertMemoryPageAttributes API to convert paging
attributes with CpuPageTableLib. The new API calls PageTableMap()
to update the page attributes of a memory range. With the
PageTableMap() API in CpuPageTableLib, we can remove the complicated
page table manipulation code.
Signed-off-by: Dun Tan <dun.tan@intel.com>
Cc: Eric Dong <eric.dong@intel.com>
Reviewed-by: Ray Ni <ray.ni@intel.com>
Cc: Rahul Kumar <rahul1.kumar@intel.com>
Cc: Gerd Hoffmann <kraxel@redhat.com>
It's simpler for a platform to include the ResetVector source, and
keeping pre-built binaries adds the burden of updating them. This
patch removes the pre-built binaries and the script that builds the
pre-built binaries.
Signed-off-by: Ray Ni <ray.ni@intel.com>
Reviewed-by: Eric Dong <eric.dong@intel.com>
Cc: Rahul Kumar <rahul1.kumar@intel.com>
Acked-by: Gerd Hoffmann <kraxel@redhat.com>
The ResetVector assembly implementation puts "ALIGN 16" at the end
to guarantee that the final executable file size is a multiple of 16
bytes. The module uses a special GUID which guarantees it is placed at
the very end of a FV, which should also be the end of the FD.
All of these (file size is a multiple of 16 bytes, the module is
placed at the end of the FV, the FV is placed at the end of the FD)
guarantee that the "JMP xxx" instruction is at FFFF_FFF0h.
This patch updates the INF file and ReadMe.txt to add guidance on the
FDF ffs rule for the ResetVector.
Signed-off-by: Ray Ni <ray.ni@intel.com>
Reviewed-by: Eric Dong <eric.dong@intel.com>
Cc: Rahul Kumar <rahul1.kumar@intel.com>
Cc: Gerd Hoffmann <kraxel@redhat.com>
Acked-by: Gerd Hoffmann <kraxel@redhat.com>
Since ResetVector source module shares the same GUID as the binary
module, the binary INF file is just removed from DSC.
Signed-off-by: Ray Ni <ray.ni@intel.com>
Reviewed-by: Eric Dong <eric.dong@intel.com>
Cc: Rahul Kumar <rahul1.kumar@intel.com>
Acked-by: Gerd Hoffmann <kraxel@redhat.com>
When a platform has lots of CPU cores/threads, perf-logging on every
AP produces lots of records. Multiplied by the number of SMIs during
POST, the number of records grows even larger.
So, this patch adds a new PCD PcdSmmApPerfLogEnable (default TRUE)
to allow a platform to turn off perf-logging on APs.
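A rough sketch of how such a PCD gate can look inside an MP
perf-logging helper (the helper context and variable names here are
illustrative, not the actual SmmMpPerf.c API):

  //
  // Skip perf records for APs when the platform disabled AP perf-logging;
  // records for the BSP are always emitted.
  //
  if ((CpuIndex != mBspIndex) && !PcdGetBool (PcdSmmApPerfLogEnable)) {
    return;
  }
  // ... emit the perf record for this MP procedure ...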
Signed-off-by: Ray Ni <ray.ni@intel.com>
Cc: Eric Dong <eric.dong@intel.com>
Cc: Rahul Kumar <rahul1.kumar@intel.com>
Cc: Gerd Hoffmann <kraxel@redhat.com>
Reviewed-by: Jiaxin Wu <jiaxin.wu@intel.com>
Reviewed-by: Eric Dong <eric.dong@intel.com>
MP procedures are procedures that run on every CPU thread.
The EDKII perf infrastructure is not MP safe, so it does not support
being called from those MP procedures.
The patch adds SMM MP perf-logging support in SmmMpPerf.c.
The following procedures are perf-logged:
* SmmInitHandler
* SmmCpuFeaturesRendezvousEntry
* PlatformValidSmi
* SmmCpuFeaturesRendezvousExit
Signed-off-by: Ray Ni <ray.ni@intel.com>
Cc: Eric Dong <eric.dong@intel.com>
Cc: Rahul Kumar <rahul1.kumar@intel.com>
Cc: Gerd Hoffmann <kraxel@redhat.com>
Cc: Jiaxin Wu <jiaxin.wu@intel.com>
Reviewed-by: Jiaxin Wu <jiaxin.wu@intel.com>
Reviewed-by: Eric Dong <eric.dong@intel.com>
The timer compare register is 64-bit, so simplify the delay
function.
Cc: Andrei Warkentin <andrei.warkentin@intel.com>
Signed-off-by: Tuan Phan <tphan@ventanamicro.com>
Reviewed-by: Sunil V L <sunilvl@ventanamicro.com>
REF: https://bugzilla.tianocore.org/show_bug.cgi?id=4424
In Relaxed-AP Sync Mode, the BSP will not wait for all APs to arrive.
However, PerformRemainingTasks() needs to wait for all APs to arrive
before calling SetMemMapAttributes() and ConfigSmmCodeAccessCheck()
when mSmmReadyToLock is true. In SetMemMapAttributes(),
SmmSetMemoryAttributesEx() calls FlushTlbForAll(), which needs to
start up the APs, so all APs must have arrived. Like
SetMemMapAttributes(), ConfigSmmCodeAccessCheck() also starts up the
APs.
Cc: Eric Dong <eric.dong@intel.com>
Cc: Ray Ni <ray.ni@intel.com>
Signed-off-by: Zhihao Li <zhihao.li@intel.com>
Reviewed-by: Ray Ni <ray.ni@intel.com>
REF: https://bugzilla.tianocore.org/show_bug.cgi?id=4431
In AP relaxed mode, some SMI handlers should call
SmmWaitForApArrival() to let all APs arrive in SmmCpuRendezvous().
But in traditional mode, these SMI handlers don't need to call
SmmWaitForApArrival() again. So the CPU sync mode needs to be checked
before calling SmmWaitForApArrival().
Cc: Eric Dong <eric.dong@intel.com>
Cc: Ray Ni <ray.ni@intel.com>
Signed-off-by: Zhihao Li <zhihao.li@intel.com>
Reviewed-by: Ray Ni <ray.ni@intel.com>
Some security features depend on paging being enabled. So, this
patch enables paging if it is not already enabled (32-bit mode).
Cc: Eric Dong <eric.dong@intel.com>
Cc: Ray Ni <ray.ni@intel.com>
Cc: Zeng Star <star.zeng@intel.com>
Cc: Gerd Hoffmann <kraxel@redhat.com>
Cc: Rahul Kumar <rahul1.kumar@intel.com>
Signed-off-by: Jiaxin Wu <jiaxin.wu@intel.com>
Reviewed-by: Ray Ni <ray.ni@intel.com>
Background:
For arch X64, the system enables the page table in SPI to cover the
0-512GB range via the CR4.PAE, MSR.LME, CR0.PG and CR3 settings (see
the ResetVector code). The existing code doesn't cover higher address
access above 512GB before the memory-discovered callback. That is a
potential problem if the system accesses higher addresses after the
transition from temporary RAM to permanent RAM.
Solution:
This patch migrates the page table to permanent memory to map the
entire physical address space if CR0.PG is set during temporary RAM
Done.
Cc: Eric Dong <eric.dong@intel.com>
Cc: Ray Ni <ray.ni@intel.com>
Cc: Zeng Star <star.zeng@intel.com>
Cc: Gerd Hoffmann <kraxel@redhat.com>
Cc: Rahul Kumar <rahul1.kumar@intel.com>
Signed-off-by: Jiaxin Wu <jiaxin.wu@intel.com>
Reviewed-by: Ray Ni <ray.ni@intel.com>
Add a macro USE_5_LEVEL_PAGE_TABLE to determine whether to create a
5-level page table.
If the macro USE_5_LEVEL_PAGE_TABLE is defined, the PML5 table is
created at (4G-12K), while the PML4 table is at (4G-16K). At runtime,
if 5-level paging is supported, the PML5 table is used; otherwise,
the PML4 table is used.
If the macro USE_5_LEVEL_PAGE_TABLE is not defined, to save space,
the 5-level page table is not created, and the 4-level page table at
(4G-12K) is used.
Cc: Eric Dong <eric.dong@intel.com>
Reviewed-by: Ray Ni <ray.ni@intel.com>
Cc: Rahul Kumar <rahul1.kumar@intel.com>
Cc: Gerd Hoffmann <kraxel@redhat.com>
Cc: Debkumar De <debkumar.de@intel.com>
Cc: Catharine West <catharine.west@intel.com>
Signed-off-by: Zhiguang Liu <zhiguang.liu@intel.com>
In the ResetVector, if a page table is created, its highest address
is fixed, because the code layout after the page table is fixed (4K
for normal code, and another 4K that only contains the reset vector
code).
Today's implementation organizes the page table as follows if a 1G
page table is used:
4G-16K: PML4 page (PML4[0] points to 4G-12K)
4G-12K: PDP page
CR3 is set to 4G-16K
When a 2M page table is used, the layout is as follows:
4G-32K: PML4 page (PML4[0] points to 4G-28K)
4G-28K: PDP page (PDP entries point to PD pages)
4G-24K: PD page mapping 0-1G
4G-20K: PD page mapping 1-2G
4G-16K: PD page mapping 2-3G
4G-12K: PD page mapping 3-4G
CR3 is set to 4G-32K
CR3 doesn't point to a fixed location, which is a bit hard to debug
at runtime.
The new page table layout always puts PML4 at the highest address.
When a 1G page table is used, the layout is as follows:
4G-16K: PDP page
4G-12K: PML4 page (PML4[0] points to 4G-16K)
When a 2M page table is used, the layout is as follows:
4G-32K: PD page mapping 0-1G
4G-28K: PD page mapping 1-2G
4G-24K: PD page mapping 2-3G
4G-20K: PD page mapping 3-4G
4G-16K: PDP page (PDP entries point to PD pages)
4G-12K: PML4 page (PML4[0] points to 4G-16K)
CR3 is always set to 4G-12K
So, this patch improves debuggability by making sure the initial CR3
points to a fixed address (4G-12K).
Cc: Eric Dong <eric.dong@intel.com>
Reviewed-by: Ray Ni <ray.ni@intel.com>
Cc: Rahul Kumar <rahul1.kumar@intel.com>
Tested-by: Gerd Hoffmann <kraxel@redhat.com>
Acked-by: Gerd Hoffmann <kraxel@redhat.com>
Cc: Debkumar De <debkumar.de@intel.com>
Cc: Catharine West <catharine.west@intel.com>
Signed-off-by: Zhiguang Liu <zhiguang.liu@intel.com>
Combine PageTables1G.asm and PageTables2M.asm to reuse code.
Cc: Eric Dong <eric.dong@intel.com>
Reviewed-by: Ray Ni <ray.ni@intel.com>
Cc: Rahul Kumar <rahul1.kumar@intel.com>
Tested-by: Gerd Hoffmann <kraxel@redhat.com>
Acked-by: Gerd Hoffmann <kraxel@redhat.com>
Cc: Debkumar De <debkumar.de@intel.com>
Cc: Catharine West <catharine.west@intel.com>
Signed-off-by: Zhiguang Liu <zhiguang.liu@intel.com>
Currently, page table creation has many hard-coded values for the
offset to the start of the page table. To simplify it, add labels
such as Pml4, Pdp and Pd, so that we can remove many hard-coded
values.
Cc: Eric Dong <eric.dong@intel.com>
Reviewed-by: Ray Ni <ray.ni@intel.com>
Cc: Rahul Kumar <rahul1.kumar@intel.com>
Tested-by: Gerd Hoffmann <kraxel@redhat.com>
Acked-by: Gerd Hoffmann <kraxel@redhat.com>
Cc: Debkumar De <debkumar.de@intel.com>
Cc: Catharine West <catharine.west@intel.com>
Signed-off-by: Zhiguang Liu <zhiguang.liu@intel.com>
This patch only renames macros, with no code logic impacted.
There are two purposes for renaming the macros:
1. Align some macro names in PageTables1G.asm and PageTables2M.asm, so
that these two files can be easily combined later.
2. Some macro names such as PDP are not accurate, since 4-level page
entries also use this macro. PAGE_NLE (non-leaf entry) is better.
Cc: Eric Dong <eric.dong@intel.com>
Reviewed-by: Ray Ni <ray.ni@intel.com>
Cc: Rahul Kumar <rahul1.kumar@intel.com>
Tested-by: Gerd Hoffmann <kraxel@redhat.com>
Acked-by: Gerd Hoffmann <kraxel@redhat.com>
Cc: Debkumar De <debkumar.de@intel.com>
Cc: Catharine West <catharine.west@intel.com>
Signed-off-by: Zhiguang Liu <zhiguang.liu@intel.com>
Update the ProcTrace feature code to support collecting performance
data by generating CYC and TSC packets. Add a new dynamic PCD to
indicate whether performance collection is enabled. In the ProcTrace.c
code, if this new PCD is true, after checking CPUID, CYC and TSC
packets will be generated by setting the corresponding MSR bit fields
if supported.
Bugzilla: https://bugzilla.tianocore.org/show_bug.cgi?id=4423
Signed-off-by: Dun Tan <dun.tan@intel.com>
Cc: Eric Dong <eric.dong@intel.com>
Reviewed-by: Ray Ni <ray.ni@intel.com>
Cc: Rahul Kumar <rahul1.kumar@intel.com>
Cc: Gerd Hoffmann <kraxel@redhat.com>
Cc: Xiao X Chen <xiao.x.chen@intel.com>
Update the code to support enabling ProcTrace only on the BSP. Add a
new dynamic PCD to indicate whether ProcTrace is enabled only on the
BSP. In the ProcTrace.c code, if this new PCD is true, only allocate
the buffer and set CtrlReg.Bits.TraceEn to 1 for the BSP.
Bugzilla: https://bugzilla.tianocore.org/show_bug.cgi?id=4423
Signed-off-by: Dun Tan <dun.tan@intel.com>
Cc: Eric Dong <eric.dong@intel.com>
Reviewed-by: Ray Ni <ray.ni@intel.com>
Cc: Rahul Kumar <rahul1.kumar@intel.com>
Cc: Gerd Hoffmann <kraxel@redhat.com>
Cc: Xiao X Chen <xiao.x.chen@intel.com>
Because UefiCpuPkg/UefiCpuLib has been merged into MdePkg/CpuLib and
all modules have been updated to not depend on this library, remove it
completely.
Cc: Eric Dong <eric.dong@intel.com>
Cc: Ray Ni <ray.ni@intel.com>
Cc: Rahul Kumar <rahul1.kumar@intel.com>
Signed-off-by: Yu Pu <yu.pu@intel.com>
Reviewed-by: Ray Ni <ray.ni@intel.com>
Signed-off-by: Zhiguang Liu <zhiguang.liu@intel.com>
__FUNCTION__ is a pre-standard extension that gcc and Visual C++ among
others support, while __func__ was standardized in C99.
Since it's more standard, replace __FUNCTION__ with __func__ throughout
UefiCpuPkg.
Signed-off-by: Rebecca Cran <rebecca@bsdio.com>
Reviewed-by: Michael D Kinney <michael.d.kinney@intel.com>
Reviewed-by: Ard Biesheuvel <ardb@kernel.org>
Reviewed-by: Ray Ni <ray.ni@intel.com>
Reviewed-by: Sunil V L <sunilvl@ventanamicro.com>
The CPU exception handler library code was rewritten at some point to
populate the vector code templates with absolute references at runtime,
given that the XCODE linker does not permit absolute references in
executable code when creating PIE executables.
This is rather unfortunate, as this prevents us from using strict
permissions on the memory mappings, given that the .text section needs
to be writable at runtime for this arrangement to work.
So let's make this hack XCODE-only, by setting a preprocessor #define
from the command line when using the XCODE toolchain, and only including
the runtime fixup code when the macro is defined.
While at it, rename the Xcode5ExceptionHandlerAsm.nasm source file and
drop the Xcode5 prefix: this code is used by other toolchains too.
Signed-off-by: Ard Biesheuvel <ardb@kernel.org>
Reviewed-by: Ray Ni <ray.ni@intel.com>
The PEI flavor of CpuExceptionHandlerLib never populates more than 32
IDT vectors, and there is no CET shadow stack support in the PEI phase.
So there is no need to use the generic ExceptionHandler NASM source,
which carries a 256-entry template and CET support, and writes to its
own .text section when built using XCODE, which is not permitted in the
PEI phase. So let's switch to the reduced SEC/PEI version of this
component, which is sufficient for PEI and doesn't suffer from the same
issue.
Signed-off-by: Ard Biesheuvel <ardb@kernel.org>
Reviewed-by: Ray Ni <ray.ni@intel.com>
Currently, we use the non-Xcode5 version of ExceptionHandlerAsm.nasm
only for the SEC and PEI phases, and this version was not compatible
with the XCODE or LLD linkers, which do not permit absolute relocations
in read-only sections.
Given that SEC and PEI code typically executes in place from flash and
does not use page alignment for sections, we can simply emit the code
carrying the absolute symbol references into the .data segment instead.
This works around the linker's objections, and the resulting image will
be mapped executable in its entirety anyway. Since this is only needed
for XCODE, let's make this change conditionally using a preprocessor
macro.
Let's rename the .nasm file to reflect the fact that it is used for
the SecPei flavor of this library only, and while at it, remove some
unnecessary absolute references.
Also update the Xcode specific version of this library, and use this
source file instead. This is necessary, as the Xcode specific version
modifies its own code at runtime, which is not permitted in SEC or
PEI.
Note that this also removes CET support from the Xcode5 specific build
of the SEC/PEI version of this library, but this is not needed this
early in any case, and this aligns it with other toolchains, which use
this version of the library, which does not have CET support either.
1. Change for non-XCODE SecPeiCpuExceptionHandlerLib:
. Use SecPeiExceptionHandlerAsm.nasm (renamed from
ExceptionHandlerAsm.nasm)
. Removed some unnecessary absolute references
(32 IDT stubs are still in .text.)
2. Change for XCODE SecPeiCpuExceptionHandlerLib:
. Use SecPeiExceptionHandlerAsm.nasm instead of
Xcode5ExceptionHandlerAsm.nasm
. CET logic is not in SecPeiExceptionHandlerAsm.nasm (but aligns to
non-XCODE lib instance)
. Fixed a bug where runtime fixups were applied to the TEXT section in
SPI flash.
. Emitted the code carrying the absolute symbol references into the
.data section, which the XCODE and LLD linkers allow.
. Fixups can then be done by other build tools such as GenFv if the
code runs in SPI flash, or by the PE/COFF loader if the code is
loaded into memory.
Signed-off-by: Ard Biesheuvel <ardb@kernel.org>
Reviewed-by: Ray Ni <ray.ni@intel.com>
Fixes CodeQL alerts for CWE-457:
https://cwe.mitre.org/data/definitions/457.html
Cc: Eric Dong <eric.dong@intel.com>
Cc: Erich McMillan <emcmillan@microsoft.com>
Cc: Michael D Kinney <michael.d.kinney@intel.com>
Cc: Michael Kubacki <mikuback@linux.microsoft.com>
Cc: Rahul Kumar <rahul1.kumar@intel.com>
Cc: Ray Ni <ray.ni@intel.com>
Co-authored-by: Erich McMillan <emcmillan@microsoft.com>
Signed-off-by: Michael Kubacki <michael.kubacki@microsoft.com>
Reviewed-by: Michael D Kinney <michael.d.kinney@intel.com>
Reviewed-by: Oliver Smith-Denny <osd@smith-denny.com>
Drop the MtrrLibIsPowerOfTwo function and use the new IS_POW2() macro
instead. The removed ASSERT() (inside MtrrLibIsPowerOfTwo) is
superfluous; another ASSERT() a few lines up in
MtrrLibCalculateMtrrs() already guarantees that Length cannot be zero
at this point.
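For reference, the power-of-two check used here boils down to the
standard single-bit test; IS_POW2() in MdePkg's Base.h expands to
essentially this expression:

  //
  // A value is a power of two when it is non-zero and has exactly one
  // bit set, i.e. clearing its lowest set bit yields zero.
  //
  #define IS_POW2_SKETCH(Value)  (((Value) != 0) && (((Value) & ((Value) - 1)) == 0))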
Signed-off-by: Gerd Hoffmann <kraxel@redhat.com>
Reviewed-by: Ray Ni <ray.ni@intel.com>
BZ: https://bugzilla.tianocore.org/show_bug.cgi?id=4353
Due to AMD erratum #1467, an SEV-SNP VMSA should not be 2MB aligned. To
work around this issue, allocate two pages instead of one. Because of the
way that page allocation is implemented, always try to use the second
page. If the second page is not 2MB aligned, free the first page and use
the second page. If the second page is 2MB aligned, free the second page
and use the first page. Freeing in this way reduces holes in the memory
map.
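A sketch of the allocation pattern described above, using
MemoryAllocationLib for illustration (the function name and allocator
here are illustrative; the real MpInitLib code may differ):

  //
  // Allocate two pages and prefer the second one; whichever candidate
  // is 2MB aligned gets freed, so the VMSA page itself is never 2MB
  // aligned (AMD erratum #1467).
  //
  VOID *
  AllocateVmsaPage (
    VOID
    )
  {
    UINT8  *Pages;
    UINT8  *SecondPage;

    Pages = (UINT8 *)AllocatePages (2);
    if (Pages == NULL) {
      return NULL;
    }

    SecondPage = Pages + EFI_PAGE_SIZE;
    if (((UINTN)SecondPage & (SIZE_2MB - 1)) == 0) {
      //
      // The second page is 2MB aligned: free it and use the first page.
      //
      FreePages (SecondPage, 1);
      return Pages;
    }

    //
    // Otherwise free the first page and use the second one.
    //
    FreePages (Pages, 1);
    return SecondPage;
  }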
Fixes: 06544455d0 ("UefiCpuPkg/MpInitLib: Use SEV-SNP AP Creation ...")
Signed-off-by: Tom Lendacky <thomas.lendacky@amd.com>
Acked-by: Gerd Hoffmann <kraxel@redhat.com>
Acked-by: Ray Ni <ray.ni@intel.com>
https://bugzilla.tianocore.org/show_bug.cgi?id=4353
When parking the APs on exiting from UEFI, a new page allocation is made.
This allocation, however, does not end up being marked reserved in the
memory map supplied to the OS. To avoid this, re-use the VMSA by clearing
the VMSA RMP flag, updating the page contents and re-setting the VMSA RMP
flag.
Fixes: 06544455d0 ("UefiCpuPkg/MpInitLib: Use SEV-SNP AP Creation ...")
Signed-off-by: Tom Lendacky <thomas.lendacky@amd.com>
Acked-by: Gerd Hoffmann <kraxel@redhat.com>
Acked-by: Ray Ni <ray.ni@intel.com>
BufferPages is UINTN, so we need "%Lu" when printing it to avoid
it being truncated. Also cast to UINT64 to make sure it works
for 32bit builds too.
Fixes: 4f441d024b ("UefiCpuPkg/PiSmmCpuDxeSmm: fix error handling")
Reported-by: Laszlo Ersek <lersek@redhat.com>
Signed-off-by: Gerd Hoffmann <kraxel@redhat.com>
Reviewed-by: Laszlo Ersek <lersek@redhat.com>
Reviewed-by: Ray Ni <ray.ni@intel.com>
When TME-MK is enabled, MtrrLib should subtract the TME-MK reserved
bits from the max PA returned by the CPUID instruction.
The new test case guarantees such behavior in MtrrLib.
Signed-off-by: Ray Ni <ray.ni@intel.com>
Cc: Eric Dong <eric.dong@intel.com>
Cc: Rahul Kumar <rahul1.kumar@intel.com>
Cc: Gerd Hoffmann <kraxel@redhat.com>
Cc: Michael D Kinney <michael.d.kinney@intel.com>
Cc: Ahmad Anadani <ahmad.anadani@intel.com>
Acked-by: Gerd Hoffmann <kraxel@redhat.com>
Reviewed-by: Michael D Kinney <michael.d.kinney@intel.com>
CPUID enumeration of MAX_PA is unaffected by TME-MK activation and
will continue to report the maximum physical address bits available
for software to use, irrespective of the number of KeyID bits.
So, we need to check if TME is enabled and adjust the PA size
accordingly.
Signed-off-by: Ray Ni <ray.ni@intel.com>
Cc: Eric Dong <eric.dong@intel.com>
Cc: Rahul Kumar <rahul1.kumar@intel.com>
Cc: Gerd Hoffmann <kraxel@redhat.com>
Cc: Michael D Kinney <michael.d.kinney@intel.com>
Cc: Ahmad Anadani <ahmad.anadani@intel.com>
Acked-by: Gerd Hoffmann <kraxel@redhat.com>
Reviewed-by: Michael D Kinney <michael.d.kinney@intel.com>
The patch does not change any code behavior but only refactors by
(see the sketch below):
* replacing the hardcoded 0x80000000 with CPUID_EXTENDED_FUNCTION
* replacing the hardcoded 0x80000008 with CPUID_VIR_PHY_ADDRESS_SIZE
* replacing "UINT32 Eax" with
"CPUID_VIR_PHY_ADDRESS_SIZE_EAX VirPhyAddressSize"
Signed-off-by: Ray Ni <ray.ni@intel.com>
Cc: Eric Dong <eric.dong@intel.com>
Cc: Rahul Kumar <rahul1.kumar@intel.com>
Cc: Gerd Hoffmann <kraxel@redhat.com>
Cc: Michael D Kinney <michael.d.kinney@intel.com>
Cc: Ahmad Anadani <ahmad.anadani@intel.com>
Acked-by: Gerd Hoffmann <kraxel@redhat.com>
Reviewed-by: Michael D Kinney <michael.d.kinney@intel.com>
CPUID enumeration of MAX_PA is unaffected by TME-MK activation and
will continue to report the maximum physical address bits available
for software to use, irrespective of the number of KeyID bits.
So, we need to check if TME is enabled and adjust the PA size
accordingly.
Signed-off-by: Ray Ni <ray.ni@intel.com>
Cc: Eric Dong <eric.dong@intel.com>
Cc: Rahul Kumar <rahul1.kumar@intel.com>
Cc: Gerd Hoffmann <kraxel@redhat.com>
Cc: Michael D Kinney <michael.d.kinney@intel.com>
Cc: Ahmad Anadani <ahmad.anadani@intel.com>
Acked-by: Gerd Hoffmann <kraxel@redhat.com>
Reviewed-by: Michael D Kinney <michael.d.kinney@intel.com>
MtrrLib code queries the CPUID leaf 7h result if supported.
Update the test code temporarily to claim that CPUID only supports
max leaf 1, so MtrrLib skips querying CPUID leaf 7h.
Signed-off-by: Ray Ni <ray.ni@intel.com>
Cc: Eric Dong <eric.dong@intel.com>
Cc: Rahul Kumar <rahul1.kumar@intel.com>
Cc: Gerd Hoffmann <kraxel@redhat.com>
Cc: Michael D Kinney <michael.d.kinney@intel.com>
Cc: Ahmad Anadani <ahmad.anadani@intel.com>
Acked-by: Gerd Hoffmann <kraxel@redhat.com>
Reviewed-by: Michael D Kinney <michael.d.kinney@intel.com>
The random test cases just run for too long, which may cause a
timeout in CI testing.
Disable them for now.
Co-authored-by: Michael Kubacki <michael.kubacki@microsoft.com>
Signed-off-by: Ray Ni <ray.ni@intel.com>
Reviewed-by: Michael Kubacki <michael.kubacki@microsoft.com>
Reviewed-by: Sean Brogan <sean.brogan@microsoft.com>
Reduce the number of random tests. In a previous patch, non-1:1
mapping was enabled, and the CI test may then need more than an hour
and a half, which may lead to a CI timeout. Reduce the random test
count to pass CI.
Signed-off-by: Dun Tan <dun.tan@intel.com>
Cc: Eric Dong <eric.dong@intel.com>
Reviewed-by: Ray Ni <ray.ni@intel.com>
Cc: Rahul Kumar <rahul1.kumar@intel.com>
Tested-by: Gerd Hoffmann <kraxel@redhat.com>
Acked-by: Gerd Hoffmann <kraxel@redhat.com>
Add RandomTest for PAE paging.
Signed-off-by: Dun Tan <dun.tan@intel.com>
Cc: Eric Dong <eric.dong@intel.com>
Reviewed-by: Ray Ni <ray.ni@intel.com>
Cc: Rahul Kumar <rahul1.kumar@intel.com>
Tested-by: Gerd Hoffmann <kraxel@redhat.com>
Acked-by: Gerd Hoffmann <kraxel@redhat.com>
Modify CpuPageTableLib code to enable PAE paging.
In the PageTableMap() API:
When creating a new PAE page table, after creating the page table,
set all MustBeZero fields of the 4 PDPTEs to 0. The MustBeZero fields
are treated as RW and other attributes by the common map logic, so
they might have been set to 1.
When updating an existing PAE page table, the special steps are:
1. Prepare 4K-aligned 32 bytes of memory on the stack for 4 temporary
PDPTEs.
2. Copy the original 4 PDPTEs to the 4 temporary PDPTEs, set their RW
and UserSupervisor bits to 1 and their Nx bits to 0.
3. After updating the page table, set the MustBeZero fields of the 4
temporary PDPTEs to 0.
4. Copy the temporary PDPTEs back to the original PDPTEs.
In the PageTableParse() API, also create 4 temporary PDPTEs on the
stack. Copy the original 4 PDPTEs to the 4 temporary PDPTEs, then set
their RW and UserSupervisor bits to 1 and their Nx bits to 0. Finally,
use the address of the temporary PDPTEs as the page table address.
Signed-off-by: Dun Tan <dun.tan@intel.com>
Cc: Eric Dong <eric.dong@intel.com>
Reviewed-by: Ray Ni <ray.ni@intel.com>
Cc: Rahul Kumar <rahul1.kumar@intel.com>
Tested-by: Gerd Hoffmann <kraxel@redhat.com>
Acked-by: Gerd Hoffmann <kraxel@redhat.com>
Combine the 'if' condition branches for non-present and leaf parent
entries in PageTableLibMapInLevel. Most steps of these two conditions
are the same. This commit doesn't change any functionality.
Signed-off-by: Dun Tan <dun.tan@intel.com>
Cc: Eric Dong <eric.dong@intel.com>
Reviewed-by: Ray Ni <ray.ni@intel.com>
Cc: Rahul Kumar <rahul1.kumar@intel.com>
Tested-by: Gerd Hoffmann <kraxel@redhat.com>
Acked-by: Gerd Hoffmann <kraxel@redhat.com>
Add code to compare the ParentPagingEntry Attribute&Mask and the
input Attribute&Mask to decide whether a new next-level page table is
needed when the ParentPagingEntry is non-present. This helps avoid
unnecessary page table creation.
For example, there is a page table in which [0, 1G] is mapped (Lv4[0],
Lv3[0,0], a non-leaf level-4 entry and a leaf level-3 entry), and we
only want to map the [1G, 1G+2M] linear address range as still
non-present. The expected behaviour is that nothing happens in the
process. However, the previous code logic doesn't check whether the
ParentPagingEntry Attribute&Mask and the input Attribute&Mask are the
same when the ParentPagingEntry is non-present, so a new 4K page is
allocated for Lv2 since 1G+2M is not 1G-aligned.
So when the ParentPagingEntry is non-present, before allocating 4K
memory for the next-level paging, also check whether the
ParentPagingEntry Attribute&Mask and the input Attribute&Mask are the
same.
Signed-off-by: Dun Tan <dun.tan@intel.com>
Cc: Eric Dong <eric.dong@intel.com>
Reviewed-by: Ray Ni <ray.ni@intel.com>
Cc: Rahul Kumar <rahul1.kumar@intel.com>
Tested-by: Gerd Hoffmann <kraxel@redhat.com>
Acked-by: Gerd Hoffmann <kraxel@redhat.com>
The last commit changed the CpuPageTableLib API PageTableMap(), so
the unit test code should also be modified.
Cc: Eric Dong <eric.dong@intel.com>
Reviewed-by: Ray Ni <ray.ni@intel.com>
Cc: Rahul Kumar <rahul1.kumar@intel.com>
Tested-by: Gerd Hoffmann <kraxel@redhat.com>
Acked-by: Gerd Hoffmann <kraxel@redhat.com>
Signed-off-by: Zhiguang Liu <zhiguang.liu@intel.com>
The definition of IA32_MAP_ATTRIBUTE is 64 bits, and one of its bit
fields, PageTableBaseAddress, spans bit 12 to bit 52. This means that
if the compiler treats the 64-bit value as two UINT32 values, the
PageTableBaseAddress field spans two UINT32 values. That's why, when
building in NOOPT mode for IA32, the below issue is noticed:
unresolved external symbol __allshl
This patch fixes the build failure by separating the
PageTableBaseAddress field into two fields, making sure no field
spans two UINT32 values.
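An illustration of the split (field names and widths are simplified
here, not the exact IA32_MAP_ATTRIBUTE definition):

  //
  // Before: the field crosses the bit 31/32 boundary, so IA32 NOOPT
  // builds pull in a 64-bit shift helper (__allshl) to access it.
  //
  typedef union {
    struct {
      UINT64  Present               : 1;
      UINT64  OtherAttributes       : 11;
      UINT64  PageTableBaseAddress  : 40;  // bits 12-51, crosses bit 31/32
      UINT64  Reserved              : 12;
    } Bits;
    UINT64    Uint64;
  } MAP_ATTRIBUTE_BEFORE;

  //
  // After: split at the UINT32 boundary so no field crosses it.
  //
  typedef union {
    struct {
      UINT64  Present                   : 1;
      UINT64  OtherAttributes           : 11;
      UINT64  PageTableBaseAddressLow   : 20;  // bits 12-31
      UINT64  PageTableBaseAddressHigh  : 20;  // bits 32-51
      UINT64  Reserved                  : 12;
    } Bits;
    UINT64    Uint64;
  } MAP_ATTRIBUTE_AFTER;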
Cc: Eric Dong <eric.dong@intel.com>
Reviewed-by: Ray Ni <ray.ni@intel.com>
Cc: Rahul Kumar <rahul1.kumar@intel.com>
Tested-by: Gerd Hoffmann <kraxel@redhat.com>
Acked-by: Gerd Hoffmann <kraxel@redhat.com>
Signed-off-by: Zhiguang Liu <zhiguang.liu@intel.com>
Signed-off-by: Ray Ni <ray.ni@intel.com>
Modify RandomTest to check whether the IsModified parameter of
PageTableMap() correctly indicates whether the input page table has
been modified.
Signed-off-by: Dun Tan <dun.tan@intel.com>
Cc: Eric Dong <eric.dong@intel.com>
Reviewed-by: Ray Ni <ray.ni@intel.com>
Cc: Rahul Kumar <rahul1.kumar@intel.com>
Tested-by: Gerd Hoffmann <kraxel@redhat.com>
Acked-by: Gerd Hoffmann <kraxel@redhat.com>
Cc: Zhiguang Liu <zhiguang.liu@intel.com>
Add an OUTPUT IsModified parameter to PageTableMap() to indicate
whether the page table has been modified. With this parameter, the
caller can know whether it needs to call FlushTlb when the page table
is in CR3.
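A sketch of a caller using the new output (the PageTableMap()
prototype is reproduced from memory; see CpuPageTableLib.h for the
authoritative declaration, and PageTable, PagingMode, Buffer,
BufferSize, BaseAddress and Length are assumed to be prepared by the
caller):

  IA32_MAP_ATTRIBUTE  Attribute;
  IA32_MAP_ATTRIBUTE  Mask;
  BOOLEAN             IsModified;
  RETURN_STATUS       Status;

  Attribute.Uint64       = 0;
  Attribute.Bits.Present = 1;
  Attribute.Bits.Nx      = 1;
  Mask.Uint64            = 0;
  Mask.Bits.Present      = 1;
  Mask.Bits.Nx           = 1;

  Status = PageTableMap (
             &PageTable, PagingMode, Buffer, &BufferSize,
             BaseAddress, Length, &Attribute, &Mask, &IsModified
             );
  if (!RETURN_ERROR (Status) && IsModified) {
    //
    // Only flush the TLB when the live page table (in CR3) was changed.
    //
    CpuFlushTlb ();
  }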
Signed-off-by: Dun Tan <dun.tan@intel.com>
Cc: Eric Dong <eric.dong@intel.com>
Reviewed-by: Ray Ni <ray.ni@intel.com>
Cc: Rahul Kumar <rahul1.kumar@intel.com>
Tested-by: Gerd Hoffmann <kraxel@redhat.com>
Acked-by: Gerd Hoffmann <kraxel@redhat.com>
Enable non-1:1 mapping in the random test. In previous tests, the
non-1:1 test would fail due to the non-1:1 mapping issue in
CpuPageTableLib and an invalid input Mask when creating a new page
table or mapping a non-present range. Now these issues have been
fixed.
Signed-off-by: Dun Tan <dun.tan@intel.com>
Cc: Eric Dong <eric.dong@intel.com>
Reviewed-by: Ray Ni <ray.ni@intel.com>
Cc: Rahul Kumar <rahul1.kumar@intel.com>
Tested-by: Gerd Hoffmann <kraxel@redhat.com>
Acked-by: Gerd Hoffmann <kraxel@redhat.com>
Modify RandomTest to check invalid input. When creating a new page
table or updating an existing page table:
1. If [LinearAddress, LinearAddress+Length] is set to non-present, no
other attributes should be provided.
2. If [LinearAddress, LinearAddress+Length] contains a non-present
range, the return status of PageTableMap() should be InvalidParameter
when:
2.1 Some of the attributes are not provided when mapping the
non-present range to present.
2.2 Any other attribute is set without setting the non-present range
to present.
Signed-off-by: Dun Tan <dun.tan@intel.com>
Cc: Eric Dong <eric.dong@intel.com>
Reviewed-by: Ray Ni <ray.ni@intel.com>
Cc: Rahul Kumar <rahul1.kumar@intel.com>
Tested-by: Gerd Hoffmann <kraxel@redhat.com>
Acked-by: Gerd Hoffmann <kraxel@redhat.com>
Cc: Zhiguang Liu <zhiguang.liu@intel.com>
Add LastMapEntry pointer to replace MapEntrys->Maps[MapsIndex]
in SingleMapEntryTest () of RandomTest.
Signed-off-by: Dun Tan <dun.tan@intel.com>
Cc: Eric Dong <eric.dong@intel.com>
Reviewed-by: Ray Ni <ray.ni@intel.com>
Cc: Rahul Kumar <rahul1.kumar@intel.com>
Cc: Gerd Hoffmann <kraxel@redhat.com>
Add an input parameter to control the probability of returning TRUE.
Change RandomBoolean() in RandomTest from a 50% chance of returning
TRUE to returning TRUE with the percentage given by the input
Probability.
Signed-off-by: Dun Tan <dun.tan@intel.com>
Cc: Eric Dong <eric.dong@intel.com>
Reviewed-by: Ray Ni <ray.ni@intel.com>
Cc: Rahul Kumar <rahul1.kumar@intel.com>
Tested-by: Gerd Hoffmann <kraxel@redhat.com>
Acked-by: Gerd Hoffmann <kraxel@redhat.com>
Add a manual test case to check the input Mask and Attribute. The
check steps are:
1. Create a page table to cover [0, 2G]. All fields of MapMask should
be set.
2. Update the page table to set [2G-8K, 2G] from present to
non-present. All fields of MapMask except Present should not be set.
3. Still set [2G-8K, 2G] as non-present; this case is permitted.
But setting [2G-8K, 2G] as RW is not permitted.
4. Update the page table to set [2G-8K, 2G] as present and RW. All
fields of MapMask should be set.
Signed-off-by: Dun Tan <dun.tan@intel.com>
Cc: Eric Dong <eric.dong@intel.com>
Reviewed-by: Ray Ni <ray.ni@intel.com>
Cc: Rahul Kumar <rahul1.kumar@intel.com>
Tested-by: Gerd Hoffmann <kraxel@redhat.com>
Acked-by: Gerd Hoffmann <kraxel@redhat.com>
For different usages, check whether the combination of Mask and
Attribute is valid when creating or updating the page table (see the
sketch below).
1. For a non-present range:
1.1 Mask.Present is 0 but some other attributes are provided. This
case is invalid.
1.2 Mask.Present is 1 and Attr.Present is 0. In this case, all other
attributes should not be provided.
1.3 Mask.Present is 1 and Attr.Present is 1. In this case, all
attributes should be provided to initialize the attribute.
2. For a present range:
2.1 Mask.Present is 1 and Attr.Present is 0. In this case, all other
attributes should not be provided.
All other usages for a present range are permitted.
Of the cases above, 1.2 and 2.1 can be merged into one check.
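A condensed sketch of the two resulting checks. Mask and Attribute are
the IA32_MAP_ATTRIBUTE pointers passed to PageTableMap(), and
RangeIsNonPresent is an assumed flag standing for "the range being
mapped is currently non-present"; the actual library code may be
structured differently:

  //
  // Case 1.1: for a non-present range, the caller does not control
  // Present but still asks for other attributes -- rejected.
  //
  if (RangeIsNonPresent && (Mask->Bits.Present == 0) && (Mask->Uint64 != 0)) {
    return RETURN_INVALID_PARAMETER;
  }

  //
  // Cases 1.2/2.1 merged: mapping a range to non-present -- no other
  // attribute may be provided (bit 0 of the mask is the Present bit).
  //
  if ((Mask->Bits.Present == 1) && (Attribute->Bits.Present == 0) &&
      ((Mask->Uint64 & ~(UINT64)0x1) != 0)) {
    return RETURN_INVALID_PARAMETER;
  }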
Signed-off-by: Dun Tan <dun.tan@intel.com>
Cc: Eric Dong <eric.dong@intel.com>
Reviewed-by: Ray Ni <ray.ni@intel.com>
Cc: Rahul Kumar <rahul1.kumar@intel.com>
Tested-by: Gerd Hoffmann <kraxel@redhat.com>
Acked-by: Gerd Hoffmann <kraxel@redhat.com>
In the CreatePageTable() function, add code to initialize MapMask to
MAX_UINT64. When creating a new page table or mapping a non-present
range to present, all attributes should be provided.
Signed-off-by: Dun Tan <dun.tan@intel.com>
Cc: Eric Dong <eric.dong@intel.com>
Reviewed-by: Ray Ni <ray.ni@intel.com>
Cc: Rahul Kumar <rahul1.kumar@intel.com>
Tested-by: Gerd Hoffmann <kraxel@redhat.com>
Acked-by: Gerd Hoffmann <kraxel@redhat.com>
When splitting a leaf parent entry to smaller granularity, create the
child page table before modifying the parent entry. In the previous
code logic, when splitting a leaf parent entry, the parent entry would
point to an empty 4K page before the child page table was created in
that 4K page. When the page table being modified is the page table in
CR3, and the executing CpuPageTableLib code is in the range mapped by
the modified leaf parent entry, an issue will happen.
Signed-off-by: Dun Tan <dun.tan@intel.com>
Cc: Eric Dong <eric.dong@intel.com>
Reviewed-by: Ray Ni <ray.ni@intel.com>
Cc: Rahul Kumar <rahul1.kumar@intel.com>
Tested-by: Gerd Hoffmann <kraxel@redhat.com>
Acked-by: Gerd Hoffmann <kraxel@redhat.com>
Clear the PageSize bit (Bit7) for non-leaf entries in
PageTableLibSetPnle. This function is used to set non-leaf entry
attributes, so it should make sure that the PageSize bit of the entry
is 0.
Signed-off-by: Dun Tan <dun.tan@intel.com>
Cc: Eric Dong <eric.dong@intel.com>
Reviewed-by: Ray Ni <ray.ni@intel.com>
Cc: Rahul Kumar <rahul1.kumar@intel.com>
Tested-by: Gerd Hoffmann <kraxel@redhat.com>
Acked-by: Gerd Hoffmann <kraxel@redhat.com>
In the previous code logic, when splitting a leaf parent entry into a
smaller-granularity child page table, if the parent entry's
Attribute&Mask (without the PageTableBaseAddress field) was equal to
the input Attribute&Mask (without the PageTableBaseAddress field), the
split would not happen. This may lead to failures with non-1:1
mapping.
For example, there is a page table in which [0, 1G] is mapped (Lv4[0],
Lv3[0,0], a non-leaf level-4 entry and a leaf level-3 entry), and we
want to remap the [0, 2M] linear address range to [1G, 1G+2M] with the
same attribute. The expected behaviour is to split the Lv3[0,0] entry
into 512 level-2 entries and remap the first level-2 entry to cover
[0, 2M]. But the split didn't happen with the previous code since the
PageTableBaseAddress of the input Attribute was not checked.
So, when checking whether a leaf parent entry needs to be split, also
check whether the PageTableBaseAddress calculated from the parent
entry is equal to the value calculated from the input attribute.
Signed-off-by: Dun Tan <dun.tan@intel.com>
Cc: Eric Dong <eric.dong@intel.com>
Reviewed-by: Ray Ni <ray.ni@intel.com>
Cc: Rahul Kumar <rahul1.kumar@intel.com>
Tested-by: Gerd Hoffmann <kraxel@redhat.com>
Acked-by: Gerd Hoffmann <kraxel@redhat.com>
Move some local variable initializations to the beginning of the
function. Also delete the duplicated calculation of RegionLength.
Signed-off-by: Dun Tan <dun.tan@intel.com>
Cc: Eric Dong <eric.dong@intel.com>
Reviewed-by: Ray Ni <ray.ni@intel.com>
Cc: Rahul Kumar <rahul1.kumar@intel.com>
Tested-by: Gerd Hoffmann <kraxel@redhat.com>
Acked-by: Gerd Hoffmann <kraxel@redhat.com>
Add check for input Length in PageTableMap (). Return
RETURN_SUCCESS when input Length is 0.
Signed-off-by: Dun Tan <dun.tan@intel.com>
Cc: Eric Dong <eric.dong@intel.com>
Reviewed-by: Ray Ni <ray.ni@intel.com>
Cc: Rahul Kumar <rahul1.kumar@intel.com>
Tested-by: Gerd Hoffmann <kraxel@redhat.com>
Acked-by: Gerd Hoffmann <kraxel@redhat.com>
Remove an unneeded 'if' condition in the CpuPageTableLib code.
The deleted code is in the code branch for a present non-leaf parent
entry, so the 'if' check for (ParentPagingEntry->Pnle.Bits.Present
== 0) is always FALSE.
Signed-off-by: Dun Tan <dun.tan@intel.com>
Cc: Eric Dong <eric.dong@intel.com>
Reviewed-by: Ray Ni <ray.ni@intel.com>
Cc: Rahul Kumar <rahul1.kumar@intel.com>
Tested-by: Gerd Hoffmann <kraxel@redhat.com>
Acked-by: Gerd Hoffmann <kraxel@redhat.com>
For the case where the CPU logical index is 0, RSP points to the very
top of all AP stacks. That address is not mapped in the page table.
Cc: Guo Dong <guo.dong@intel.com>
Cc: Ray Ni <ray.ni@intel.com>
Cc: Sean Rhodes <sean@starlabs.systems>
Cc: James Lu <james.lu@intel.com>
Cc: Gua Guo <gua.guo@intel.com>
Signed-off-by: Ted Kuo <ted.kuo@intel.com>
Reviewed-by: Ray Ni <ray.ni@intel.com>
ASSERT() is not proper handling of allocation failures; it gets
compiled out in RELEASE builds. Print a message and enter a dead loop
instead.
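A sketch of the pattern (the allocation call, variable names and
message text are illustrative; the actual PiSmmCpuDxeSmm code may
differ):

  Buffer = AllocatePages (BufferPages);
  if (Buffer == NULL) {
    //
    // Unlike ASSERT(), this also stops RELEASE builds instead of
    // letting them continue with a NULL pointer.
    //
    DEBUG ((DEBUG_ERROR, "Failed to allocate %Lu pages\n", (UINT64)BufferPages));
    CpuDeadLoop ();
  }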
Signed-off-by: Gerd Hoffmann <kraxel@redhat.com>
Reviewed-by: Ray Ni <ray.ni@intel.com>
It's highly unlikely the code ever runs on processors which are
almost 30 years old. Drop the code handling them.
Bugzilla: https://bugzilla.tianocore.org/show_bug.cgi?id=4345
Signed-off-by: Gerd Hoffmann <kraxel@redhat.com>
Reviewed-by: Ray Ni <ray.ni@intel.com>
REF: https://bugzilla.tianocore.org/show_bug.cgi?id=4368
This issue is caused by the commit:
ec07fd0e35
GetFirstGuidHob() should not be used after exit boot service.
Cc: Eric Dong <eric.dong@intel.com>
Cc: Ray Ni <ray.ni@intel.com>
Cc: Zeng Star <star.zeng@intel.com>
Cc: Laszlo Ersek <lersek@redhat.com>
Cc: Gerd Hoffmann <kraxel@redhat.com>
Signed-off-by: Jiaxin Wu <jiaxin.wu@intel.com>
Reviewed-by: Star Zeng <star.zeng@intel.com>
Reviewed-by: Gerd Hoffmann <kraxel@redhat.com>
Tested-by: Gerd Hoffmann <kraxel@redhat.com>
Reviewed-by: Ray Ni <ray.ni@intel.com>
Because UefiCpuPkg/UefiCpuLib has been merged into MdePkg/CpuLib,
remove the dependency on UefiCpuLib.
Cc: Eric Dong <eric.dong@intel.com>
Cc: Ray Ni <ray.ni@intel.com>
Cc: Rahul Kumar <rahul1.kumar@intel.com>
Signed-off-by: Yu Pu <yu.pu@intel.com>
Reviewed-by: Ray Ni <ray.ni@intel.com>
There are two libraries: MdePkg/CpuLib and UefiCpuPkg/UefiCpuLib. This
patch merges UefiCpuPkg/UefiCpuLib into MdePkg/CpuLib.
Change-Id: Ic26f4c2614ed6bd9840f817d50e47ac1de4bd013
Cc: Michael D Kinney <michael.d.kinney@intel.com>
Cc: Liming Gao <gaoliming@byosoft.com.cn>
Cc: Zhiguang Liu <zhiguang.liu@intel.com>
Cc: Eric Dong <eric.dong@intel.com>
Cc: Ray Ni <ray.ni@intel.com>
Cc: Rahul Kumar <rahul1.kumar@intel.com>
Signed-off-by: Yu Pu <yu.pu@intel.com>
Reviewed-by: Liming Gao <gaoliming@byosoft.com.cn>
Signed-off-by: Zhiguang Liu <zhiguang.liu@intel.com>
REF: https://bugzilla.tianocore.org/show_bug.cgi?id=4360
An incorrect format specifier was being used in a DEBUG print:
specifically, a variable of type EFI_STATUS was being printed with
the %a format specifier (pointer to an ASCII string), so the value of
the Status variable was being treated as the address of a string,
leading to a CPU exception. When encountered, this bug manifests
itself as a hang near "Ready to Boot Event", with the last DEBUG print
being "INFO: Got MicrocodePatchHob with microcode patches starting
address" followed by a CPU exception dump.
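The corrected pattern (the message text here is illustrative):
EFI_STATUS values are printed with "%r", which renders the status as
text, rather than "%a", which treats the value as a pointer to an
ASCII string.

  DEBUG ((DEBUG_INFO, "Relocate microcode patches: %r\n", Status));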
Signed-off-by: Darbin Reyes <darbin.reyes@intel.com>
Reviewed-by: Jacob Narey <jacob.narey@intel.com>
Reviewed-by: Michael D Kinney <michael.d.kinney@intel.com>
Reviewed-by: Eric Dong <eric.dong@intel.com>
RegisterCpuInterruptHandler did not allow setting
exception handlers for anything beyond the timer IRQ.
Beyond that, it didn't meet the spec around handling
of inputs.
RiscVSupervisorModeTrapHandler now invokes the registered handlers
for both exceptions and interrupts.
Two arrays of handlers are maintained: one for exceptions and one for
interrupts.
For unhandled traps, RiscVSupervisorModeTrapHandler dumps
state using the now implemented DumpCpuContext.
For EFI_SYSTEM_CONTEXT_RISCV64, extend this with the trapped
PC address (SEPC), just like on AArch64 (ELR). This is
necessary for X86EmulatorPkg to work as it allows a trap
handler to return execution to a different place. Add
SSTATUS/STVAL as well, at least for debugging purposes. There
is no value in hiding this.
Fix nested exception handling. Handler code should not
be saving SIE (the value is saved in SSTATUS.SPIE) or
directly restored (that's done by SRET). Save and
restore the entire SSTATUS and STVAL, too.
Cc: Daniel Schaefer <git@danielschaefer.me>
Reviewed-by: Sunil V L <sunilvl@ventanamicro.com>
Signed-off-by: Andrei Warkentin <andrei.warkentin@intel.com>
The TimerDxe implementation doesn't account for the physical time
that passes due to timer handler execution or (perhaps even more
importantly) time spent with interrupts masked.
Other implementations (e.g. the Arm one) do. If the timer tick is
always incremented at a fixed rate, then UEFI's perception of time
can be slowed down by running long sections of code in a critical
section.
Cc: Daniel Schaefer <git@danielschaefer.me>
Reviewed-by: Sunil V L <sunilvl@ventanamicro.com>
Signed-off-by: Andrei Warkentin <andrei.warkentin@intel.com>
Rename AsmRelocateApLoopStart to AsmRelocateApLoopStartAmdSev
Cc: Guo Dong <guo.dong@intel.com>
Cc: Ray Ni <ray.ni@intel.com>
Cc: Sean Rhodes <sean@starlabs.systems>
Cc: James Lu <james.lu@intel.com>
Cc: Gua Guo <gua.guo@intel.com>
Signed-off-by: Yuanhao Xie <yuanhao.xie@intel.com>
Acked-by: Gerd Hoffmann <kraxel@redhat.com>
Tested-by: Gerd Hoffmann <kraxel@redhat.com>
Reviewed-by: Ray Ni <ray.ni@intel.com>
Add 'AsmRelocateApLoopStartGeneric' for X64 processors except 64-bit
AMD processors with SEV-ES.
Remove the unused arguments of AsmRelocateApLoopStartGeneric and
update the stack offset.
Create a page table for the allocated reserved memory.
Only keep the 4GB limitation of memory allocation for the case where
APs still need to be transferred to 32-bit mode before the OS takes
over.
Cc: Guo Dong <guo.dong@intel.com>
Cc: Ray Ni <ray.ni@intel.com>
Cc: Sean Rhodes <sean@starlabs.systems>
Cc: James Lu <james.lu@intel.com>
Cc: Gua Guo <gua.guo@intel.com>
Signed-off-by: Yuanhao Xie <yuanhao.xie@intel.com>
Acked-by: Gerd Hoffmann <kraxel@redhat.com>
Tested-by: Gerd Hoffmann <kraxel@redhat.com>
Reviewed-by: Ray Ni <ray.ni@intel.com>
Add the union RELOCATE_AP_LOOP_ENTRY, split the path in RelocateApLoop
into two:
1. 64-bit AMD processors with SEV-ES
2. Intel processors (32-bit or 64-bit), 32-bit AMD processors, or
64-bit AMD processors without SEV-ES.
Cc: Guo Dong <guo.dong@intel.com>
Cc: Ray Ni <ray.ni@intel.com>
Cc: Sean Rhodes <sean@starlabs.systems>
Cc: James Lu <james.lu@intel.com>
Cc: Gua Guo <gua.guo@intel.com>
Signed-off-by: Yuanhao Xie <yuanhao.xie@intel.com>
Acked-by: Gerd Hoffmann <kraxel@redhat.com>
Tested-by: Gerd Hoffmann <kraxel@redhat.com>
Reviewed-by: Ray Ni <ray.ni@intel.com>
Check whether AP_SAFE_STACK_SIZE is aligned with CPU_STACK_ALIGNMENT
at build time.
No functional or structural changes.
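A build-time check along these lines can be expressed with the
STATIC_ASSERT macro from MdePkg's Base.h (the exact location and
message in MpLib may differ):

  STATIC_ASSERT (
    (AP_SAFE_STACK_SIZE & (CPU_STACK_ALIGNMENT - 1)) == 0,
    "AP_SAFE_STACK_SIZE is not aligned with CPU_STACK_ALIGNMENT"
    );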
Cc: Guo Dong <guo.dong@intel.com>
Cc: Ray Ni <ray.ni@intel.com>
Cc: Sean Rhodes <sean@starlabs.systems>
Cc: James Lu <james.lu@intel.com>
Cc: Gua Guo <gua.guo@intel.com>
Signed-off-by: Yuanhao Xie <yuanhao.xie@intel.com>
Acked-by: Gerd Hoffmann <kraxel@redhat.com>
Tested-by: Gerd Hoffmann <kraxel@redhat.com>
Reviewed-by: Ray Ni <ray.ni@intel.com>
REF: https://bugzilla.tianocore.org/show_bug.cgi?id=4337
This patch avoids configuring SMBASE if the SmBase relocation has
already been done. If gSmmBaseHobGuid is found, it means the SmBase
info has been relocated and recorded in the SmBase array, so there is
no need to do the relocation in SmmCpuFeaturesInitializeProcessor().
Cc: Eric Dong <eric.dong@intel.com>
Cc: Ray Ni <ray.ni@intel.com>
Cc: Zeng Star <star.zeng@intel.com>
Cc: Laszlo Ersek <lersek@redhat.com>
Acked-by: Gerd Hoffmann <kraxel@redhat.com>
Cc: Rahul Kumar <rahul1.kumar@intel.com>
Signed-off-by: Jiaxin Wu <jiaxin.wu@intel.com>
Reviewed-by: Ray Ni <ray.ni@intel.com>
REF: https://bugzilla.tianocore.org/show_bug.cgi?id=4337
Existing SMBASE Relocation is in the PiSmmCpuDxeSmm driver, which
will relocate the SMBASE of each processor by setting the SMBASE
field in the saved state map (at offset 7EF8h) to a new value.
The RSM instruction reloads the internal SMBASE register with the
value in SMBASE field when each time it exits SMM. All subsequent
SMI requests will use the new SMBASE to find the starting address
for the SMI handler (at SMBASE + 8000h).
Because the default SMBASE for all x86 processors is 0x30000, the
APs' first SMIs for rebasing have to be executed one by one to avoid
the processors overwriting each other's SMM Save State Area (see the
existing SmmRelocateBases() function). This means the next AP has to
wait for the previous AP to finish its first SMI before it can enter
its own first SMI for rebasing via the SMI IPI command, so the
existing SMBASE relocation has to run serially. Besides, it needs
very complex code to handle the AP exit semaphore (mRebased[Index]),
which hooks the return address in the SMM Save State so that the
semaphore code can be executed immediately after the AP exits SMM for
SMBASE relocation (see the existing SemaphoreHook() function).
With SMM Base HOB support, PiSmmCpuDxeSmm does not need the RSM
instruction to do the SMBASE relocation. The SMBASE register for each
processor has already been programmed and all SMBASE addresses are
recorded in the SMM Base HOB. So the same default SMBASE address
(0x30000) will not be used, and the processors overwriting each
other's SMM Save State Area cannot happen in the PiSmmCpuDxeSmm
driver. This allows the first SMI init to be executed in parallel and
saves boot time on multi-core systems. Besides, the semaphore hook
code logic is also not required, which greatly simplifies the SMBASE
relocation flow.
The main changes are:
* Assume the biggest possible tile size is 8K.
* Combine the 2 SMIs (gcSmmInitTemplate & gcSmiHandlerTemplate) into
one (gcSmiHandlerTemplate); the new SMI handler needs to run two
paths: one to SmmCpuFeaturesInitializeProcessor(), the other to the
SMM Core Entry Point.
* Issue SMI IPIs (All-Excluding-Self SMM IPI + BSP SMM IPI) for the
first SMI init before normal SMI sources happen.
* Call SmmCpuFeaturesInitializeProcessor() in parallel.
Cc: Eric Dong <eric.dong@intel.com>
Cc: Ray Ni <ray.ni@intel.com>
Cc: Zeng Star <star.zeng@intel.com>
Cc: Laszlo Ersek <lersek@redhat.com>
Acked-by: Gerd Hoffmann <kraxel@redhat.com>
Cc: Rahul Kumar <rahul1.kumar@intel.com>
Signed-off-by: Jiaxin Wu <jiaxin.wu@intel.com>
Reviewed-by: Ray Ni <ray.ni@intel.com>
REF: https://bugzilla.tianocore.org/show_bug.cgi?id=4337
The default SMBASE for the x86 processor is 0x30000. When
SMI happens, processor runs the SMI handler at SMBASE+0x8000.
Also, the SMM save state area is within SMBASE+0x10000.
One of the SMM initialization steps, from the processor's
perspective, is to relocate and program the new SMBASE (in the TSEG
range) for each processor. When the SMBASE relocation happens in a
PEI module, the PEI module shall produce the SMM_BASE_HOB in the HOB
database, which tells the PiSmmCpuDxeSmm driver (which runs at a
later phase) about the new SMBASE of each processor. The
PiSmmCpuDxeSmm driver installs the SMI handler at
SMM_BASE_HOB.SmBase[Index]+0x8000 for processor Index. When the HOB
doesn't exist, the PiSmmCpuDxeSmm driver shall relocate and program
the new SMBASE itself.
This patch adds the SMM Base HOB so that any PEI module can do the
SmBase relocation ahead of the PiSmmCpuDxeSmm driver and store the
relocated SmBase address in an array, one entry per processor.
Cc: Eric Dong <eric.dong@intel.com>
Cc: Ray Ni <ray.ni@intel.com>
Cc: Zeng Star <star.zeng@intel.com>
Cc: Laszlo Ersek <lersek@redhat.com>
Cc: Gerd Hoffmann <kraxel@redhat.com>
Cc: Rahul Kumar <rahul1.kumar@intel.com>
Signed-off-by: Jiaxin Wu <jiaxin.wu@intel.com>
Acked-by: Gerd Hoffmann <kraxel@redhat.com>
Reviewed-by: Ray Ni <ray.ni@Intel.com>
This patch replaces the mIsBsp check with an mBspApicId check.
mIsBsp becomes a local variable (IsBsp), so it can be checked
dynamically in the function. Instead, we define mBspApicId, which
records the BSP APIC ID used for comparison in SmmInitHandler. With
this change, SmmInitHandler can run in parallel during SMM init.
Note:
This patch is the preparatory work of refining SmmInitHandler; the
next step is to combine the 2 SMIs (gcSmmInitTemplate &
gcSmiHandlerTemplate) into one (gcSmiHandlerTemplate), and the new
SMI handler will call SmmInitHandler in parallel to do the init.
Cc: Eric Dong <eric.dong@intel.com>
Cc: Ray Ni <ray.ni@intel.com>
Cc: Zeng Star <star.zeng@intel.com>
Cc: Laszlo Ersek <lersek@redhat.com>
Cc: Gerd Hoffmann <kraxel@redhat.com>
Cc: Rahul Kumar <rahul1.kumar@intel.com>
Signed-off-by: Jiaxin Wu <jiaxin.wu@intel.com>
Reviewed-by: Ray Ni <ray.ni@intel.com>
Reviewed-by: Gerd Hoffmann <kraxel@redhat.com>
REF: https://bugzilla.tianocore.org/show_bug.cgi?id=4338
There is no need to call InitializeMpSyncData during the normal-boot
SMI init, because mSmmMpSyncData is NULL at that time. mSmmMpSyncData
is allocated in InitializeMpServiceData, which is invoked after the
normal-boot SMI init (SmmRelocateBases).
Cc: Eric Dong <eric.dong@intel.com>
Cc: Ray Ni <ray.ni@intel.com>
Cc: Zeng Star <star.zeng@intel.com>
Cc: Laszlo Ersek <lersek@redhat.com>
Cc: Gerd Hoffmann <kraxel@redhat.com>
Cc: Rahul Kumar <rahul1.kumar@intel.com>
Signed-off-by: Jiaxin Wu <jiaxin.wu@intel.com>
Acked-by: Gerd Hoffmann <kraxel@redhat.com>
Reviewed-by: Ray Ni <ray.ni@Intel.com>
REF: https://bugzilla.tianocore.org/show_bug.cgi?id=4076
RISC-V register names do not follow the EDK2 formatting, so add them
to the ignore list for now.
Cc: Eric Dong <eric.dong@intel.com>
Cc: Ray Ni <ray.ni@intel.com>
Cc: Rahul Kumar <rahul1.kumar@intel.com>
Cc: Gerd Hoffmann <kraxel@redhat.com>
Signed-off-by: Sunil V L <sunilvl@ventanamicro.com>
Acked-by: Abner Chang <abner.chang@amd.com>
Reviewed-by: Andrei Warkentin <andrei.warkentin@intel.com>
Reviewed-by: Michael D Kinney <michael.d.kinney@intel.com>
Acked-by: Ray Ni <ray.ni@intel.com>
REF: https://bugzilla.tianocore.org/show_bug.cgi?id=4076
This is copied from
edk2-platforms/Silicon/RISC-V/ProcessorPkg/Universal/CpuDxe
and added the RISCV_EFI_BOOT_PROTOCOL support.
Cc: Eric Dong <eric.dong@intel.com>
Cc: Ray Ni <ray.ni@intel.com>
Cc: Rahul Kumar <rahul1.kumar@intel.com>
Cc: Daniel Schaefer <git@danielschaefer.me>
Cc: Gerd Hoffmann <kraxel@redhat.com>
Signed-off-by: Sunil V L <sunilvl@ventanamicro.com>
Acked-by: Abner Chang <abner.chang@amd.com>
Reviewed-by: Andrei Warkentin <andrei.warkentin@intel.com>
REF: https://bugzilla.tianocore.org/show_bug.cgi?id=4076
This DXE module initializes the timer interrupt handler
and installs the Arch Timer protocol.
Cc: Eric Dong <eric.dong@intel.com>
Cc: Ray Ni <ray.ni@intel.com>
Cc: Rahul Kumar <rahul1.kumar@intel.com>
Cc: Daniel Schaefer <git@danielschaefer.me>
Cc: Gerd Hoffmann <kraxel@redhat.com>
Signed-off-by: Sunil V L <sunilvl@ventanamicro.com>
Acked-by: Abner Chang <abner.chang@amd.com>
Reviewed-by: Andrei Warkentin <andrei.warkentin@intel.com>
Acked-by: Ray Ni <ray.ni@Intel.com>
REF: https://bugzilla.tianocore.org/show_bug.cgi?id=4076
Add the RISC-V instance of the TimerLib.
This is mostly copied from
edk2-platforms/Silicon/RISC-V/ProcessorPkg/Library/RiscVTimerLib
Cc: Eric Dong <eric.dong@intel.com>
Cc: Ray Ni <ray.ni@intel.com>
Cc: Rahul Kumar <rahul1.kumar@intel.com>
Cc: Daniel Schaefer <git@danielschaefer.me>
Cc: Abner Chang <abner.chang@amd.com>
Cc: Gerd Hoffmann <kraxel@redhat.com>
Signed-off-by: Sunil V L <sunilvl@ventanamicro.com>
Acked-by: Abner Chang <abner.chang@amd.com>
Reviewed-by: Andrei Warkentin <andrei.warkentin@intel.com>
Acked-by: Ray Ni <ray.ni@Intel.com>
REF: https://bugzilla.tianocore.org/show_bug.cgi?id=4076
Add Cpu Exception Handler library for RISC-V. This is copied
from edk2-platforms/Silicon/RISC-V/ProcessorPkg/Library/RiscVExceptionLib
Cc: Eric Dong <eric.dong@intel.com>
Cc: Ray Ni <ray.ni@intel.com>
Cc: Rahul Kumar <rahul1.kumar@intel.com>
Cc: Daniel Schaefer <git@danielschaefer.me>
Cc: Abner Chang <abner.chang@amd.com>
Cc: Gerd Hoffmann <kraxel@redhat.com>
Signed-off-by: Sunil V L <sunilvl@ventanamicro.com>
Acked-by: Abner Chang <abner.chang@amd.com>
Reviewed-by: Andrei Warkentin <andrei.warkentin@intel.com>
Acked-by: Ray Ni <ray.ni@Intel.com>
REF: https://bugzilla.tianocore.org/show_bug.cgi?id=4076
RISC-V UEFI based platforms need to support RISCV_EFI_BOOT_PROTOCOL.
Add this protocol GUID definition and the header file required.
Cc: Eric Dong <eric.dong@intel.com>
Cc: Ray Ni <ray.ni@intel.com>
Cc: Rahul Kumar <rahul1.kumar@intel.com>
Cc: Daniel Schaefer <git@danielschaefer.me>
Cc: Gerd Hoffmann <kraxel@redhat.com>
Signed-off-by: Sunil V L <sunilvl@ventanamicro.com>
Acked-by: Abner Chang <abner.chang@amd.com>
Reviewed-by: Heinrich Schuchardt <heinrich.schuchardt@canonical.com>
Reviewed-by: Andrei Warkentin <andrei.warkentin@intel.com>
Acked-by: Ray Ni <ray.ni@intel.com>
REF: https://bugzilla.tianocore.org/show_bug.cgi?id=4246
In the function InitPaging, NumberOfPml5Entries is calculated by the
code below:
NumberOfPml5Entries = (UINTN)LShiftU64 (1, SizeOfMemorySpace - 48);
If SizeOfMemorySpace is larger than 48, NumberOfPml5Entries will be
larger than 1. However, this doesn't make sense if the hardware
doesn't support 5-level page tables.
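A sketch of the intended guard (Is5LevelPagingNeeded() is the existing
PiSmmCpuDxeSmm helper; the exact placement and variable handling in
InitPaging() may differ):

  if (!Is5LevelPagingNeeded ()) {
    //
    // Without 5-level paging only 48 address bits are usable, so a
    // single (implicit) PML5 entry always suffices.
    //
    SizeOfMemorySpace = MIN (SizeOfMemorySpace, 48);
  }

  NumberOfPml5Entries = 1;
  if (SizeOfMemorySpace > 48) {
    NumberOfPml5Entries = (UINTN)LShiftU64 (1, SizeOfMemorySpace - 48);
  }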
Cc: Gerd Hoffmann <kraxel@redhat.com>
Cc: Rahul Kumar <rahul1.kumar@intel.com>
Reviewed-by: Star Zeng <star.zeng@intel.com>
Reviewed-by: Wu, Jiaxin <jiaxin.wu@intel.com>
Reviewed-by: Ray Ni <ray.ni@intel.com>
Acked-by: Gerd Hoffmann <kraxel@redhat.com>
Signed-off-by: Eric Dong <eric.dong@intel.com>
Signed-off-by: Zhiguang Liu <zhiguang.liu@intel.com>
This reverts commit 73ccde8f6d since it
results in a hang of the IA32 processor and needs further clean-up.
Ref: https://bugzilla.tianocore.org/show_bug.cgi?id=4234
Cc: Eric Dong <eric.dong@intel.com>
Cc: Ray Ni <ray.ni@intel.com>
Cc: Rahul Kumar <rahul1.kumar@intel.com>
Cc: Gerd Hoffmann <kraxel@redhat.com>
Reviewed-by: Laszlo Ersek <lersek@redhat.com>
Signed-off-by: Yuanhao Xie <yuanhao.xie@intel.com>
Commit 0426115b67 ("UefiCpuPkg: Remove unused API in
SmmCpuFeaturesLib.h", 2022-12-21) removed the declaration of the function
SmmCpuFeaturesAllocatePageTableMemory() from the "SmmCpuFeaturesLib.h"
library class header.
Remove the API's (null-)implementation from UefiCpuPkg/SmmCpuFeaturesLib
as well.
Build-tested with:
build -a IA32 -a X64 -b NOOPT -p UefiCpuPkg/UefiCpuPkg.dsc -t GCC5
Cc: Eric Dong <eric.dong@intel.com>
Cc: Gerd Hoffmann <kraxel@redhat.com>
Cc: Rahul Kumar <rahul1.kumar@intel.com>
Cc: Ray Ni <ray.ni@intel.com>
Bugzilla: https://bugzilla.tianocore.org/show_bug.cgi?id=4235
Signed-off-by: Laszlo Ersek <lersek@redhat.com>
Reviewed-by: Ard Biesheuvel <ardb@kernel.org>
Acked-by: Dun Tan <dun.tan@intel.com>
When setting a new page table pool to RO, only disable/enable WP when
Cr0.WP is 1, to fix a potential PF caused by b822be1a20
(UefiCpuPkg/PiSmmCpuDxeSmm: Introduce page table pool mechanism).
With the previous code, if someone wants to modify the page table and
Cr0.WP has been cleared before modifying the page table, Cr0.WP may
be set to 1 again because a new pool may be generated during this
process. Then a page fault may happen.
Signed-off-by: Dun Tan <dun.tan@intel.com>
Cc: Eric Dong <eric.dong@intel.com>
Reviewed-by: Ray Ni <ray.ni@intel.com>
Cc: Rahul Kumar <rahul1.kumar@intel.com>
Simplify the code that sets the memory used by the SMM page table as
RO. Since the memory used by the SMM page table is in the
PageTablePool list, we only need to set all PageTablePool memory as
ReadOnly in the SMM page table itself. Also, we only need to flush
the TLB once after setting all page table pools as read-only.
Signed-off-by: Dun Tan <dun.tan@intel.com>
Cc: Eric Dong <eric.dong@intel.com>
Reviewed-by: Ray Ni <ray.ni@intel.com>
Cc: Rahul Kumar <rahul1.kumar@intel.com>
Remove SmmCpuFeaturesAllocatePageTableMemory from this header file.
This API is not used by the PiSmmCpuDxeSmm driver any more, and no
other files use this API.
Signed-off-by: Dun Tan <dun.tan@intel.com>
Cc: Eric Dong <eric.dong@intel.com>
Reviewed-by: Ray Ni <ray.ni@intel.com>
Cc: Rahul Kumar <rahul1.kumar@intel.com>
Introduce a page table pool mechanism for the SMM page table to simplify
page table memory management and protection. This mechanism has been
used in DxeIpl. The basic idea is to allocate a bunch of contiguous
pages of memory in advance, and all future page table consumption
will happen in those pools instead of general system memory.
Since we have centralized page tables, we only need to mark all page
table pools as RO, instead of searching the page table memory layer by
layer in the SMM page table. Once the current page table pool has been
used up, another memory pool will be allocated, and the new pool will
also be set as RO if the current page table memory has been marked as RO.
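A rough sketch of the pool idea (AllocatePageTableMemory,
PAGE_TABLE_POOL_UNIT_PAGES and the mPageTablePool* variables are
illustrative names only):

  VOID *
  AllocatePageTableMemory (
    IN UINTN  Pages
    )
  {
    VOID  *Buffer;

    //
    // Grow the pool by a fixed number of 4KB-aligned pages when it runs
    // out, instead of calling the general allocator for every page.
    //
    if (Pages > mPageTablePoolFreePages) {
      mPageTablePoolBase      = AllocateAlignedPages (PAGE_TABLE_POOL_UNIT_PAGES, SIZE_4KB);
      mPageTablePoolFreePages = PAGE_TABLE_POOL_UNIT_PAGES;
    }

    Buffer                   = mPageTablePoolBase;
    mPageTablePoolBase       = (UINT8 *)mPageTablePoolBase + EFI_PAGES_TO_SIZE (Pages);
    mPageTablePoolFreePages -= Pages;
    return Buffer;
  }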
Signed-off-by: Dun Tan <dun.tan@intel.com>
Cc: Eric Dong <eric.dong@intel.com>
Reviewed-by: Ray Ni <ray.ni@intel.com>
Cc: Rahul Kumar <rahul1.kumar@intel.com>
During the finalization of MP initialization before booting into the OS,
depending on whether MWAIT is supported or not, AsmRelocateApLoop
places APs in an MWAIT loop or an HLT loop.
Since paging is necessary for long mode, the original implementation of
moving APs to 32-bit was to disable paging to ensure that booting
does not crash.
The current modification creates a page table in reserved memory,
avoiding the mode switch and preventing the memory from being reclaimed
by the OS. This modification is only for 64-bit mode.
More specifically, we keep the AMD logic as the original code flow,
and extract and update the Intel-related code, where the APs stay
in 64-bit mode and run in an MWAIT or HLT loop until the OS wakes them up.
Signed-off-by: Ray Ni <ray.ni@intel.com>
Signed-off-by: Yuanhao Xie <yuanhao.xie@intel.com>
Reviewed-by: Ray Ni <ray.ni@intel.com>
AsmRelocateApLoop is replicated for future Intel Logic Extraction,
further brings AP into 64-bit, and enables paging.
Signed-off-by: Yuanhao Xie <yuanhao.xie@intel.com>
Reviewed-by: Ray Ni <ray.ni@intel.com>
https://bugzilla.tianocore.org/show_bug.cgi?id=4195
1. Updated the GDT table in VTF0 to align with the one in S3Resume2Pei.
   Doing so simplifies the changes needed to enable S3 in 64-bit PEI.
2. Use SwitchStack() between PEI and SMM in the S3 resume path when both
   are in the same execution mode.
3. Transfer from PEI to the OS waking vector by calling SwitchStack() when
   both are in the same execution mode.
4. Removed the debug assertion in S3Resume.c to support 64-bit PEI.
Reviewed-by: Ray Ni <ray.ni@intel.com>
Reviewed-by: Zhiguang Liu <zhiguang.liu@intel.com>
Cc: Chasel Chiu <chasel.chiu@intel.com>
Cc: Nate DeSimone <nathaniel.l.desimone@intel.com>
Cc: Star Zeng <star.zeng@intel.com>
Cc: Ashraf Ali S <ashraf.ali.s@intel.com>
Cc: Chinni B Duggapu <chinni.b.duggapu@intel.com>
Signed-off-by: Ted Kuo <ted.kuo@intel.com>
When built in DEBUG, the code asserts that 5-level paging support is there
when the physical address width is larger than 48.
In a RELEASE build it will just force LA57 to 1 in CR4
even if CPUID(7).ECX[16] says it is not supported.
The hang (in the ASSERT) in DEBUG is not warranted as there are
legal configurations with CPUID(7).ECX[16](==LA57)=0
and with a physical address width larger than 48 (like 52).
This is also supported by this code:
https://github.com/tianocore/edk2/blob/master/UefiCpuPkg/PiSmmCpuDxeSmm/X64/PageTbl.c#L221
There (as long as physical address width is smaller or equal to 52)
any address width above 48 will be reduced to 48 and the
system can and will work without 5LPaging.
The forced setting of LA57 in CR4 (in the absence of LA57 in CPUID(7).ECX)
is a spec violation and should not happen.
Hence the proposed fix:
a) removes the assert.
b) only returns TRUE from Is5LevelPagingNeeded if 5-level paging is
actually supported by the HW.
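A sketch of the resulting check, roughly along the lines of the
Is5LevelPagingNeeded() logic (PhysicalAddressBits is an assumed local;
the exact code in X64/PageTbl.c may differ):

  CPUID_STRUCTURED_EXTENDED_FEATURE_FLAGS_ECX  ExtFeatureEcx;

  AsmCpuidEx (
    CPUID_STRUCTURED_EXTENDED_FEATURE_FLAGS,
    CPUID_STRUCTURED_EXTENDED_FEATURE_FLAGS_SUB_LEAF_INFO,
    NULL, NULL, &ExtFeatureEcx.Uint32, NULL
    );
  //
  // Require both a physical address width above 48 bits and hardware
  // support for 5-level paging (CPUID(7).ECX[16]); never force CR4.LA57.
  //
  return (BOOLEAN)((PhysicalAddressBits > 48) &&
                   (ExtFeatureEcx.Bits.FiveLevelPage == 1));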
Signed-off-by: Robert Guenzel <robert.guenzel@intel.com>
REF: https://bugzilla.tianocore.org/show_bug.cgi?id=4173
Due to the increase in core count, it's hard to reflect the state of all
APs via the AP bitvector in the register. Actually, the SMM CPU
driver doesn't need to check each AP state to know whether all CPUs are
in SMI or not; one alternative method is to check the SMM Delayed &
Blocked AP Count number:
APs in SMI + Blocked Count + Disabled Count >= All supported APs
(code comments explain why it can be > All supported APs)
With the above change, the returned value of "SmmRegSmmEnable" &
"SmmRegSmmDelayed" & "SmmRegSmmBlocked" from SmmCpuFeaturesLib
should be the AP count number within the existing CPU package.
For registers that return the bitvector state,
SmmCpuFeaturesGetSmmRegister() is required to return the count of all
bits, one per logical processor, within the same package.
For registers that return the AP count,
SmmCpuFeaturesGetSmmRegister() is required to return the register value
directly.
v3:
- Refine the coding style
v2:
- Rename "mPackageBspInfo" to "mPackageFirstThreadIndex"
- Clarify the expected value of "SmmRegSmmEnable" & "SmmRegSmmDelayed" &
"SmmRegSmmBlocked" returned from SmmCpuFeaturesLib.
- Thread: https://edk2.groups.io/g/devel/message/96722
v1:
- Thread: https://edk2.groups.io/g/devel/message/96671
Cc: Eric Dong <eric.dong@intel.com>
Reviewed-by: Ray Ni <ray.ni@intel.com>
Cc: Zeng Star <star.zeng@intel.com>
Signed-off-by: Jiaxin Wu <jiaxin.wu@intel.com>
The code changes develop UEFI application and dynamic command for
EfiMpServiceProtocol unit tests based on current UnitTestFramework.
Signed-off-by: Jason Lou <yun.lou@intel.com>
Reviewed-by: Ray Ni <ray.ni@intel.com>
Cc: Eric Dong <eric.dong@intel.com>
Cc: Laszlo Ersek <lersek@redhat.com>
Cc: Rahul Kumar <rahul1.kumar@intel.com>
Reviewed-by: Zhiguang Liu <zhiguang.liu@intel.com>
Reviewed-by: Dun Tan <dun.tan@intel.com>
Move the implementation of EfiMpServiceProtocol unit tests in a separate
function in preparation for developing the UEFI application and dynamic
command for the same unit tests.
Signed-off-by: Jason Lou <yun.lou@intel.com>
Reviewed-by: Ray Ni <ray.ni@intel.com>
Cc: Eric Dong <eric.dong@intel.com>
Cc: Laszlo Ersek <lersek@redhat.com>
Cc: Rahul Kumar <rahul1.kumar@intel.com>
Reviewed-by: Zhiguang Liu <zhiguang.liu@intel.com>
Reviewed-by: Dun Tan <dun.tan@intel.com>
BZ: https://bugzilla.tianocore.org/show_bug.cgi?id=4123
APIs which are defined in CcExitLib.h are added with the CcExit prefix.
This is to make the APIs' name more meaningful.
This change impacts OvmfPkg/UefiCpuPkg.
Cc: Eric Dong <eric.dong@intel.com>
Cc: Ray Ni <ray.ni@intel.com>
Cc: Brijesh Singh <brijesh.singh@amd.com>
Cc: Erdem Aktas <erdemaktas@google.com>
Cc: Gerd Hoffmann <kraxel@redhat.com>
Cc: James Bottomley <jejb@linux.ibm.com>
Cc: Jiewen Yao <jiewen.yao@intel.com>
Cc: Tom Lendacky <thomas.lendacky@amd.com>
Reviewed-by: Jiewen Yao <jiewen.yao@intel.com>
Reviewed-by: Ray Ni <ray.ni@intel.com>
Signed-off-by: Min Xu <min.m.xu@intel.com>
BZ: https://bugzilla.tianocore.org/show_bug.cgi?id=4123
VmgExitLib once was designed to provide interfaces to support #VC handler
and issue VMGEXIT instruction. After TDVF (enable TDX feature in OVMF) is
introduced, this library is updated to support #VE as well. Now the name
of VmgExitLib cannot reflect what the lib does.
This patch renames VmgExitLib to CcExitLib (Cc means Confidential
Computing). This is a simple renaming and there is no logic changes.
After renaming all the VmgExitLib related codes are updated with
CcExitLib. These changes are in OvmfPkg/UefiCpuPkg/UefiPayloadPkg.
Cc: Guo Dong <guo.dong@intel.com>
Cc: Sean Rhodes <sean@starlabs.systems>
Cc: James Lu <james.lu@intel.com>
Cc: Gua Guo <gua.guo@intel.com>
Cc: Eric Dong <eric.dong@intel.com>
Cc: Ray Ni <ray.ni@intel.com>
Cc: Brijesh Singh <brijesh.singh@amd.com>
Cc: Erdem Aktas <erdemaktas@google.com>
Cc: Gerd Hoffmann <kraxel@redhat.com>
Cc: James Bottomley <jejb@linux.ibm.com>
Cc: Jiewen Yao <jiewen.yao@intel.com>
Cc: Tom Lendacky <thomas.lendacky@amd.com>
Reviewed-by: James Lu <james.lu@intel.com>
Reviewed-by: Gua Guo <gua.guo@intel.com>
Reviewed-by: Jiewen Yao <jiewen.yao@intel.com>
Reviewed-by: Ray Ni <ray.ni@intel.com>
Signed-off-by: Min Xu <min.m.xu@intel.com>
REF: https://bugzilla.tianocore.org/show_bug.cgi?id=4140
Some implementations may need to keep the initial reset code
separated out from the rest of the code. This request is to add padding
at the lower 4K region below 4 GB, which will result in having only a few
jmp instructions and data in that region.
Reviewed-by: Ray Ni <ray.ni@intel.com>
Signed-off-by: Duggapu Chinni B <chinni.b.duggapu@intel.com>
BZ# 4093: Abstract SmmCpuFeaturesLib for sharing common code
Remove the header files that are already included in
CpuFeatureLib.h.
Signed-off-by: Abner Chang <abner.chang@amd.com>
Cc: Abdul Lateef Attar <abdattar@amd.com>
Cc: Garrett Kirkendall <garrett.kirkendall@amd.com>
Cc: Paul Grimes <paul.grimes@amd.com>
Cc: Eric Dong <eric.dong@intel.com>
Cc: Ray Ni <ray.ni@intel.com>
Cc: Rahul Kumar <rahul1.kumar@intel.com>
Reviewed-by: Ray Ni <ray.ni@intel.com>
BZ# 4093: Abstract SmmCpuFeaturesLib for sharing common code
This change strips away the code that can be
shared with other archs or vendors from the Intel
implementation and puts it into the common file,
leaving the Intel X86 implementation in
IntelSmmCpuFeaturesLib. It also updates the header
file and INF file.
Signed-off-by: Abner Chang <abner.chang@amd.com>
Cc: Abdul Lateef Attar <abdattar@amd.com>
Cc: Garrett Kirkendall <garrett.kirkendall@amd.com>
Cc: Paul Grimes <paul.grimes@amd.com>
Cc: Eric Dong <eric.dong@intel.com>
Cc: Ray Ni <ray.ni@intel.com>
Cc: Rahul Kumar <rahul1.kumar@intel.com>
Reviewed-by: Ray Ni <ray.ni@intel.com>
BZ# 4093: Abstract SmmCpuFeaturesLib for sharing common code
Rename SmmCpuFeaturesLibCommon.c to
IntelSmmCpuFeaturesLib.c, because it was developed
specifically for the Intel implementation. The code
that can be shared by other archs or vendors
will be stripped away and put into the common
file in the next patch.
Signed-off-by: Abner Chang <abner.chang@amd.com>
Cc: Abdul Lateef Attar <abdattar@amd.com>
Cc: Garrett Kirkendall <garrett.kirkendall@amd.com>
Cc: Paul Grimes <paul.grimes@amd.com>
Cc: Eric Dong <eric.dong@intel.com>
Cc: Ray Ni <ray.ni@intel.com>
Cc: Rahul Kumar <rahul1.kumar@intel.com>
Reviewed-by: Ray Ni <ray.ni@intel.com>
Disable/Restore HpetTimer before and after running the Dxe
CpuExceptionHandlerLib unit test module. During the UnitTest, a
new IDT is initialized for the test. There is no handler for the timer
interrupt in this new IDT. After the test module, HpetTimer does
not work any more since the comparator value register and main
counter value register for the timer no longer match. To fix this issue,
disable/restore HpetTimer before and after the Unit Test if the HpetTimer
driver has been dispatched. We don't need to send an APIC EOI in this
unit test module. When disabling the timer, after RaiseTPL(), if there
is a pending timer interrupt, bit 64 of the Interrupt Request Register
(IRR) will be set to 1 to indicate there is a pending timer
interrupt. After RestoreTPL(), the CPU will handle the pending
interrupt in the IRR. Then TimerInterruptHandler calls SendApicEoi().
Signed-off-by: Dun Tan <dun.tan@intel.com>
Cc: Eric Dong <eric.dong@intel.com>
Reviewed-by: Ray Ni <ray.ni@intel.com>
Cc: Rahul Kumar <rahul1.kumar@intel.com>
Cc: Michael D Kinney <michael.d.kinney@intel.com>
The code changes add unit tests based on current UnitTestFramework.
EdkiiPeiMpServices2PpiPeiUnitTest PEI module is used to test
EdkiiPeiMpServices2Ppi and EfiMpServiceProtocolDxeUnitTest DXE driver is
used to test EfiMpServiceProtocol.
Signed-off-by: Jason Lou <yun.lou@intel.com>
Reviewed-by: Ray Ni <ray.ni@intel.com>
Cc: Eric Dong <eric.dong@intel.com>
Cc: Laszlo Ersek <lersek@redhat.com>
Cc: Rahul Kumar <rahul1.kumar@intel.com>
Add GENERAL_REGISTER.R8/R9 etc. in the EccCheck ExceptionList
of UefiCpuPkg/UefiCpuPkg.ci.yaml to pass the CI EccCheck. R8/R9
in the structure GENERAL_REGISTER of CpuExceptionHandlerTest.h
lead to an EccCheck failure since there are no lower-case characters in
R8/R9/R10 etc.
Signed-off-by: Dun Tan <dun.tan@intel.com>
Cc: Eric Dong <eric.dong@intel.com>
Reviewed-by: Ray Ni <ray.ni@intel.com>
Cc: Rahul Kumar <rahul1.kumar@intel.com>
Add Pei/DxeCpuExceptionHandlerLibUnitTest module in UefiCpuPkg.dsc
Signed-off-by: Dun Tan <dun.tan@intel.com>
Cc: Eric Dong <eric.dong@intel.com>
Reviewed-by: Ray Ni <ray.ni@intel.com>
Cc: Rahul Kumar <rahul1.kumar@intel.com>
The previous change adds a unit test for DxeCpuExceptionHandlerLib
in 64-bit mode. This change creates a PEIM to add a unit test for
PeiCpuExceptionHandlerLib based on the previous change. It can run
in both 32-bit and 64-bit modes.
Signed-off-by: Dun Tan <dun.tan@intel.com>
Cc: Eric Dong <eric.dong@intel.com>
Reviewed-by: Ray Ni <ray.ni@intel.com>
Cc: Rahul Kumar <rahul1.kumar@intel.com>
Add target based unit tests for the DxeCpuExceptionHandlerLib.
A DXE driver is created to test DxeCpuExceptionHandlerLib.
Four test cases are created in this Unit Test module:
a. Test if an exception handler can be registered/unregistered
for a no-error-code exception. In the test case, only a no-error-code
exception is triggered and tested by the INTn instruction (a rough
sketch follows after this list).
b. Test if an exception handler can be registered/unregistered
for GP and PF. In the test case, a GP exception is triggered
and tested by setting CR4_RESERVED_BIT to 1. A PF exception
is triggered by writing to a not-present or RO address.
c. Test if the CpuContext is consistent before and after an exception.
In this test case:
1. Set the Cpu registers to mExpectedContextInHandler before the
exception. 2. Trigger the exception specified by ExceptionType.
3. Store the SystemContext in mActualContextInHandler and set the
SystemContext to mExpectedContextAfterException in the handler.
4. After returning from the exception, store the Cpu registers in
mActualContextAfterException.
The expectation is:
1. Register values in mActualContextInHandler are the same
as the register values in mExpectedContextInHandler.
2. Register values in mActualContextAfterException are the
same as the register values in mExpectedContextAfterException.
d. Test if stack overflow can be captured by CpuStackGuard
in both the BSP and an AP. In this test case, stack overflow is
triggered by a function which calls itself continuously.
This test case triggers stack overflow in both the BSP and an AP.
All APs use the same IDT as the BSP. The expectation is:
1. A PF exception is triggered (leading to a DF if a separate
stack is not prepared for PF) when Rsp <= StackBase + SIZE_4KB,
since [StackBase, StackBase + SIZE_4KB] is marked as not
present in the page table when PcdCpuStackGuard is TRUE.
2. The stack for the PF/DF exception handlers in both the BSP and AP is
successfully switched by InitializeSeparateExceptionStacks.
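For case a, the shape of such a test is roughly the following
(BreakpointHandler, mExceptionSeen and the use of CpuBreakpoint() are
illustrative; the real test drives the vectors through INTn):

  VOID
  EFIAPI
  BreakpointHandler (
    IN EFI_EXCEPTION_TYPE  InterruptType,
    IN EFI_SYSTEM_CONTEXT  SystemContext
    )
  {
    mExceptionSeen = TRUE;
  }

  //
  // Register a handler for a no-error-code exception, trigger it,
  // then unregister by passing a NULL handler.
  //
  Status = RegisterCpuInterruptHandler (EXCEPT_IA32_BREAKPOINT, BreakpointHandler);
  UT_ASSERT_NOT_EFI_ERROR (Status);
  CpuBreakpoint ();
  UT_ASSERT_TRUE (mExceptionSeen);
  Status = RegisterCpuInterruptHandler (EXCEPT_IA32_BREAKPOINT, NULL);
  UT_ASSERT_NOT_EFI_ERROR (Status);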
Signed-off-by: Dun Tan <dun.tan@intel.com>
Cc: Eric Dong <eric.dong@intel.com>
Reviewed-by: Ray Ni <ray.ni@intel.com>
Cc: Rahul Kumar <rahul1.kumar@intel.com>
Support PAE paging for PageTableParse API in CpuPageTableLib.
Signed-off-by: Dun Tan <dun.tan@intel.com>
Cc: Eric Dong <eric.dong@intel.com>
Reviewed-by: Ray Ni <ray.ni@intel.com>
Cc: Rahul Kumar <rahul1.kumar@intel.com>
The PEI instance of the CpuExceptionHandlerLib didn't implement the
RegisterCpuInterruptHandler() API. This patch adds the missing API.
Signed-off-by: Zhiguang Liu <zhiguang.liu@intel.com>
Cc: Eric Dong <eric.dong@intel.com>
Reviewed-by: Ray Ni <ray.ni@intel.com>
Cc: Rahul Kumar <rahul1.kumar@intel.com>
REF: https://bugzilla.tianocore.org/show_bug.cgi?id=4083
In CPU relaxed mode, the value of
mSmmMpSyncData->AllApArrivedWithException is not reset when the BSP
exits SMM. So this patch resets this variable.
Cc: Eric Dong <eric.dong@intel.com>
Reviewed-by: Ray Ni <ray.ni@intel.com>
Signed-off-by: Zhihao Li <zhihao.li@intel.com>
Reviewed-by: Abner Chang <abner.chang@amd.com>
This commit is a code optimization to allow a bigger separate stack size in
ArchSetupExceptionStack. In the previous code logic, CPU_STACK_ALIGNMENT
bytes were wasted if StackTop was already CPU_STACK_ALIGNMENT aligned.
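The point can be shown with the align-down expression (illustrative only):

  //
  // Align StackTop down to CPU_STACK_ALIGNMENT. When StackTop is already
  // aligned this keeps its value instead of dropping a full
  // CPU_STACK_ALIGNMENT worth of usable stack.
  //
  StackTop = (VOID *)((UINTN)StackTop & ~((UINTN)CPU_STACK_ALIGNMENT - 1));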
Signed-off-by: Dun Tan <dun.tan@intel.com>
Cc: Eric Dong <eric.dong@intel.com>
Reviewed-by: Ray Ni <ray.ni@intel.com>
Cc: Rahul Kumar <rahul1.kumar@intel.com>
Reviewed-by: Abner Chang <abner.chang@amd.com>
Run the function that separates exception stacks in parallel for all CPUs
and allocate the buffers together for better performance.
Cc: Eric Dong <eric.dong@intel.com>
Reviewed-by: Ray Ni <ray.ni@intel.com>
Cc: Rahul Kumar <rahul1.kumar@intel.com>
Signed-off-by: Zhiguang Liu <zhiguang.liu@intel.com>
To remove the dependency on a CPU register, 4/8 bytes at the top of the
stack are occupied by CpuMpData. BIST information is also taken care of
here. This modification is only for the PEI phase, since in the DXE phase
CpuMpData is accessed via a global variable.
Signed-off-by: Yuanhao Xie <yuanhao.xie@intel.com>
Cc: Eric Dong <eric.dong@intel.com>
Reviewed-by: Ray Ni <ray.ni@intel.com>
Cc: Rahul Kumar <rahul1.kumar@intel.com>
The API InitializeSeparateExceptionStacks was just changed before, which
makes the struct CPU_EXCEPTION_INIT_DATA an internal definition.
Furthermore, we can even remove the struct to make the code simpler.
Cc: Eric Dong <eric.dong@intel.com>
Reviewed-by: Ray Ni <ray.ni@intel.com>
Cc: Rahul Kumar <rahul1.kumar@intel.com>
Signed-off-by: Zhiguang Liu <zhiguang.liu@intel.com>
When switching the BSP, the old BSP and the new BSP put CR0/CR4 onto the
stack, and put the IDT and GDT registers into a structure. After they
exchange their stacks, they restore these registers. This logic is
currently implemented in assembly code.
This patch aims to reuse the (Save/Restore)VolatileRegisters functions to
replace such assembly code for better code readability.
Cc: Eric Dong <eric.dong@intel.com>
Reviewed-by: Ray Ni <ray.ni@intel.com>
Cc: Rahul Kumar <rahul1.kumar@intel.com>
Signed-off-by: Zhiguang Liu <zhiguang.liu@intel.com>
REF: https://bugzilla.tianocore.org/show_bug.cgi?id=3962
Two SMM variables (mSmrrSupported & mSmmFeatureControlSupported) are global
variables, they control whether the SMRR and SMM Feature Control MSR will
be restored respectively.
To avoid the TOCTOU, add PCD to control SMRR & SmmFeatureControl enable.
Cc: Eric Dong <eric.dong@intel.com>
Reviewed-by: Ray Ni <ray.ni@intel.com>
Cc: Star Zeng <star.zeng@intel.com>
Cc: Michael D Kinney <michael.d.kinney@intel.com>
Signed-off-by: Jiaxin Wu <jiaxin.wu@intel.com>
Currently, when waking up an AP, the IDT table of the AP is set in 16-bit
code, and the IDT table base is assumed to be 32-bit. However, the IDT
table is created by the BSP. An issue will happen if the BSP allocates
memory above 4G for the BSP's IDT table. Moreover, even if the IDT table
location is below 4G, the handler functions inside the IDT table are
64-bit, and they won't take effect until the CPU transfers to 64-bit long
mode. There is no benefit to setting the IDT table in such an early phase.
To avoid such issues, this patch moves the LIDT instruction into the
64-bit code.
Cc: Eric Dong <eric.dong@intel.com>
Reviewed-by: Ray Ni <ray.ni@intel.com>
Cc: Rahul Kumar <rahul1.kumar@intel.com>
Signed-off-by: Zhiguang Liu <zhiguang.liu@intel.com>
Add host based unit tests for the CpuPageTableLib services.
Unit tests focus on the PageTableMap function, containing two kinds of test
cases: manual test cases and random test cases.
Manual test cases create some corner cases to test the function PageTableMap.
Random test cases generate multiple random memory entries (with random
attributes) as the input of the function PageTableMap to get the output
page table. The output page table will be validated and parsed to get
output memory entries, and then the input and output memory entries will
be compared to verify the functionality.
The unit test is not perfect yet. There are options for the random tests;
some of them control the test coverage, and some options are not ready.
They will be enhanced in the future.
Cc: Eric Dong <eric.dong@intel.com>
Reviewed-by: Ray Ni <ray.ni@intel.com>
Cc: Rahul Kumar <rahul1.kumar@intel.com>
Signed-off-by: Zhiguang Liu <zhiguang.liu@intel.com>
This reverts commit 2812668bfc for tag202208.
This feature will be merged after stable tag 202208 is created.
Signed-off-by: Liming Gao <gaoliming@byosoft.com.cn>
Reviewed-by: Zhiguang Liu <zhiguang.liu@intel.com>
Acked-by: Ard Biesheuvel <ardb@kernel.org>
Add host based unit tests for the CpuPageTableLib services.
Unit tests focus on the PageTableMap function, containing two kinds of test
cases: manual test cases and random test cases.
Manual test cases create some corner cases to test the function PageTableMap.
Random test cases generate multiple random memory entries (with random
attributes) as the input of the function PageTableMap to get the output
page table. The output page table will be validated and parsed to get
output memory entries, and then the input and output memory entries will
be compared to verify the functionality.
The unit test is not perfect yet. There are options for the random tests;
some of them control the test coverage, and some options are not ready.
They will be enhanced in the future.
Cc: Eric Dong <eric.dong@intel.com>
Reviewed-by: Ray Ni <ray.ni@intel.com>
Cc: Rahul Kumar <rahul1.kumar@intel.com>
Signed-off-by: Zhiguang Liu <zhiguang.liu@intel.com>
This patch is code refactoring and doesn't change any functionality.
Remove mInternalCr3 in the PiSmmCpuDxeSmm page-table-related code. In the
previous code, mInternalCr3 was used to pass the address of a page table,
which may differ from the Cr3 register, between different levels of the
SetMemoryAttributes functions. Now remove it and pass the page table base
address from the root function parameter to simplify the code logic.
Signed-off-by: Dun Tan <dun.tan@intel.com>
Cc: Eric Dong <eric.dong@intel.com>
Cc: Rahul Kumar <rahul1.kumar@intel.com>
Reviewed-by: Ray Ni <ray.ni@intel.com>
This patch is code refactoring and doesn't change any functionality.
Add a new mIsShadowStack flag to identify whether the current memory is
shadow stack. The previous SMM code logic regards a RO range as shadow
stack and sets the dirty bit in the corresponding page table entry if
mInternalCr3 is not 0, which may be confusing.
Signed-off-by: Dun Tan <dun.tan@intel.com>
Cc: Eric Dong <eric.dong@intel.com>
Cc: Rahul Kumar <rahul1.kumar@intel.com>
Reviewed-by: Ray Ni <ray.ni@intel.com>
The change doesn't change functionality behavior.
Signed-off-by: Ray Ni <ray.ni@intel.com>
Cc: Zhiguang Liu <zhiguang.liu@intel.com>
Reviewed-by: Eric Dong <eric.dong@intel.com>
With following paging structure to map
[2M-4K, 2M] as P = 1, RW = 0,
[2M, 4M] as P = 1, RW = 1:
PML4[0] -> PDPTE[0] -> PDE[0](RW = 0) -> PTE[255](P = 0, RW = 0)
-> PDE[1](RW = 1)
When a new request to map [2M-4K, 2M+4K] as P = 1, RW = 1,
CpuPageTableMap() wrongly requests 4K buffer size for the new mapping
request.
But in fact, for [2M-4K, 2M] request, PTE[255] can be changed in place,
for [2M, 2M+4K], no change is needed because PDE[1].RW = 1 already.
The change fixes the bug.
Signed-off-by: Ray Ni <ray.ni@intel.com>
Signed-off-by: Zhiguang Liu <zhiguang.liu@intel.com>
Reviewed-by: Eric Dong <eric.dong@intel.com>
With the following paging structure that maps [0, 2G] with ReadWrite
bit set.
PML4[0] --> PDPTE[0] --> PDE[0-255]
\-> PDPTE[1] --> PDE[0-255]
If ReadWrite bit is cleared in PML4[0] and PageTableMap() is called
to change [0, 2M] as read-only, today's logic unnecessarily changes
the paging structure in 2 aspects:
1. When setting the PageTableBaseAddress in the entry, the code clears
all attributes.
2. Even if the ReadWrite bit in the parent entry is not set, the code
clears the ReadWrite bit in the leaf entry.
The first change is wrong. It should not change other attributes when
setting the PA.
The second change is unnecessary. Because the parent entry already
declares the whole region as read-only, there is no need to clear the
ReadWrite bit in the leaf entry again.
Signed-off-by: Zhiguang Liu <zhiguang.liu@intel.com>
Signed-off-by: Ray Ni <ray.ni@intel.com>
Reviewed-by: Eric Dong <eric.dong@intel.com>
With the following paging structure that maps [0, 2G] with ReadWrite
bit set.
PML4[0] --> PDPTE[0] --> PDE[0-255]
\-> PDPTE[1] --> PDE[0-255]
If ReadWrite bit is cleared in PML4[0] and PageTableMap() is called
to change [0, 2M] as writable, today's logic doesn't inherit the
parent entry's attributes when determining the child entry's
attributes. It just sets the PDPTE[0].PDE[0].ReadWrite bit.
But since the PML4[0].ReadWrite is 0, [0, 2M] is still read-only.
The change fixes the bug.
If the inheritable attributes in ParentPagingEntry conflicts with the
requested attributes, let the child entries take the parent attributes
and loosen the attribute in the parent entry.
E.g.: when PDPTE[0].ReadWrite = 0 but the caller wants to map [0, 2MB] as
ReadWrite = 1 (PDE[0].ReadWrite = 1), we need to change
PDPTE[0].ReadWrite to 1 and let all PDE[0-255].ReadWrite = 0 first.
Then change PDE[0].ReadWrite to 1.
Signed-off-by: Zhiguang Liu <zhiguang.liu@intel.com>
Signed-off-by: Ray Ni <ray.ni@intel.com>
Reviewed-by: Eric Dong <eric.dong@intel.com>
Today's logic wrongly treats the non-leaf entry as leaf entry and
updates its paging attributes.
The patch fixes the bug to only update paging attributes for
non-present entries or leaf entries.
Signed-off-by: Ray Ni <ray.ni@intel.com>
Signed-off-by: Zhiguang Liu <zhiguang.liu@intel.com>
Reviewed-by: Eric Dong <eric.dong@intel.com>
When PageTableMap() is called to create non 1:1 mapping
such as [0, 1G) to [8K, 1G+8K), it should split the page entry to the
4K page level, but old logic has a bug that it just uses 1G page
entry.
The patch fixes the bug.
Signed-off-by: Zhiguang Liu <zhiguang.liu@intel.com>
Reviewed-by: Ray Ni <ray.ni@intel.com>
Reviewed-by: Eric Dong <eric.dong@intel.com>
The patch replaces
LinearAddress + Offset == RegionStart
with
((LinearAddress + Offset) & RegionMask) == 0
The replace should not cause any behavior change.
Because:
1. In the first loop of the while, when LinearAddress + Offset == RegionStart,
because the lower "BitStart" bits of RegionStart are all zero,
the lower "BitStart" bits of (LinearAddress + Offset) are also all zero.
Because RegionMask has only its lower "BitStart" bits set (the higher
bits are all zero), ((LinearAddress + Offset) & RegionMask) == 0.
2. In the following loops of the while, even though RegionStart is increased
by RegionLength, its lower "BitStart" bits are still all zero.
So the two expressions remain semantically equal to each other.
Signed-off-by: Ray Ni <ray.ni@intel.com>
Cc: Zhiguang Liu <zhiguang.liu@intel.com>
Reviewed-by: Eric Dong <eric.dong@intel.com>
When LinearAddress or Length is not aligned on 4KB, PageTableMap()
should return Invalid Parameter.
Signed-off-by: Zhiguang Liu <zhiguang.liu@intel.com>
Reviewed-by: Ray Ni <ray.ni@intel.com>
Reviewed-by: Eric Dong <eric.dong@intel.com>
The lib includes two APIs:
* PageTableMap
It creates/updates mapping from LA to PA.
The implementation only supports the paging structures used in 64-bit
mode now. PAE paging structure support will be added in the future.
* PageTableParse
It parses the page table and returns the mapping relations in an
array of IA32_MAP_ENTRY.
It passed some stress tests. The test code will be upstreamed in
other patches following the edk2 Unit Test framework.
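A hedged usage sketch of the two-call pattern (the parameter list shown
here is approximate and return-status handling is omitted; see
CpuPageTableLib.h for the authoritative prototype):

  IA32_MAP_ATTRIBUTE  MapAttribute;
  IA32_MAP_ATTRIBUTE  MapMask;
  UINTN               PageTable;
  UINTN               BufferSize;
  VOID                *Buffer;

  MapAttribute.Uint64         = 0;
  MapAttribute.Bits.Present   = 1;
  MapAttribute.Bits.ReadWrite = 1;
  MapMask.Uint64              = MAX_UINT64;

  //
  // First call with no buffer to learn how much page-table memory is
  // needed, then allocate that much and call again to build the mapping.
  //
  PageTable  = 0;
  BufferSize = 0;
  PageTableMap (&PageTable, Paging4Level, NULL, &BufferSize,
                0, SIZE_1GB, &MapAttribute, &MapMask);
  Buffer = AllocatePages (EFI_SIZE_TO_PAGES (BufferSize));
  PageTableMap (&PageTable, Paging4Level, Buffer, &BufferSize,
                0, SIZE_1GB, &MapAttribute, &MapMask);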
Signed-off-by: Ray Ni <ray.ni@intel.com>
Reviewed-by: Eric Dong <eric.dong@intel.com>
CPU_EXCEPTION_INIT_DATA is now an internal implementation detail of
CpuExceptionHandlerLib. The union can be removed since Ia32 and X64 have
the same definition. Also, two fields (Revision and InitDefaultHandlers)
are useless and can be removed.
Cc: Eric Dong <eric.dong@intel.com>
Reviewed-by: Ray Ni <ray.ni@intel.com>
Cc: Rahul Kumar <rahul1.kumar@intel.com>
Signed-off-by: Zhiguang Liu <zhiguang.liu@intel.com>
Since the API InitializeSeparateExceptionStacks is simplified and doesn't
use the struct CPU_EXCEPTION_INIT_DATA, CPU_EXCEPTION_INIT_DATA becomes
an internal implementation detail of CpuExceptionHandlerLib.
Cc: Eric Dong <eric.dong@intel.com>
Reviewed-by: Ray Ni <ray.ni@intel.com>
Cc: Rahul Kumar <rahul1.kumar@intel.com>
Cc: Leif Lindholm <quic_llindhol@quicinc.com>
Cc: Dandan Bi <dandan.bi@intel.com>
Cc: Liming Gao <gaoliming@byosoft.com.cn>
Cc: Jian J Wang <jian.j.wang@intel.com>
Signed-off-by: Zhiguang Liu <zhiguang.liu@intel.com>
Hide the exception implementation details in CpuExceptionHandlerLib;
the caller only needs to provide a buffer.
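Assuming the buffer-based prototype this change introduces reports the
required size via EFI_BUFFER_TOO_SMALL, a caller would look roughly like
this (error handling trimmed):

  EFI_STATUS  Status;
  UINTN       BufferSize;
  VOID        *Buffer;

  //
  // Ask the library how much memory it needs for the separate stacks,
  // then hand it a buffer; the internal layout stays hidden inside
  // CpuExceptionHandlerLib.
  //
  BufferSize = 0;
  Status     = InitializeSeparateExceptionStacks (NULL, &BufferSize);
  if (Status == EFI_BUFFER_TOO_SMALL) {
    Buffer = AllocateZeroPool (BufferSize);
    Status = InitializeSeparateExceptionStacks (Buffer, &BufferSize);
  }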
Cc: Eric Dong <eric.dong@intel.com>
Reviewed-by: Ray Ni <ray.ni@intel.com>
Cc: Rahul Kumar <rahul1.kumar@intel.com>
Cc: Leif Lindholm <quic_llindhol@quicinc.com>
Cc: Dandan Bi <dandan.bi@intel.com>
Cc: Liming Gao <gaoliming@byosoft.com.cn>
Cc: Jian J Wang <jian.j.wang@intel.com>
Cc: Ard Biesheuvel <ardb+tianocore@kernel.org>
Reviewed-by: Sami Mujawar <sami.mujawar@arm.com>
Signed-off-by: Zhiguang Liu <zhiguang.liu@intel.com>
Currently, "push byte %[Vector]" causes nasm warning when Vector is larger
than 0x7F. This is because push accepts a signed value, and byte means
signed int8. Maximum signed int8 is 0x7F.
When Vector is larger the 0x7F, for example, when Vector is 255, byte 255
turns to -1, and causes the warning "signed byte value exceeds".
To avoid such warning, use dword instead of byte, this will increase 3 bytes
for each IdtVector.
For IA32, the size of IdtVector will increase from 10 bytes to 13 bytes.
For X64, the size of IdtVector will increase from 15 bytes to 18 bytes.
Cc: Eric Dong <eric.dong@intel.com>
Cc: Ray Ni <ray.ni@intel.com>
Cc: Rahul Kumar <rahul1.kumar@intel.com>
Cc: Debkumar De <debkumar.de@intel.com>
Cc: Harry Han <harry.han@intel.com>
Cc: Catharine West <catharine.west@intel.com>
Reviewed-by: Ray Ni <ray.ni@intel.com>
Signed-off-by: Zhiguang Liu <zhiguang.liu@intel.com>
REF:https://bugzilla.tianocore.org/show_bug.cgi?id=3957
The reserved IDT table size in SecCore is too small for X64. Changed the type
of IdtTable in SEC_IDT_TABLE from UINT64 to IA32_IDT_GATE_DESCRIPTOR to have
sufficient size reserved in IdtTable for X64.
Cc: Chasel Chiu <chasel.chiu@intel.com>
Cc: Nate DeSimone <nathaniel.l.desimone@intel.com>
Cc: Ray Ni <ray.ni@intel.com>
Cc: Ashraf Ali S <ashraf.ali.s@intel.com>
Cc: Debkumar De <debkumar.de@intel.com>
Cc: Harry Han <harry.han@intel.com>
Cc: Catharine West <catharine.west@intel.com>
Signed-off-by: Ted Kuo <ted.kuo@intel.com>
Reviewed-by: Ray Ni <ray.ni@intel.com>
Add debug messages to make it easier to verify PlatformSecLib
is passing the data properly.
Reviewed-by: Eric Dong <eric.dong@intel.com>
Cc: Ray Ni <ray.ni@intel.com>
Cc: Rahul Kumar <rahul1.kumar@intel.com>
Cc: Debkumar De <debkumar.de@intel.com>
Cc: Harry Han <harry.han@intel.com>
Cc: Catharine West <catharine.west@intel.com>
Signed-off-by: Isaac Oram <isaac.w.oram@intel.com>
A memory range can be submitted for attribute changes which is large
enough to not require a page split during the attribute update. Consider
the following scenario:
1. An attribute update removed the RW attribute on a range large enough
to not require a page split.
2. Later, an attribute update is called to re-add the RW attribute for
a subsection of that larger page, which requires a split
3. The attribute update logic performs a page split, so now the parent
and child pages have matching attributes
4. Then, the attribute update logic changes the child page to have the
RW attribute.
5. The child page would then correctly have the RW attribute added but
the parent page would still have the RW attribute removed which will
cause an improper access violation.
The page being split should have loose attributes to accommodate the
above case. The split page should always have the attributes set so
the lowest level page frame determines the access rights as detailed
in 4.10.2.2 of the Intel 64 and IA-32 Architectures Software
Developer Manual. Setting the User/Supervisor attribute shouldn't
be necessary.
Cc: Eric Dong <eric.dong@intel.com>
Reviewed-by: Ray Ni <ray.ni@intel.com>
Cc: Rahul Kumar <rahul1.kumar@intel.com>
Signed-off-by: Taylor Beebe <t@taylorbeebe.com>
The AP vector consists of 2 parts:
1. the initial 16-bit code that should be under 1MB and page aligned.
2. the 32-bit/64-bit code that can be anywhere in memory with any
alignment.
Part #2 is needed because the memory under 1MB is temporarily
"stolen" for use and is given back after all APs wake up. That range
of memory is not marked as a code page in the page table. The CPU may
trigger an exception as soon as NX is enabled.
The part #2 memory allocation can be done in MpInitLibInitialize.
Signed-off-by: Ray Ni <ray.ni@intel.com>
Reviewed-by: Eric Dong <eric.dong@intel.com>
Today's implementation allocates below-1MB memory for the 16-bit, 32-bit
and 64-bit code.
But that's not necessary since the 32-bit and 64-bit code now runs at high
memory in both the PEI and DXE phases.
The patch simplifies the logic by removing the code that handles the
case when WakeupBufferHigh is 0.
It also reduces the memory footprint under 1MB by allocating
memory for the 16-bit code only.
MP_CPU_EXCHANGE_INFO is still under 1MB, immediately
after the 16-bit code.
Signed-off-by: Ray Ni <ray.ni@intel.com>
Reviewed-by: Eric Dong <eric.dong@intel.com>
The patch does several simplifications:
1. Treat SwitchToRealProc as part of RendezvousFunnelProc.
So the common logic in MpLib.c doesn't need to be aware of
SwitchToRealProc.
As a result, SwitchToRealSize/Offset are removed from
MP_ASSEMBLY_ADDRESS_MAP.
2. Move SwitchToRealProc to AmdSev.nasm.
All other assembly code in AmdSev.nasm is called through
OneTimeCall.
Signed-off-by: Ray Ni <ray.ni@intel.com>
Reviewed-by: Eric Dong <eric.dong@intel.com>
Reviewed-by: Tom Lendacky <thomas.lendacky@amd.com>
Tested-by: Tom Lendacky <thomas.lendacky@amd.com>
Cc: Rahul Kumar <rahul1.kumar@intel.com>
Cc: Michael Roth <michael.roth@amd.com>
Cc: James Bottomley <jejb@linux.ibm.com>
Cc: Min Xu <min.m.xu@intel.com>
Cc: Jiewen Yao <jiewen.yao@intel.com>
Cc: Jordan Justen <jordan.l.justen@intel.com>
Cc: Ard Biesheuvel <ardb+tianocore@kernel.org>
Cc: Erdem Aktas <erdemaktas@google.com>
Cc: Gerd Hoffmann <kraxel@redhat.com>
global in a NASM file is used for symbols that are
referenced in C files.
Remove the unneeded global keyword in the NASM file.
Signed-off-by: Ray Ni <ray.ni@intel.com>
Reviewed-by: Eric Dong <eric.dong@intel.com>
Today's implementation assumes the PEI phase runs at 32-bit so
the execution-disable feature is not applicable.
That's not always TRUE.
The patch allocates the 32-bit & 64-bit code buffer for the PEI phase as well.
Signed-off-by: Ray Ni <ray.ni@intel.com>
Reviewed-by: Eric Dong <eric.dong@intel.com>
Today InitializeCpuExceptionHandlersEx is called from three modules:
1. DxeCore (links to DxeCpuExceptionHandlerLib)
DxeCore expects it initializes the IDT entries as well as
assigning separate stacks for #DF and #PF.
2. CpuMpPei (links to PeiCpuExceptionHandlerLib)
and CpuDxe (links to DxeCpuExceptionHandlerLib)
It's called for each thread for only assigning separate stacks for
#DF and #PF. The IDT entries initialization is skipped because
caller sets InitData->X64.InitDefaultHandlers to FALSE.
Additionally, SecPeiCpuExceptionHandlerLib, SmmCpuExceptionHandlerLib
also implement such API and the behavior of the API is simply to initialize
IDT entries only.
Because it mixes the IDT entries initialization and separate stacks
assignment for certain exception handlers together, in order to know
whether the function call only initializes IDT entries, or assigns stacks,
we need to check:
1. value of InitData->X64.InitDefaultHandlers
2. library instance
This patch cleans up the code to separate the stack assignment into a new
API: InitializeSeparateExceptionStacks().
Only when the caller calls the new API are the separate stacks assigned.
With this change, the SecPei and Smm instances can return unsupported,
which gives the caller a very clear status.
The old API InitializeCpuExceptionHandlersEx() is removed in this patch.
Because no platform module consumes the old API, the impact is none.
Signed-off-by: Ray Ni <ray.ni@intel.com>
Cc: Eric Dong <eric.dong@intel.com>
Cc: Jian J Wang <jian.j.wang@intel.com>
InitializeCpuExceptionHandlers() expects caller allocates IDT while
InitializeCpuInterruptHandlers() allocates 256 IDT entries itself.
InitializeCpuExceptionHandlers() fills max 32 IDT entries allocated
by caller. If caller allocates 10 entries, the API just fills 10 IDT
entries.
The inconsistency between the two APIs makes the code hard to
understand and hard to share.
Because there is only one caller (CpuDxe) of
InitializeCpuInterruptHandlers(), this patch updates the CpuDxe driver
to allocate 256 IDT entries and then call
InitializeCpuExceptionHandlers().
This is also a backward compatible change.
With this change, InitializeCpuInterruptHandlers() is removed
completely.
And InitializeCpuExceptionHandlers() fills a maximum of 32 entries for
the PEI and SMM instances, and a maximum of 256 entries for the DXE
instance.
Such behavior matches the original one.
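A condensed sketch of what the CpuDxe side now does (simplified; the real
driver also preserves more state and handles error paths):

  EFI_STATUS                Status;
  IA32_DESCRIPTOR           OldIdtr;
  IA32_DESCRIPTOR           NewIdtr;
  IA32_IDT_GATE_DESCRIPTOR  *IdtTable;

  //
  // Allocate room for all 256 vectors, copy whatever the previous phase
  // installed, and point IDTR at the new table before filling it in.
  //
  AsmReadIdtr (&OldIdtr);
  IdtTable = AllocateZeroPool (sizeof (IA32_IDT_GATE_DESCRIPTOR) * 256);
  CopyMem (IdtTable, (VOID *)OldIdtr.Base, OldIdtr.Limit + 1);

  NewIdtr.Base  = (UINTN)IdtTable;
  NewIdtr.Limit = (UINT16)(sizeof (IA32_IDT_GATE_DESCRIPTOR) * 256 - 1);
  AsmWriteIdtr (&NewIdtr);

  Status = InitializeCpuExceptionHandlers (NULL);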
Signed-off-by: Ray Ni <ray.ni@intel.com>
Cc: Eric Dong <eric.dong@intel.com>
Additionally removed two useless global variables:
"SPIN_LOCK mDisplayMessageSpinLock" from SMM instance.
"UINTN mEnabledInterruptNum" from DXE instance.
Signed-off-by: Ray Ni <ray.ni@intel.com>
Cc: Eric Dong <eric.dong@intel.com>
Today the DXE instance allocates code page and then copies the IDT
vectors to the allocated code page. Then it fixes up the vector number
in the IDT vector.
But if we update the NASM file to generate 256 IDT vectors, there is
no need to do the copy and fix-up.
A side effect is 4096 bytes (HOOKAFTER_STUB_SIZE * 256) is used for
256 IDT vectors while 32 IDT vectors only require 512 bytes without
this change, in following library instances:
1. 32bit SecPeiCpuExceptionHandlerLib and PeiCpuExceptionHandlerLib
2. 64bit PeiCpuExceptionHandlerLib
But considering the code logic simplification, 3.5K extra space is
not a big deal.
If 3.5K is too much, we can enhance the code further to generate 32
vectors for above mentioned library instances.
Signed-off-by: Ray Ni <ray.ni@intel.com>
Reviewed-by: Jian J Wang <jian.j.wang@intel.com>
Acked-by: Eric Dong <eric.dong@intel.com>
BZ: https://bugzilla.tianocore.org/show_bug.cgi?id=3918
This reverts commit 88da06ca76.
This commit triggers the ASSERT in Non-Td guest.
Cc: Eric Dong <eric.dong@intel.com>
Cc: Ray Ni <ray.ni@intel.com>
Cc: Brijesh Singh <brijesh.singh@amd.com>
Cc: Erdem Aktas <erdemaktas@google.com>
Cc: James Bottomley <jejb@linux.ibm.com>
Cc: Jiewen Yao <jiewen.yao@intel.com>
Cc: Tom Lendacky <thomas.lendacky@amd.com>
Cc: Gerd Hoffmann <kraxel@redhat.com>
Signed-off-by: Min Xu <min.m.xu@intel.com>
Tested-by: Tom Lendacky <thomas.lendacky@amd.com>
Acked-by: Gerd Hoffmann <kraxel@redhat.com>
Reviewed-by: Ray Ni <ray.ni@intel.com>
REF: https://bugzilla.tianocore.org/show_bug.cgi?id=3912
UefiCpuPkg defines a new Protocol with the new service
SmmWaitForAllProcessor(), which can be used by SMI handlers
to optionally wait for other APs to complete the SMM rendezvous in
relaxed AP mode.
The VariableSmm and VariableStandaloneMM drivers in MdeModulePkg need
to use this service but MdeModulePkg can't depend on UefiCpuPkg.
Thus, the solution is moving SmmCpuRendezvousLib.h from UefiCpuPkg
to MdePkg and creating a Null instance of SmmCpuRendezvousLib
in MdePkg as a dependency for packages that can't
depend on UefiCpuPkg.
Cc: Michael D Kinney <michael.d.kinney@intel.com>
Cc: Liming Gao <gaoliming@byosoft.com.cn>
Cc: Eric Dong <eric.dong@intel.com>
Cc: Ray Ni <ray.ni@intel.com>
Cc: Michael Kubacki <mikuback@linux.microsoft.com>
Cc: Siyuan Fu <siyuan.fu@intel.com>
Signed-off-by: Zhihao Li <zhihao.li@intel.com>
Acked-by: Liming Gao <gaoliming@byosoft.com.cn>
There are two libraries, MdePkg/CpuLib and UefiCpuPkg/UefiCpuLib, and
UefiCpuPkg/UefiCpuLib will be merged into MdePkg/CpuLib. To avoid build
failures, add a CpuLib dependency to all modules that depend on UefiCpuLib.
Cc: Eric Dong <eric.dong@intel.com>
Cc: Ray Ni <ray.ni@intel.com>
Cc: Rahul Kumar <rahul1.kumar@intel.com>
Signed-off-by: Yu Pu <yu.pu@intel.com>
Reviewed-by: Ray Ni <ray.ni@intel.com>
BZ: https://bugzilla.tianocore.org/show_bug.cgi?id=3711
Per the SDM, changing the mode of the APIC timer (from one-shot to periodic
or vice versa) by writing to the timer LVT entry does not start the timer.
To start the timer, it is necessary to write to the initial-count
register.
If the initial-count is written before the mode change, it's possible that
the timer expires before the mode change, thus breaking periodic mode.
Cc: Jiewen Yao <jiewen.yao@intel.com>
Cc: Gerd Hoffmann <kraxel@redhat.com>
Cc: Anthony Perard <anthony.perard@citrix.com>
Cc: Julien Grall <julien@xen.org>
Cc: Eric Dong <eric.dong@intel.com>
Cc: Ray Ni <ray.ni@intel.com>
Acked-by: Gerd Hoffmann <kraxel@redhat.com>
Reviewed-by: Ray Ni <ray.ni@intel.com>
Reviewed-by: Jiewen Yao <jiewen.yao@intel.com>
Signed-off-by: Min Xu <min.m.xu@intel.com>
RFC: https://bugzilla.tianocore.org/show_bug.cgi?id=3429
The MMIO region in a Tdx guest is set with PcdTdxSharedBitMask in TdxDxe's
entry point. In a SEV guest the page table entries are set with
PcdPteMemoryEncryptionAddressOrMask when creating the 1:1 identity table.
So the AddressEncMask in GetPageTableEntry (@CpuPageTable.c) is either
PcdPteMemoryEncryptionAddressOrMask (in a SEV guest), or
PcdTdxSharedBitMask (in a TDX guest), or all-0 (in a legacy guest).
Cc: Brijesh Singh <brijesh.singh@amd.com>
Cc: Erdem Aktas <erdemaktas@google.com>
Cc: James Bottomley <jejb@linux.ibm.com>
Cc: Jiewen Yao <jiewen.yao@intel.com>
Cc: Tom Lendacky <thomas.lendacky@amd.com>
Cc: Eric Dong <eric.dong@intel.com>
Cc: Ray Ni <ray.ni@intel.com>
Cc: Rahul Kumar <rahul1.kumar@intel.com>
Cc: Gerd Hoffmann <kraxel@redhat.com>
Acked-by: Gerd Hoffmann <kraxel@redhat.com>
Reviewed-by: Ray Ni <ray.ni@intel.com>
Reviewed-by: Jiewen Yao <jiewen.yao@intel.com>
Signed-off-by: Min Xu <min.m.xu@intel.com>
RFC: https://bugzilla.tianocore.org/show_bug.cgi?id=3429
In TDVF the BSP and APs are simplified. The BSP is vCPU-0, while the others
are treated as APs.
So MP initialization is rather simple. ApWorker is not supported, the BSP is
always the working processor, while the APs are just in a
wait-for-procedure state.
Cc: Brijesh Singh <brijesh.singh@amd.com>
Cc: Erdem Aktas <erdemaktas@google.com>
Cc: James Bottomley <jejb@linux.ibm.com>
Cc: Jiewen Yao <jiewen.yao@intel.com>
Cc: Tom Lendacky <thomas.lendacky@amd.com>
Cc: Eric Dong <eric.dong@intel.com>
Cc: Ray Ni <ray.ni@intel.com>
Cc: Rahul Kumar <rahul1.kumar@intel.com>
Cc: Gerd Hoffmann <kraxel@redhat.com>
Acked-by: Gerd Hoffmann <kraxel@redhat.com>
Reviewed-by: Ray Ni <ray.ni@intel.com>
Reviewed-by: Jiewen Yao <jiewen.yao@intel.com>
Signed-off-by: Min Xu <min.m.xu@intel.com>
RFC: https://bugzilla.tianocore.org/show_bug.cgi?id=3429
MSRs are accessed in BaseXApicX2ApicLib. In TDX some MSRs are accessed
directly from/to the CPU. Some should be accessed via explicit requests
from the host VMM using TDCALL(TDG.VP.VMCALL). This is done with the
help of TdxLib.
Please refer to [TDX] Section 18.1
TDX: https://software.intel.com/content/dam/develop/external/us/en/
documents/tdx-module-1.0-public-spec-v0.931.pdf
Cc: Eric Dong <eric.dong@intel.com>
Cc: Ray Ni <ray.ni@intel.com>
Cc: Rahul Kumar <rahul1.kumar@intel.com>
Cc: Brijesh Singh <brijesh.singh@amd.com>
Cc: Erdem Aktas <erdemaktas@google.com>
Cc: James Bottomley <jejb@linux.ibm.com>
Cc: Jiewen Yao <jiewen.yao@intel.com>
Cc: Tom Lendacky <thomas.lendacky@amd.com>
Cc: Gerd Hoffmann <kraxel@redhat.com>
Acked-by: Gerd Hoffmann <kraxel@redhat.com>
Reviewed-by: Ray Ni <ray.ni@intel.com>
Reviewed-by: Jiewen Yao <jiewen.yao@intel.com>
Signed-off-by: Min Xu <min.m.xu@intel.com>
RFC: https://bugzilla.tianocore.org/show_bug.cgi?id=3429
Add base support to handle #VE exceptions. Update the common exception
handlers to invoke the VmTdExitHandleVe () function of the VmgExitLib
library when a #VE is encountered. A non-zero return code will propagate
to the targeted exception handler.
Cc: Brijesh Singh <brijesh.singh@amd.com>
Cc: Erdem Aktas <erdemaktas@google.com>
Cc: James Bottomley <jejb@linux.ibm.com>
Cc: Jiewen Yao <jiewen.yao@intel.com>
Cc: Tom Lendacky <thomas.lendacky@amd.com>
Cc: Eric Dong <eric.dong@intel.com>
Cc: Ray Ni <ray.ni@intel.com>
Cc: Rahul Kumar <rahul1.kumar@intel.com>
Cc: Gerd Hoffmann <kraxel@redhat.com>
Acked-by: Gerd Hoffmann <kraxel@redhat.com>
Reviewed-by: Ray Ni <ray.ni@intel.com>
Reviewed-by: Jiewen Yao <jiewen.yao@intel.com>
Signed-off-by: Min Xu <min.m.xu@intel.com>
RFC: https://bugzilla.tianocore.org/show_bug.cgi?id=3429
VmgExitLib performs the necessary processing to handle a #VC exception.
VmgExitLibNull is a NULL instance of VmgExitLib which provides a
default limited interface. In this commit VmgExitLibNull is extended to
handle a #VE exception with a default limited interface. A full feature
version of #VE handler will be created later.
Cc: Brijesh Singh <brijesh.singh@amd.com>
Cc: Erdem Aktas <erdemaktas@google.com>
Cc: James Bottomley <jejb@linux.ibm.com>
Cc: Jiewen Yao <jiewen.yao@intel.com>
Cc: Tom Lendacky <thomas.lendacky@amd.com>
Cc: Eric Dong <eric.dong@intel.com>
Cc: Ray Ni <ray.ni@intel.com>
Cc: Rahul Kumar <rahul1.kumar@intel.com>
Cc: Gerd Hoffmann <kraxel@redhat.com>
Acked-by: Gerd Hoffmann <kraxel@redhat.com>
Reviewed-by: Ray Ni <ray.ni@intel.com>
Reviewed-by: Jiewen Yao <jiewen.yao@intel.com>
Signed-off-by: Min Xu <min.m.xu@intel.com>
REF:https://bugzilla.tianocore.org/show_bug.cgi?id=3870
The new algorithm searches FFS3 GUID first and then FFS2 GUID at
every 4KB address in the top 16MB just below 4GB.
Reviewed-by: Ray Ni <ray.ni@intel.com>
Cc: Debkumar De <debkumar.de@intel.com>
Cc: Harry Han <harry.han@intel.com>
Cc: Catharine West <catharine.west@intel.com>
Reviewed-by: Min Xu <min.m.xu@intel.com>
Signed-off-by: Ted Kuo <ted.kuo@intel.com>
REF:https://bugzilla.tianocore.org/show_bug.cgi?id=3862
The new algorithm searches BFV address with FFS3 GUID first.
If not found, it will search BFV address with FFS2 GUID.
Reviewed-by: Ray Ni <ray.ni@intel.com>
Cc: Debkumar De <debkumar.de@intel.com>
Cc: Harry Han <harry.han@intel.com>
Cc: Catharine West <catharine.west@intel.com>
Signed-off-by: Ted Kuo <ted.kuo@intel.com>
To keep the declaration the same as the definition, remove the last OPTIONAL
in the declaration of WakeUpAP.
Cc: Eric Dong <eric.dong@intel.com>
Reviewed-by: Ray Ni <ray.ni@intel.com>
Cc: Rahul Kumar <rahul1.kumar@intel.com>
Signed-off-by: Wenyi Xie <xiewenyi2@huawei.com>
REF: https://bugzilla.tianocore.org/show_bug.cgi?id=3815
This patch defines a new Protocol with the new service
SmmWaitForAllProcessor(), which can be used by SMI handlers
to optionally wait for other APs to complete the SMM rendezvous in
relaxed AP mode.
A new library SmmCpuRendezvousLib is provided to abstract the service
into a library API to simplify SMI handler code.
Cc: Eric Dong <eric.dong@intel.com>
Reviewed-by: Ray Ni <ray.ni@intel.com>
Cc: Rahul Kumar <rahul1.kumar@intel.com>
Cc: Siyuan Fu <siyuan.fu@intel.com>
Cc: Zhihao Li <zhihao.li@intel.com>
Signed-off-by: Zhihao Li <zhihao.li@intel.com>
REF: https://bugzilla.tianocore.org/show_bug.cgi?id=3790
Replace Opcode with the corresponding instructions.
The code changes have been verified with CompareBuild.py tool, which
can be used to compare the results of two different EDK II builds to
determine if they generate the same binaries.
(tool link: https://github.com/mdkinney/edk2/tree/sandbox/CompareBuild)
Signed-off-by: Jason Lou <yun.lou@intel.com>
Reviewed-by: Ray Ni <ray.ni@intel.com>
Cc: Eric Dong <eric.dong@intel.com>
Cc: Laszlo Ersek <lersek@redhat.com>
Cc: Rahul Kumar <rahul1.kumar@intel.com>
REF: https://bugzilla.tianocore.org/show_bug.cgi?id=3683
TCG specification says BIOS should extend measurement of microcode to TPM.
However, reference BIOS is not doing this. BIOS shall extend measurement of
microcode to TPM.
Cc: Eric Dong <eric.dong@intel.com>
Reviewed-by: Ray Ni <ray.ni@intel.com>
Cc: Rahul Kumar <rahul1.kumar@intel.com>
Cc: Jiewen Yao <jiewen.yao@intel.com>
Cc: Min M Xu <min.m.xu@intel.com>
Cc: Qi Zhang <qi1.zhang@intel.com>
Signed-off-by: Longlong Yang <longlong.yang@intel.com>
BZ: https://bugzilla.tianocore.org/show_bug.cgi?id=3275
Use the SEV-SNP AP Creation NAE event to create and launch APs under
SEV-SNP. This capability will be advertised in the SEV Hypervisor
Feature Support PCD (PcdSevEsHypervisorFeatures).
Cc: Michael Roth <michael.roth@amd.com>
Cc: Eric Dong <eric.dong@intel.com>
Cc: Ray Ni <ray.ni@intel.com>
Cc: Rahul Kumar <rahul1.kumar@intel.com>
Cc: James Bottomley <jejb@linux.ibm.com>
Cc: Min Xu <min.m.xu@intel.com>
Cc: Jiewen Yao <jiewen.yao@intel.com>
Cc: Tom Lendacky <thomas.lendacky@amd.com>
Cc: Jordan Justen <jordan.l.justen@intel.com>
Cc: Ard Biesheuvel <ardb+tianocore@kernel.org>
Cc: Erdem Aktas <erdemaktas@google.com>
Cc: Gerd Hoffmann <kraxel@redhat.com>
Acked-by: Ray Ni <ray.ni@intel.com>
Acked-by: Gerd Hoffmann <kraxel@redhat.com>
Signed-off-by: Tom Lendacky <thomas.lendacky@amd.com>
Signed-off-by: Brijesh Singh <brijesh.singh@amd.com>
During AP bringup, just after switching to long mode, APs will do some
cpuid calls to verify that the extended topology leaf (0xB) is available
so they can fetch their x2 APIC IDs from it. In the case of SEV-ES,
these cpuid instructions must be handled by direct use of the GHCB MSR
protocol to fetch the values from the hypervisor, since a #VC handler
is not yet available due to the AP's stack not being set up yet.
For SEV-SNP, rather than relying on the GHCB MSR protocol, it is
expected that these values would be obtained from the SEV-SNP CPUID
table instead. The actual x2 APIC ID (and 8-bit APIC IDs) would still
be fetched from hypervisor using the GHCB MSR protocol however, so
introducing support for the SEV-SNP CPUID table in that part of the AP
bring-up code would only be to handle the checks/validation of the
extended topology leaf.
Rather than introducing all the added complexity needed to handle these
checks via the CPUID table, instead let the BSP do the check in advance,
since it can make use of the #VC handler to avoid the need to scan the
SNP CPUID table directly, and add a flag in ExchangeInfo to communicate
the result of this check to APs.
Cc: Eric Dong <eric.dong@intel.com>
Cc: Ray Ni <ray.ni@intel.com>
Cc: Rahul Kumar <rahul1.kumar@intel.com>
Cc: James Bottomley <jejb@linux.ibm.com>
Cc: Min Xu <min.m.xu@intel.com>
Cc: Jiewen Yao <jiewen.yao@intel.com>
Cc: Tom Lendacky <thomas.lendacky@amd.com>
Cc: Jordan Justen <jordan.l.justen@intel.com>
Cc: Ard Biesheuvel <ardb+tianocore@kernel.org>
Cc: Erdem Aktas <erdemaktas@google.com>
Cc: Gerd Hoffmann <kraxel@redhat.com>
Acked-by: Gerd Hoffmann <kraxel@redhat.com>
Acked-by: Ray Ni <ray.ni@intel.com>
Suggested-by: Brijesh Singh <brijesh.singh@amd.com>
Signed-off-by: Michael Roth <michael.roth@amd.com>
Signed-off-by: Brijesh Singh <brijesh.singh@amd.com>
BZ: https://bugzilla.tianocore.org/show_bug.cgi?id=3275
An SEV-SNP guest requires that the physical address of the GHCB must
be registered with the hypervisor before using it. See the GHCB
specification section 2.3.2 for more details.
Cc: Michael Roth <michael.roth@amd.com>
Cc: Eric Dong <eric.dong@intel.com>
Cc: Ray Ni <ray.ni@intel.com>
Cc: Rahul Kumar <rahul1.kumar@intel.com>
Cc: James Bottomley <jejb@linux.ibm.com>
Cc: Min Xu <min.m.xu@intel.com>
Cc: Jiewen Yao <jiewen.yao@intel.com>
Cc: Tom Lendacky <thomas.lendacky@amd.com>
Cc: Jordan Justen <jordan.l.justen@intel.com>
Cc: Ard Biesheuvel <ardb+tianocore@kernel.org>
Cc: Erdem Aktas <erdemaktas@google.com>
Cc: Gerd Hoffmann <kraxel@redhat.com>
Acked-by: Gerd Hoffmann <kraxel@redhat.com>
Acked-by: Ray Ni <ray.ni@Intel.com>
Signed-off-by: Brijesh Singh <brijesh.singh@amd.com>
BZ: https://bugzilla.tianocore.org/show_bug.cgi?id=3275
Version 2 of the GHCB specification added a new VMGEXIT that the guest
could use for querying the hypervisor features. One of the immediate
users for it will be an AP creation code. When SEV-SNP is enabled, the
guest can use the newly added AP_CREATE VMGEXIT to create the APs.
The MpInitLib will check the hypervisor feature, and if AP_CREATE is
available, it will use it.
See GHCB spec version 2 for more details on the VMGEXIT.
Cc: Michael Roth <michael.roth@amd.com>
Cc: Ray Ni <ray.ni@intel.com>
Cc: Rahul Kumar <rahul1.kumar@intel.com>
Cc: Eric Dong <eric.dong@intel.com>
Cc: James Bottomley <jejb@linux.ibm.com>
Cc: Min Xu <min.m.xu@intel.com>
Cc: Jiewen Yao <jiewen.yao@intel.com>
Cc: Tom Lendacky <thomas.lendacky@amd.com>
Cc: Jordan Justen <jordan.l.justen@intel.com>
Cc: Ard Biesheuvel <ardb+tianocore@kernel.org>
Cc: Erdem Aktas <erdemaktas@google.com>
Cc: Gerd Hoffmann <kraxel@redhat.com>
Acked-by: Ray Ni <ray.ni@Intel.com>
Acked-by: Gerd Hoffmann <kraxel@redhat.com>
Signed-off-by: Brijesh Singh <brijesh.singh@amd.com>
BZ: https://bugzilla.tianocore.org/show_bug.cgi?id=3275
Previous commit introduced a generic confidential computing PCD that can
determine whether AMD SEV-ES is enabled. Update the MpInitLib to drop the
PcdSevEsIsEnabled in favor of PcdConfidentialComputingAttr.
Cc: Michael Roth <michael.roth@amd.com>
Cc: Ray Ni <ray.ni@intel.com>
Cc: Rahul Kumar <rahul1.kumar@intel.com>
Cc: Eric Dong <eric.dong@intel.com>
Cc: James Bottomley <jejb@linux.ibm.com>
Cc: Min Xu <min.m.xu@intel.com>
Cc: Jiewen Yao <jiewen.yao@intel.com>
Cc: Tom Lendacky <thomas.lendacky@amd.com>
Cc: Jordan Justen <jordan.l.justen@intel.com>
Cc: Ard Biesheuvel <ardb+tianocore@kernel.org>
Cc: Erdem Aktas <erdemaktas@google.com>
Cc: Gerd Hoffmann <kraxel@redhat.com>
Acked-by: Gerd Hoffmann <kraxel@redhat.com>
Acked-by: Ray Ni <ray.ni@intel.com>
Suggested-by: Jiewen Yao <jiewen.yao@intel.com>
Signed-off-by: Brijesh Singh <brijesh.singh@amd.com>
BZ: https://bugzilla.tianocore.org/show_bug.cgi?id=3275
Move all the SEV specific function in AmdSev.c.
No functional change intended.
Cc: Eric Dong <eric.dong@intel.com>
Cc: Ray Ni <ray.ni@intel.com>
Cc: Rahul Kumar <rahul1.kumar@intel.com>
Cc: Michael Roth <michael.roth@amd.com>
Cc: James Bottomley <jejb@linux.ibm.com>
Cc: Min Xu <min.m.xu@intel.com>
Cc: Jiewen Yao <jiewen.yao@intel.com>
Cc: Tom Lendacky <thomas.lendacky@amd.com>
Cc: Jordan Justen <jordan.l.justen@intel.com>
Cc: Ard Biesheuvel <ardb+tianocore@kernel.org>
Cc: Erdem Aktas <erdemaktas@google.com>
Cc: Gerd Hoffmann <kraxel@redhat.com>
Reviewed-by: Ray Ni <ray.ni@intel.com>
Acked-by: Gerd Hoffmann <kraxel@redhat.com>
Suggested-by: Jiewen Yao <Jiewen.yao@intel.com>
Signed-off-by: Brijesh Singh <brijesh.singh@amd.com>
REF: https://bugzilla.tianocore.org/show_bug.cgi?id=3737
Apply uncrustify changes to .c/.h files in the UefiCpuPkg package
Cc: Andrew Fish <afish@apple.com>
Cc: Leif Lindholm <leif@nuviainc.com>
Cc: Michael D Kinney <michael.d.kinney@intel.com>
Signed-off-by: Michael Kubacki <michael.kubacki@microsoft.com>
Reviewed-by: Ray Ni <ray.ni@intel.com>
REF: https://bugzilla.tianocore.org/show_bug.cgi?id=3767
Update use of DEBUG_CODE(Expression) if Expression is a complex code
block with if/while/for/case statements that use {}.
Cc: Andrew Fish <afish@apple.com>
Cc: Leif Lindholm <leif@nuviainc.com>
Cc: Michael Kubacki <michael.kubacki@microsoft.com>
Signed-off-by: Michael D Kinney <michael.d.kinney@intel.com>
Reviewed-by: Ray Ni <ray.ni@intel.com>
REF: https://bugzilla.tianocore.org/show_bug.cgi?id=3760
Update all use of ', OPTIONAL' to ' OPTIONAL,' for function params.
Cc: Andrew Fish <afish@apple.com>
Cc: Leif Lindholm <leif@nuviainc.com>
Cc: Michael Kubacki <michael.kubacki@microsoft.com>
Signed-off-by: Michael D Kinney <michael.d.kinney@intel.com>
Reviewed-by: Ray Ni <ray.ni@intel.com>
REF: https://bugzilla.tianocore.org/show_bug.cgi?id=3739
Update all use of EFI_D_* defines in DEBUG() macros to DEBUG_* defines.
Cc: Andrew Fish <afish@apple.com>
Cc: Leif Lindholm <leif@nuviainc.com>
Cc: Michael Kubacki <michael.kubacki@microsoft.com>
Signed-off-by: Michael D Kinney <michael.d.kinney@intel.com>
Reviewed-by: Ray Ni <ray.ni@intel.com>
When CET shadow stack feature is enabled, it needs to use IST for the
exceptions, and uses interrupt shadow stack for the stack switch.
Shadow stack should be 32 bytes aligned.
Check IST field, when clear shadow stack token busy bit when using retf.
REF: https://bugzilla.tianocore.org/show_bug.cgi?id=3728
Signed-off-by: Sheng Wei <w.sheng@intel.com>
Cc: Eric Dong <eric.dong@intel.com>
Cc: Ray Ni <ray.ni@intel.com>
Cc: Rahul Kumar <rahul1.kumar@intel.com>
Reviewed-by: Ray Ni <ray.ni@intel.com>
REF: https://bugzilla.tianocore.org/show_bug.cgi?id=3698
Lots of code relies on the CPU Family/Model/Stepping for different logic.
The change adds two APIs for such needs.
Signed-off-by: Ray Ni <ray.ni@intel.com>
Reviewed-by: Eric Dong <eric.dong@intel.com>
Cc: Rahul Kumar <rahul1.kumar@intel.com>
When using UT_ASSERT_EQUAL() on a pointer value, it must be
cast to UINTN. This follows the samples provided with the
UnitTestFrameworkPkg.
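For example (the pointer names are illustrative):

  //
  // UT_ASSERT_EQUAL compares integer values, so cast pointers to UINTN.
  //
  UT_ASSERT_EQUAL ((UINTN)ActualBuffer, (UINTN)ExpectedBuffer);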
Cc: Eric Dong <eric.dong@intel.com>
Cc: Ray Ni <ray.ni@intel.com>
Cc: Rahul Kumar <rahul1.kumar@intel.com>
Signed-off-by: Michael D Kinney <michael.d.kinney@intel.com>
Reviewed-by: Philippe Mathieu-Daude <philmd@redhat.com>
Reviewed-by: Ray Ni <ray.ni@intel.com>
REF: https://bugzilla.tianocore.org/show_bug.cgi?id=3634
The memory allocated through "PeiAllocatePool" is located in a HOB, and
in the DXE phase, the HOB will be migrated to a different location.
After the migration, the data stored in the HOB stays the same, but the
addresses of pointers to that memory (such as the pointers in the
ACPI_CPU_DATA structure) change, which may cause the "PiSmmCpuDxeSmm"
driver to fail to find the memory (via the pointers in the ACPI_CPU_DATA
structure) that was allocated in "PeiRegisterCpuFeaturesLib", so use
"PeiAllocatePages" to allocate the memory instead.
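In MemoryAllocationLib terms the change amounts to something like the
following (RegisterTable and TableSize are illustrative names):

  //
  // Before: pool memory lives inside a HOB, so its address changes when
  // the HOB list is migrated in DXE.
  //
  //   RegisterTable = AllocatePool (TableSize);
  //
  // After: page memory is allocated outside the HOB list, so pointers to
  // it that are stored in ACPI_CPU_DATA stay valid after HOB migration.
  //
  RegisterTable = AllocatePages (EFI_SIZE_TO_PAGES (TableSize));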
Signed-off-by: Jason Lou <yun.lou@intel.com>
Reviewed-by: Ray Ni <ray.ni@intel.com>
Cc: Eric Dong <eric.dong@intel.com>
Cc: Laszlo Ersek <lersek@redhat.com>
Cc: Rahul Kumar <rahul1.kumar@intel.com>
REF:https://bugzilla.tianocore.org/show_bug.cgi?id=3492
Currently SecCore.inf contains the reset vector code under IA32. If the
user wants to use both SecCore and the UefiCpuPkg ResetVector, it is not
possible, since SecCore and the ResetVector (VTF0.INF/ResetVector.inf)
share the same GUID, which is the BFV. To overcome this issue, create a
duplicate version of SecCore.inf as SecCoreNative.inf, which contains the
pure SecCore functionality without the reset vector. SecCoreNative.inf
should have a unique GUID so that it can be used along with the UefiCpuPkg
ResetVector in a platform implementation.
Reviewed-by: Ray Ni <ray.ni@intel.com>
Cc: Rahul Kumar <rahul1.kumar@intel.com>
Cc: Debkumar De <debkumar.de@intel.com>
Cc: Harry Han <harry.han@intel.com>
Cc: Catharine West <catharine.west@intel.com>
Cc: Digant H Solanki <digant.h.solanki@intel.com>
Cc: Sangeetha V <sangeetha.v@intel.com>
Signed-off-by: Ashraf Ali S <ashraf.ali.s@intel.com>
REF:https://bugzilla.tianocore.org/show_bug.cgi?id=3473
The X64 reset vector code can access the memory range up to 4 GB using
linear-address translation to a 2-MByte page. When the user wants to map
more than 4 GB with 2-MByte pages, a larger number of page table entries
is required. Using 1-GByte pages, the user can map more than 4 GB of
memory with far fewer page table entries; with this patch the reset vector
can access the memory range up to 512 GB via linear-address translation to
a 1-GByte page.
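As a rough illustration of the entry-count savings (standard x86-64 paging
arithmetic, not figures taken from the patch):

  With 2-MByte pages: 512 GiB / 2 MiB = 262,144 PDEs   (512 page-directory pages)
  With 1-GByte pages: 512 GiB / 1 GiB =     512 PDPTEs (a single PDPT page)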
Build Tool: if nasm is not found, the build throws errors like
FileNotFoundError: [WinError 2] The system cannot find the file specified
so run the command within a try/except block to get a meaningful error
message.
Test Result: tested in both a simulation environment and on hardware;
both work fine without any issues.
Reviewed-by: Ray Ni <ray.ni@intel.com>
Cc: Rahul Kumar <rahul1.kumar@intel.com>
Cc: Debkumar De <debkumar.de@intel.com>
Cc: Harry Han <harry.han@intel.com>
Cc: Catharine West <catharine.west@intel.com>
Cc: Sangeetha V <sangeetha.v@intel.com>
Cc: Rangasai V Chaganty <rangasai.v.chaganty@intel.com>
Cc: Sahil Dureja <sahil.dureja@intel.com>
Signed-off-by: Ashraf Ali S <ashraf.ali.s@intel.com>
REF: https://bugzilla.tianocore.org/show_bug.cgi?id=3621
REF: https://bugzilla.tianocore.org/show_bug.cgi?id=3631
Current CPU feature initialization design:
During normal boot, CpuFeaturesPei module (inside FSP) initializes the
CPU features. During S3 boot, CpuFeaturesPei module does nothing, and
CpuSmm driver (in SMRAM) initializes CPU features instead.
This code change prevents CpuSmm driver from re-initializing CPU
features during S3 resume if CpuFeaturesPei module has done the same
initialization.
In addition, EDK2 contains DxeIpl PEIM that calls S3RestoreConfig2 PPI
during S3 boot and this PPI eventually calls CpuSmm driver (in SMRAM) to
initialize the CPU features, so "EDK2 + FSP" does not have the CPU
feature initialization issue during S3 boot. But "coreboot" does not
contain DxeIpl PEIM and the issue appears, unless
"PcdCpuFeaturesInitOnS3Resume" is set to TRUE.
Signed-off-by: Jason Lou <yun.lou@intel.com>
Reviewed-by: Ray Ni <ray.ni@intel.com>
Cc: Eric Dong <eric.dong@intel.com>
Cc: Laszlo Ersek <lersek@redhat.com>
Cc: Rahul Kumar <rahul1.kumar@intel.com>
REF: https://bugzilla.tianocore.org/show_bug.cgi?id=3621
REF: https://bugzilla.tianocore.org/show_bug.cgi?id=3631
Refactor initialization of CPU features during S3 resume.
In addition, the macro ACPI_CPU_DATA_STRUCTURE_UPDATE is used to fix
incompatibility issue caused by ACPI_CPU_DATA structure update. It will
be removed after all the platform code uses new ACPI_CPU_DATA structure.
Signed-off-by: Jason Lou <yun.lou@intel.com>
Reviewed-by: Ray Ni <ray.ni@intel.com>
Cc: Eric Dong <eric.dong@intel.com>
Cc: Laszlo Ersek <lersek@redhat.com>
Cc: Rahul Kumar <rahul1.kumar@intel.com>
REF:https://bugzilla.tianocore.org/show_bug.cgi?id=3506
Before executing the nasm command, add a print statement so that the
commands being executed are visible.
Before printing the output file name, check the status of the executed
command; print the output file name only if the status is 0.
Reviewed-by: Ray Ni <ray.ni@intel.com>
Cc: Rahul Kumar <rahul1.kumar@intel.com>
Cc: Debkumar De <debkumar.de@intel.com>
Cc: Harry Han <harry.han@intel.com>
Cc: Catharine West <catharine.west@intel.com>
Cc: Sangeetha V <sangeetha.v@intel.com>
Signed-off-by: Ashraf Ali S <ashraf.ali.s@intel.com>
REF:https://bugzilla.tianocore.org/show_bug.cgi?id=3506
The build scripts for the Reset Vector are currently based on Python 2,
which has already reached end of life; the build scripts need to be
updated to Python 3.
Reviewed-by: Ray Ni <ray.ni@intel.com>
Cc: Rahul Kumar <rahul1.kumar@intel.com>
Cc: Debkumar De <debkumar.de@intel.com>
Cc: Harry Han <harry.han@intel.com>
Cc: Catharine West <catharine.west@intel.com>
Cc: Sangeetha V <sangeetha.v@intel.com>
Signed-off-by: Ashraf Ali S <ashraf.ali.s@intel.com>
REF: https://bugzilla.tianocore.org/show_bug.cgi?id=2956
In functions ReadSaveStateRegisterByIndex and WriteSaveStateRegister:
* check width > 4 instead of >= 4 when writing the upper 32 bits
  (see the sketch after this list).
- This improves the code but does not affect functionality.
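A minimal sketch of the intended pattern (names and surrounding code are
assumptions, not the actual save-state source):

  //
  // Copy the low 32 bits unconditionally.
  //
  CopyMem (Buffer, (UINT8 *)CpuSaveState + RegisterOffset, MIN (4, Width));
  //
  // Only touch the upper half when more than 4 bytes were requested;
  // with ">= 4" a 4-byte access would needlessly take this branch and
  // copy zero bytes.
  //
  if (Width > 4) {
    CopyMem (
      (UINT8 *)Buffer + 4,
      (UINT8 *)CpuSaveState + RegisterOffset + 4,
      Width - 4
      );
  }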
Cc: Eric Dong <eric.dong@intel.com>
Reviewed-by: Ray Ni <ray.ni@intel.com>
Signed-off-by: Mark Wilson <Mark.Wilson@amd.com>
REF:https://bugzilla.tianocore.org/show_bug.cgi?id=3584
The code should first use AsmCpuid to check the maximum input value for
basic CPUID information (leaf 0) before querying the CET feature leaf.
The fix updates the mPatchCetSupported judgment statement accordingly.
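A minimal sketch of the check, using the MdePkg CPUID leaf definitions
(the target variable is illustrative; the patched code may differ):

  UINT32  MaxLeaf;
  UINT32  RegEcx;

  //
  // Leaf 0 reports the highest basic CPUID leaf the processor supports.
  //
  AsmCpuid (CPUID_SIGNATURE, &MaxLeaf, NULL, NULL, NULL);
  if (MaxLeaf >= CPUID_STRUCTURED_EXTENDED_FEATURE_FLAGS) {
    //
    // CPUID.(EAX=7,ECX=0):ECX[bit 7] reports CET shadow stack support.
    //
    AsmCpuidEx (CPUID_STRUCTURED_EXTENDED_FEATURE_FLAGS, 0, NULL, NULL, &RegEcx, NULL);
    CetSupported = ((RegEcx & BIT7) != 0);
  }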
Signed-off-by: Wenxing Hou <wenxing.hou@intel.com>
Reviewed-by: Ray Ni <ray.ni@intel.com>
Cc: Eric Dong <eric.dong@intel.com>
Cc: Ray Ni <ray.ni@intel.com>
Cc: Rahul Kumar <rahul1.kumar@intel.com>
Cc: Sheng W <w.sheng@intel.com>
Cc: Yao Jiewen <jiewen.yao@intel.com>
REF: https://bugzilla.tianocore.org/show_bug.cgi?id=3508
Sort the CpuCacheInfo array by CPU package ID, core type, cache level
and cache type.
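An illustrative comparator for that sort order (field names follow
CPU_CACHE_INFO but are assumptions here; the actual patch may sort
differently):

  INTN
  EFIAPI
  CompareCacheInfo (
    IN CONST VOID  *Left,
    IN CONST VOID  *Right
    )
  {
    CONST CPU_CACHE_INFO  *L = (CONST CPU_CACHE_INFO *)Left;
    CONST CPU_CACHE_INFO  *R = (CONST CPU_CACHE_INFO *)Right;

    if (L->Package != R->Package) {
      return (INTN)L->Package - (INTN)R->Package;
    }
    if (L->CoreType != R->CoreType) {
      return (INTN)L->CoreType - (INTN)R->CoreType;
    }
    if (L->CacheLevel != R->CacheLevel) {
      return (INTN)L->CacheLevel - (INTN)R->CacheLevel;
    }
    return (INTN)L->CacheType - (INTN)R->CacheType;
  }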
Signed-off-by: Jason Lou <yun.lou@intel.com>
Reviewed-by: Ray Ni <ray.ni@intel.com>
Cc: Eric Dong <eric.dong@intel.com>
Cc: Laszlo Ersek <lersek@redhat.com>
Cc: Rahul Kumar <rahul1.kumar@intel.com>