Commit Graph

118 Commits

Author SHA1 Message Date
kenlautner 13fad60156 UefiCpuPkg: Fix unchecked returns and potential integer overflows
Resolves several issues in UefiCpuPkg related to:

1. Unchecked returns leading to potential NULL or uninitialized access.
2. Potential unchecked integer overflows.
3. Incorrect comparison between integers of different sizes.

Co-authored-by: kenlautner <85201046+kenlautner@users.noreply.github.com>
Signed-off-by: Chris Fernald <chfernal@microsoft.com>
2024-11-15 17:50:21 +00:00
Jiaxin Wu a232e0cd2f UefiCpuPkg/PiSmmCpuDxeSmm: Save and restore CR2 only if SmiProfile enable
A page fault (#PF) that triggers an update to the page table occurs
only when SmiProfile is enabled. Therefore, the CR2 register needs to
be saved and restored only when SmiProfile is configured to be enabled.

Signed-off-by: Jiaxin Wu <jiaxin.wu@intel.com>
2024-10-16 04:06:42 +00:00
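
The gating described above can be pictured with a minimal sketch; it is not the driver's actual code, and it assumes an IsSmmProfileEnabled() helper like the one added in commit 2e6ca59e33 further down this log:

    #include <Base.h>
    #include <Library/BaseLib.h>

    extern BOOLEAN IsSmmProfileEnabled (VOID);   // assumed helper, see commit 2e6ca59e33

    VOID
    SmiHandlerPageTableFlowSketch (
      VOID
      )
    {
      BOOLEAN  SaveCr2;
      UINTN    Cr2;

      Cr2     = 0;
      SaveCr2 = IsSmmProfileEnabled ();
      if (SaveCr2) {
        Cr2 = AsmReadCr2 ();        // BaseLib intrinsic
      }

      //
      // ... SMI handling; only with SmiProfile enabled can a #PF taken here
      // rewrite the page table and clobber CR2 ...
      //

      if (SaveCr2) {
        AsmWriteCr2 (Cr2);
      }
    }
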
Jiaxin Wu b437b5ca4c UefiCpuPkg/PiSmmCpuDxeSmm: Remove RestrictedMemoryAccess check for MM CPU
PcdCpuSmmRestrictedMemoryAccess is declared as either a dynamic or a
fixed PCD, and it is not recommended for use in the MM CPU driver.

Furthermore, IsRestrictedMemoryAccess() is only needed for SMM.
Therefore, there is no need for MM to consume
PcdCpuSmmRestrictedMemoryAccess.

This patch adds an SMM-specific file for the SMM-only functions; with
that change, the MM CPU driver's dependency on
PcdCpuSmmRestrictedMemoryAccess can be removed.

Signed-off-by: Jiaxin Wu <jiaxin.wu@intel.com>
2024-09-06 08:41:49 +00:00
Jiaxin Wu b4820f2d65 UefiCpuPkg/PiSmmCpuDxeSmm: Clean mCpuSmmRestrictedMemoryAccess
Currently, mCpuSmmRestrictedMemoryAccess is only used by
IsRestrictedMemoryAccess(), and IsRestrictedMemoryAccess() can
consume PcdCpuSmmRestrictedMemoryAccess directly. Therefore,
mCpuSmmRestrictedMemoryAccess can be removed to simplify the code
logic.

Signed-off-by: Jiaxin Wu <jiaxin.wu@intel.com>
2024-09-06 08:41:49 +00:00
Jiaxin Wu f6eb069e17 UefiCpuPkg/PiSmmCpuDxeSmm: Deadloop if PFAddr is not supported by system
Deadloop if PFAddr is not supported by the system; there is no need to
check whether SMM CPU RestrictedMemory access is enabled.

Signed-off-by: Jiaxin Wu <jiaxin.wu@intel.com>
2024-09-06 08:41:49 +00:00
Jiaxin Wu c8ce84d067 UefiCpuPkg/PiSmmCpuDxeSmm: Always save and restore CR2
Following commit 9f29fbd3, a full-mapping SMM page table is always
created regardless of the value of PcdCpuSmmRestrictedMemoryAccess.
Consequently, a page fault (#PF) that triggers an update to the page
table occurs only when SmiProfile is enabled, so strictly speaking CR2
only needs to be saved and restored when SmiProfile is configured to
be enabled.

However, saving and restoring CR2 is a light operation compared with
saving and restoring CR3. As a result, the condition check for
SmiProfile has been removed, and CR2 is now saved and restored
unconditionally, without any additional condition checks.

Signed-off-by: Jiaxin Wu <jiaxin.wu@intel.com>
2024-09-06 08:41:49 +00:00
Jiaxin Wu 897284d47d UefiCpuPkg/PiSmmCpuDxeSmm: Fix IsSmmCommBufferForbiddenAddress check
SmiPFHandler depends on IsSmmCommBufferForbiddenAddress() to do
the forbidden-address check:
For SMM, verifying whether an address is forbidden is necessary only
when RestrictedMemoryAccess is enabled.
For MM, all accessible addresses are recorded in the ResourceDescriptor
HOB, so there is no need to check whether RestrictedMemoryAccess is
enabled.

This patch moves the RestrictedMemoryAccess check into the SMM
implementation of IsSmmCommBufferForbiddenAddress() to align with the
behavior above. With the change, SmiPFHandler no longer needs to check
whether RestrictedMemoryAccess is enabled.

Signed-off-by: Jiaxin Wu <jiaxin.wu@intel.com>
2024-09-06 08:41:49 +00:00
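
A hedged sketch of the SMM-side placement described above; the helper used for the actual range lookup is hypothetical, and only the position of the RestrictedMemoryAccess test is the point:

    #include <Base.h>
    #include <Library/PcdLib.h>

    // Hypothetical lookup against the recorded non-SMRAM/forbidden ranges.
    extern BOOLEAN LookupForbiddenRange (UINT64 Address);

    BOOLEAN
    IsSmmCommBufferForbiddenAddressSketch (
      IN UINT64  Address
      )
    {
      //
      // SMM variant: without restricted memory access nothing is forbidden,
      // so SmiPFHandler no longer has to repeat this test itself.
      //
      if (!PcdGetBool (PcdCpuSmmRestrictedMemoryAccess)) {
        return FALSE;
      }

      return LookupForbiddenRange (Address);
    }
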
Jiaxin Wu 2e6ca59e33 UefiCpuPkg/PiSmmCpuDxeSmm: Avoid PcdCpuSmmProfileEnable check in MM
For MM, the gMmProfileDataHobGuid Memory Allocation HOB is defined to
indicate whether the SMM profile feature is enabled. If the HOB exists,
the SMM profile base address and size are returned in the HOB, so there
is no need to consume the PcdCpuSmmProfileEnable feature PCD to check
whether the feature is enabled.

To achieve this, add the IsSmmProfileEnabled() function. With this
change, both MM and SMM can use the new function for the SMM profile
feature check.

Signed-off-by: Jiaxin Wu <jiaxin.wu@intel.com>
Cc: Ray Ni <ray.ni@intel.com>
Cc: Rahul Kumar <rahul1.kumar@intel.com>
Cc: Gerd Hoffmann <kraxel@redhat.com>
Cc: Star Zeng <star.zeng@intel.com>
Cc: Dun Tan <dun.tan@intel.com>
Cc: Hongbin1 Zhang <hongbin1.zhang@intel.com>
Cc: Wei6 Xu <wei6.xu@intel.com>
Cc: Yuanhao Xie <yuanhao.xie@intel.com>
2024-08-28 15:25:27 +00:00
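
A minimal sketch of the check described above, assuming the HOB lookup for standalone MM and the PCD for traditional SMM; the IsStandaloneMm parameter is illustrative, since the real driver distinguishes the two builds differently:

    #include <PiMm.h>
    #include <Library/HobLib.h>
    #include <Library/PcdLib.h>

    extern EFI_GUID  gMmProfileDataHobGuid;

    BOOLEAN
    IsSmmProfileEnabledSketch (
      IN BOOLEAN  IsStandaloneMm
      )
    {
      if (IsStandaloneMm) {
        //
        // MM: the memory-allocation HOB both signals the feature and carries
        // the profile buffer base address and size.
        //
        return GetFirstGuidHob (&gMmProfileDataHobGuid) != NULL;
      }

      //
      // SMM: fall back to the feature PCD.
      //
      return PcdGetBool (PcdCpuSmmProfileEnable);
    }
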
Jiaxin Wu 3690d30a6e UefiCpuPkg/PiSmmCpuDxeSmm: Check logging PF address for MM
This patch makes sure that, for MM, only page-fault addresses that are
to be logged can run into SmmProfilePFHandler.

Signed-off-by: Jiaxin Wu <jiaxin.wu@intel.com>
Cc: Ray Ni <ray.ni@intel.com>
Cc: Rahul Kumar <rahul1.kumar@intel.com>
Cc: Gerd Hoffmann <kraxel@redhat.com>
Cc: Star Zeng <star.zeng@intel.com>
Cc: Dun Tan <dun.tan@intel.com>
Cc: Hongbin1 Zhang <hongbin1.zhang@intel.com>
Cc: Wei6 Xu <wei6.xu@intel.com>
Cc: Yuanhao Xie <yuanhao.xie@intel.com>
2024-08-28 15:25:27 +00:00
Jiaxin Wu cd29383f77 UefiCpuPkg/PiSmmCpuDxeSmm: Rename PiSmmCpuDxeSmm.h to PiSmmCpuCommon.h
Rename the file PiSmmCpuDxeSmm.h to PiSmmCpuCommon.h to facilitate
common usage in both SMM and MM. The renamed file PiSmmCpuCommon.h
will be utilized for both modes in subsequent patches.

No functional impact.

Signed-off-by: Jiaxin Wu <jiaxin.wu@intel.com>
Cc: Ray Ni <ray.ni@intel.com>
Cc: Rahul Kumar <rahul1.kumar@intel.com>
Cc: Gerd Hoffmann <kraxel@redhat.com>
Cc: Star Zeng <star.zeng@intel.com>
Cc: Dun Tan <dun.tan@intel.com>
Cc: Hongbin1 Zhang <hongbin1.zhang@intel.com>
Cc: Wei6 Xu <wei6.xu@intel.com>
Cc: Yuanhao Xie <yuanhao.xie@intel.com>
2024-08-28 15:25:27 +00:00
Dun Tan 5d43165ff8 UefiCpuPkg: rename and simplify IsAddressValid function
In this commit, we rename the IsAddressValid function to
IsSmmProfilePFAddressAbove4GValid and remove unneeded
code logic in it.

Currently, IsAddressValid is only used in the function
RestorePageTableAbove4G, where it identifies whether an SMM
profile PF address above 4G is inside mProtectionMemRange.
So we can remove the code logic related to the
PcdCpuSmmProfileEnable FALSE condition. The function name is
also changed to be more detailed and specific.

Signed-off-by: Dun Tan <dun.tan@intel.com>
2024-08-05 06:59:09 +00:00
Dun Tan 8b8ac5d986 UefiCpuPkg: rename the SmiDefaultPFHandler function
Rename SmiDefaultPFHandler to SmiProfileMapPFAddress
and move the implementation to SmmProfileArch.c, since
it will only be used when SMM profile is enabled.

Signed-off-by: Dun Tan <dun.tan@intel.com>
2024-08-05 06:59:09 +00:00
Dun Tan cae90a8390 UefiCpuPkg: Remove duplicate code in SmiPfHandler
In this commit, we remove the duplicate CpuDeadLoop in
SmiPfHandler where mCpuSmmRestrictedMemoryAccess is
TRUE.
With the last commit, we always call CpuDeadLoop if SMM
profile is disabled, so the CpuDeadLoop call for
the condition (mCpuSmmRestrictedMemoryAccess &&
IsSmmCommBufferForbiddenAddress (PFAddress)) is no longer
needed. We also modify the IA32 code to align it
with X64.

Signed-off-by: Dun Tan <dun.tan@intel.com>
2024-08-05 06:59:09 +00:00
Dun Tan b5c9bbff8e UefiCpuPkg:CpuDeadLoop in SmiPFHandler if SMM profile is disabled
Always call CpuDeadLoop() in SmiPFHandler if SMM
profile is disabled.

Previously, when PcdCpuSmmRestrictedMemoryAccess is
FALSE, the SMM page table only covers [0, 4G]. When code
accesses a range above 4G, SmiPFHandler maps the accessed
not-present range to present. Now that we always create a
full-mapping page table, the dynamic page-table creation
logic is only needed when SMM profile is enabled. So we
use CpuDeadLoop() in SmiPFHandler to cover all the #PF
exceptions when SMM profile is disabled.

Considering that [0, 4G] is always mapped in the SMM page
table, we also modify the IA32 SmiPFHandler code to align
it with the X64 code.

Signed-off-by: Dun Tan <dun.tan@intel.com>
2024-08-05 06:59:09 +00:00
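
A sketch of the resulting SmiPFHandler policy (not the driver's exact code), again assuming the IsSmmProfileEnabled() helper mentioned earlier in this log:

    #include <Base.h>
    #include <Library/BaseLib.h>
    #include <Library/DebugLib.h>

    extern BOOLEAN IsSmmProfileEnabled (VOID);   // assumed helper

    VOID
    SmiPFHandlerPolicySketch (
      IN UINTN  PFAddress
      )
    {
      if (!IsSmmProfileEnabled ()) {
        //
        // With a full-mapping page table, no legitimate #PF can require a
        // page-table update, so any fault is fatal.
        //
        DEBUG ((DEBUG_ERROR, "Unexpected SMM #PF at 0x%lx\n", (UINT64)PFAddress));
        CpuDeadLoop ();
      }

      //
      // SMM profile enabled: log the access and map the faulting range on demand.
      //
    }
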
Dun Tan b3631ca944 UefiCpuPkg: remove unnecessary manipulation for smm page table
In this commit, we only set some special bits in the paging-entry
content when SMM profile is enabled.

Previously, we set the Pml4Entry sub-entries number and set the
IA32_PG_PMNT bit for the first 4 PdptEntry entries to make sure that
the paging structures covering [0, 4G] won't be reclaimed during
dynamic page table creation.
In the last commit, we always create a full-mapping SMM page table
regardless of PcdCpuSmmRestrictedMemoryAccess. With that change, we
only need to dynamically create SMM page-table entries in the SMM PF
handler when PcdCpuSmmProfileEnable is TRUE.

So the sub-entries number and the IA32_PG_PMNT bit in the paging entry
only need to be set when PcdCpuSmmProfileEnable is TRUE.

Signed-off-by: Dun Tan <dun.tan@intel.com>
2024-08-05 06:59:09 +00:00
Dun Tan 9f29fbd33b UefiCpuPkg: always create full mapping SMM page table
In this commit, we always create a full-mapping SMM page
table in SmmInitPageTable regardless of the value of
PcdCpuSmmRestrictedMemoryAccess.

Previously, when PcdCpuSmmRestrictedMemoryAccess is FALSE,
only [0, 4G] is mapped in the SMM page table in SmmInitPageTable.
If a range above 4G is accessed in SMM, SmiPFHandler creates
a new paging entry for the accessed range. To simplify
the code logic, we now also create a full-mapping SMM page table
in SmmInitPageTable when PcdCpuSmmRestrictedMemoryAccess is
FALSE. Then we don't need to dynamically create paging entries for
ranges above 4G unless SMM profile is enabled.

The comparison of the SMM page table before and after the change
under different configurations is listed here:
1.PcdCpuSmmRestrictedMemoryAccess is TRUE
     No change
2.PcdCpuSmmRestrictedMemoryAccess is FALSE and
  PcdCpuSmmProfileEnable is TRUE
     Before: the SMM page table when ReadyToLock covers
        1. SMRAM range 2.SMM profile range
        3. MMIO range below 4G
     After: the SMM page table when ReadyToLock covers
        1. SMRAM range 2.SMM profile range
        3. MMIO range below 4G and above 4G
3.PcdCpuSmmRestrictedMemoryAccess is FALSE and
  PcdCpuSmmProfileEnable is FALSE
     Before: the SMM page table when ReadyToLock covers
        [0, 4G]
     After: the SMM page table when ReadyToLock covers
        [0, MaxSupportPhysicalAddress]

Signed-off-by: Dun Tan <dun.tan@intel.com>
2024-08-05 06:59:09 +00:00
Jiaxin Wu 24a375fcdd UefiCpuPkg/PiSmmCpuDxeSmm: Avoid use global variable in InitSmmS3Cr3
This patch avoids using a global variable in InitSmmS3Cr3. No
functional impact.

Signed-off-by: Jiaxin Wu <jiaxin.wu@intel.com>
2024-08-02 09:15:25 +00:00
Jiaxin Wu 87d3a6272c UefiCpuPkg/PiSmmCpuDxeSmm: Iterate page table to find proper entry
Iterate through the page table to find the appropriate page table
entry for page creation if one of the following cases is met:
1) StartBit > EndBit: the PageSize of the current entry is bigger than
the platform-specified PageSize granularity.
2) The IA32_PG_P bit is set and the IA32_PG_PS bit is clear: the
current entry is present and it's a non-leaf entry.

Signed-off-by: Jiaxin Wu <jiaxin.wu@intel.com>
2024-08-02 05:13:42 +00:00
Jiaxin Wu 24f8b97a9d UefiCpuPkg/PiSmmCpuDxeSmm: Remove assert check for PDE entry not exist
If the 2MB-page mode is selected, the PDE entry might already exist, so
it is incorrect to assert that it does not exist. See the detailed case
analysis below (the case is similar if the address exceeds 4G):

Assume the default page table already covers the following 6MB range:
[0000000000001000, 0000000000601000)
Then, with the PageTableMap API, the page table entries below will be
created if 1G-page or 2M-page mode is selected:
[0000000000001000, 0000000000002000) -->  4K
[0000000000002000, 0000000000003000) -->  4K
...
[00000000001FF000, 0000000000200000) -->  4k
[0000000000200000, 0000000000400000) -->  2M
[0000000000400000, 0000000000600000) -->  2M
[0000000000600000, 0000000000601000) -->  4K
The above covers the 2M-aligned address (0000000000600000) in the page
table. If a page fault happens when accessing 0000000000602000, the
following page entry needs to be created:
[0000000000602000, 0000000000603000) -->  4K
But the PDE entry has already been created and exists in the page table
with the PS bit clear (0).

So, this patch removes the assert check. The page table entry created
will use the platform-specified PageSize granularity.

Signed-off-by: Jiaxin Wu <jiaxin.wu@intel.com>
2024-08-02 05:13:42 +00:00
Jiaxin Wu 9d8a5fbd0c UefiCpuPkg/PiSmmCpuDxeSmm: Enable single step after SmmProfile start
There is a bug in the existing code: single stepping is always enabled
once a Page Fault (#PF) occurs, but it is only disabled when the SMM
Profile feature actually starts (see DebugExceptionHandler).
If the SMM Profile feature has not been started, this results in
single-step mode remaining enabled after a Page Fault occurs.

This patch enables the single-step debugging mode, by setting the
Trap Flag, only after the SmmProfile feature starts.

Signed-off-by: Jiaxin Wu <jiaxin.wu@intel.com>
2024-08-02 05:13:42 +00:00
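
The fix can be pictured as below; this is a hedged sketch, with mSmmProfileStart standing in for whatever state flag the driver uses to record that the profile feature has started:

    #include <Uefi.h>
    #include <Protocol/DebugSupport.h>

    extern BOOLEAN  mSmmProfileStart;   // assumed "profile started" flag

    VOID
    ArmSingleStepAfterPageFaultSketch (
      IN OUT EFI_SYSTEM_CONTEXT_X64  *Context
      )
    {
      if (mSmmProfileStart) {
        //
        // Only once profiling has started: set EFLAGS.TF (bit 8) so the next
        // instruction traps into DebugExceptionHandler, which clears it again.
        //
        Context->Rflags |= BIT8;
      }
    }
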
Dun Tan 341ee5c31b UefiCpuPkg:Remove code to wakeup AP and relocate ap
After the code that loads the MTRR settings, sets the register table,
and handles the APIC setting and interrupts after INIT-SIPI-SIPI has
been moved, InitializeCpuProcedure() only contains the following logic:
1. The BSP runs ExecuteFirstSmiInit().
2. The BSP transfers the APs to a safe hlt-loop.

During S3 boot, since the APs will be relocated to a new safe
buffer by the callback of gEdkiiEndOfS3ResumeGuid in
PeiMpLib, the BSP doesn't need to transfer the APs to a safe hlt-loop
any more. SmmRestoreCpu() in CpuS3 only needs to run
ExecuteFirstSmiInit() on the BSP. So remove the code that wakes up the
APs with INIT-SIPI-SIPI and the code that relocates the APs to the
safe hlt-loop.

Signed-off-by: Dun Tan <dun.tan@intel.com>
Reviewed-by: Ray Ni <ray.ni@intel.com>
Cc: Rahul Kumar <rahul1.kumar@intel.com>
Cc: Gerd Hoffmann <kraxel@redhat.com>
Reviewed-by: Jiaxin Wu <jiaxin.wu@intel.com>
2024-06-04 07:40:27 +00:00
Jiaxin Wu 2727231b0a UefiCpuPkg/PiSmmCpuDxeSmm: Remove SmBases relocation logic
This patch removes the legacy SmBase relocation from the
PiSmmCpuDxeSmm driver. The responsibility for SmBase
relocation has been transferred to the SmmRelocationInit
interface, which now handles the following tasks:
1. Relocates the SmBase for each processor.
2. Generates the gSmmBaseHobGuid HOB.

As a result of this change, the PiSmmCpuDxeSmm driver's
role in SMM environment setup is simplified to:
1. Utilize the gSmmBaseHobGuid to determine the SmBase.
2. Perform ExecuteFirstSmiInit() to do early SMM
initialization.

Cc: Ray Ni <ray.ni@intel.com>
Cc: Zeng Star <star.zeng@intel.com>
Cc: Gerd Hoffmann <kraxel@redhat.com>
Cc: Rahul Kumar <rahul1.kumar@intel.com>
Signed-off-by: Jiaxin Wu <jiaxin.wu@intel.com>
Reviewed-by: Ray Ni <ray.ni@intel.com>
2024-05-08 01:53:58 +00:00
Dun Tan db59ff333d UefiCpuPkg:Limit PhysicalAddressBits in special case
When creating the SMM page table, limit the maximum
supported physical address bits returned by
CalculateMaximumSupportAddress() to 47 if
5-level paging is disabled.

This commit avoids the issue that physical addresses
wider than 47 bits are requested in the SMM page table
when 5-level paging is disabled.
4-level paging supports translating 48-bit
linear addresses to 52-bit physical addresses.
Since linear addresses are sign-extended, the
linear-address space of 4-level paging is
[0, 2^47-1] and
[0xffff8000_00000000, 0xffffffff_ffffffff].
So only the [0, 2^47-1] linear-address range maps
to the identical physical-address range when
5-level paging is disabled.

Signed-off-by: Dun Tan <dun.tan@intel.com>
Reviewed-by: Ray Ni <ray.ni@intel.com>
Reviewed-by: Gerd Hoffmann <kraxel@redhat.com>
Cc: Laszlo Ersek <lersek@redhat.com>
Cc: Rahul Kumar <rahul1.kumar@intel.com>
2024-01-15 01:46:36 +00:00
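
The capping logic reduces to a one-line check; a minimal sketch (names are illustrative):

    #include <Base.h>

    UINT8
    LimitSmmPhysicalAddressBitsSketch (
      IN UINT8    PhysicalAddressBits,
      IN BOOLEAN  Is5LevelPagingEnabled
      )
    {
      //
      // 4-level paging only identity-maps the [0, 2^47 - 1] linear range, so
      // cap the width used for the SMM page table at 47 bits without LA57.
      //
      if (!Is5LevelPagingEnabled && (PhysicalAddressBits > 47)) {
        PhysicalAddressBits = 47;
      }

      return PhysicalAddressBits;
    }
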
Sheng Wei 553dfb0f57 UefiCpuPkg: Backup and Restore MSR IA32_U_CET in SMI handler.
The OS may enable the CET-IBT feature by setting MSR IA32_U_CET.bit2.
If IA32_U_CET.bit2 is set, the CPU is in the WAIT_FOR_ENDBRANCH state,
and if the next instruction is not ENDBR, a #CP exception will be
triggered when the CR4.CET bit is set.
The SMI handler needs to back up MSR IA32_U_CET and clear it
before setting the CR4.CET bit, and it needs to restore MSR IA32_U_CET
when exiting the SMI handler.

Signed-off-by: Sheng Wei <w.sheng@intel.com>
Cc: Eric Dong <eric.dong@intel.com>
Cc: Ray Ni <ray.ni@intel.com>
Cc: Laszlo Ersek <lersek@redhat.com>
Cc: Wu Jiaxin <jiaxin.wu@intel.com>
Cc: Tan Dun <dun.tan@intel.com>
Reviewed-by: Ray Ni <ray.ni@intel.com>
2023-12-07 09:43:43 +00:00
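
The save/clear/restore pattern could look roughly like the sketch below; the 0x6A0 index for IA32_U_CET is stated from the SDM as an assumption, and the real code uses the MSR definitions from MdePkg:

    #include <Base.h>
    #include <Library/BaseLib.h>

    #define SKETCH_MSR_IA32_U_CET  0x6A0   // assumed IA32_U_CET index

    STATIC UINT64  mSavedUCet;

    VOID
    SmiEntrySaveAndClearUCetSketch (
      VOID
      )
    {
      //
      // Clear IA32_U_CET before setting CR4.CET so a pending
      // WAIT_FOR_ENDBRANCH state cannot cause a #CP inside the SMI handler.
      //
      mSavedUCet = AsmReadMsr64 (SKETCH_MSR_IA32_U_CET);
      AsmWriteMsr64 (SKETCH_MSR_IA32_U_CET, 0);
    }

    VOID
    SmiExitRestoreUCetSketch (
      VOID
      )
    {
      AsmWriteMsr64 (SKETCH_MSR_IA32_U_CET, mSavedUCet);
    }
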
Sheng Wei fd1dd8568c UefiCpuPkg: Only change CR4.CET bit for enable and disable CET.
Signed-off-by: Sheng Wei <w.sheng@intel.com>
Cc: Eric Dong <eric.dong@intel.com>
Cc: Ray Ni <ray.ni@intel.com>
Cc: Laszlo Ersek <lersek@redhat.com>
Cc: Wu Jiaxin <jiaxin.wu@intel.com>
Cc: Tan Dun <dun.tan@intel.com>
Reviewed-by: Ray Ni <ray.ni@intel.com>
2023-12-07 09:43:43 +00:00
Sheng Wei 3018685da8 UefiCpuPkg: Use CET macro definitions in Cet.inc for SmiEntry.nasm files.
Signed-off-by: Sheng Wei <w.sheng@intel.com>
Cc: Eric Dong <eric.dong@intel.com>
Cc: Ray Ni <ray.ni@intel.com>
Cc: Laszlo Ersek <lersek@redhat.com>
Cc: Wu Jiaxin <jiaxin.wu@intel.com>
Cc: Tan Dun <dun.tan@intel.com>
Reviewed-by: Ray Ni <ray.ni@intel.com>
2023-12-07 09:43:43 +00:00
Sheng Wei 04d47a9bf0 UefiCpuPkg: Use macro CR4_CET_BIT to replace hard code value in Cet.nasm.
Signed-off-by: Sheng Wei <w.sheng@intel.com>
Cc: Eric Dong <eric.dong@intel.com>
Cc: Ray Ni <ray.ni@intel.com>
Cc: Laszlo Ersek <lersek@redhat.com>
Cc: Wu Jiaxin <jiaxin.wu@intel.com>
Cc: Tan Dun <dun.tan@intel.com>
Reviewed-by: Ray Ni <ray.ni@intel.com>
2023-12-07 09:43:43 +00:00
Dun Tan b4dde1ae6a UefiCpuPkg: Use GenSmmPageTable() to create Smm S3 page table
Use GenSmmPageTable() to create both the IA32 and X64 SMM S3
page tables.

Signed-off-by: Dun Tan <dun.tan@intel.com>
Cc: Eric Dong <eric.dong@intel.com>
Reviewed-by: Ray Ni <ray.ni@intel.com>
Cc: Rahul Kumar <rahul1.kumar@intel.com>
Cc: Gerd Hoffmann <kraxel@redhat.com>
2023-06-30 11:07:40 +05:30
Dun Tan 701b5797b2 UefiCpuPkg: Add GenSmmPageTable() to create smm page table
This commit is a code refinement of the current SMM page-table
generation code. Add a new GenSmmPageTable() API to create the SMM page
table based on the PageTableMap() API in CpuPageTableLib. The caller
only needs to specify the paging mode and the PhysicalAddressBits to
map. This function can be used to create both IA32 PAE paging and X64
5-level and 4-level paging.

Signed-off-by: Dun Tan <dun.tan@intel.com>
Cc: Eric Dong <eric.dong@intel.com>
Reviewed-by: Ray Ni <ray.ni@intel.com>
Cc: Rahul Kumar <rahul1.kumar@intel.com>
Cc: Gerd Hoffmann <kraxel@redhat.com>
2023-06-30 11:07:40 +05:30
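
Based only on the description above, a caller would use the new API roughly as follows; the prototype shown is inferred from the commit text (paging mode plus address width), not copied from the source:

    #include <Base.h>
    #include <Library/BaseLib.h>
    #include <Library/CpuPageTableLib.h>

    //
    // Prototype as inferred from the commit message.
    //
    UINTN
    GenSmmPageTable (
      IN PAGING_MODE  PagingMode,
      IN UINT8        PhysicalAddressBits
      );

    VOID
    BuildSmmS3PageTableSketch (
      VOID
      )
    {
      UINTN  PageTableBase;

      //
      // X64 example: full mapping with 4-level paging and a 48-bit width.
      //
      PageTableBase = GenSmmPageTable (Paging4Level, 48);
      AsmWriteCr3 (PageTableBase);
    }
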
Dun Tan d706d9c64a UefiCpuPkg: Extern mSmmShadowStackSize in PiSmmCpuDxeSmm.h
Declare mSmmShadowStackSize as extern in PiSmmCpuDxeSmm.h and remove
the extern declarations of mSmmShadowStackSize from the .c files to
simplify the code.

Signed-off-by: Dun Tan <dun.tan@intel.com>
Cc: Eric Dong <eric.dong@intel.com>
Reviewed-by: Ray Ni <ray.ni@intel.com>
Cc: Rahul Kumar <rahul1.kumar@intel.com>
Cc: Gerd Hoffmann <kraxel@redhat.com>
2023-06-30 11:07:40 +05:30
Dun Tan 2d212083d0 UefiCpuPkg: Use CpuPageTableLib to convert SMM paging attribute.
Simplify the ConvertMemoryPageAttributes API to convert paging
attributes via CpuPageTableLib. The new implementation calls
PageTableMap() to update the page attributes of a memory range.
With the PageTableMap() API in CpuPageTableLib, we can remove
the complicated page-table manipulation code.

Signed-off-by: Dun Tan <dun.tan@intel.com>
Cc: Eric Dong <eric.dong@intel.com>
Reviewed-by: Ray Ni <ray.ni@intel.com>
Cc: Rahul Kumar <rahul1.kumar@intel.com>
Cc: Gerd Hoffmann <kraxel@redhat.com>
2023-06-30 11:07:40 +05:30
Tan, Dun 72a9386f67 UefiCpuPkg: Simplify the code to set smm page table as RO
Simplify the code that sets the memory used by the SMM page table as
read-only. Since the memory used by the SMM page table is in the
PageTablePool list, we only need to set all PageTablePool memory as
read-only in the SMM page table itself. Also, we only need to flush
the TLB once after setting all the page table pools as read-only.

Signed-off-by: Dun Tan <dun.tan@intel.com>
Cc: Eric Dong <eric.dong@intel.com>
Reviewed-by: Ray Ni <ray.ni@intel.com>
Cc: Rahul Kumar <rahul1.kumar@intel.com>
2022-12-21 11:13:48 +00:00
duntan b822be1a20 UefiCpuPkg/PiSmmCpuDxeSmm: Introduce page table pool mechanism
Introduce the page table pool mechanism for the SMM page table to
simplify page-table memory management and protection. This mechanism
has been used in DxeIpl. The basic idea is to allocate a bunch of
contiguous pages of memory in advance, and all future page-table
consumption will happen in those pools instead of regular system memory.
Since the page tables are centralized, we only need to mark all page
table pools as read-only, instead of searching the page-table memory
layer by layer in the SMM page table. Once the current page table pool
has been used up, another memory pool will be allocated, and the new
pool will also be set as read-only if the current page-table memory has
already been marked read-only.

Signed-off-by: Dun Tan <dun.tan@intel.com>
Cc: Eric Dong <eric.dong@intel.com>
Reviewed-by: Ray Ni <ray.ni@intel.com>
Cc: Rahul Kumar <rahul1.kumar@intel.com>
2022-12-21 11:13:48 +00:00
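
The pool idea can be sketched as below (structure and names are illustrative, not the driver's): reserve a contiguous run of pages once and carve page-table pages out of it, so every page-table page lives in a region that can be write-protected as a whole.

    #include <Uefi.h>

    #define POOL_PAGES  512              // illustrative pool size

    typedef struct {
      UINT8  *Base;                      // start of the pre-allocated pool
      UINTN  NextFreePage;               // next unused 4K page in the pool
    } PAGE_TABLE_POOL_SKETCH;

    VOID *
    AllocatePageTablePageSketch (
      IN OUT PAGE_TABLE_POOL_SKETCH  *Pool
      )
    {
      VOID  *Page;

      if (Pool->NextFreePage >= POOL_PAGES) {
        //
        // The real code allocates a fresh pool here and, if the current pool
        // is already read-only, marks the new one read-only as well.
        //
        return NULL;
      }

      Page = Pool->Base + EFI_PAGES_TO_SIZE (Pool->NextFreePage);
      Pool->NextFreePage++;
      return Page;
    }
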
Robert Guenzel 1c75bf3c21 UefiCpuPkg: Bug fix in 5LPage handling
When built in DEBUG, the code asserts that 5-level paging support is
present when the physical address width is larger than 48.
In a RELEASE build it will just force LA57 to 1 in CR4
even if CPUID(7).ECX[16] says it is not supported.

The hang (in the ASSERT) in DEBUG is not warranted, as there are
legal configurations with CPUID(7).ECX[16](==LA57)=0
and with a physical address width larger than 48 (such as 52).

This is also supported by this code:
https://github.com/tianocore/edk2/blob/master/UefiCpuPkg/PiSmmCpuDxeSmm/X64/PageTbl.c#L221
There (as long as the physical address width is smaller than or equal
to 52), any address width above 48 will be reduced to 48 and the
system can and will work without 5-level paging.

The forced setting of LA57 in CR4 (in the absence of LA57 in CPUID(7).ECX)
is a spec violation and should not happen.

Hence the proposed fix
a) removes the assert.
b) only returns TRUE from Is5LevelPagingNeeded if 5-level paging is
   actually supported by HW.

Signed-off-by: Robert Guenzel <robert.guenzel@intel.com>
2022-12-08 10:04:24 +00:00
Dun Tan 7b4754904e UefiCpuPkg/PiSmmCpuDxeSmm: Remove mInternalCr3 in PiSmmCpuDxeSmm
This patch is code refactoring and doesn't change any functionality.
Remove mInternalCr3 from the PiSmmCpuDxeSmm page-table related code. In
the previous code, mInternalCr3 was used to pass the address of a page
table, distinct from the CR3 register value, through the different
levels of the SetMemoryAttributes functions. Now remove it and pass the
page table base address down from the root function's parameter to
simplify the code logic.

Signed-off-by: Dun Tan <dun.tan@intel.com>
Cc: Eric Dong <eric.dong@intel.com>
Cc: Rahul Kumar <rahul1.kumar@intel.com>
Reviewed-by: Ray Ni <ray.ni@intel.com>
2022-08-15 05:15:43 +00:00
Jason 2aa107c0aa UefiCpuPkg: Replace Opcode with the corresponding instructions.
REF: https://bugzilla.tianocore.org/show_bug.cgi?id=3790

Replace Opcode with the corresponding instructions.
The code changes have been verified with CompareBuild.py tool, which
can be used to compare the results of two different EDK II builds to
determine if they generate the same binaries.
(tool link: https://github.com/mdkinney/edk2/tree/sandbox/CompareBuild)

Signed-off-by: Jason Lou <yun.lou@intel.com>
Reviewed-by: Ray Ni <ray.ni@intel.com>
Cc: Eric Dong <eric.dong@intel.com>
Cc: Laszlo Ersek <lersek@redhat.com>
Cc: Rahul Kumar <rahul1.kumar@intel.com>
2022-03-01 01:45:47 +00:00
Michael Kubacki 053e878bfb UefiCpuPkg: Apply uncrustify changes
REF: https://bugzilla.tianocore.org/show_bug.cgi?id=3737

Apply uncrustify changes to .c/.h files in the UefiCpuPkg package

Cc: Andrew Fish <afish@apple.com>
Cc: Leif Lindholm <leif@nuviainc.com>
Cc: Michael D Kinney <michael.d.kinney@intel.com>
Signed-off-by: Michael Kubacki <michael.kubacki@microsoft.com>
Reviewed-by: Ray Ni <ray.ni@intel.com>
2021-12-07 17:24:28 +00:00
Sheng, W 455b0347a7 UefiCpuPkg/PiSmmCpuDxeSmm: Use SMM Interrupt Shadow Stack
When the CET shadow stack feature is enabled, it needs to use IST for
the exceptions and uses the interrupt shadow stack for the stack switch.
The shadow stack should be 32-byte aligned.
Check the IST field when clearing the shadow stack token busy bit before
using RETF.

REF: https://bugzilla.tianocore.org/show_bug.cgi?id=3728

Signed-off-by: Sheng Wei <w.sheng@intel.com>
Cc: Eric Dong <eric.dong@intel.com>
Cc: Ray Ni <ray.ni@intel.com>
Cc: Rahul Kumar <rahul1.kumar@intel.com>
Reviewed-by: Ray Ni <ray.ni@intel.com>
2021-11-12 12:50:19 +00:00
Sheng Wei 0a6b303dce UefiCpuPkg/ExceptionLib: Conditionally clear shadow stack token busy bit
When entering an SMM exception, there will be a stack switch only if the
IST field of the interrupt gate is set. When the CET shadow stack
feature is enabled, if there is a stack switch between the SMM exception
and SMM, the shadow stack token busy bit needs to be cleared when
returning from the SMM exception to SMM. In the UEFI BIOS, only the page
fault exception does the stack switch, and only when the SMM stack guard
feature is enabled. So the condition for clearing the shadow stack token
busy bit should be: SMM stack guard enabled, CET shadow stack feature
enabled, and a page fault exception.
The shadow stack token should be initialized as a UINT64.

REF: https://bugzilla.tianocore.org/show_bug.cgi?id=3462

Signed-off-by: Sheng Wei <w.sheng@intel.com>
Cc: Eric Dong <eric.dong@intel.com>
Cc: Ray Ni <ray.ni@intel.com>
Cc: Laszlo Ersek <lersek@redhat.com>
Cc: Rahul Kumar <rahul1.kumar@intel.com>
Cc: Jiewen Yao <jiewen.yao@intel.com>
Cc: Qihua Zhuang <qihua.zhuang@intel.com>
Cc: Daquan Dong <daquan.dong@intel.com>
Cc: Justin Tong <justin.tong@intel.com>
Cc: Tom Xu <tom.xu@intel.com>
Reviewed-by: Eric Dong <eric.dong@intel.com>
2021-07-06 08:18:21 +00:00
Kun Qin c3dcbce26f UefiCpuPkg: PiSmmCpuDxeSmm: Not to Change Bitwidth During Static Paging
REF: https://bugzilla.tianocore.org/show_bug.cgi?id=3300

The current implementation of the SetStaticPageTable routine in the
PiSmmCpuDxeSmm driver checks a global variable, mPhysicalAddressBits,
and eventually caps any value larger than 39 at 39.

This global variable is used in ConvertMemoryPageAttributes, which backs
SmmSetMemoryAttributes and SmmClearMemoryAttributes. Thus, for a
processor that supports an address width of more than 39 bits, trying to
mark page-table regions above 39 bits will always return EFI_UNSUPPORTED.

This change updates the interface of the SetStaticPageTable function to
take PhysicalAddressBits as an input parameter, in order to avoid
changing/accessing the global variable.

Cc: Eric Dong <eric.dong@intel.com>
Reviewed-by: Ray Ni <ray.ni@intel.com>
Reviewed-by: Laszlo Ersek <lersek@redhat.com>
Cc: Rahul Kumar <rahul1.kumar@intel.com>

Fixes: 4eee0cc7cc
Signed-off-by: Kun Qin <kuqin12@gmail.com>
2021-04-20 00:32:24 +00:00
Sheng, W efa7f4df0f UefiCpuPkg/PiSmmCpuDxeSmm: Support detect SMM shadow stack overflow
Use SMM stack guard feature to detect SMM shadow stack overflow.

REF: https://bugzilla.tianocore.org/show_bug.cgi?id=3280

Signed-off-by: Sheng Wei <w.sheng@intel.com>
Cc: Eric Dong <eric.dong@intel.com>
Reviewed-by: Ray Ni <ray.ni@intel.com>
Cc: Laszlo Ersek <lersek@redhat.com>
Cc: Rahul Kumar <rahul1.kumar@intel.com>
Reviewed-by: Jiewen Yao <jiewen.yao@intel.com>
Cc: Roger Feng <roger.feng@intel.com>
2021-04-09 05:33:35 +00:00
Sheng Wei ef91b07388 UefiCpuPkg/PiSmmCpuDxeSmm: Fix SMM stack offset is not correct
In the functions InitGdt(), SmiPFHandler() and Gen4GPageTable(), the
code uses CpuIndex * mSmmStackSize to get the per-CPU SMM stack address
offset, which misses the SMM shadow stack size. Each processor uses
mSmmStackSize + mSmmShadowStackSize of memory, so the code should use
CpuIndex * (mSmmStackSize + mSmmShadowStackSize) to get this SMM stack
address offset. If mSmmShadowStackSize > 0 and multiple processors are
enabled, the old code gets a wrong offset value.
The CET shadow stack feature sets the value of mSmmShadowStackSize.

REF: https://bugzilla.tianocore.org/show_bug.cgi?id=3237

Signed-off-by: Sheng Wei <w.sheng@intel.com>
Cc: Eric Dong <eric.dong@intel.com>
Cc: Ray Ni <ray.ni@intel.com>
Cc: Laszlo Ersek <lersek@redhat.com>
Cc: Rahul Kumar <rahul1.kumar@intel.com>
Cc: Jiewen Yao <jiewen.yao@intel.com>
Cc: Roger Feng <roger.feng@intel.com>
Reviewed-by: Jiewen Yao <jiewen.yao@intel.com>
Reviewed-by: Ray Ni <ray.ni@intel.com>
2021-03-02 05:11:55 +00:00
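
The arithmetic of the fix is simply that the per-CPU stride must include both stacks; a small sketch:

    #include <Base.h>

    UINTN
    GetSmmStackOffsetSketch (
      IN UINTN  CpuIndex,
      IN UINTN  SmmStackSize,
      IN UINTN  SmmShadowStackSize
      )
    {
      //
      // Each CPU's block holds the normal SMM stack followed by its shadow
      // stack, so the stride is the sum of both sizes.
      // (The old code used CpuIndex * SmmStackSize and therefore pointed into
      // the wrong CPU's block whenever SmmShadowStackSize > 0.)
      //
      return CpuIndex * (SmmStackSize + SmmShadowStackSize);
    }
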
Sheng Wei 0930e7ff64 UefiCpuPkg/CpuExceptionHandlerLib: Clear CET shadow stack token busy bit
If the CET shadow stack feature is enabled in SMM and stack switching is
enabled, then when execution transfers from the SMM handler to the SMM
exception, the CPU checks whether the SMM exception shadow stack token
busy bit is cleared.
If it is set, a #DF exception is triggered.
If it is not set, the CPU sets the busy bit when entering the SMM
exception.
So the busy bit should be cleared when returning from the SMM exception
to the SMM handler. Otherwise, leaving the busy bit set will trigger a
#DF exception the next time the SMM exception is entered.
Therefore, we use the SAVEPREVSSP, CLRSSBSY and RSTORSSP instructions to
clear the shadow stack token busy bit before the RETF instruction in the
SMM exception.

REF: https://bugzilla.tianocore.org/show_bug.cgi?id=3192

Signed-off-by: Sheng Wei <w.sheng@intel.com>
Cc: Eric Dong <eric.dong@intel.com>
Cc: Ray Ni <ray.ni@intel.com>
Cc: Laszlo Ersek <lersek@redhat.com>
Cc: Rahul Kumar <rahul1.kumar@intel.com>
Cc: Jiewen Yao <jiewen.yao@intel.com>
Cc: Roger Feng <roger.feng@intel.com>
Reviewed-by: Jiewen Yao <jiewen.yao@intel.com>
Reviewed-by: Ray Ni <ray.ni@intel.com>
2021-03-02 05:11:55 +00:00
Sheng Wei 404250c8f7 UefiCpuPkg/PiSmmCpuDxeSmm: Reflect page table depth with page table address
When trying to get the page table base: if mInternalCr3 is zero, the
code uses the page table from CR3 and derives the page table depth from
the CR4.LA57 bit. If mInternalCr3 is non-zero, it uses the page table
from mInternalCr3 and must reflect the page table depth of mInternalCr3
at the same time. In the case of X64, we use m5LevelPagingNeeded to
reflect the depth of the page table; in the case of IA32, the page table
depth information is not needed.

This patch is a bug fix for enabling the CET feature with 5-level paging.
The SMM page tables are allocated / initialized in PiCpuSmmEntry().
When CET is enabled, PiCpuSmmEntry() must further modify the attributes
of the shadow stack pages. This page table is not yet set in CR3 in
PiCpuSmmEntry(), so the page table base address is set to mInternalCr3
in order to modify the page table attributes. The CR4.LA57 bit cannot be
used to reflect the page table depth for mInternalCr3.
So we create an architecture-specific implementation, GetPageTable(),
with two output parameters: one returns the page table address, and the
other reflects whether 5-level paging is in use.

REF: https://bugzilla.tianocore.org/show_bug.cgi?id=3015

Signed-off-by: Sheng Wei <w.sheng@intel.com>
Cc: Eric Dong <eric.dong@intel.com>
Cc: Ray Ni <ray.ni@intel.com>
Cc: Laszlo Ersek <lersek@redhat.com>
Cc: Rahul Kumar <rahul1.kumar@intel.com>
Cc: Jiewen Yao <jiewen.yao@intel.com>
Reviewed-by: Laszlo Ersek <lersek@redhat.com>
Reviewed-by: Eric Dong <eric.dong@intel.com>
2020-11-18 04:52:26 +00:00
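
The architecture-specific helper described above might look roughly like this on X64; mInternalCr3 and m5LevelPagingNeeded are the module globals named in the commit message, and the rest is a hedged sketch:

    #include <Base.h>
    #include <Library/BaseLib.h>

    extern UINTN    mInternalCr3;          // page table being edited, 0 if none
    extern BOOLEAN  m5LevelPagingNeeded;   // X64 module global per the commit

    VOID
    GetPageTableSketch (
      OUT UINTN    *Base,
      OUT BOOLEAN  *Is5LevelPaging
      )
    {
      if (mInternalCr3 != 0) {
        //
        // The table is not loaded into CR3 yet, so CR4.LA57 says nothing
        // about its depth; report the depth tracked alongside mInternalCr3.
        //
        *Base           = mInternalCr3;
        *Is5LevelPaging = m5LevelPagingNeeded;
      } else {
        *Base           = AsmReadCr3 () & ~(UINTN)0xFFF;   // drop CR3 flag bits
        *Is5LevelPaging = (AsmReadCr4 () & BIT12) != 0;    // CR4.LA57
      }
    }
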
Tom Lendacky 7b7508ad78 UefiCpuPkg: Allow AP booting under SEV-ES
BZ: https://bugzilla.tianocore.org/show_bug.cgi?id=2198

Typically, an AP is booted using the INIT-SIPI-SIPI sequence. This
sequence is intercepted by the hypervisor, which sets the AP's registers
to the values requested by the sequence. At that point, the hypervisor can
start the AP, which will then begin execution at the appropriate location.

Under SEV-ES, AP booting presents some challenges since the hypervisor is
not allowed to alter the AP's register state. In this situation, we have
to distinguish between the AP's first boot and AP's subsequent boots.

First boot:
 Once the AP's register state has been defined (which is before the guest
 is first booted) it cannot be altered. Should the hypervisor attempt to
 alter the register state, the change would be detected by the hardware
 and the VMRUN instruction would fail. Given this, the first boot for the
 AP is required to begin execution with this initial register state, which
 is typically the reset vector. This prevents the BSP from directing the
 AP startup location through the INIT-SIPI-SIPI sequence.

 To work around this, the firmware will provide a build time reserved area
 that can be used as the initial IP value. The hypervisor can extract this
 location value by checking for the SEV-ES reset block GUID that must be
 located 48-bytes from the end of the firmware. The format of the SEV-ES
 reset block area is:

   0x00 - 0x01 - SEV-ES Reset IP
   0x02 - 0x03 - SEV-ES Reset CS Segment Base[31:16]
   0x04 - 0x05 - Size of the SEV-ES reset block
   0x06 - 0x15 - SEV-ES Reset Block GUID
                   (00f771de-1a7e-4fcb-890e-68c77e2fb44e)

   The total size is 22 bytes. Any expansion to this block must be done
   by adding new values before existing values.

 The hypervisor will use the IP and CS values obtained from the SEV-ES
 reset block to set as the AP's initial values. The CS Segment Base
 represents the upper 16 bits of the CS segment base and must be left
 shifted by 16 bits to form the complete CS segment base value.

 Before booting the AP for the first time, the BSP must initialize the
 SEV-ES reset area. This consists of programming a FAR JMP instruction
 to the contents of a memory location that is also located in the SEV-ES
 reset area. The BSP must program the IP and CS values for the FAR JMP
 based on values derived from the INIT-SIPI-SIPI sequence.

Subsequent boots:
 Again, the hypervisor cannot alter the AP register state, so a method is
 required to take the AP out of halt state and redirect it to the desired
 IP location. If it is determined that the AP is running in an SEV-ES
 guest, then instead of calling CpuSleep(), a VMGEXIT is issued with the
 AP Reset Hold exit code (0x80000004). The hypervisor will put the AP in
 a halt state, waiting for an INIT-SIPI-SIPI sequence. Once the sequence
 is recognized, the hypervisor will resume the AP. At this point the AP
 must transition from the current 64-bit long mode down to 16-bit real
 mode and begin executing at the derived location from the INIT-SIPI-SIPI
 sequence.

 Another change is around the area of obtaining the (x2)APIC ID during AP
 startup. During AP startup, the AP can't take a #VC exception before the
 AP has established a stack. However, the AP stack is set by using the
 (x2)APIC ID, which is obtained through CPUID instructions. A CPUID
 instruction will cause a #VC, so a different method must be used. The
 GHCB protocol supports a method to obtain CPUID information from the
 hypervisor through the GHCB MSR. This method does not require a stack,
 so it is used to obtain the necessary CPUID information to determine the
 (x2)APIC ID.

The new 16-bit protected mode GDT entry is used in order to transition
from 64-bit long mode down to 16-bit real mode.

A new assembler routine is created that takes the AP from 64-bit long mode
to 16-bit real mode.  This is located under 1MB in memory and transitions
from 64-bit long mode to 32-bit compatibility mode to 16-bit protected
mode and finally 16-bit real mode.

Cc: Eric Dong <eric.dong@intel.com>
Cc: Ray Ni <ray.ni@intel.com>
Cc: Laszlo Ersek <lersek@redhat.com>
Reviewed-by: Eric Dong <eric.dong@intel.com>
Signed-off-by: Tom Lendacky <thomas.lendacky@amd.com>
Regression-tested-by: Laszlo Ersek <lersek@redhat.com>
2020-08-17 02:46:39 +00:00
Kirkendall, Garrett bdafda8c45 UefiCpuPkg: PiSmmCpuDxeSmm skip MSR_IA32_MISC_ENABLE manipulation on AMD
AMD does not support MSR_IA32_MISC_ENABLE.  Accessing that register
causes an exception on AMD processors.  If Execution Disable is
supported, but the processor is an AMD processor, skip manipulating the
MSR_IA32_MISC_ENABLE[34] XD Disable bit.

Cc: Eric Dong <eric.dong@intel.com>
Cc: Ray Ni <ray.ni@intel.com>
Cc: Laszlo Ersek <lersek@redhat.com>
Signed-off-by: Garrett Kirkendall <garrett.kirkendall@amd.com>
Message-Id: <20200622131825.1352-5-Garrett.Kirkendall@amd.com>
Reviewed-by: Laszlo Ersek <lersek@redhat.com>
Tested-by: Laszlo Ersek <lersek@redhat.com>
Reviewed-by: Eric Dong <eric.dong@intel.com>
2020-07-07 23:25:16 +00:00
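
A sketch of the guard described above; the 0x1A0 index and the vendor check are assumptions for illustration (the driver uses the MdePkg MSR definition and a CPUID-based vendor test):

    #include <Base.h>
    #include <Library/BaseLib.h>

    #define SKETCH_MSR_IA32_MISC_ENABLE  0x1A0   // assumed index; bit 34 = XD Disable

    VOID
    EnableXdIfSupportedSketch (
      IN BOOLEAN  IsAmdProcessor   // hypothetical: derived from the CPUID vendor string
      )
    {
      UINT64  MiscEnable;

      if (IsAmdProcessor) {
        //
        // AMD processors do not implement this MSR; touching it would fault.
        //
        return;
      }

      MiscEnable  = AsmReadMsr64 (SKETCH_MSR_IA32_MISC_ENABLE);
      MiscEnable &= ~BIT34;        // clear "XD Bit Disable" so XD stays usable
      AsmWriteMsr64 (SKETCH_MSR_IA32_MISC_ENABLE, MiscEnable);
    }
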
Antoine Coeur ef62da4ff7 UefiCpuPkg/PiSmm: Fix various typos
Fix various typos in comments and documentation.

Cc: Eric Dong <eric.dong@intel.com>
Cc: Ray Ni <ray.ni@intel.com>
Cc: Laszlo Ersek <lersek@redhat.com>
Signed-off-by: Antoine Coeur <coeur@gmx.fr>
Reviewed-by: Philippe Mathieu-Daude <philmd@redhat.com>
Reviewed-by: Laszlo Ersek <lersek@redhat.com>
Reviewed-by: Eric Dong <eric.dong@intel.com>
Signed-off-by: Philippe Mathieu-Daude <philmd@redhat.com>
Message-Id: <20200207010831.9046-78-philmd@redhat.com>
2020-02-10 22:30:07 +00:00
Shenglei Zhang 9c33f16f8c UefiCpuPkg: Update the coding styles
In MpLib.c, remove the white space on a new line.
In PageTbl.c and PiSmmCpuDxeSmm.h, update the comment style.

Cc: Eric Dong <eric.dong@intel.com>
Cc: Ray Ni <ray.ni@intel.com>
Cc: Laszlo Ersek <lersek@redhat.com>
Signed-off-by: Shenglei Zhang <shenglei.zhang@intel.com>
Reviewed-by: Eric Dong <eric.dong@intel.com>
Reviewed-by: Laszlo Ersek <lersek@redhat.com>
2019-12-04 06:00:24 +00:00
Ray Ni 86ad762fa7 UefiCpuPkg/PiSmmCpu: Enable 5L paging only when phy addr line > 48
Today's behavior is to enable 5-level paging when the CPU supports it
(CPUID[7,0].ECX.BIT[16] is set).

The patch changes the behavior to enable 5-level paging only when two
conditions are both met:
1. The CPU supports it;
2. The max physical address bits is bigger than 48.

Because 4-level paging can address physical addresses up to
2^48 - 1, there is no need to enable 5-level paging when the max
physical address bits is <= 48.

Signed-off-by: Ray Ni <ray.ni@intel.com>
Cc: Liming Gao <liming.gao@intel.com>
Reviewed-by: Eric Dong <eric.dong@intel.com>
Cc: Laszlo Ersek <lersek@redhat.com>
2019-09-13 16:20:54 +08:00
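
The resulting decision is a two-condition check; a minimal sketch of it:

    #include <Base.h>

    BOOLEAN
    Is5LevelPagingNeededSketch (
      IN BOOLEAN  CpuSupportsLa57,       // CPUID.(EAX=07H,ECX=0):ECX[16]
      IN UINT8    MaxPhysicalAddressBits
      )
    {
      //
      // 4-level paging already reaches 2^48 - 1, so 5-level paging is only
      // worthwhile (and only legal to enable) when the CPU supports LA57 and
      // more than 48 physical address bits must be mapped.
      //
      return CpuSupportsLa57 && (MaxPhysicalAddressBits > 48);
    }
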
Ray Ni 79186ddcdd UefiCpuPkg/PiSmmCpu: Restrict access per PcdCpuSmmRestrictedMemoryAccess
Today's behavior is to always restrict access to non-SMRAM memory
regardless of the value of PcdCpuSmmRestrictedMemoryAccess.

Because RAS components require access to all non-SMRAM memory, the
patch changes the code logic to honor PcdCpuSmmRestrictedMemoryAccess,
so that the restriction takes effect, and page-table memory is also
protected, only when the PCD is TRUE.

Because the IA32 build doesn't reference this PCD, the restriction
always takes effect in the IA32 build.

Signed-off-by: Ray Ni <ray.ni@intel.com>
Reviewed-by: Eric Dong <eric.dong@intel.com>
Cc: Jiewen Yao <jiewen.yao@intel.com>
Reviewed-by: Laszlo Ersek <lersek@redhat.com>
2019-09-04 01:00:10 +08:00