Some features, like RAS, need to use processor extended information
under SMM, so add code to support it.
Signed-off-by: Hongbin1 Zhang <hongbin1.zhang@intel.com>
Cc: Eric Dong <eric.dong@intel.com>
Reviewed-by: Ray Ni <ray.ni@intel.com>
Cc: Rahul Kumar <rahul1.kumar@intel.com>
Acked-by: Gerd Hoffmann <kraxel@redhat.com>
Cc: Star Zeng <star.zeng@intel.com>
Reviewed-by: Jiaxin Wu <jiaxin.wu@intel.com>
BZ: https://bugzilla.tianocore.org/show_bug.cgi?id=4182
Removes the SmmCpuFeaturesReadSaveStateRegister and
SmmCpuFeaturesWriteSaveStateRegister functions from the
SmmCpuFeaturesLib library.
The MmSaveStateLib library replaces the functionality of the above
functions.
Platforms, old and new, need to use the MmSaveStateLib library to
read/write save state registers.
The current implementation supports Intel and AMD processors.
Cc: Paul Grimes <paul.grimes@amd.com>
Cc: Abner Chang <abner.chang@amd.com>
Cc: Eric Dong <eric.dong@intel.com>
Cc: Ray Ni <ray.ni@intel.com>
Cc: Rahul Kumar <rahul1.kumar@intel.com>
Cc: Gerd Hoffmann <kraxel@redhat.com>
Cc: Ard Biesheuvel <ardb+tianocore@kernel.org>
Cc: Jiewen Yao <jiewen.yao@intel.com>
Cc: Jordan Justen <jordan.l.justen@intel.com>
Signed-off-by: Abdul Lateef Attar <abdattar@amd.com>
Reviewed-by: Abner Chang <abner.chang@amd.com>
Reviewed-by: Ray Ni <ray.ni@intel.com>
Remove the unnecessary function SetNotPresentPage(). We can directly
use ConvertMemoryPageAttributes() to set a range to non-present.
Signed-off-by: Dun Tan <dun.tan@intel.com>
Cc: Eric Dong <eric.dong@intel.com>
Reviewed-by: Ray Ni <ray.ni@intel.com>
Cc: Rahul Kumar <rahul1.kumar@intel.com>
Cc: Gerd Hoffmann <kraxel@redhat.com>
Sort mSmmCpuSmramRanges after getting the SMRAM info in the
FindSmramInfo() function.
Signed-off-by: Dun Tan <dun.tan@intel.com>
Cc: Eric Dong <eric.dong@intel.com>
Reviewed-by: Ray Ni <ray.ni@intel.com>
Cc: Rahul Kumar <rahul1.kumar@intel.com>
Cc: Gerd Hoffmann <kraxel@redhat.com>
MP procedures are the procedures that run on every CPU thread.
The EDKII perf infrastructure is not MP-safe, so it cannot be called
from MP procedures.
This patch adds SMM MP perf-logging support in SmmMpPerf.c; a sketch
of the per-CPU idea follows the list below.
The following procedures are perf-logged:
* SmmInitHandler
* SmmCpuFeaturesRendezvousEntry
* PlatformValidSmi
* SmmCpuFeaturesRendezvousExit
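A minimal sketch of that idea (hypothetical names, not the actual
SmmMpPerf.c API): each processor writes timestamps only into its own
slot, indexed by CpuIndex, so MP procedures can log without taking
locks, and the BSP aggregates the records later from a single thread.

  //
  // One record per logical processor; AsmReadTsc() is the BaseLib
  // TSC reader.
  //
  typedef struct {
    UINT64    Begin;
    UINT64    End;
  } MP_PERF_RECORD;

  MP_PERF_RECORD  *mMpPerfRecords;   // one entry per CPU, BSP-allocated

  VOID
  MpPerfBegin (
    IN UINTN  CpuIndex
    )
  {
    mMpPerfRecords[CpuIndex].Begin = AsmReadTsc ();
  }

  VOID
  MpPerfEnd (
    IN UINTN  CpuIndex
    )
  {
    mMpPerfRecords[CpuIndex].End = AsmReadTsc ();
  }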
Signed-off-by: Ray Ni <ray.ni@intel.com>
Cc: Eric Dong <eric.dong@intel.com>
Cc: Rahul Kumar <rahul1.kumar@intel.com>
Cc: Gerd Hoffmann <kraxel@redhat.com>
Cc: Jiaxin Wu <jiaxin.wu@intel.com>
Reviewed-by: Jiaxin Wu <jiaxin.wu@intel.com>
Reviewed-by: Eric Dong <eric.dong@intel.com>
REF: https://bugzilla.tianocore.org/show_bug.cgi?id=4424
In Relaxed-AP Sync Mode, the BSP does not wait for all APs to arrive.
However, PerformRemainingTasks() needs to wait for all APs to arrive
before calling SetMemMapAttributes() and ConfigSmmCodeAccessCheck()
when mSmmReadyToLock is true. In SetMemMapAttributes(),
SmmSetMemoryAttributesEx() calls FlushTlbForAll(), which needs to
start up the APs, so all APs must have arrived. Like
SetMemMapAttributes(), ConfigSmmCodeAccessCheck() also starts up the
APs.
Cc: Eric Dong <eric.dong@intel.com>
Cc: Ray Ni <ray.ni@intel.com>
Signed-off-by: Zhihao Li <zhihao.li@intel.com>
Reviewed-by: Ray Ni <ray.ni@intel.com>
BufferPages is UINTN, so we need "%Lu" when printing it, to avoid
truncation. Also cast it to UINT64 to make sure this works for 32-bit
builds too.
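A hedged illustration of the fix (the message text here is made up
for the example):

  DEBUG ((
    DEBUG_ERROR,
    "Failed to allocate %Lu pages.\n",
    (UINT64)BufferPages    // cast so "%Lu" sees 64 bits on IA32 too
    ));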
Fixes: 4f441d024b ("UefiCpuPkg/PiSmmCpuDxeSmm: fix error handling")
Reported-by: Laszlo Ersek <lersek@redhat.com>
Signed-off-by: Gerd Hoffmann <kraxel@redhat.com>
Reviewed-by: Laszlo Ersek <lersek@redhat.com>
Reviewed-by: Ray Ni <ray.ni@intel.com>
ASSERT() is not proper handling of allocation failures; it gets
compiled out on RELEASE builds. Print a message and enter a dead loop
instead.
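A sketch of the replacement pattern (message text assumed):

  Buffer = AllocatePages (BufferPages);
  if (Buffer == NULL) {
    DEBUG ((DEBUG_ERROR, "Out of SMRAM, cannot continue.\n"));
    CpuDeadLoop ();   // unlike ASSERT(), this survives RELEASE builds
  }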
Signed-off-by: Gerd Hoffmann <kraxel@redhat.com>
Reviewed-by: Ray Ni <ray.ni@intel.com>
It's highly unlikely the code ever runs on processors which are
almost 30 years old. Drop the code handling them.
Bugzilla: https://bugzilla.tianocore.org/show_bug.cgi?id=4345
Signed-off-by: Gerd Hoffmann <kraxel@redhat.com>
Reviewed-by: Ray Ni <ray.ni@intel.com>
REF: https://bugzilla.tianocore.org/show_bug.cgi?id=4337
The existing SMBASE relocation is in the PiSmmCpuDxeSmm driver, which
relocates the SMBASE of each processor by setting the SMBASE field in
the saved state map (at offset 7EF8h) to a new value. The RSM
instruction reloads the internal SMBASE register with the value in
the SMBASE field each time it exits SMM. All subsequent SMI requests
use the new SMBASE to find the starting address of the SMI handler
(at SMBASE + 8000h).
Because the default SMBASE for all x86 processors is 0x30000, the
APs' 1st SMI for rebasing has to be executed one AP at a time to keep
the processors from overwriting each other's SMM Save State Area (see
the existing SmmRelocateBases() function). Each AP has to wait for
the previous AP to finish its 1st SMI before it can enter its own 1st
SMI for rebasing via the SMI IPI command, so the existing SMBASE
relocation has to run serially. Besides, it needs very complex code
to handle the AP exit semaphore (mRebased[Index]), which hooks the
return address in the SMM Save State so that the semaphore code runs
immediately after the AP exits SMM for SMBASE relocation (see the
existing SemaphoreHook() function).
With SMM Base HOB support, PiSmmCpuDxeSmm does not need the RSM
instruction to do the SMBASE relocation. The SMBASE register of each
processor has already been programmed, and all SMBASE addresses are
recorded in the SMM Base HOB. The same default SMBASE address
(0x30000) is therefore never used, so the processors cannot overwrite
each other's SMM Save State Area in the PiSmmCpuDxeSmm driver.
This allows the first SMI init to be executed in parallel, saving
boot time on multi-core systems. Besides, the semaphore hook code
logic is no longer required, which greatly simplifies the SMBASE
relocation flow.
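A sketch of how the driver can consume the HOB (the GUID and field
names follow UefiCpuPkg's SmmBaseHob.h as I recall; treat the details
as assumptions):

  EFI_HOB_GUID_TYPE  *GuidHob;
  SMM_BASE_HOB_DATA  *SmmBaseHobData;

  GuidHob = GetFirstGuidHob (&gSmmBaseHobGuid);
  if (GuidHob != NULL) {
    SmmBaseHobData = GET_GUID_HOB_DATA (GuidHob);
    //
    // SmBase[] already holds each processor's programmed SMBASE, so
    // the serial, RSM-based relocation can be skipped entirely.
    //
  }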
The main changes are:
* Assume the biggest possible tile size is 8K.
* Combine the two SMI entries (gcSmmInitTemplate & gcSmiHandlerTemplate)
  into one (gcSmiHandlerTemplate); the new SMI handler takes one of
  two paths: one to SmmCpuFeaturesInitializeProcessor(), the other to
  the SMM Core Entry Point.
* Issue SMI IPIs (All-Excluding-Self SMM IPI + BSP SMM IPI) for the
  first SMI init before normal SMI sources occur.
* Call SmmCpuFeaturesInitializeProcessor() in parallel.
Cc: Eric Dong <eric.dong@intel.com>
Cc: Ray Ni <ray.ni@intel.com>
Cc: Zeng Star <star.zeng@intel.com>
Cc: Laszlo Ersek <lersek@redhat.com>
Acked-by: Gerd Hoffmann <kraxel@redhat.com>
Cc: Rahul Kumar <rahul1.kumar@intel.com>
Signed-off-by: Jiaxin Wu <jiaxin.wu@intel.com>
Reviewed-by: Ray Ni <ray.ni@intel.com>
This patch replaces the mIsBsp check with an mBspApicId check.
mIsBsp becomes a local variable (IsBsp), so it can be computed
dynamically in the function. Instead, we define mBspApicId, which
records the BSP APIC ID used for the comparison in SmmInitHandler.
With this change, SmmInitHandler can run in parallel during SMM init;
a minimal sketch of the check follows.
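A minimal sketch of the check (GetApicId() is the LocalApicLib API;
the variable names come from this description):

  //
  // Each processor decides locally whether it is the BSP, so
  // SmmInitHandler no longer depends on a pre-computed global flag.
  //
  IsBsp = (BOOLEAN)(mBspApicId == GetApicId ());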
Note:
This patch is the preparatory work for refining SmmInitHandler; the
next step is to combine the two SMI entries (gcSmmInitTemplate &
gcSmiHandlerTemplate) into one (gcSmiHandlerTemplate), whose new SMI
handler will call SmmInitHandler in parallel to do the init.
Cc: Eric Dong <eric.dong@intel.com>
Cc: Ray Ni <ray.ni@intel.com>
Cc: Zeng Star <star.zeng@intel.com>
Cc: Laszlo Ersek <lersek@redhat.com>
Cc: Gerd Hoffmann <kraxel@redhat.com>
Cc: Rahul Kumar <rahul1.kumar@intel.com>
Signed-off-by: Jiaxin Wu <jiaxin.wu@intel.com>
Reviewed-by: Ray Ni <ray.ni@intel.com>
Reviewed-by: Gerd Hoffmann <kraxel@redhat.com>
REF: https://bugzilla.tianocore.org/show_bug.cgi?id=4338
There is no need to call InitializeMpSyncData during normal-boot SMI
init, because mSmmMpSyncData is NULL at that time. mSmmMpSyncData is
allocated in InitializeMpServiceData, which is invoked after the
normal-boot SMI init (SmmRelocateBases).
Cc: Eric Dong <eric.dong@intel.com>
Cc: Ray Ni <ray.ni@intel.com>
Cc: Zeng Star <star.zeng@intel.com>
Cc: Laszlo Ersek <lersek@redhat.com>
Cc: Gerd Hoffmann <kraxel@redhat.com>
Cc: Rahul Kumar <rahul1.kumar@intel.com>
Signed-off-by: Jiaxin Wu <jiaxin.wu@intel.com>
Acked-by: Gerd Hoffmann <kraxel@redhat.com>
Reviewed-by: Ray Ni <ray.ni@intel.com>
Introduce a page table pool mechanism for the SMM page table to
simplify page table memory management and protection. This mechanism
has already been used in DxeIpl. The basic idea is to allocate a
bunch of contiguous pages of memory in advance, and to serve all
future page table allocations from those pools instead of from system
memory.
Since the page tables are centralized, we only need to mark all page
table pools as read-only, instead of searching the SMM page table
memory layer by layer. Once the current page table pool has been used
up, another memory pool is allocated, and the new pool is also set as
RO if the current page table memory has been marked as RO.
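A condensed sketch of the pool allocator (names and the pool unit
size are assumptions; error handling is elided):

  #define PAGE_TABLE_POOL_UNIT_PAGES  512   // assumed pool unit

  VOID   *mPageTablePool     = NULL;
  UINTN  mPageTablePoolFree  = 0;

  VOID *
  AllocatePageTableMemory (
    IN UINTN  Pages
    )
  {
    VOID  *Buffer;

    if ((mPageTablePool == NULL) || (Pages > mPageTablePoolFree)) {
      //
      // Reserve a new contiguous pool; if the page tables are already
      // read-only, the new pool must be marked read-only as well.
      //
      mPageTablePool     = AllocatePages (PAGE_TABLE_POOL_UNIT_PAGES);
      mPageTablePoolFree = PAGE_TABLE_POOL_UNIT_PAGES;
    }

    Buffer              = mPageTablePool;
    mPageTablePool      = (UINT8 *)mPageTablePool + EFI_PAGES_TO_SIZE (Pages);
    mPageTablePoolFree -= Pages;
    return Buffer;
  }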
Signed-off-by: Dun Tan <dun.tan@intel.com>
Cc: Eric Dong <eric.dong@intel.com>
Reviewed-by: Ray Ni <ray.ni@intel.com>
Cc: Rahul Kumar <rahul1.kumar@intel.com>
REF: https://bugzilla.tianocore.org/show_bug.cgi?id=4173
As core counts keep increasing, it is hard to reflect the state of
all APs via the AP bitvector support in the register. Actually, the
SMM CPU driver doesn't need to check each AP's state to know whether
all CPUs are in SMI; one alternative method is to check the SMM
Delayed & Blocked AP counts:
APs in SMI + Blocked Count + Disabled Count >= All supported APs
(code comments explain why the sum can be > All supported APs)
With the above change, the values of "SmmRegSmmEnable",
"SmmRegSmmDelayed" and "SmmRegSmmBlocked" returned from
SmmCpuFeaturesLib should be AP counts within the existing CPU package:
For registers that return a bitvector state,
SmmCpuFeaturesGetSmmRegister() is required to return the count of set
bits across all logical processors within the same package.
For registers that return an AP count,
SmmCpuFeaturesGetSmmRegister() is required to return the register
value directly.
v3:
- Refine the coding style
v2:
- Rename "mPackageBspInfo" to "mPackageFirstThreadIndex"
- Clarify the expected values of "SmmRegSmmEnable", "SmmRegSmmDelayed"
  and "SmmRegSmmBlocked" returned from SmmCpuFeaturesLib.
- Thread: https://edk2.groups.io/g/devel/message/96722
v1:
- Thread: https://edk2.groups.io/g/devel/message/96671
Cc: Eric Dong <eric.dong@intel.com>
Reviewed-by: Ray Ni <ray.ni@intel.com>
Cc: Zeng Star <star.zeng@intel.com>
Signed-off-by: Jiaxin Wu <jiaxin.wu@intel.com>
REF: https://bugzilla.tianocore.org/show_bug.cgi?id=3737
Apply uncrustify changes to .c/.h files in the UefiCpuPkg package
Cc: Andrew Fish <afish@apple.com>
Cc: Leif Lindholm <leif@nuviainc.com>
Cc: Michael D Kinney <michael.d.kinney@intel.com>
Signed-off-by: Michael Kubacki <michael.kubacki@microsoft.com>
Reviewed-by: Ray Ni <ray.ni@intel.com>
REF: https://bugzilla.tianocore.org/show_bug.cgi?id=3767
Update uses of DEBUG_CODE(Expression) where Expression is a complex
code block with if/while/for/case statements that use {}.
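For example, a block like this (loop body illustrative):

  DEBUG_CODE (
    for (Index = 0; Index < Count; Index++) {
      DEBUG ((DEBUG_INFO, "Entry %d\n", Index));
    }
    );

becomes:

  DEBUG_CODE_BEGIN ();
  for (Index = 0; Index < Count; Index++) {
    DEBUG ((DEBUG_INFO, "Entry %d\n", Index));
  }
  DEBUG_CODE_END ();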
Cc: Andrew Fish <afish@apple.com>
Cc: Leif Lindholm <leif@nuviainc.com>
Cc: Michael Kubacki <michael.kubacki@microsoft.com>
Signed-off-by: Michael D Kinney <michael.d.kinney@intel.com>
Reviewed-by: Ray Ni <ray.ni@intel.com>
REF: https://bugzilla.tianocore.org/show_bug.cgi?id=3739
Update all uses of the EFI_D_* defines in DEBUG() macros to the
DEBUG_* defines.
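For example (message text illustrative):

  DEBUG ((EFI_D_INFO, "SMM IPL loaded\n"));    // before
  DEBUG ((DEBUG_INFO, "SMM IPL loaded\n"));    // after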
Cc: Andrew Fish <afish@apple.com>
Cc: Leif Lindholm <leif@nuviainc.com>
Cc: Michael Kubacki <michael.kubacki@microsoft.com>
Signed-off-by: Michael D Kinney <michael.d.kinney@intel.com>
Reviewed-by: Ray Ni <ray.ni@intel.com>
When the CET shadow stack feature is enabled, the exceptions need to
use IST, and stack switching uses the interrupt shadow stack. The
shadow stack should be 32-byte aligned.
Check the IST field when clearing the shadow stack token busy bit
while using RETF.
REF: https://bugzilla.tianocore.org/show_bug.cgi?id=3728
Signed-off-by: Sheng Wei <w.sheng@intel.com>
Cc: Eric Dong <eric.dong@intel.com>
Cc: Ray Ni <ray.ni@intel.com>
Cc: Rahul Kumar <rahul1.kumar@intel.com>
Reviewed-by: Ray Ni <ray.ni@intel.com>
REF: https://bugzilla.tianocore.org/show_bug.cgi?id=3584
AsmCpuid() callers should first check the maximum leaf value for
Basic CPUID Information. The fix updates the mPatchCetSupported
judgment statement.
Signed-off-by: Wenxing Hou <wenxing.hou@intel.com>
Reviewed-by: Ray Ni <ray.ni@intel.com>
Cc: Eric Dong <eric.dong@intel.com>
Cc: Ray Ni <ray.ni@intel.com>
Cc: Rahul Kumar <rahul1.kumar@intel.com>
Cc: Sheng W <w.sheng@intel.com>
Cc: Yao Jiewen <jiewen.yao@intel.com>
Fix various typos in comments and documentation.
Cc: Eric Dong <eric.dong@intel.com>
Cc: Ray Ni <ray.ni@intel.com>
Cc: Laszlo Ersek <lersek@redhat.com>
Signed-off-by: Antoine Coeur <coeur@gmx.fr>
Reviewed-by: Philippe Mathieu-Daude <philmd@redhat.com>
Reviewed-by: Laszlo Ersek <lersek@redhat.com>
Reviewed-by: Eric Dong <eric.dong@intel.com>
Signed-off-by: Philippe Mathieu-Daude <philmd@redhat.com>
Message-Id: <20200207010831.9046-78-philmd@redhat.com>
Today's behavior is to always restrict access to non-SMRAM,
regardless of the value of PcdCpuSmmRestrictedMemoryAccess.
Because RAS components require access to all non-SMRAM memory, this
patch changes the code logic to honor PcdCpuSmmRestrictedMemoryAccess,
so that the restriction takes effect, and page table memory is also
protected, only when the PCD is TRUE.
Because the IA32 build doesn't reference this PCD, the restriction
always takes effect in IA32 builds.
Signed-off-by: Ray Ni <ray.ni@intel.com>
Reviewed-by: Eric Dong <eric.dong@intel.com>
Cc: Jiewen Yao <jiewen.yao@intel.com>
Reviewed-by: Laszlo Ersek <lersek@redhat.com>
This reverts commit 30f6148546.
Commit 30f6148546 causes a build failure, when building for IA32:
> UefiCpuPkg/PiSmmCpuDxeSmm/PiSmmCpuDxeSmm.c: In function
> 'PerformRemainingTasks':
> UefiCpuPkg/PiSmmCpuDxeSmm/PiSmmCpuDxeSmm.c:1440:9: error:
> 'mCpuSmmStaticPageTable' undeclared (first use in this function)
> if (mCpuSmmStaticPageTable) {
"mCpuSmmStaticPageTable" is an X64-only variable. It is defined in
"X64/PageTbl.c", which is not linked into the IA32 binary. We must not
reference the variable in such code that is linked into both IA32 and X64
builds, such as "PiSmmCpuDxeSmm.c".
We have encountered the same challenge at least once in the past:
- https://bugzilla.tianocore.org/show_bug.cgi?id=1593
- commit 37f9fea5b8 ("UefiCpuPkg\CpuSmm: Save & restore CR2 on-demand
paging in SMM", 2019-04-04)
The right approach is to declare a new function in "PiSmmCpuDxeSmm.h", and
to provide two definitions for the function, one in "Ia32/PageTbl.c", and
another in "X64/PageTbl.c". The IA32 implementation should return a
constant value. The X64 implementation should return
"mCpuSmmStaticPageTable". (In the example named above, the functions were
SaveCr2() and RestoreCr2().)
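A sketch of that pattern (the function name here is hypothetical):

  // PiSmmCpuDxeSmm.h
  BOOLEAN
  IsStaticPageTableInUse (
    VOID
    );

  // Ia32/PageTbl.c
  BOOLEAN
  IsStaticPageTableInUse (
    VOID
    )
  {
    return FALSE;    // the IA32 build never uses the static page table
  }

  // X64/PageTbl.c
  BOOLEAN
  IsStaticPageTableInUse (
    VOID
    )
  {
    return mCpuSmmStaticPageTable;
  }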
Signed-off-by: Laszlo Ersek <lersek@redhat.com>
[lersek@redhat.com: push revert immediately, due to build breakage that
would have been easy to catch before submitting the patch]
Commit c60d36b4d1 ("UefiCpuPkg/SmmCpu: Block access-out only when
static paging is used") updated the page fault handler to treat SMM
access-out as an allowed address when static paging is not used.
But that commit is incomplete, because the page table is still
updated in SetUefiMemMapAttributes() for non-SMRAM memory. When SMM
code accesses non-SMRAM memory, a page fault is still generated.
This patch skips updating the page table for non-SMRAM memory and for
the page table itself.
Signed-off-by: Ray Ni <ray.ni@intel.com>
Cc: Eric Dong <eric.dong@intel.com>
Cc: Jiewen Yao <jiewen.yao@intel.com>
Cc: Jian J Wang <jian.j.wang@intel.com>
Acked-by: Laszlo Ersek <lersek@redhat.com>
Reviewed-by: Eric Dong <eric.dong@intel.com>
REF: https://bugzilla.tianocore.org/show_bug.cgi?id=1937
Add MM Mp Protocol in PiSmmCpuDxeSmm driver.
Cc: Ray Ni <ray.ni@intel.com>
Cc: Laszlo Ersek <lersek@redhat.com>
Signed-off-by: Eric Dong <eric.dong@intel.com>
Reviewed-by: Ray Ni <ray.ni@intel.com>
Regression-tested-by: Laszlo Ersek <lersek@redhat.com>
REF: https://bugzilla.tianocore.org/show_bug.cgi?id=1521
We scanned the SMM code with ROPgadget:
http://shell-storm.org/project/ROPgadget/
https://github.com/JonathanSalwan/ROPgadget/tree/master
This tool reports the gadgets in the SMM driver.
This patch enables the CET shadow stack for X86 SMM.
If CET is supported, SMM will enable the CET shadow stack.
SMM CET will save the OS CET context at SmmEntry and
restore the OS CET context at SmmExit.
Test:
1) Tested on an Intel internal platform (X64 only, CET enabled/disabled)
   Boot test:
     CET supported or not supported CPU
     on a CET supported platform
     CET enabled/disabled
     PcdCpuSmmCetEnable enabled/disabled
     Single core/Multiple core
     PcdCpuSmmStackGuard enabled/disabled
     PcdCpuSmmProfileEnable enabled/disabled
     PcdCpuSmmStaticPageTable enabled/disabled
   CET exception test:
     #CF generated with PcdCpuSmmStackGuard enabled/disabled
   Other exception test:
     #PF for normal stack overflow
     #PF for NX protection
     #PF for RO protection
   CET environment test:
     Launch SMM in a CET enabled/disabled environment (DXE) - no
     impact to DXE
   The test cases can be found at
   https://github.com/jyao1/SecurityEx/tree/master/ControlFlowPkg
2) Tested OVMF (both IA32 and X64 SMM, CET disabled only)
   Tested OvmfIa32/Ovmf3264, with -D SMM_REQUIRE.
   qemu-system-x86_64.exe -machine q35,smm=on -smp 4
     -serial file:serial.log
     -drive if=pflash,format=raw,unit=0,file=OVMF_CODE.fd,readonly=on
     -drive if=pflash,format=raw,unit=1,file=OVMF_VARS.fd
   QEMU emulator version 3.1.0 (v3.1.0-11736-g7a30e7adb0-dirty)
3) Not tested:
   IA32 CET enabled platform
Cc: Eric Dong <eric.dong@intel.com>
Cc: Ray Ni <ray.ni@intel.com>
Cc: Laszlo Ersek <lersek@redhat.com>
Contributed-under: TianoCore Contribution Agreement 1.1
Signed-off-by: Yao Jiewen <jiewen.yao@intel.com>
Reviewed-by: Ray Ni <ray.ni@intel.com>
Regression-tested-by: Laszlo Ersek <lersek@redhat.com>
REF:https://bugzilla.tianocore.org/show_bug.cgi?id=1417
Since the BaseLib API AsmLfence() is an x86-specific API and should
be avoided in generic code, this commit replaces the usage of
AsmLfence() with the arch-generic API SpeculationBarrier().
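The change is mechanical, and on x86 builds the behavior is
unchanged, since BaseLib's x86 implementation of SpeculationBarrier()
still issues an LFENCE:

  AsmLfence ();            // before: x86-only API
  SpeculationBarrier ();   // after: arch-generic BaseLib API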
Cc: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Cc: Jiewen Yao <jiewen.yao@intel.com>
Cc: Liming Gao <liming.gao@intel.com>
Cc: Ruiyu Ni <ruiyu.ni@intel.com>
Cc: Laszlo Ersek <lersek@redhat.com>
Contributed-under: TianoCore Contribution Agreement 1.1
Signed-off-by: Hao Wu <hao.a.wu@intel.com>
Reviewed-by: Eric Dong <eric.dong@intel.com>
REF:https://bugzilla.tianocore.org/show_bug.cgi?id=1194
Speculative execution is used by processors to avoid having to wait
for data to arrive from memory, or for previous operations to finish;
the processor may speculate as to what will be executed next.
If the speculation is incorrect, the speculatively executed
instructions might leave hints, such as which memory locations have
been brought into cache. Malicious actors can use the bounds check
bypass method (code gadgets with controlled external inputs) to infer
data values that have been used in speculative operations, revealing
secrets which should not otherwise be accessed.
It is possible for SMI handler(s) to call the EFI_SMM_CPU_PROTOCOL
service ReadSaveState() and use the content of the 'CommBuffer'
(controlled external input) as the 'CpuIndex'. So this commit inserts
the AsmLfence() API to mitigate the bounds check bypass issue within
SmmReadSaveState().
For SmmReadSaveState():
The 'CpuIndex' is passed into the function ReadSaveStateRegister(),
and then into ReadSaveStateRegisterByIndex(), with the call:

  ReadSaveStateRegisterByIndex (
    CpuIndex,
    SMM_SAVE_STATE_REGISTER_IOMISC_INDEX,
    sizeof (IoMisc.Uint32),
    &IoMisc.Uint32
    );
Here, 'IoMisc' can be the result of a cross-boundary access during
speculative execution. Later, 'IoMisc' is used as the index to access
the buffers 'mSmmCpuIoWidth' and 'mSmmCpuIoType'. One can observe
which part of the content of those buffers was brought into cache, to
possibly reveal the value of 'IoMisc'.
Hence, this commit adds an AsmLfence() after the check of 'CpuIndex'
within the function SmmReadSaveState() to prevent such speculative
execution.
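The pattern, in sketch form (the exact bounds expression in the
driver may differ):

  if (CpuIndex >= gSmst->NumberOfCpus) {
    return EFI_INVALID_PARAMETER;
  }
  //
  // Serializing barrier: the code below can no longer execute
  // speculatively with an out-of-range CpuIndex.
  //
  AsmLfence ();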
A more detailed explanation of the purpose of this commit is under
the 'Bounds check bypass mitigation' section of the link below:
https://software.intel.com/security-software-guidance/insights/host-firmware-speculative-execution-side-channel-mitigation
And the document at:
https://software.intel.com/security-software-guidance/api-app/sites/default/files/337879-analyzing-potential-bounds-Check-bypass-vulnerabilities.pdf
Cc: Michael D Kinney <michael.d.kinney@intel.com>
Contributed-under: TianoCore Contribution Agreement 1.1
Signed-off-by: Hao Wu <hao.a.wu@intel.com>
Reviewed-by: Jiewen Yao <jiewen.yao@intel.com>
Reviewed-by: Eric Dong <eric.dong@intel.com>
Acked-by: Laszlo Ersek <lersek@redhat.com>
Regression-tested-by: Laszlo Ersek <lersek@redhat.com>
Rename the variable to "gPatchSmmInitStack" so that its association with
PatchInstructionX86() is clear from the declaration, change its type to
X86_ASSEMBLY_PATCH_LABEL, and patch it with PatchInstructionX86(). This
lets us remove the binary (DB) encoding of some instructions in
"SmmInit.nasm".
The size of the patched source operand is (sizeof (UINTN)).
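A hedged example of the resulting call (the stack value shown is
illustrative):

  PatchInstructionX86 (
    gPatchSmmInitStack,
    (UINT64)(UINTN)StackTop,    // illustrative value
    sizeof (UINTN)              // size of the patched source operand
    );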
Cc: Eric Dong <eric.dong@intel.com>
Cc: Michael D Kinney <michael.d.kinney@intel.com>
Ref: https://bugzilla.tianocore.org/show_bug.cgi?id=866
Contributed-under: TianoCore Contribution Agreement 1.1
Signed-off-by: Laszlo Ersek <lersek@redhat.com>
Reviewed-by: Liming Gao <liming.gao@intel.com>
The IA32 version of "SmmInit.nasm" does not need "gSmmJmpAddr" at all (its
PiSmmCpuSmmInitFixupAddress() variant doesn't do anything either). We can
simply use the NASM syntax for the following Mixed-Size Jump:
> jmp PROTECT_MODE_CS : dword @32bit
The generated object code for the instruction is unchanged:
> 00000182 66EA5A0000000800 jmp dword 0x8:0x5a
(The NASM manual explains that putting the DWORD prefix after the colon
":" reflects the intent better, since it is the offset that is a DWORD.
Thus, that's what I used. However, both syntaxes are interchangeable,
hence the ndisasm output.)
The X64 version of "SmmInit.nasm" appears to require "gSmmJmpAddr";
however that's accidental, not inherent:
- Bring LONG_MODE_CODE_SEGMENT from
"UefiCpuPkg/PiSmmCpuDxeSmm/PiSmmCpuDxeSmm.h" to "SmmInit.nasm" as
LONG_MODE_CS, same as PROTECT_MODE_CODE_SEGMENT was brought to the IA32
version as PROTECT_MODE_CS earlier.
- Apply the NASM-native Mixed-Size Jump syntax again, but jump to the
fixed zero offset in LONG_MODE_CS. This will produce no relocation
record at all. Add a label after the instruction.
- Modify PiSmmCpuSmmInitFixupAddress() to patch the jump target backwards
from the label. Because we modify the DWORD offset with a DWORD access,
the segment selector is unharmed in the instruction, and we need not set
it from PiCpuSmmEntry().
According to "objdump --reloc", the X64 version undergoes only the
following relocations, after this patch:
> RELOCATION RECORDS FOR [.text]:
> OFFSET TYPE VALUE
> 0000000000000095 R_X86_64_PC32 SmmInitHandler-0x0000000000000004
> 00000000000000e0 R_X86_64_PC32 mRebasedFlag-0x0000000000000004
> 00000000000000ea R_X86_64_PC32 mSmmRelocationOriginalAddress-0x0000000000000004
Therefore the patch does not regress
<https://bugzilla.tianocore.org/show_bug.cgi?id=849> ("Enable XCODE5 tool
chain for UefiCpuPkg with nasm source code").
Cc: Eric Dong <eric.dong@intel.com>
Cc: Michael D Kinney <michael.d.kinney@intel.com>
Ref: https://bugzilla.tianocore.org/show_bug.cgi?id=866
Contributed-under: TianoCore Contribution Agreement 1.1
Signed-off-by: Laszlo Ersek <lersek@redhat.com>
Reviewed-by: Liming Gao <liming.gao@intel.com>
Like "gSmmCr4" in the previous patch, "gSmmCr0" is not only used for
machine code patching, but also as a means to communicate the initial CR0
value from SmmRelocateBases() to InitSmmS3ResumeState(). In other words,
the last four bytes of the "mov eax, Cr0Value" instruction's binary
representation are utilized as normal data too.
In order to get rid of the DB for "mov eax, Cr0Value", we have to split
both roles, patching and data flow. Introduce the "mSmmCr0" global (SMRAM)
variable for the data flow purpose. Rename the "gSmmCr0" variable to
"gPatchSmmCr0" so that its association with PatchInstructionX86() is clear
from the declaration, change its type to X86_ASSEMBLY_PATCH_LABEL, and
patch it with PatchInstructionX86(), to the value now contained in
"mSmmCr0".
This lets us remove the binary (DB) encoding of "mov eax, Cr0Value" in
"SmmInit.nasm".
Cc: Eric Dong <eric.dong@intel.com>
Cc: Michael D Kinney <michael.d.kinney@intel.com>
Ref: https://bugzilla.tianocore.org/show_bug.cgi?id=866
Contributed-under: TianoCore Contribution Agreement 1.1
Signed-off-by: Laszlo Ersek <lersek@redhat.com>
Reviewed-by: Liming Gao <liming.gao@intel.com>
Unlike "gSmmCr3" in the previous patch, "gSmmCr4" is not only used for
machine code patching, but also as a means to communicate the initial CR4
value from SmmRelocateBases() to InitSmmS3ResumeState(). In other words,
the last four bytes of the "mov eax, Cr4Value" instruction's binary
representation are utilized as normal data too.
In order to get rid of the DB for "mov eax, Cr4Value", we have to split
both roles, patching and data flow. Introduce the "mSmmCr4" global (SMRAM)
variable for the data flow purpose. Rename the "gSmmCr4" variable to
"gPatchSmmCr4" so that its association with PatchInstructionX86() is clear
from the declaration, change its type to X86_ASSEMBLY_PATCH_LABEL, and
patch it with PatchInstructionX86(), to the value now contained in
"mSmmCr4".
This lets us remove the binary (DB) encoding of "mov eax, Cr4Value" in
"SmmInit.nasm".
Cc: Eric Dong <eric.dong@intel.com>
Cc: Michael D Kinney <michael.d.kinney@intel.com>
Ref: https://bugzilla.tianocore.org/show_bug.cgi?id=866
Contributed-under: TianoCore Contribution Agreement 1.1
Signed-off-by: Laszlo Ersek <lersek@redhat.com>
Reviewed-by: Liming Gao <liming.gao@intel.com>
Rename the variable to "gPatchSmmCr3" so that its association with
PatchInstructionX86() is clear from the declaration, change its type to
X86_ASSEMBLY_PATCH_LABEL, and patch it with PatchInstructionX86(). This
lets us remove the binary (DB) encoding of some instructions in
"SmmInit.nasm".
Cc: Eric Dong <eric.dong@intel.com>
Cc: Michael D Kinney <michael.d.kinney@intel.com>
Ref: https://bugzilla.tianocore.org/show_bug.cgi?id=866
Contributed-under: TianoCore Contribution Agreement 1.1
Signed-off-by: Laszlo Ersek <lersek@redhat.com>
Reviewed-by: Liming Gao <liming.gao@intel.com>
https://bugzilla.tianocore.org/show_bug.cgi?id=849
In V2, use "mov rax, strict qword 0" to replace the hard-coded db.
1. Use the lea instruction to get the address, instead of a mov
   instruction.
2. Use a dummy address as the jmp destination, and add logic to fix
   up the address to the absolute address at boot time.
3. In MpFuncs.nasm, use ExchangeInfo to record
   InitializeFloatingPointUnits.
This approach is the same as MpInitLib.
Contributed-under: TianoCore Contribution Agreement 1.1
Signed-off-by: Liming Gao <liming.gao@intel.com>
Cc: Andrew Fish <afish@apple.com>
Cc: Jiewen Yao <jiewen.yao@intel.com>
Cc: Eric Dong <eric.dong@intel.com>
Cc: Laszlo Ersek <lersek@redhat.com>
Cc: Michael Kinney <michael.d.kinney@intel.com>
Reviewed-by: Jiewen Yao <jiewen.yao@intel.com>
Heap guard makes use of the paging mechanism to implement its
functionality, but there is no protocol or library available to
change page attributes in SMM mode. A new protocol,
gEdkiiSmmMemoryAttributeProtocolGuid, is introduced to make that
possible. This protocol provides three interfaces:

struct _EDKII_SMM_MEMORY_ATTRIBUTE_PROTOCOL {
  EDKII_SMM_GET_MEMORY_ATTRIBUTES    GetMemoryAttributes;
  EDKII_SMM_SET_MEMORY_ATTRIBUTES    SetMemoryAttributes;
  EDKII_SMM_CLEAR_MEMORY_ATTRIBUTES  ClearMemoryAttributes;
};

Since the heap guard feature needs to update page attributes, the
page table must not be set to read-only if heap guard is enabled for
SMM mode; otherwise the feature cannot work.
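A hedged usage sketch (locals and the guard address are illustrative;
error handling is elided):

  EDKII_SMM_MEMORY_ATTRIBUTE_PROTOCOL  *SmmMemoryAttribute;
  EFI_STATUS                           Status;

  Status = gSmst->SmmLocateProtocol (
                    &gEdkiiSmmMemoryAttributeProtocolGuid,
                    NULL,
                    (VOID **)&SmmMemoryAttribute
                    );
  if (!EFI_ERROR (Status)) {
    //
    // Make a guard page inaccessible by marking it read-protected.
    //
    Status = SmmMemoryAttribute->SetMemoryAttributes (
                                   SmmMemoryAttribute,
                                   GuardPageAddress,   // illustrative
                                   EFI_PAGE_SIZE,
                                   EFI_MEMORY_RP
                                   );
  }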
Cc: Eric Dong <eric.dong@intel.com>
Cc: Jiewen Yao <jiewen.yao@intel.com>
Cc: Laszlo Ersek <lersek@redhat.com>
Cc: Ruiyu Ni <ruiyu.ni@intel.com>
Suggested-by: Ayellet Wolman <ayellet.wolman@intel.com>
Contributed-under: TianoCore Contribution Agreement 1.1
Signed-off-by: Jian J Wang <jian.j.wang@intel.com>
Reviewed-by: Jiewen Yao <jiewen.yao@intel.com>
Regression-tested-by: Laszlo Ersek <lersek@redhat.com>
Originally (before 714c260301), mPhysicalAddressBits was only defined
in the X64 PageTbl.c; after 714c260301, mPhysicalAddressBits is also
defined in the Ia32 PageTbl.c, and mPhysicalAddressBits is used in
ConvertMemoryPageAttributes() for address checks.
This patch centralizes the mPhysicalAddressBits definition in
PiSmmCpuDxeSmm.c, removing it from the Ia32 and X64 PageTbl.c files.
Cc: Jiewen Yao <jiewen.yao@intel.com>
Cc: Laszlo Ersek <lersek@redhat.com>
Cc: Eric Dong <eric.dong@intel.com>
Suggested-by: Laszlo Ersek <lersek@redhat.com>
Contributed-under: TianoCore Contribution Agreement 1.1
Signed-off-by: Star Zeng <star.zeng@intel.com>
Reviewed-by: Jiewen Yao <jiewen.yao@intel.com>
Reviewed-by: Laszlo Ersek <lersek@redhat.com>
If PcdCpuHotPlugSupport is TRUE, gSmst->NumberOfCpus will be
PcdCpuMaxLogicalProcessorNumber. If gSmst->SmmStartupThisAp() is
invoked for those non-existent processors, an ASSERT() is triggered
in ConfigSmmCodeAccessCheck().
This fix checks whether ProcessorId is valid before invoking
gSmst->SmmStartupThisAp() in ConfigSmmCodeAccessCheck(), and also
checks whether ProcessorId is valid in InternalSmmStartupThisAp(), to
avoid unexpected DEBUG error messages.
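A sketch of the check (INVALID_APIC_ID and the surrounding names are
as I recall them from this driver; treat them as assumptions):

  if (gSmmCpuPrivate->ProcessorInfo[Index].ProcessorId != INVALID_APIC_ID) {
    Status = gSmst->SmmStartupThisAp (
                      ConfigSmmCodeAccessCheckOnCurrentProcessor,
                      Index,
                      NULL    // ProcArguments elided in this sketch
                      );
  }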
Cc: Jiewen Yao <jiewen.yao@intel.com>
Cc: Eric Dong <eric.dong@intel.com>
Contributed-under: TianoCore Contribution Agreement 1.0
Signed-off-by: Jeff Fan <jeff.fan@intel.com>
Reviewed-by: Eric Dong <eric.dong@intel.com>
Consume PeCoffSearchImageBase() from PeCoffGetEntryPointLib and
DumpCpuContext() from CpuExceptionHandlerLib to replace the driver's
own implementations.
Cc: Jiewen Yao <jiewen.yao@intel.com>
Cc: Michael Kinney <michael.d.kinney@intel.com>
Cc: Feng Tian <feng.tian@intel.com>
Contributed-under: TianoCore Contribution Agreement 1.0
Signed-off-by: Jeff Fan <jeff.fan@intel.com>
Reviewed-by: Jiewen Yao <jiewen.yao@intel.com>
There are cases where the operands of an expression are all of a rank
lower than UINT64/INT64, and the result of the expression is
explicitly cast to UINT64/INT64 to fit the target size.
An example would be:
UINT32 a,b;
// a and b can be any unsigned int type with rank less than UINT64, like
// UINT8, UINT16, etc.
UINT64 c;
c = (UINT64) (a + b);
Some static code checkers may warn that the expression result might
overflow within the rank of "int" (integer promotions) and the result is
then cast to a bigger size.
The commit refines the code by the following rules:
1). When the expression may overflow the range of unsigned int/int:
c = (UINT64)a + b;
2). When the expression will not overflow within the rank of "int", remove
the explicit type casts:
c = a + b;
3). When the expression will be cast to a pointer of possibly greater
size:
UINT32 a,b;
VOID *c;
c = (VOID *)(UINTN)(a + b); --> c = (VOID *)((UINTN)a + b);
4). When one side of a comparison expression contains only operands with
rank less than UINT32:
UINT8 a;
UINT16 b;
UINTN c;
if ((UINTN)(a + b) > c) {...} --> if (((UINT32)a + b) > c) {...}
For rule 4), if we remove the 'UINTN' type cast, like:
if (a + b > c) {...}
the VS compiler will complain with warning C4018 (signed/unsigned
mismatch, a level 3 warning) due to promoting 'a + b' to type 'int'.
Contributed-under: TianoCore Contribution Agreement 1.0
Signed-off-by: Hao Wu <hao.a.wu@intel.com>
Reviewed-by: Jeff Fan <jeff.fan@intel.com>
This PCD holds the address mask for page table entries when memory
encryption is enabled on AMD processors supporting the Secure
Encrypted Virtualization (SEV) feature.
The mask is applied when page table entries are created or modified.
Cc: Jeff Fan <jeff.fan@intel.com>
Cc: Feng Tian <feng.tian@intel.com>
Cc: Star Zeng <star.zeng@intel.com>
Cc: Laszlo Ersek <lersek@redhat.com>
Cc: Brijesh Singh <brijesh.singh@amd.com>
Contributed-under: TianoCore Contribution Agreement 1.0
Signed-off-by: Leo Duran <leo.duran@amd.com>
Reviewed-by: Jeff Fan <jeff.fan@intel.com>
This patch sets the normal OS buffers (EfiLoaderCode/Data,
EfiBootServicesCode/Data, EfiConventionalMemory, EfiACPIReclaimMemory)
to be not-present after SmmReadyToLock.
Accessing these regions in the OS runtime phase is not a good
practice. Previously, we did a similar check in SmmMemLib to help SMI
handlers do the check. But if an SMI handler forgets the check, it
can still access these OS regions and bring risk.
So here we enforce the policy to prevent that from happening.
Cc: Jeff Fan <jeff.fan@intel.com>
Cc: Michael D Kinney <michael.d.kinney@intel.com>
Cc: Laszlo Ersek <lersek@redhat.com>
Contributed-under: TianoCore Contribution Agreement 1.0
Signed-off-by: Jiewen Yao <jiewen.yao@intel.com>
Reviewed-by: Jeff Fan <jeff.fan@intel.com>
https://bugzilla.tianocore.org/show_bug.cgi?id=277
Remove the dependency on the layout of PROCESSOR_SMM_DESCRIPTOR
everywhere possible. The only exception is the standard SMI entry
handler template that is included with the PiSmmCpuDxeSmm module.
This allows an instance of SmmCpuFeaturesLib to provide alternate
PROCESSOR_SMM_DESCRIPTOR structure layouts.
Cc: Jiewen Yao <jiewen.yao@intel.com>
Cc: Jeff Fan <jeff.fan@intel.com>
Cc: Feng Tian <feng.tian@intel.com>
Contributed-under: TianoCore Contribution Agreement 1.0
Signed-off-by: Michael Kinney <michael.d.kinney@intel.com>
Reviewed-by: Jeff Fan <jeff.fan@intel.com>
Reviewed-by: Feng Tian <feng.tian@intel.com>
"UefiCpuPkg/UefiCpuPkg.dec" already allows platforms to make
PcdCpuMaxLogicalProcessorNumber dynamic, however PiSmmCpuDxeSmm does not
take this into account everywhere. As soon as a platform turns the PCD
into a dynamic one, at least S3 fails. When the PCD is dynamic, all
PcdGet() calls translate into PCD DXE protocol calls, which are only
permitted at boot time, not at runtime or during S3 resume.
We already have a variable called mMaxNumberOfCpus; it is initialized in
the entry point function like this:
> //
> // If support CPU hot plug, we need to allocate resources for possibly
> // hot-added processors
> //
> if (FeaturePcdGet (PcdCpuHotPlugSupport)) {
> mMaxNumberOfCpus = PcdGet32 (PcdCpuMaxLogicalProcessorNumber);
> } else {
> mMaxNumberOfCpus = mNumberOfCpus;
> }
There's another use of the PCD a bit higher up, also in the entry point
function:
> //
> // Use MP Services Protocol to retrieve the number of processors and
> // number of enabled processors
> //
> Status = MpServices->GetNumberOfProcessors (MpServices, &mNumberOfCpus,
> &NumberOfEnabledProcessors);
> ASSERT_EFI_ERROR (Status);
> ASSERT (mNumberOfCpus <= PcdGet32 (PcdCpuMaxLogicalProcessorNumber));
Preserve these calls in the entry point function, and replace all other
uses of PcdCpuMaxLogicalProcessorNumber -- there are only reads -- with
mMaxNumberOfCpus.
For PcdCpuHotPlugSupport==TRUE, this is an unobservable change.
For PcdCpuHotPlugSupport==FALSE, we even save SMRAM, because we no longer
allocate resources needlessly for CPUs that can never appear in the
system.
PcdCpuMaxLogicalProcessorNumber is also retrieved in
"UefiCpuPkg/Library/SmmCpuFeaturesLib/SmmCpuFeaturesLib.c", but only in
the library instance constructor, which runs even before the entry point
function is called.
Cc: Igor Mammedov <imammedo@redhat.com>
Cc: Jeff Fan <jeff.fan@intel.com>
Cc: Jordan Justen <jordan.l.justen@intel.com>
Cc: Michael Kinney <michael.d.kinney@intel.com>
Bugzilla: https://bugzilla.tianocore.org/show_bug.cgi?id=116
Contributed-under: TianoCore Contribution Agreement 1.0
Signed-off-by: Laszlo Ersek <lersek@redhat.com>
Reviewed-by: Jeff Fan <jeff.fan@intel.com>
PiSmmCpuDxeSmm consumes SmmAttributesTable and sets up the page table:
1) The code region is marked as read-only and the data region as
   non-executable, if the PE image is 4K-aligned.
2) Important data structures are set to RO, such as the GDT/IDT.
3) SmmSaveState is set to non-executable, and SmmEntrypoint is set to
   read-only.
4) If static paging is supported, the page table is read-only.
We use the page table to protect other components, and the page table
itself.
If we use dynamic paging, we can still provide *partial* protection,
and hope the page table is not modified by other components.
The XD enabling code is moved to SmiEntry so that NX takes effect.
Cc: Jeff Fan <jeff.fan@intel.com>
Cc: Feng Tian <feng.tian@intel.com>
Cc: Star Zeng <star.zeng@intel.com>
Cc: Michael D Kinney <michael.d.kinney@intel.com>
Cc: Laszlo Ersek <lersek@redhat.com>
Contributed-under: TianoCore Contribution Agreement 1.0
Signed-off-by: Jiewen Yao <jiewen.yao@intel.com>
Tested-by: Laszlo Ersek <lersek@redhat.com>
Reviewed-by: Jeff Fan <jeff.fan@intel.com>
Reviewed-by: Michael D Kinney <michael.d.kinney@intel.com>
Cc: Zeng Star <star.zeng@intel.com>
Cc: Feng Tian <feng.tian@intel.com>
Cc: Michael D Kinney <michael.d.kinney@intel.com>
Contributed-under: TianoCore Contribution Agreement 1.0
Signed-off-by: Jeff Fan <jeff.fan@intel.com>
Reviewed-by: Zeng Star <star.zeng@intel.com>