/** @file
Enable SMM profile.

UefiCpuPkg/PiSmmCpu: Add Shadow Stack Support for X86 SMM.
REF: https://bugzilla.tianocore.org/show_bug.cgi?id=1521
We scanned the SMM code with ROPgadget:
http://shell-storm.org/project/ROPgadget/
https://github.com/JonathanSalwan/ROPgadget/tree/master
This tool reports the gadgets present in the SMM driver.
This patch enables CET ShadowStack for X86 SMM.
If CET is supported, SMM will enable the CET ShadowStack.
SMM CET will save the OS CET context at SmmEntry and
restore the OS CET context at SmmExit.
Test:
1) test Intel internal platform (x64 only, CET enabled/disabled)
Boot test:
CET supported or not supported CPU
on CET supported platform
CET enabled/disabled
PcdCpuSmmCetEnable enabled/disabled
Single core/Multiple core
PcdCpuSmmStackGuard enabled/disabled
PcdCpuSmmProfileEnable enabled/disabled
PcdCpuSmmStaticPageTable enabled/disabled
CET exception test:
#CF generated with PcdCpuSmmStackGuard enabled/disabled.
Other exception test:
#PF for normal stack overflow
#PF for NX protection
#PF for RO protection
CET env test:
Launch SMM in CET enabled/disabled environment (DXE) - no impact to DXE
The test case can be found at
https://github.com/jyao1/SecurityEx/tree/master/ControlFlowPkg
2) test ovmf (both IA32 and X64 SMM, CET disabled only)
test OvmfIa32/Ovmf3264, with -D SMM_REQUIRE.
qemu-system-x86_64.exe -machine q35,smm=on -smp 4
-serial file:serial.log
-drive if=pflash,format=raw,unit=0,file=OVMF_CODE.fd,readonly=on
-drive if=pflash,format=raw,unit=1,file=OVMF_VARS.fd
QEMU emulator version 3.1.0 (v3.1.0-11736-g7a30e7adb0-dirty)
3) not tested
IA32 CET enabled platform
Cc: Eric Dong <eric.dong@intel.com>
Cc: Ray Ni <ray.ni@intel.com>
Cc: Laszlo Ersek <lersek@redhat.com>
Contributed-under: TianoCore Contribution Agreement 1.1
Signed-off-by: Yao Jiewen <jiewen.yao@intel.com>
Reviewed-by: Ray Ni <ray.ni@intel.com>
Regression-tested-by: Laszlo Ersek <lersek@redhat.com>
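As a rough illustration of the save/restore flow described above, here is a user-space model. The MSR indices follow the Intel SDM (IA32_S_CET = 0x6A2, IA32_PL0_SSP = 0x6A4), but the "MSR bank" array, the structure layout, and the function names are hypothetical stand-ins for the real SmmEntry/SmmExit assembly, not the edk2 implementation:

```c
#include <stdint.h>
#include <assert.h>

#define MSR_IA32_S_CET    0x6A2
#define MSR_IA32_PL0_SSP  0x6A4

typedef struct {
  uint64_t SCet;
  uint64_t Pl0Ssp;
} CET_CONTEXT;

/* Stand-ins for AsmReadMsr64/AsmWriteMsr64: an array models the MSRs so the
   flow can be exercised outside SMM. */
static uint64_t mMsrBank[0x1000];

static uint64_t ReadMsr  (uint32_t Index)              { return mMsrBank[Index]; }
static void     WriteMsr (uint32_t Index, uint64_t V)  { mMsrBank[Index] = V; }

/* At SmmEntry: save the OS CET context, then load the SMM one. */
void SmmEntrySaveCet (CET_CONTEXT *OsCtx, const CET_CONTEXT *SmmCtx) {
  OsCtx->SCet   = ReadMsr (MSR_IA32_S_CET);
  OsCtx->Pl0Ssp = ReadMsr (MSR_IA32_PL0_SSP);
  WriteMsr (MSR_IA32_S_CET,   SmmCtx->SCet);
  WriteMsr (MSR_IA32_PL0_SSP, SmmCtx->Pl0Ssp);
}

/* At SmmExit: restore the OS CET context. */
void SmmExitRestoreCet (const CET_CONTEXT *OsCtx) {
  WriteMsr (MSR_IA32_S_CET,   OsCtx->SCet);
  WriteMsr (MSR_IA32_PL0_SSP, OsCtx->Pl0Ssp);
}
```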

Copyright (c) 2012 - 2019, Intel Corporation. All rights reserved.<BR>
Copyright (c) 2017 - 2020, AMD Incorporated. All rights reserved.<BR>

SPDX-License-Identifier: BSD-2-Clause-Patent

**/

#include "PiSmmCpuDxeSmm.h"
#include "SmmProfileInternal.h"

UINT32  mSmmProfileCr3;

SMM_PROFILE_HEADER  *mSmmProfileBase;
MSR_DS_AREA_STRUCT  *mMsrDsAreaBase;
//
// The buffer to store SMM profile data.
//
UINTN  mSmmProfileSize;

//
// The buffer to enable branch trace store.
//
UINTN  mMsrDsAreaSize = SMM_PROFILE_DTS_SIZE;

UefiCpuPkg/PiSmmCpuDxeSmm: patch "XdSupported" with PatchInstructionX86()
"mXdSupported" is a global BOOLEAN variable, initialized to TRUE. The
CheckFeatureSupported() function is executed on all processors (not
concurrently though), called from SmmInitHandler(). If XD support is found
to be missing on any CPU, then "mXdSupported" is set to FALSE, and further
processors omit the check. Afterwards, "mXdSupported" is read by several
assembly and C code locations.
The tricky part is *where* "mXdSupported" is allocated (defined):
- Before commit 717fb60443fb ("UefiCpuPkg/PiSmmCpuDxeSmm: Add paging
protection.", 2016-11-17), it used to be a normal global variable,
defined (allocated) in "SmmProfile.c".
- With said commit, we moved the definition (allocation) of "mXdSupported"
into "SmiEntry.nasm". The variable was defined over the last byte of a
"mov al, 1" instruction, so that setting it to FALSE in
CheckFeatureSupported() would patch the instruction to "mov al, 0". The
subsequent conditional jump would change behavior, plus all further read
references to "mXdSupported" (in C and assembly code) would read back
the source (imm8) operand of the patched MOV instruction as data.
This trick required that the MOV instruction be encoded with DB.
In order to get rid of the DB, we have to split both roles: we need a
label for the code patching, and "mXdSupported" has to be defined
(allocated) independently of the code patching. Of course, their values
must always remain in sync.
(1) Reinstate the "mXdSupported" definition and initialization in
"SmmProfile.c" from before commit 717fb60443fb. Change the assembly
language definition ("global") to a declaration ("extern").
(2) Define the "gPatchXdSupported" label (type X86_ASSEMBLY_PATCH_LABEL)
in "SmiEntry.nasm", and add the C-language declaration to
"SmmProfileInternal.h". Replace the DB with the MOV mnemonic (keeping
the imm8 source operand with value 1).
(3) In CheckFeatureSupported(), whenever "mXdSupported" is set to FALSE,
patch the assembly code in sync, with PatchInstructionX86().
Cc: Eric Dong <eric.dong@intel.com>
Cc: Michael D Kinney <michael.d.kinney@intel.com>
Ref: https://bugzilla.tianocore.org/show_bug.cgi?id=866
Contributed-under: TianoCore Contribution Agreement 1.1
Signed-off-by: Laszlo Ersek <lersek@redhat.com>
Reviewed-by: Liming Gao <liming.gao@intel.com>
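The code-patching scheme this commit describes can be modeled in a few lines. This is a sketch, not the edk2 PatchInstructionX86() API: a buffer stands in for the "mov al, imm8" instruction in "SmiEntry.nasm", and the patch label points at its imm8 byte:

```c
#include <stdint.h>
#include <stddef.h>
#include <assert.h>

/* A patch label is simply a pointer to the first byte of the instruction
   operand to be patched (here: the imm8 of "mov al, imm8"). */
typedef uint8_t X86_PATCH_LABEL;

/* Patch ValueSize bytes at the label with Value, little-endian. */
void PatchInstruction (X86_PATCH_LABEL *Label, uint64_t Value, size_t ValueSize) {
  size_t Index;

  for (Index = 0; Index < ValueSize; Index++) {
    Label[Index] = (uint8_t)(Value >> (8 * Index));
  }
}
```

In the real driver, CheckFeatureSupported() would both clear the separate `mXdSupported` variable and apply this patch, keeping the two in sync.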

//
// The flag indicates if execute-disable is supported by processor.
//
BOOLEAN  mXdSupported = TRUE;

//
// The flag indicates if execute-disable is enabled on processor.
//
BOOLEAN  mXdEnabled = FALSE;

//
// The flag indicates if BTS is supported by processor.
//
BOOLEAN  mBtsSupported = TRUE;

//
// The flag indicates if SMM profile starts to record data.
//
BOOLEAN  mSmmProfileStart = FALSE;

//
// The flag indicates if #DB will be setup in #PF handler.
//
BOOLEAN  mSetupDebugTrap = FALSE;

//
// Record the page fault exception count for one instruction execution.
//
UINTN  *mPFEntryCount;

UINT64  (*mLastPFEntryValue)[MAX_PF_ENTRY_COUNT];
UINT64  *(*mLastPFEntryPointer)[MAX_PF_ENTRY_COUNT];

MSR_DS_AREA_STRUCT   **mMsrDsArea;
BRANCH_TRACE_RECORD  **mMsrBTSRecord;
UINTN                mBTSRecordNumber;
PEBS_RECORD          **mMsrPEBSRecord;

//
// These memory ranges are always present. They do not generate the access type of page fault exception,
// but they may generate the instruction fetch type of page fault exception.
//
MEMORY_PROTECTION_RANGE  *mProtectionMemRange     = NULL;
UINTN                    mProtectionMemRangeCount = 0;

//
// Some predefined memory ranges.
//
MEMORY_PROTECTION_RANGE  mProtectionMemRangeTemplate[] = {
  //
  // SMRAM range (to be fixed in runtime).
  // It is always present and instruction fetches are allowed.
  //
  { { 0x00000000, 0x00000000 }, TRUE,  FALSE },

  //
  // SMM profile data range (to be fixed in runtime).
  // It is always present and instruction fetches are not allowed.
  //
  { { 0x00000000, 0x00000000 }, TRUE,  TRUE  },

  //
  // SMRAM ranges not covered by mCpuHotPlugData.SmrrBase/mCpuHotPlugData.SmrrSize (to be fixed in runtime).
  // It is always present and instruction fetches are allowed.
  // {{0x00000000, 0x00000000},TRUE,FALSE},
  //

  //
  // Future extended range could be added here.
  //

  //
  // PCI MMIO ranges (to be added in runtime).
  // They are always present and instruction fetches are not allowed.
  //
};

//
// These memory ranges are mapped by 4KB-page instead of 2MB-page.
//
MEMORY_RANGE  *mSplitMemRange     = NULL;
UINTN         mSplitMemRangeCount = 0;

//
// SMI command port.
//
UINT32  mSmiCommandPort;

/**
  Disable branch trace store.

**/
VOID
DisableBTS (
  VOID
  )
{
  AsmMsrAnd64 (MSR_DEBUG_CTL, ~((UINT64)(MSR_DEBUG_CTL_BTS | MSR_DEBUG_CTL_TR)));
}

/**
  Enable branch trace store.

**/
VOID
EnableBTS (
  VOID
  )
{
  AsmMsrOr64 (MSR_DEBUG_CTL, (MSR_DEBUG_CTL_BTS | MSR_DEBUG_CTL_TR));
}

/**
  Get CPU Index from APIC ID.

**/
UINTN
GetCpuIndex (
  VOID
  )
{
  UINTN   Index;
  UINT32  ApicId;

  ApicId = GetApicId ();

UefiCpuPkg/PiSmmCpuDxeSmm: handle dynamic PcdCpuMaxLogicalProcessorNumber
"UefiCpuPkg/UefiCpuPkg.dec" already allows platforms to make
PcdCpuMaxLogicalProcessorNumber dynamic, however PiSmmCpuDxeSmm does not
take this into account everywhere. As soon as a platform turns the PCD
into a dynamic one, at least S3 fails. When the PCD is dynamic, all
PcdGet() calls translate into PCD DXE protocol calls, which are only
permitted at boot time, not at runtime or during S3 resume.
We already have a variable called mMaxNumberOfCpus; it is initialized in
the entry point function like this:
> //
> // If support CPU hot plug, we need to allocate resources for possibly
> // hot-added processors
> //
> if (FeaturePcdGet (PcdCpuHotPlugSupport)) {
> mMaxNumberOfCpus = PcdGet32 (PcdCpuMaxLogicalProcessorNumber);
> } else {
> mMaxNumberOfCpus = mNumberOfCpus;
> }
There's another use of the PCD a bit higher up, also in the entry point
function:
> //
> // Use MP Services Protocol to retrieve the number of processors and
> // number of enabled processors
> //
> Status = MpServices->GetNumberOfProcessors (MpServices, &mNumberOfCpus,
> &NumberOfEnabledProcessors);
> ASSERT_EFI_ERROR (Status);
> ASSERT (mNumberOfCpus <= PcdGet32 (PcdCpuMaxLogicalProcessorNumber));
Preserve these calls in the entry point function, and replace all other
uses of PcdCpuMaxLogicalProcessorNumber -- there are only reads -- with
mMaxNumberOfCpus.
For PcdCpuHotPlugSupport==TRUE, this is an unobservable change.
For PcdCpuHotPlugSupport==FALSE, we even save SMRAM, because we no longer
allocate resources needlessly for CPUs that can never appear in the
system.
PcdCpuMaxLogicalProcessorNumber is also retrieved in
"UefiCpuPkg/Library/SmmCpuFeaturesLib/SmmCpuFeaturesLib.c", but only in
the library instance constructor, which runs even before the entry point
function is called.
Cc: Igor Mammedov <imammedo@redhat.com>
Cc: Jeff Fan <jeff.fan@intel.com>
Cc: Jordan Justen <jordan.l.justen@intel.com>
Cc: Michael Kinney <michael.d.kinney@intel.com>
Bugzilla: https://bugzilla.tianocore.org/show_bug.cgi?id=116
Contributed-under: TianoCore Contribution Agreement 1.0
Signed-off-by: Laszlo Ersek <lersek@redhat.com>
Reviewed-by: Jeff Fan <jeff.fan@intel.com>
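The pattern this commit settles on — read a dynamic PCD exactly once at entry-point time, cache it in a module global, and let all later code (including S3 resume, where PCD protocol calls are forbidden) read only the cache — can be sketched as follows. `PcdGet32Mock` is a hypothetical stand-in for the real PcdGet32() call:

```c
#include <stdint.h>
#include <stdbool.h>
#include <assert.h>

static uint32_t mMaxNumberOfCpus;
static uint32_t mNumberOfCpus;

/* Stand-in for PcdGet32 (PcdCpuMaxLogicalProcessorNumber); a dynamic PCD
   may only be read at boot time, never at runtime or during S3 resume. */
static uint32_t PcdGet32Mock (void) { return 8; }

/* Runs once, in the driver entry point, while PCD access is still legal. */
void EntryPointInit (bool HotPlugSupport, uint32_t DetectedCpus) {
  mNumberOfCpus = DetectedCpus;
  if (HotPlugSupport) {
    /* Allocate resources for possibly hot-added processors. */
    mMaxNumberOfCpus = PcdGet32Mock ();
  } else {
    /* No hot plug: no need to reserve beyond the detected count. */
    mMaxNumberOfCpus = mNumberOfCpus;
  }
}

/* Everything after the entry point reads only the cached value. */
uint32_t MaxNumberOfCpus (void) { return mMaxNumberOfCpus; }
```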
  for (Index = 0; Index < mMaxNumberOfCpus; Index++) {
    if (gSmmCpuPrivate->ProcessorInfo[Index].ProcessorId == ApicId) {
      return Index;
    }
  }

  ASSERT (FALSE);
  return 0;
}

/**
  Get the source of IP after execute-disable exception is triggered.

  @param CpuIndex       The index of CPU.
  @param DestinationIP  The destination address.

**/
UINT64
GetSourceFromDestinationOnBts (
  UINTN   CpuIndex,
  UINT64  DestinationIP
  )
{
  BRANCH_TRACE_RECORD  *CurrentBTSRecord;
  UINTN                Index;
  BOOLEAN              FirstMatch;

  FirstMatch = FALSE;

  CurrentBTSRecord = (BRANCH_TRACE_RECORD *)mMsrDsArea[CpuIndex]->BTSIndex;
  for (Index = 0; Index < mBTSRecordNumber; Index++) {
    if ((UINTN)CurrentBTSRecord < (UINTN)mMsrBTSRecord[CpuIndex]) {
      //
      // Underflow
      //
      CurrentBTSRecord = (BRANCH_TRACE_RECORD *)((UINTN)mMsrDsArea[CpuIndex]->BTSAbsoluteMaximum - 1);
      CurrentBTSRecord--;
    }

    if (CurrentBTSRecord->LastBranchTo == DestinationIP) {
      //
      // Good! Found the 1st match; now look for the 2nd one.
      //
      if (!FirstMatch) {
        //
        // The first one is the DEBUG exception.
        //
        FirstMatch = TRUE;
      } else {
        //
        // Found the proper record.
        //
        return CurrentBTSRecord->LastBranchFrom;
      }
    }

    CurrentBTSRecord--;
  }

  return 0;
}

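The backward ring-buffer scan in GetSourceFromDestinationOnBts() can be modeled in user space. This is a simplified sketch (plain array instead of the hardware DS area), showing why the *second* matching record is returned — the first match is the record of the #DB exception delivery itself:

```c
#include <stdint.h>
#include <stddef.h>
#include <assert.h>

typedef struct {
  uint64_t LastBranchFrom;
  uint64_t LastBranchTo;
} BTS_RECORD;

/* Walk backwards from Current through a ring of Count records starting at
   Base, wrapping to the last record on underflow; return LastBranchFrom of
   the second record whose LastBranchTo matches DestinationIp. */
uint64_t FindBranchSource (BTS_RECORD *Base, size_t Count,
                           BTS_RECORD *Current, uint64_t DestinationIp) {
  size_t Index;
  int    FirstMatch = 0;

  for (Index = 0; Index < Count; Index++) {
    if (Current < Base) {
      Current = Base + Count - 1;   /* underflow: wrap to the last record */
    }
    if (Current->LastBranchTo == DestinationIp) {
      if (!FirstMatch) {
        FirstMatch = 1;             /* skip the #DB delivery record */
      } else {
        return Current->LastBranchFrom;
      }
    }
    Current--;
  }
  return 0;
}
```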
/**
  SMM profile specific INT 1 (single-step) exception handler.

  @param InterruptType  Defines the type of interrupt or exception that
                        occurred on the processor. This parameter is processor architecture specific.
  @param SystemContext  A pointer to the processor context when
                        the interrupt occurred on the processor.
**/
VOID
EFIAPI
DebugExceptionHandler (
  IN EFI_EXCEPTION_TYPE  InterruptType,
  IN EFI_SYSTEM_CONTEXT  SystemContext
  )
{
  UINTN  CpuIndex;
  UINTN  PFEntry;

  if (!mSmmProfileStart &&
      !HEAP_GUARD_NONSTOP_MODE &&
      !NULL_DETECTION_NONSTOP_MODE)
  {
    return;
  }

  CpuIndex = GetCpuIndex ();

  //
  // Clear last PF entries
  //
  for (PFEntry = 0; PFEntry < mPFEntryCount[CpuIndex]; PFEntry++) {
    *mLastPFEntryPointer[CpuIndex][PFEntry] = mLastPFEntryValue[CpuIndex][PFEntry];
  }

  //
  // Reset page fault exception count for next page fault.
  //
  mPFEntryCount[CpuIndex] = 0;

  //
  // Flush TLB
  //
  CpuFlushTlb ();

  //
  // Clear TF in EFLAGS
  //
  ClearTrapFlag (SystemContext);
}

/**
  Check if the input address is in SMM ranges.

  @param[in]  Address  The input address.

  @retval TRUE   The input address is in SMM.
  @retval FALSE  The input address is not in SMM.
**/
BOOLEAN
IsInSmmRanges (
  IN EFI_PHYSICAL_ADDRESS  Address
  )
{
  UINTN  Index;

  if ((Address >= mCpuHotPlugData.SmrrBase) && (Address < mCpuHotPlugData.SmrrBase + mCpuHotPlugData.SmrrSize)) {
    return TRUE;
  }

  for (Index = 0; Index < mSmmCpuSmramRangeCount; Index++) {
    if ((Address >= mSmmCpuSmramRanges[Index].CpuStart) &&
        (Address < mSmmCpuSmramRanges[Index].CpuStart + mSmmCpuSmramRanges[Index].PhysicalSize))
    {
      return TRUE;
    }
  }

  return FALSE;
}

/**
  Check if the memory address will be mapped by 4KB-page.

  @param  Address  The address of Memory.
  @param  Nx       The flag indicates if the memory is execute-disable.

**/
BOOLEAN
IsAddressValid (
  IN EFI_PHYSICAL_ADDRESS  Address,
  IN BOOLEAN               *Nx
  )
{
  UINTN  Index;

  if (FeaturePcdGet (PcdCpuSmmProfileEnable)) {
    //
    // Check configuration
    //
    for (Index = 0; Index < mProtectionMemRangeCount; Index++) {
      if ((Address >= mProtectionMemRange[Index].Range.Base) && (Address < mProtectionMemRange[Index].Range.Top)) {
        *Nx = mProtectionMemRange[Index].Nx;
        return mProtectionMemRange[Index].Present;
      }
    }

    *Nx = TRUE;
    return FALSE;
  } else {
    *Nx = TRUE;
    if (IsInSmmRanges (Address)) {
      *Nx = FALSE;
    }

    return TRUE;
  }
}

/**
  Check if the memory address will be mapped by 4KB-page.

  @param  Address  The address of Memory.

**/
BOOLEAN
IsAddressSplit (
  IN EFI_PHYSICAL_ADDRESS  Address
  )
{
  UINTN  Index;

  if (FeaturePcdGet (PcdCpuSmmProfileEnable)) {
    //
    // Check configuration
    //
    for (Index = 0; Index < mSplitMemRangeCount; Index++) {
      if ((Address >= mSplitMemRange[Index].Base) && (Address < mSplitMemRange[Index].Top)) {
        return TRUE;
      }
    }
  } else {
    if (Address < mCpuHotPlugData.SmrrBase) {
      if ((mCpuHotPlugData.SmrrBase - Address) < BASE_2MB) {
        return TRUE;
      }
    } else if (Address > (mCpuHotPlugData.SmrrBase + mCpuHotPlugData.SmrrSize - BASE_2MB)) {
      if ((Address - (mCpuHotPlugData.SmrrBase + mCpuHotPlugData.SmrrSize - BASE_2MB)) < BASE_2MB) {
        return TRUE;
      }
    }
  }

  //
  // Return default
  //
  return FALSE;
}

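The non-profile branch of IsAddressSplit() above marks any address within 2MB of either SMRR edge for 4KB mapping. Extracted as a stand-alone function (the globals become parameters), the logic can be checked directly:

```c
#include <stdint.h>
#include <assert.h>

#define BASE_2MB  0x200000ULL

/* An address needs 4KB mapping when it falls in the 2MB neighborhood just
   below SmrrBase, or just above (SmrrBase + SmrrSize - 2MB). */
int NeedsSplit (uint64_t Address, uint64_t SmrrBase, uint64_t SmrrSize) {
  if (Address < SmrrBase) {
    if ((SmrrBase - Address) < BASE_2MB) {
      return 1;
    }
  } else if (Address > (SmrrBase + SmrrSize - BASE_2MB)) {
    if ((Address - (SmrrBase + SmrrSize - BASE_2MB)) < BASE_2MB) {
      return 1;
    }
  }
  return 0;
}
```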
/**
  Initialize the protected memory ranges and the 4KB-page mapped memory ranges.

**/
VOID
InitProtectedMemRange (
  VOID
  )
{
  UINTN                            Index;
  UINTN                            NumberOfDescriptors;
  UINTN                            NumberOfAddedDescriptors;
  UINTN                            NumberOfProtectRange;
  UINTN                            NumberOfSpliteRange;
  EFI_GCD_MEMORY_SPACE_DESCRIPTOR  *MemorySpaceMap;
  UINTN                            TotalSize;
  EFI_PHYSICAL_ADDRESS             ProtectBaseAddress;
  EFI_PHYSICAL_ADDRESS             ProtectEndAddress;
  EFI_PHYSICAL_ADDRESS             Top2MBAlignedAddress;
  EFI_PHYSICAL_ADDRESS             Base2MBAlignedAddress;
  UINT64                           High4KBPageSize;
  UINT64                           Low4KBPageSize;

  NumberOfDescriptors      = 0;
  NumberOfAddedDescriptors = mSmmCpuSmramRangeCount;
  NumberOfSpliteRange      = 0;
  MemorySpaceMap           = NULL;

  //
  // Get MMIO ranges from GCD and add them into protected memory ranges.
  //
  gDS->GetMemorySpaceMap (
         &NumberOfDescriptors,
         &MemorySpaceMap
         );
  for (Index = 0; Index < NumberOfDescriptors; Index++) {
    if (MemorySpaceMap[Index].GcdMemoryType == EfiGcdMemoryTypeMemoryMappedIo) {
      NumberOfAddedDescriptors++;
    }
  }

  if (NumberOfAddedDescriptors != 0) {
    TotalSize           = NumberOfAddedDescriptors * sizeof (MEMORY_PROTECTION_RANGE) + sizeof (mProtectionMemRangeTemplate);
    mProtectionMemRange = (MEMORY_PROTECTION_RANGE *)AllocateZeroPool (TotalSize);
    ASSERT (mProtectionMemRange != NULL);
    mProtectionMemRangeCount = TotalSize / sizeof (MEMORY_PROTECTION_RANGE);

    //
    // Copy existing ranges.
    //
    CopyMem (mProtectionMemRange, mProtectionMemRangeTemplate, sizeof (mProtectionMemRangeTemplate));

    //
    // Create split ranges which come from protected ranges.
    //
    TotalSize      = (TotalSize / sizeof (MEMORY_PROTECTION_RANGE)) * sizeof (MEMORY_RANGE);
    mSplitMemRange = (MEMORY_RANGE *)AllocateZeroPool (TotalSize);
    ASSERT (mSplitMemRange != NULL);

    //
    // Create SMM ranges which are set to present and execution-enable.
    //
    NumberOfProtectRange = sizeof (mProtectionMemRangeTemplate) / sizeof (MEMORY_PROTECTION_RANGE);
    for (Index = 0; Index < mSmmCpuSmramRangeCount; Index++) {
      if ((mSmmCpuSmramRanges[Index].CpuStart >= mProtectionMemRange[0].Range.Base) &&
          (mSmmCpuSmramRanges[Index].CpuStart + mSmmCpuSmramRanges[Index].PhysicalSize < mProtectionMemRange[0].Range.Top))
      {
        //
        // The address has already been covered by mCpuHotPlugData.SmrrBase/mCpuHotPlugData.SmrrSize
        //
        break;
      }

      mProtectionMemRange[NumberOfProtectRange].Range.Base = mSmmCpuSmramRanges[Index].CpuStart;
      mProtectionMemRange[NumberOfProtectRange].Range.Top  = mSmmCpuSmramRanges[Index].CpuStart + mSmmCpuSmramRanges[Index].PhysicalSize;
      mProtectionMemRange[NumberOfProtectRange].Present    = TRUE;
      mProtectionMemRange[NumberOfProtectRange].Nx         = FALSE;
      NumberOfProtectRange++;
    }

    //
    // Create MMIO ranges which are set to present and execution-disable.
    //
    for (Index = 0; Index < NumberOfDescriptors; Index++) {
      if (MemorySpaceMap[Index].GcdMemoryType != EfiGcdMemoryTypeMemoryMappedIo) {
        continue;
      }

      mProtectionMemRange[NumberOfProtectRange].Range.Base = MemorySpaceMap[Index].BaseAddress;
      mProtectionMemRange[NumberOfProtectRange].Range.Top  = MemorySpaceMap[Index].BaseAddress + MemorySpaceMap[Index].Length;
      mProtectionMemRange[NumberOfProtectRange].Present    = TRUE;
      mProtectionMemRange[NumberOfProtectRange].Nx         = TRUE;
      NumberOfProtectRange++;
    }

    //
    // Check and update the actual protected memory ranges count
    //
    ASSERT (NumberOfProtectRange <= mProtectionMemRangeCount);
    mProtectionMemRangeCount = NumberOfProtectRange;
  }

  //
  // According to the protected ranges, create the ranges which will be mapped by 4KB-page.
  //
  NumberOfSpliteRange  = 0;
  NumberOfProtectRange = mProtectionMemRangeCount;
  for (Index = 0; Index < NumberOfProtectRange; Index++) {
    //
    // If the MMIO base address is not 2MB-aligned, align it to 2MB so that 4KB pages can be created in the page table.
    //
    ProtectBaseAddress = mProtectionMemRange[Index].Range.Base;
    ProtectEndAddress  = mProtectionMemRange[Index].Range.Top;
    if (((ProtectBaseAddress & (SIZE_2MB - 1)) != 0) || ((ProtectEndAddress & (SIZE_2MB - 1)) != 0)) {
      //
      // Check if it is possible to create 4KB-page for not 2MB-aligned range and to create 2MB-page for 2MB-aligned range.
      // A mix of 4KB and 2MB page could save SMRAM space.
      //
      Top2MBAlignedAddress  = ProtectEndAddress & ~(SIZE_2MB - 1);
      Base2MBAlignedAddress = (ProtectBaseAddress + SIZE_2MB - 1) & ~(SIZE_2MB - 1);
      if ((Top2MBAlignedAddress > Base2MBAlignedAddress) &&
          ((Top2MBAlignedAddress - Base2MBAlignedAddress) >= SIZE_2MB))
      {
        //
        // There is a range which could be mapped by 2MB-page.
        //
        High4KBPageSize = ((ProtectEndAddress + SIZE_2MB - 1) & ~(SIZE_2MB - 1)) - (ProtectEndAddress & ~(SIZE_2MB - 1));
        Low4KBPageSize  = ((ProtectBaseAddress + SIZE_2MB - 1) & ~(SIZE_2MB - 1)) - (ProtectBaseAddress & ~(SIZE_2MB - 1));
        if (High4KBPageSize != 0) {
          //
          // Add not 2MB-aligned range to be mapped by 4KB-page.
          //
          mSplitMemRange[NumberOfSpliteRange].Base = ProtectEndAddress & ~(SIZE_2MB - 1);
          mSplitMemRange[NumberOfSpliteRange].Top  = (ProtectEndAddress + SIZE_2MB - 1) & ~(SIZE_2MB - 1);
          NumberOfSpliteRange++;
        }

        if (Low4KBPageSize != 0) {
          //
          // Add not 2MB-aligned range to be mapped by 4KB-page.
          //
          mSplitMemRange[NumberOfSpliteRange].Base = ProtectBaseAddress & ~(SIZE_2MB - 1);
          mSplitMemRange[NumberOfSpliteRange].Top  = (ProtectBaseAddress + SIZE_2MB - 1) & ~(SIZE_2MB - 1);
          NumberOfSpliteRange++;
        }
      } else {
        //
        // The range could only be mapped by 4KB-page.
        //
        mSplitMemRange[NumberOfSpliteRange].Base = ProtectBaseAddress & ~(SIZE_2MB - 1);
        mSplitMemRange[NumberOfSpliteRange].Top  = (ProtectEndAddress + SIZE_2MB - 1) & ~(SIZE_2MB - 1);
        NumberOfSpliteRange++;
      }
    }
  }

  mSplitMemRangeCount = NumberOfSpliteRange;

  DEBUG ((EFI_D_INFO, "SMM Profile Memory Ranges:\n"));
  for (Index = 0; Index < mProtectionMemRangeCount; Index++) {
    DEBUG ((EFI_D_INFO, "mProtectionMemRange[%d].Base = %lx\n", Index, mProtectionMemRange[Index].Range.Base));
    DEBUG ((EFI_D_INFO, "mProtectionMemRange[%d].Top  = %lx\n", Index, mProtectionMemRange[Index].Range.Top));
  }

  for (Index = 0; Index < mSplitMemRangeCount; Index++) {
    DEBUG ((EFI_D_INFO, "mSplitMemRange[%d].Base = %lx\n", Index, mSplitMemRange[Index].Base));
    DEBUG ((EFI_D_INFO, "mSplitMemRange[%d].Top  = %lx\n", Index, mSplitMemRange[Index].Top));
  }
}

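The 2MB-alignment arithmetic in InitProtectedMemRange() can be pulled out into small helpers for inspection. This sketch restates the masking expressions used above; the helper names are illustrative:

```c
#include <stdint.h>
#include <assert.h>

#define SIZE_2MB  0x200000ULL

/* Round an address down / up to a 2MB boundary. */
uint64_t AlignDown2MB (uint64_t Address) { return Address & ~(SIZE_2MB - 1); }
uint64_t AlignUp2MB   (uint64_t Address) { return (Address + SIZE_2MB - 1) & ~(SIZE_2MB - 1); }

/* Size of the 4KB-page region needed at the unaligned base of a range:
   0 if the base is already 2MB-aligned, otherwise one 2MB slot. */
uint64_t Low4KBPageSize (uint64_t Base) { return AlignUp2MB (Base) - AlignDown2MB (Base); }

/* Same for the unaligned top of a range. */
uint64_t High4KBPageSize (uint64_t Top) { return AlignUp2MB (Top) - AlignDown2MB (Top); }
```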
/**
  Update page table according to protected memory ranges and the 4KB-page mapped memory ranges.

**/
VOID
InitPaging (
  VOID
  )
{
  UINT64    Pml5Entry;
  UINT64    Pml4Entry;
  UINT64    *Pml5;
  UINT64    *Pml4;
  UINT64    *Pdpt;
  UINT64    *Pd;
  UINT64    *Pt;
  UINTN     Address;
  UINTN     Pml5Index;
  UINTN     Pml4Index;
  UINTN     PdptIndex;
  UINTN     PdIndex;
  UINTN     PtIndex;
  UINTN     NumberOfPdptEntries;
  UINTN     NumberOfPml4Entries;
  UINTN     NumberOfPml5Entries;
  UINTN     SizeOfMemorySpace;
  BOOLEAN   Nx;
  IA32_CR4  Cr4;
  BOOLEAN   Enable5LevelPaging;

  Cr4.UintN          = AsmReadCr4 ();
  Enable5LevelPaging = (BOOLEAN)(Cr4.Bits.LA57 == 1);

  if (sizeof (UINTN) == sizeof (UINT64)) {
    if (!Enable5LevelPaging) {
      Pml5Entry = (UINTN)mSmmProfileCr3 | IA32_PG_P;
      Pml5      = &Pml5Entry;
    } else {
      Pml5 = (UINT64 *)(UINTN)mSmmProfileCr3;
    }

    SizeOfMemorySpace = HighBitSet64 (gPhyMask) + 1;
    //
    // Calculate the table entries of PML4E and PDPTE.
    //
    NumberOfPml5Entries = 1;
    if (SizeOfMemorySpace > 48) {
      NumberOfPml5Entries = (UINTN)LShiftU64 (1, SizeOfMemorySpace - 48);
      SizeOfMemorySpace   = 48;
    }

    NumberOfPml4Entries = 1;
    if (SizeOfMemorySpace > 39) {
      NumberOfPml4Entries = (UINTN)LShiftU64 (1, SizeOfMemorySpace - 39);
      SizeOfMemorySpace   = 39;
    }

    NumberOfPdptEntries = 1;
    ASSERT (SizeOfMemorySpace > 30);
    NumberOfPdptEntries = (UINTN)LShiftU64 (1, SizeOfMemorySpace - 30);
  } else {
    Pml4Entry           = (UINTN)mSmmProfileCr3 | IA32_PG_P;
    Pml4                = &Pml4Entry;
    Pml5Entry           = (UINTN)Pml4 | IA32_PG_P;
    Pml5                = &Pml5Entry;
    NumberOfPml5Entries = 1;
    NumberOfPml4Entries = 1;
    NumberOfPdptEntries = 4;
  }

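The level-sizing logic above (each paging level resolves 9 index bits; PDPT entries map 1GB at bit 30, PML4 entries 512GB at bit 39, PML5 entries 256TB at bit 48) can be restated as a stand-alone function. This is a model of that calculation, with the globals folded into a return struct:

```c
#include <stdint.h>
#include <assert.h>

typedef struct {
  uint64_t Pml5Entries;
  uint64_t Pml4Entries;
  uint64_t PdptEntries;
} PAGING_LEVELS;

/* Given the physical address width in bits, compute how many entries each
   top paging level needs to cover the whole space. */
PAGING_LEVELS ComputeLevels (unsigned SizeOfMemorySpace) {
  PAGING_LEVELS L = { 1, 1, 1 };

  if (SizeOfMemorySpace > 48) {
    L.Pml5Entries     = 1ULL << (SizeOfMemorySpace - 48);
    SizeOfMemorySpace = 48;
  }
  if (SizeOfMemorySpace > 39) {
    L.Pml4Entries     = 1ULL << (SizeOfMemorySpace - 39);
    SizeOfMemorySpace = 39;
  }
  assert (SizeOfMemorySpace > 30);
  L.PdptEntries = 1ULL << (SizeOfMemorySpace - 30);
  return L;
}
```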
  //
  // Go through page table and change 2MB-page into 4KB-page.
  //
  for (Pml5Index = 0; Pml5Index < NumberOfPml5Entries; Pml5Index++) {
    if ((Pml5[Pml5Index] & IA32_PG_P) == 0) {
      //
      // If PML5 entry does not exist, skip it
      //
      continue;
    }

    Pml4 = (UINT64 *)(UINTN)(Pml5[Pml5Index] & PHYSICAL_ADDRESS_MASK);
    for (Pml4Index = 0; Pml4Index < NumberOfPml4Entries; Pml4Index++) {
      if ((Pml4[Pml4Index] & IA32_PG_P) == 0) {
        //
        // If PML4 entry does not exist, skip it
        //
        continue;
      }

      Pdpt = (UINT64 *)(UINTN)(Pml4[Pml4Index] & ~mAddressEncMask & PHYSICAL_ADDRESS_MASK);
      for (PdptIndex = 0; PdptIndex < NumberOfPdptEntries; PdptIndex++, Pdpt++) {
        if ((*Pdpt & IA32_PG_P) == 0) {
          //
          // If PDPT entry does not exist, skip it
          //
          continue;
        }

        if ((*Pdpt & IA32_PG_PS) != 0) {
          //
          // This is 1G entry, skip it
          //
          continue;
        }

        Pd = (UINT64 *)(UINTN)(*Pdpt & ~mAddressEncMask & PHYSICAL_ADDRESS_MASK);
        if (Pd == 0) {
          continue;
        }

        for (PdIndex = 0; PdIndex < SIZE_4KB / sizeof (*Pd); PdIndex++, Pd++) {
          if ((*Pd & IA32_PG_P) == 0) {
            //
            // If PD entry does not exist, skip it
            //
            continue;
          }

          Address = (UINTN)LShiftU64 (
                             LShiftU64 (
                               LShiftU64 ((Pml5Index << 9) + Pml4Index, 9) + PdptIndex,
                               9
                               ) + PdIndex,
                             21
                             );

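The nested LShiftU64 expression above reconstructs the virtual address a PDE maps from the four table indices: each level contributes 9 bits, and a PDE maps a 2MB (bit 21) region. Unfolded into named steps:

```c
#include <stdint.h>
#include <assert.h>

/* Address = ((((Pml5 << 9) + Pml4) << 9 + Pdpt) << 9 + Pd) << 21, i.e. the
   base of the 2MB region mapped by the PD entry at these indices. */
uint64_t PdeIndexToAddress (uint64_t Pml5Index, uint64_t Pml4Index,
                            uint64_t PdptIndex, uint64_t PdIndex) {
  uint64_t Address;

  Address = (Pml5Index << 9) + Pml4Index;  /* fold PML5 and PML4 indices */
  Address = (Address   << 9) + PdptIndex;  /* add PDPT index             */
  Address = (Address   << 9) + PdIndex;    /* add PD index               */
  return Address << 21;                    /* each PDE maps 2MB          */
}
```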
          //
          // If it is 2M page, check IsAddressSplit()
          //
          if (((*Pd & IA32_PG_PS) != 0) && IsAddressSplit (Address)) {
            //
            // Based on current page table, create 4KB page table for split area.
            //
            ASSERT (Address == (*Pd & PHYSICAL_ADDRESS_MASK));

            Pt = AllocatePageTableMemory (1);
            ASSERT (Pt != NULL);

            // Split it
UefiCpuPkg/PiSmmCpuDxeSmm: fix 2M->4K page splitting regression for PDEs
In commit 4eee0cc7cc0d ("UefiCpuPkg/PiSmmCpu: Enable 5 level paging when
CPU supports", 2019-07-12), the Page Directory Entry setting was regressed
(corrupted) when splitting a 2MB page to 512 4KB pages, in the
InitPaging() function.
Consider the following hunk, displayed with
$ git show --function-context --ignore-space-change 4eee0cc7cc0db
> //
> // If it is 2M page, check IsAddressSplit()
> //
> if (((*Pd & IA32_PG_PS) != 0) && IsAddressSplit (Address)) {
> //
> // Based on current page table, create 4KB page table for split area.
> //
> ASSERT (Address == (*Pd & PHYSICAL_ADDRESS_MASK));
>
> Pt = AllocatePageTableMemory (1);
> ASSERT (Pt != NULL);
>
> + *Pd = (UINTN) Pt | IA32_PG_RW | IA32_PG_P;
> +
> // Split it
> - for (PtIndex = 0; PtIndex < SIZE_4KB / sizeof(*Pt); PtIndex++) {
> - Pt[PtIndex] = Address + ((PtIndex << 12) | mAddressEncMask | PAGE_ATTRIBUTE_BITS);
> + for (PtIndex = 0; PtIndex < SIZE_4KB / sizeof(*Pt); PtIndex++, Pt++) {
> + *Pt = Address + ((PtIndex << 12) | mAddressEncMask | PAGE_ATTRIBUTE_BITS);
> } // end for PT
> *Pd = (UINT64)(UINTN)Pt | mAddressEncMask | PAGE_ATTRIBUTE_BITS;
> } // end if IsAddressSplit
> } // end for PD
First, the new assignment to the Page Directory Entry (*Pd) is
superfluous. That's because (a) we set (*Pd) after the Page Table Entry
loop anyway, and (b) here we do not attempt to access the memory starting
at "Address" (which is mapped by the original value of the Page Directory
Entry).
Second, appending "Pt++" to the incrementing expression of the PTE loop is
a bug. It causes "Pt" to point *right past* the just-allocated Page Table,
once we finish the loop. But the PDE assignment that immediately follows
the loop assumes that "Pt" still points to the *start* of the new Page
Table.
The result is that the originally mapped 2MB page disappears from the
processor's view. The PDE now points to a "Page Table" that is filled with
garbage. The random entries in that "Page Table" will cause some virtual
addresses in the original 2MB area to fault. Other virtual addresses in
the same range will no longer have a 1:1 physical mapping, but be
scattered over random physical page frames.
The second phase of the InitPaging() function ("Go through page table and
set several page table entries to absent or execute-disable") already
manipulates entries in wrong Page Tables, for such PDEs that got split in
the first phase.
This issue has been caught as follows:
- OVMF is started with 2001 MB of guest RAM.
- This places the main SMRAM window at 0x7C10_1000.
- The SMRAM management in the SMM Core links this SMRAM window into
"mSmmMemoryMap", with a FREE_PAGE_LIST record placed at the start of the
area.
- At "SMM Ready To Lock" time, PiSmmCpuDxeSmm calls InitPaging(). The
first phase (quoted above) decides to split the 2MB page at 0x7C00_0000
into 512 4KB pages, and corrupts the PDE. The new Page Table is
allocated at 0x7CE0_D000, but the PDE is set to 0x7CE0_E000 (plus
attributes 0x67).
- Due to the corrupted PDE, the second phase of InitPaging() already looks
up the PTE for Address=0x7C10_1000 in the wrong place. The second phase
goes on to mark bogus PTEs as "NX".
- PiSmmCpuDxeSmm calls SetMemMapAttributes(). Address 0x7C10_1000 is at
the base of the SMRAM window, therefore it happens to be listed in the
SMRAM map as an EfiConventionalMemory region. SetMemMapAttributes()
calls SmmSetMemoryAttributes() to mark the region as XP. However,
GetPageTableEntry() in ConvertMemoryPageAttributes() fails -- address
0x7C10_1000 is no longer mapped by anything! -- and so the attribute
setting fails with RETURN_UNSUPPORTED. This error goes unnoticed, as
SetMemMapAttributes() ignores the return value of
SmmSetMemoryAttributes().
- When SetMemMapAttributes() reaches another entry in the SMRAM map,
ConvertMemoryPageAttributes() decides it needs to split a 2MB page, and
calls SplitPage().
- SplitPage() calls AllocatePageTableMemory() for the new Page Table,
which takes us to InternalAllocMaxAddress() in the SMM Core.
- The SMM core attempts to read the FREE_PAGE_LIST record at 0x7C10_1000.
Because this virtual address is no longer mapped, the firmware crashes
in InternalAllocMaxAddress(), when accessing (Pages->NumberOfPages).
Remove the useless assignment to (*Pd) from before the loop. Revert the
loop incrementing and the PTE assignment to the known good version.
Cc: Eric Dong <eric.dong@intel.com>
Cc: Ray Ni <ray.ni@intel.com>
Ref: https://bugzilla.redhat.com/show_bug.cgi?id=1789335
Fixes: 4eee0cc7cc0db74489b99c19eba056b53eda6358
Signed-off-by: Laszlo Ersek <lersek@redhat.com>
Reviewed-by: Philippe Mathieu-Daude <philmd@redhat.com>
Reviewed-by: Ray Ni <ray.ni@intel.com>
2020-01-09 22:00:39 +01:00
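The pointer-past-the-end mistake described in the message above can be isolated into a minimal, self-contained C sketch (all names here are hypothetical stand-ins, not EDK II code): advancing the cursor in the loop's increment expression leaves it one element past the table when the loop finishes, while indexing from the base keeps the base pointer valid for the Page Directory Entry store that follows the loop.

```c
#include <stdint.h>
#include <stddef.h>

#define ENTRIES  512   /* one Page Table: 512 8-byte entries */

/* Buggy pattern: the cursor advances in the loop's increment expression,
   so it ends up one element past the table. Returns the cursor's final
   offset from the table base (ENTRIES). */
size_t
CursorOffsetAfterBuggyLoop (uint64_t *Table)
{
  uint64_t  *Pt;
  size_t    Index;

  Pt = Table;
  for (Index = 0; Index < ENTRIES; Index++, Pt++) {
    *Pt = (uint64_t)(Index << 12);
  }
  return (size_t)(Pt - Table);
}

/* Fixed pattern: index from the base, so the base pointer survives the
   loop and can afterwards be stored into the Page Directory Entry.
   Returns the cursor's final offset from the table base (0). */
size_t
CursorOffsetAfterFixedLoop (uint64_t *Table)
{
  uint64_t  *Pt;
  size_t    Index;

  Pt = Table;
  for (Index = 0; Index < ENTRIES; Index++) {
    Pt[Index] = (uint64_t)(Index << 12);
  }
  return (size_t)(Pt - Table);
}
```

With a 512-entry table, the buggy variant reports offset 512 (one past the end) and the fixed variant reports 0, which is exactly why the post-loop `*Pd = ... Pt ...` assignment only works with the indexed form.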
            for (PtIndex = 0; PtIndex < SIZE_4KB / sizeof(*Pt); PtIndex++) {
              Pt[PtIndex] = Address + ((PtIndex << 12) | mAddressEncMask | PAGE_ATTRIBUTE_BITS);
            } // end for PT
            *Pd = (UINT64)(UINTN)Pt | mAddressEncMask | PAGE_ATTRIBUTE_BITS;
          } // end if IsAddressSplit
        } // end for PD
      } // end for PDPT
    } // end for PML4
  } // end for PML5

  //
  // Go through page table and set several page table entries to absent or execute-disable.
  //
  DEBUG ((EFI_D_INFO, "Patch page table start ...\n"));
  for (Pml5Index = 0; Pml5Index < NumberOfPml5Entries; Pml5Index++) {
    if ((Pml5[Pml5Index] & IA32_PG_P) == 0) {
      //
      // If PML5 entry does not exist, skip it
      //
      continue;
    }
    Pml4 = (UINT64 *) (UINTN) (Pml5[Pml5Index] & PHYSICAL_ADDRESS_MASK);
    for (Pml4Index = 0; Pml4Index < NumberOfPml4Entries; Pml4Index++) {
      if ((Pml4[Pml4Index] & IA32_PG_P) == 0) {
        //
        // If PML4 entry does not exist, skip it
        //
        continue;
      }
      Pdpt = (UINT64 *)(UINTN)(Pml4[Pml4Index] & ~mAddressEncMask & PHYSICAL_ADDRESS_MASK);
      for (PdptIndex = 0; PdptIndex < NumberOfPdptEntries; PdptIndex++, Pdpt++) {
        if ((*Pdpt & IA32_PG_P) == 0) {
          //
          // If PDPT entry does not exist, skip it
          //
          continue;
        }
        if ((*Pdpt & IA32_PG_PS) != 0) {
          //
          // This is 1G entry, set NX bit and skip it
          //
          if (mXdSupported) {
            *Pdpt = *Pdpt | IA32_PG_NX;
          }
          continue;
        }
        Pd = (UINT64 *)(UINTN)(*Pdpt & ~mAddressEncMask & PHYSICAL_ADDRESS_MASK);
        if (Pd == 0) {
          continue;
        }
        for (PdIndex = 0; PdIndex < SIZE_4KB / sizeof (*Pd); PdIndex++, Pd++) {
          if ((*Pd & IA32_PG_P) == 0) {
            //
            // If PD entry does not exist, skip it
            //
            continue;
          }
          Address = (UINTN) LShiftU64 (
                              LShiftU64 (
                                LShiftU64 ((Pml5Index << 9) + Pml4Index, 9) + PdptIndex,
                                9
                                ) + PdIndex,
                              21
                              );

          if ((*Pd & IA32_PG_PS) != 0) {
            // 2MB page

            if (!IsAddressValid (Address, &Nx)) {
              //
              // Patch to remove Present flag and RW flag
              //
              *Pd = *Pd & (INTN)(INT32)(~PAGE_ATTRIBUTE_BITS);
            }
            if (Nx && mXdSupported) {
              *Pd = *Pd | IA32_PG_NX;
            }
          } else {
            // 4KB page
            Pt = (UINT64 *)(UINTN)(*Pd & ~mAddressEncMask & PHYSICAL_ADDRESS_MASK);
            if (Pt == 0) {
              continue;
            }
            for (PtIndex = 0; PtIndex < SIZE_4KB / sizeof(*Pt); PtIndex++, Pt++) {
              if (!IsAddressValid (Address, &Nx)) {
                *Pt = *Pt & (INTN)(INT32)(~PAGE_ATTRIBUTE_BITS);
              }
              if (Nx && mXdSupported) {
                *Pt = *Pt | IA32_PG_NX;
              }
              Address += SIZE_4KB;
            } // end for PT
          } // end if PS
        } // end for PD
      } // end for PDPT
    } // end for PML4
  } // end for PML5

  //
  // Flush TLB
  //
  CpuFlushTlb ();
  DEBUG ((EFI_D_INFO, "Patch page table done!\n"));
  //
  // Set execute-disable flag
  //
  mXdEnabled = TRUE;

  return ;
}

/**
  Get the system port address of the SMI Command Port from the FADT table.

**/
VOID
GetSmiCommandPort (
  VOID
  )
{
  EFI_ACPI_2_0_FIXED_ACPI_DESCRIPTION_TABLE  *Fadt;

  Fadt = (EFI_ACPI_2_0_FIXED_ACPI_DESCRIPTION_TABLE *) EfiLocateFirstAcpiTable (
           EFI_ACPI_2_0_FIXED_ACPI_DESCRIPTION_TABLE_SIGNATURE
           );
  ASSERT (Fadt != NULL);

  mSmiCommandPort = Fadt->SmiCmd;
  DEBUG ((EFI_D_INFO, "mSmiCommandPort = %x\n", mSmiCommandPort));
}

/**
  Updates page table to make some memory ranges (like system memory) absent
  and make some memory ranges (like MMIO) present and execute disable. It also
  updates 2MB-page to 4KB-page for some memory ranges.

**/
VOID
SmmProfileStart (
  VOID
  )
{
  //
  // The flag indicates SMM profile starts to work.
  //
  mSmmProfileStart = TRUE;
}

/**
  Initialize SMM profile in SmmReadyToLock protocol callback function.

  @param  Protocol   Points to the protocol's unique identifier.
  @param  Interface  Points to the interface instance.
  @param  Handle     The handle on which the interface was installed.

  @retval EFI_SUCCESS   SmmReadyToLock protocol callback runs successfully.
**/
EFI_STATUS
EFIAPI
InitSmmProfileCallBack (
  IN CONST EFI_GUID  *Protocol,
  IN VOID            *Interface,
  IN EFI_HANDLE      Handle
  )
{
  //
  // Save to variable so that SMM profile data can be found.
  //
  gRT->SetVariable (
         SMM_PROFILE_NAME,
         &gEfiCallerIdGuid,
         EFI_VARIABLE_BOOTSERVICE_ACCESS | EFI_VARIABLE_RUNTIME_ACCESS,
         sizeof(mSmmProfileBase),
         &mSmmProfileBase
         );

  //
  // Get Software SMI from FADT
  //
  GetSmiCommandPort ();

  //
  // Initialize protected memory range for patching page table later.
  //
  InitProtectedMemRange ();

  return EFI_SUCCESS;
}

/**
  Initialize SMM profile data structures.

**/
VOID
InitSmmProfileInternal (
  VOID
  )
{
  EFI_STATUS            Status;
  EFI_PHYSICAL_ADDRESS  Base;
  VOID                  *Registration;
  UINTN                 Index;
  UINTN                 MsrDsAreaSizePerCpu;
  UINTN                 TotalSize;

UefiCpuPkg/PiSmmCpuDxeSmm: handle dynamic PcdCpuMaxLogicalProcessorNumber
"UefiCpuPkg/UefiCpuPkg.dec" already allows platforms to make
PcdCpuMaxLogicalProcessorNumber dynamic, however PiSmmCpuDxeSmm does not
take this into account everywhere. As soon as a platform turns the PCD
into a dynamic one, at least S3 fails. When the PCD is dynamic, all
PcdGet() calls translate into PCD DXE protocol calls, which are only
permitted at boot time, not at runtime or during S3 resume.
We already have a variable called mMaxNumberOfCpus; it is initialized in
the entry point function like this:
> //
> // If support CPU hot plug, we need to allocate resources for possibly
> // hot-added processors
> //
> if (FeaturePcdGet (PcdCpuHotPlugSupport)) {
> mMaxNumberOfCpus = PcdGet32 (PcdCpuMaxLogicalProcessorNumber);
> } else {
> mMaxNumberOfCpus = mNumberOfCpus;
> }
There's another use of the PCD a bit higher up, also in the entry point
function:
> //
> // Use MP Services Protocol to retrieve the number of processors and
> // number of enabled processors
> //
> Status = MpServices->GetNumberOfProcessors (MpServices, &mNumberOfCpus,
> &NumberOfEnabledProcessors);
> ASSERT_EFI_ERROR (Status);
> ASSERT (mNumberOfCpus <= PcdGet32 (PcdCpuMaxLogicalProcessorNumber));
Preserve these calls in the entry point function, and replace all other
uses of PcdCpuMaxLogicalProcessorNumber -- there are only reads -- with
mMaxNumberOfCpus.
For PcdCpuHotPlugSupport==TRUE, this is an unobservable change.
For PcdCpuHotPlugSupport==FALSE, we even save SMRAM, because we no longer
allocate resources needlessly for CPUs that can never appear in the
system.
PcdCpuMaxLogicalProcessorNumber is also retrieved in
"UefiCpuPkg/Library/SmmCpuFeaturesLib/SmmCpuFeaturesLib.c", but only in
the library instance constructor, which runs even before the entry point
function is called.
Cc: Igor Mammedov <imammedo@redhat.com>
Cc: Jeff Fan <jeff.fan@intel.com>
Cc: Jordan Justen <jordan.l.justen@intel.com>
Cc: Michael Kinney <michael.d.kinney@intel.com>
Bugzilla: https://bugzilla.tianocore.org/show_bug.cgi?id=116
Contributed-under: TianoCore Contribution Agreement 1.0
Signed-off-by: Laszlo Ersek <lersek@redhat.com>
Reviewed-by: Jeff Fan <jeff.fan@intel.com>
2016-11-24 20:49:43 +01:00
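The hazard the message above describes, a `PcdGet32()` call that silently turns into a PCD DXE protocol call and therefore cannot run at runtime or during S3 resume, can be reduced to a self-contained C sketch. All names below (`PcdResolve`, `EntryPoint`, `S3ResumePath`, `CachedMaxCpus`) are hypothetical stand-ins for illustration, not EDK II APIs: the pattern is to resolve the dynamic value once at the entry point and consult only the cached copy afterwards.

```c
#include <assert.h>
#include <stdint.h>

/* Hypothetical stand-ins: the real driver reads
   PcdGet32 (PcdCpuMaxLogicalProcessorNumber) and caches it in
   mMaxNumberOfCpus in its entry point function. */
int      ProtocolAvailable = 1;   /* PCD DXE protocol reachable (boot time) */
uint32_t CachedMaxCpus;

uint32_t
PcdResolve (void)
{
  /* A dynamic PCD read goes through the PCD DXE protocol; calling it at
     runtime or during S3 resume (protocol gone) would be invalid. */
  assert (ProtocolAvailable);
  return 8;   /* pretend the platform configured 8 logical processors */
}

void
EntryPoint (void)
{
  /* Resolve once, while the protocol is guaranteed to be present. */
  CachedMaxCpus = PcdResolve ();
}

uint32_t
S3ResumePath (void)
{
  /* Must rely on the cached copy only -- no PcdResolve () here. */
  return CachedMaxCpus;
}
```

If `S3ResumePath()` called `PcdResolve()` directly after `ProtocolAvailable` drops to 0, the sketch would trip its assertion, which mirrors the S3 failure the commit fixes by substituting `mMaxNumberOfCpus` for the remaining `PcdGet32` reads.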
  mPFEntryCount = (UINTN *)AllocateZeroPool (sizeof (UINTN) * mMaxNumberOfCpus);
  ASSERT (mPFEntryCount != NULL);
  mLastPFEntryValue = (UINT64  (*)[MAX_PF_ENTRY_COUNT])AllocateZeroPool (
                        sizeof (mLastPFEntryValue[0]) * mMaxNumberOfCpus);
  ASSERT (mLastPFEntryValue != NULL);
  mLastPFEntryPointer = (UINT64 *(*)[MAX_PF_ENTRY_COUNT])AllocateZeroPool (
                          sizeof (mLastPFEntryPointer[0]) * mMaxNumberOfCpus);
  ASSERT (mLastPFEntryPointer != NULL);

  //
  // Allocate memory for SmmProfile below 4GB.
  // The base address
  //
  mSmmProfileSize = PcdGet32 (PcdCpuSmmProfileSize);
  ASSERT ((mSmmProfileSize & 0xFFF) == 0);

  if (mBtsSupported) {
    TotalSize = mSmmProfileSize + mMsrDsAreaSize;
  } else {
    TotalSize = mSmmProfileSize;
  }

  Base = 0xFFFFFFFF;
  Status = gBS->AllocatePages (
                  AllocateMaxAddress,
                  EfiReservedMemoryType,
                  EFI_SIZE_TO_PAGES (TotalSize),
                  &Base
                  );
  ASSERT_EFI_ERROR (Status);
  ZeroMem ((VOID *)(UINTN)Base, TotalSize);
  mSmmProfileBase = (SMM_PROFILE_HEADER *)(UINTN)Base;

  //
  // Initialize SMM profile data header.
  //
  mSmmProfileBase->HeaderSize     = sizeof (SMM_PROFILE_HEADER);
  mSmmProfileBase->MaxDataEntries = (UINT64)((mSmmProfileSize - sizeof(SMM_PROFILE_HEADER)) / sizeof (SMM_PROFILE_ENTRY));
  mSmmProfileBase->MaxDataSize    = MultU64x64 (mSmmProfileBase->MaxDataEntries, sizeof(SMM_PROFILE_ENTRY));
  mSmmProfileBase->CurDataEntries = 0;
  mSmmProfileBase->CurDataSize    = 0;
  mSmmProfileBase->TsegStart      = mCpuHotPlugData.SmrrBase;
  mSmmProfileBase->TsegSize       = mCpuHotPlugData.SmrrSize;
  mSmmProfileBase->NumSmis        = 0;
  mSmmProfileBase->NumCpus        = gSmmCpuPrivate->SmmCoreEntryContext.NumberOfCpus;

  if (mBtsSupported) {
    mMsrDsArea = (MSR_DS_AREA_STRUCT **)AllocateZeroPool (sizeof (MSR_DS_AREA_STRUCT *) * mMaxNumberOfCpus);
    ASSERT (mMsrDsArea != NULL);
    mMsrBTSRecord = (BRANCH_TRACE_RECORD **)AllocateZeroPool (sizeof (BRANCH_TRACE_RECORD *) * mMaxNumberOfCpus);
    ASSERT (mMsrBTSRecord != NULL);
    mMsrPEBSRecord = (PEBS_RECORD **)AllocateZeroPool (sizeof (PEBS_RECORD *) * mMaxNumberOfCpus);
    ASSERT (mMsrPEBSRecord != NULL);

    mMsrDsAreaBase      = (MSR_DS_AREA_STRUCT *)((UINTN)Base + mSmmProfileSize);
    MsrDsAreaSizePerCpu = mMsrDsAreaSize / mMaxNumberOfCpus;
    mBTSRecordNumber    = (MsrDsAreaSizePerCpu - sizeof(PEBS_RECORD) * PEBS_RECORD_NUMBER - sizeof(MSR_DS_AREA_STRUCT)) / sizeof(BRANCH_TRACE_RECORD);
    for (Index = 0; Index < mMaxNumberOfCpus; Index++) {
      mMsrDsArea[Index]     = (MSR_DS_AREA_STRUCT *)((UINTN)mMsrDsAreaBase + MsrDsAreaSizePerCpu * Index);
      mMsrBTSRecord[Index]  = (BRANCH_TRACE_RECORD *)((UINTN)mMsrDsArea[Index] + sizeof(MSR_DS_AREA_STRUCT));
      mMsrPEBSRecord[Index] = (PEBS_RECORD *)((UINTN)mMsrDsArea[Index] + MsrDsAreaSizePerCpu - sizeof(PEBS_RECORD) * PEBS_RECORD_NUMBER);

      mMsrDsArea[Index]->BTSBufferBase          = (UINTN)mMsrBTSRecord[Index];
      mMsrDsArea[Index]->BTSIndex               = mMsrDsArea[Index]->BTSBufferBase;
      mMsrDsArea[Index]->BTSAbsoluteMaximum     = mMsrDsArea[Index]->BTSBufferBase + mBTSRecordNumber * sizeof(BRANCH_TRACE_RECORD) + 1;
      mMsrDsArea[Index]->BTSInterruptThreshold  = mMsrDsArea[Index]->BTSAbsoluteMaximum + 1;

      mMsrDsArea[Index]->PEBSBufferBase         = (UINTN)mMsrPEBSRecord[Index];
      mMsrDsArea[Index]->PEBSIndex              = mMsrDsArea[Index]->PEBSBufferBase;
      mMsrDsArea[Index]->PEBSAbsoluteMaximum    = mMsrDsArea[Index]->PEBSBufferBase + PEBS_RECORD_NUMBER * sizeof(PEBS_RECORD) + 1;
      mMsrDsArea[Index]->PEBSInterruptThreshold = mMsrDsArea[Index]->PEBSAbsoluteMaximum + 1;
    }
  }

  mProtectionMemRange      = mProtectionMemRangeTemplate;
  mProtectionMemRangeCount = sizeof (mProtectionMemRangeTemplate) / sizeof (MEMORY_PROTECTION_RANGE);

  //
  // Update TSeg entry.
  //
  mProtectionMemRange[0].Range.Base = mCpuHotPlugData.SmrrBase;
  mProtectionMemRange[0].Range.Top  = mCpuHotPlugData.SmrrBase + mCpuHotPlugData.SmrrSize;

  //
  // Update SMM profile entry.
  //
  mProtectionMemRange[1].Range.Base = (EFI_PHYSICAL_ADDRESS)(UINTN)mSmmProfileBase;
  mProtectionMemRange[1].Range.Top  = (EFI_PHYSICAL_ADDRESS)(UINTN)mSmmProfileBase + TotalSize;

  //
  // Allocate memory reserved for creating 4KB pages.
  //
  InitPagesForPFHandler ();

  //
  // Start SMM profile when SmmReadyToLock protocol is installed.
  //
  Status = gSmst->SmmRegisterProtocolNotify (
                    &gEfiSmmReadyToLockProtocolGuid,
                    InitSmmProfileCallBack,
                    &Registration
                    );
  ASSERT_EFI_ERROR (Status);

  return ;
}

/**
  Check if feature is supported by a processor.

**/
VOID
CheckFeatureSupported (
  VOID
  )
{
  UINT32                         RegEax;
  UINT32                         RegEcx;
  UINT32                         RegEdx;
  MSR_IA32_MISC_ENABLE_REGISTER  MiscEnableMsr;

  if ((PcdGet32 (PcdControlFlowEnforcementPropertyMask) != 0) && mCetSupported) {
    AsmCpuid (CPUID_EXTENDED_FUNCTION, &RegEax, NULL, NULL, NULL);
    if (RegEax <= CPUID_EXTENDED_FUNCTION) {
      mCetSupported = FALSE;
      PatchInstructionX86 (mPatchCetSupported, mCetSupported, 1);
    }

    AsmCpuidEx (CPUID_STRUCTURED_EXTENDED_FEATURE_FLAGS, CPUID_STRUCTURED_EXTENDED_FEATURE_FLAGS_SUB_LEAF_INFO, NULL, NULL, &RegEcx, NULL);
    if ((RegEcx & CPUID_CET_SS) == 0) {
      mCetSupported = FALSE;
      PatchInstructionX86 (mPatchCetSupported, mCetSupported, 1);
    }
  }

  if (mXdSupported) {
    AsmCpuid (CPUID_EXTENDED_FUNCTION, &RegEax, NULL, NULL, NULL);
    if (RegEax <= CPUID_EXTENDED_FUNCTION) {
      //
      // Extended CPUID functions are not supported on this processor.
      //
      mXdSupported = FALSE;
      PatchInstructionX86 (gPatchXdSupported, mXdSupported, 1);
    }

    AsmCpuid (CPUID_EXTENDED_CPU_SIG, NULL, NULL, NULL, &RegEdx);
    if ((RegEdx & CPUID1_EDX_XD_SUPPORT) == 0) {
      //
      // Execute Disable Bit feature is not supported on this processor.
      //
      mXdSupported = FALSE;
      PatchInstructionX86 (gPatchXdSupported, mXdSupported, 1);
    }

    if (StandardSignatureIsAuthenticAMD ()) {
      //
      // AMD processors do not support MSR_IA32_MISC_ENABLE.
      //
      PatchInstructionX86 (gPatchMsrIa32MiscEnableSupported, FALSE, 1);
    }
  }

  if (mBtsSupported) {
    AsmCpuid (CPUID_VERSION_INFO, NULL, NULL, NULL, &RegEdx);
    if ((RegEdx & CPUID1_EDX_BTS_AVAILABLE) != 0) {
      //
      // Per IA32 manuals:
      // When CPUID.1:EDX[21] is set, the following BTS facilities are available:
      // 1. The BTS_UNAVAILABLE flag in the IA32_MISC_ENABLE MSR indicates the
      //    availability of the BTS facilities, including the ability to set the BTS and
      //    BTINT bits in the MSR_DEBUGCTLA MSR.
      // 2. The IA32_DS_AREA MSR can be programmed to point to the DS save area.
      //
      MiscEnableMsr.Uint64 = AsmReadMsr64 (MSR_IA32_MISC_ENABLE);
      if (MiscEnableMsr.Bits.BTS == 1) {
        //
        // BTS facilities are not supported if the MSR_IA32_MISC_ENABLE.BTS bit is set.
        //
        mBtsSupported = FALSE;
      }
    }
  }
}

/**
  Enable single step.

**/
VOID
ActivateSingleStepDB (
  VOID
  )
{
  UINTN  Dr6;

  Dr6 = AsmReadDr6 ();
  if ((Dr6 & DR6_SINGLE_STEP) != 0) {
    return;
  }

  Dr6 |= DR6_SINGLE_STEP;
  AsmWriteDr6 (Dr6);
}

/**
  Enable last branch.

**/
VOID
ActivateLBR (
  VOID
  )
{
  UINT64  DebugCtl;

  DebugCtl = AsmReadMsr64 (MSR_DEBUG_CTL);
  if ((DebugCtl & MSR_DEBUG_CTL_LBR) != 0) {
    return;
  }

  DebugCtl |= MSR_DEBUG_CTL_LBR;
  AsmWriteMsr64 (MSR_DEBUG_CTL, DebugCtl);
}

/**
  Enable branch trace store.

  @param  CpuIndex  The index of the processor.

**/
VOID
ActivateBTS (
  IN UINTN  CpuIndex
  )
{
  UINT64  DebugCtl;

  DebugCtl = AsmReadMsr64 (MSR_DEBUG_CTL);
  if ((DebugCtl & MSR_DEBUG_CTL_BTS) != 0) {
    return;
  }

  AsmWriteMsr64 (MSR_DS_AREA, (UINT64)(UINTN)mMsrDsArea[CpuIndex]);
  DebugCtl |= (UINT64)(MSR_DEBUG_CTL_BTS | MSR_DEBUG_CTL_TR);
  DebugCtl &= ~((UINT64)MSR_DEBUG_CTL_BTINT);
  AsmWriteMsr64 (MSR_DEBUG_CTL, DebugCtl);
}

/**
  Increase SMI number in each SMI entry.

**/
VOID
SmmProfileRecordSmiNum (
  VOID
  )
{
  if (mSmmProfileStart) {
    mSmmProfileBase->NumSmis++;
  }
}

/**
  Initialize processor environment for SMM profile.

  @param  CpuIndex  The index of the processor.

**/
VOID
ActivateSmmProfile (
  IN UINTN  CpuIndex
  )
{
  //
  // Enable Single Step DB#
  //
  ActivateSingleStepDB ();

  if (mBtsSupported) {
    //
    // We cannot get useful information from LER, so we have to use BTS.
    //
    ActivateLBR ();

    //
    // Enable BTS
    //
    ActivateBTS (CpuIndex);
  }
}

/**
  Initialize SMM profile in SMM CPU entry point.

  @param[in] Cr3  The base address of the page tables to use in SMM.

**/
VOID
InitSmmProfile (
  UINT32  Cr3
  )
{
  //
  // Save Cr3
  //
  mSmmProfileCr3 = Cr3;

  //
  // Skip SMM profile initialization if feature is disabled
  //
  if (!FeaturePcdGet (PcdCpuSmmProfileEnable) &&
      !HEAP_GUARD_NONSTOP_MODE &&
      !NULL_DETECTION_NONSTOP_MODE) {
    return;
  }

  //
  // Initialize SmmProfile here
  //
  InitSmmProfileInternal ();

  //
  // Initialize profile IDT.
  //
  InitIdtr ();

  //
  // Tell #PF handler to prepare a #DB subsequently.
  //
  mSetupDebugTrap = TRUE;
}

/**
  Update page table to map the memory correctly in order to make the instruction
  which caused the page fault execute successfully. It also saves the original
  page table entries to be restored in the single-step exception.

  @param  PageTable  PageTable Address.
  @param  PFAddress  The memory address which caused page fault exception.
  @param  CpuIndex   The index of the processor.
  @param  ErrorCode  The Error code of exception.

**/
VOID
RestorePageTableBelow4G (
  UINT64  *PageTable,
  UINT64  PFAddress,
  UINTN   CpuIndex,
  UINTN   ErrorCode
  )
{
  UINTN     PTIndex;
  UINTN     PFIndex;
  IA32_CR4  Cr4;
  BOOLEAN   Enable5LevelPaging;

  Cr4.UintN          = AsmReadCr4 ();
  Enable5LevelPaging = (BOOLEAN)(Cr4.Bits.LA57 == 1);

  //
  // PML5
  //
  if (Enable5LevelPaging) {
    PTIndex = (UINTN)BitFieldRead64 (PFAddress, 48, 56);
    ASSERT (PageTable[PTIndex] != 0);
    PageTable = (UINT64 *)(UINTN)(PageTable[PTIndex] & PHYSICAL_ADDRESS_MASK);
  }

  //
  // PML4
  //
  if (sizeof (UINT64) == sizeof (UINTN)) {
    PTIndex = (UINTN)BitFieldRead64 (PFAddress, 39, 47);
    ASSERT (PageTable[PTIndex] != 0);
    PageTable = (UINT64 *)(UINTN)(PageTable[PTIndex] & PHYSICAL_ADDRESS_MASK);
  }

  //
  // PDPTE
  //
  PTIndex = (UINTN)BitFieldRead64 (PFAddress, 30, 38);
  ASSERT (PageTable[PTIndex] != 0);
  PageTable = (UINT64 *)(UINTN)(PageTable[PTIndex] & PHYSICAL_ADDRESS_MASK);

  //
  // PD
  //
  PTIndex = (UINTN)BitFieldRead64 (PFAddress, 21, 29);
  if ((PageTable[PTIndex] & IA32_PG_PS) != 0) {
    //
    // Large page
    //

    //
    // Record old entries with non-present status.
    // Old entries include the memory the instruction is at and the memory the instruction accesses.
    //
    ASSERT (mPFEntryCount[CpuIndex] < MAX_PF_ENTRY_COUNT);
    if (mPFEntryCount[CpuIndex] < MAX_PF_ENTRY_COUNT) {
      PFIndex                                = mPFEntryCount[CpuIndex];
      mLastPFEntryValue[CpuIndex][PFIndex]   = PageTable[PTIndex];
      mLastPFEntryPointer[CpuIndex][PFIndex] = &PageTable[PTIndex];
      mPFEntryCount[CpuIndex]++;
    }

    //
    // Set new entry
    //
    PageTable[PTIndex]  = (PFAddress & ~((1ull << 21) - 1));
    PageTable[PTIndex] |= (UINT64)IA32_PG_PS;
    PageTable[PTIndex] |= (UINT64)PAGE_ATTRIBUTE_BITS;
    if ((ErrorCode & IA32_PF_EC_ID) != 0) {
      PageTable[PTIndex] &= ~IA32_PG_NX;
    }
  } else {
    //
    // Small page
    //
    ASSERT (PageTable[PTIndex] != 0);
    PageTable = (UINT64 *)(UINTN)(PageTable[PTIndex] & PHYSICAL_ADDRESS_MASK);

    //
    // 4K PTE
    //
    PTIndex = (UINTN)BitFieldRead64 (PFAddress, 12, 20);

    //
    // Record old entries with non-present status.
    // Old entries include the memory the instruction is at and the memory the instruction accesses.
    //
    ASSERT (mPFEntryCount[CpuIndex] < MAX_PF_ENTRY_COUNT);
    if (mPFEntryCount[CpuIndex] < MAX_PF_ENTRY_COUNT) {
      PFIndex                                = mPFEntryCount[CpuIndex];
      mLastPFEntryValue[CpuIndex][PFIndex]   = PageTable[PTIndex];
      mLastPFEntryPointer[CpuIndex][PFIndex] = &PageTable[PTIndex];
      mPFEntryCount[CpuIndex]++;
    }

    //
    // Set new entry
    //
    PageTable[PTIndex]  = (PFAddress & ~((1ull << 12) - 1));
    PageTable[PTIndex] |= (UINT64)PAGE_ATTRIBUTE_BITS;
    if ((ErrorCode & IA32_PF_EC_ID) != 0) {
      PageTable[PTIndex] &= ~IA32_PG_NX;
    }
  }
}

/**
  Handler for Page Fault triggered by Guard page.

  @param  ErrorCode  The Error code of exception.

**/
VOID
GuardPagePFHandler (
  UINTN  ErrorCode
  )
{
  UINT64  *PageTable;
  UINT64  PFAddress;
  UINT64  RestoreAddress;
  UINTN   RestorePageNumber;
  UINTN   CpuIndex;

  PageTable = (UINT64 *)AsmReadCr3 ();
  PFAddress = AsmReadCr2 ();
  CpuIndex  = GetCpuIndex ();

  //
  // A memory operation that crosses pages, like a "rep mov" instruction, will
  // cause an infinite loop between this handler and the Debug Trap handler. We
  // have to make sure that the current page and the following page are both in
  // the PRESENT state.
  //
  RestorePageNumber = 2;
  RestoreAddress    = PFAddress;
  while (RestorePageNumber > 0) {
    RestorePageTableBelow4G (PageTable, RestoreAddress, CpuIndex, ErrorCode);
    RestoreAddress += EFI_PAGE_SIZE;
    RestorePageNumber--;
  }

  //
  // Flush TLB
  //
  CpuFlushTlb ();
}

/**
  The Page fault handler to save SMM profile data.

  @param  Rip        The RIP when exception happens.
  @param  ErrorCode  The Error code of exception.

**/
VOID
SmmProfilePFHandler (
  UINTN  Rip,
  UINTN  ErrorCode
  )
{
  UINT64                      *PageTable;
  UINT64                      PFAddress;
  UINT64                      RestoreAddress;
  UINTN                       RestorePageNumber;
  UINTN                       CpuIndex;
  UINTN                       Index;
  UINT64                      InstructionAddress;
  UINTN                       MaxEntryNumber;
  UINTN                       CurrentEntryNumber;
  BOOLEAN                     IsValidPFAddress;
  SMM_PROFILE_ENTRY           *SmmProfileEntry;
  UINT64                      SmiCommand;
  EFI_STATUS                  Status;
  UINT8                       SoftSmiValue;
  EFI_SMM_SAVE_STATE_IO_INFO  IoInfo;

  if (!mSmmProfileStart) {
    //
    // If SMM profile has not started, call the original page fault handler.
    //
    SmiDefaultPFHandler ();
    return;
  }

  if (mBtsSupported) {
    DisableBTS ();
  }

  IsValidPFAddress = FALSE;
  PageTable        = (UINT64 *)AsmReadCr3 ();
  PFAddress        = AsmReadCr2 ();
  CpuIndex         = GetCpuIndex ();

  //
  // A memory operation that crosses pages, like a "rep mov" instruction, will
  // cause an infinite loop between this handler and the Debug Trap handler. We
  // have to make sure that the current page and the following page are both in
  // the PRESENT state.
  //
  RestorePageNumber = 2;
  RestoreAddress    = PFAddress;
  while (RestorePageNumber > 0) {
    if (RestoreAddress <= 0xFFFFFFFF) {
      RestorePageTableBelow4G (PageTable, RestoreAddress, CpuIndex, ErrorCode);
    } else {
      RestorePageTableAbove4G (PageTable, RestoreAddress, CpuIndex, ErrorCode, &IsValidPFAddress);
    }
    RestoreAddress += EFI_PAGE_SIZE;
    RestorePageNumber--;
  }

  if (!IsValidPFAddress) {
    InstructionAddress = Rip;
    if (((ErrorCode & IA32_PF_EC_ID) != 0) && mBtsSupported) {
      //
      // If it is an instruction fetch failure, get the correct IP from BTS.
      //
      InstructionAddress = GetSourceFromDestinationOnBts (CpuIndex, Rip);
      if (InstructionAddress == 0) {
        //
        // It indicates the instruction which caused the page fault is not a jump
        // instruction; set the instruction address to the page fault address.
        //
        InstructionAddress = PFAddress;
      }
    }

    //
    // Indicate it is not a software SMI
    //
    SmiCommand = 0xFFFFFFFFFFFFFFFFULL;
    for (Index = 0; Index < gSmst->NumberOfCpus; Index++) {
      Status = SmmReadSaveState (&mSmmCpu, sizeof (IoInfo), EFI_SMM_SAVE_STATE_REGISTER_IO, Index, &IoInfo);
      if (EFI_ERROR (Status)) {
        continue;
      }
      if (IoInfo.IoPort == mSmiCommandPort) {
        //
        // A software SMI triggered by the SMI command port has been found; get
        // SmiCommand from the SMI command port.
        //
        SoftSmiValue = IoRead8 (mSmiCommandPort);
        SmiCommand   = (UINT64)SoftSmiValue;
        break;
      }
    }

    SmmProfileEntry = (SMM_PROFILE_ENTRY *)(UINTN)(mSmmProfileBase + 1);
    //
    // Check if the same entry already exists in the profile data.
    //
    for (Index = 0; Index < (UINTN)mSmmProfileBase->CurDataEntries; Index++) {
      if ((SmmProfileEntry[Index].ErrorCode == (UINT64)ErrorCode) &&
          (SmmProfileEntry[Index].Address == PFAddress) &&
          (SmmProfileEntry[Index].CpuNum == (UINT64)CpuIndex) &&
          (SmmProfileEntry[Index].Instruction == InstructionAddress) &&
          (SmmProfileEntry[Index].SmiCmd == SmiCommand)) {
        //
        // The same record already exists; no need to save it again.
        //
        break;
      }
    }
    if (Index == mSmmProfileBase->CurDataEntries) {
      CurrentEntryNumber = (UINTN)mSmmProfileBase->CurDataEntries;
      MaxEntryNumber     = (UINTN)mSmmProfileBase->MaxDataEntries;
      if (FeaturePcdGet (PcdCpuSmmProfileRingBuffer)) {
        CurrentEntryNumber = CurrentEntryNumber % MaxEntryNumber;
      }
      if (CurrentEntryNumber < MaxEntryNumber) {
        //
        // Log the new entry
        //
        SmmProfileEntry[CurrentEntryNumber].SmiNum      = mSmmProfileBase->NumSmis;
        SmmProfileEntry[CurrentEntryNumber].ErrorCode   = (UINT64)ErrorCode;
        SmmProfileEntry[CurrentEntryNumber].ApicId      = (UINT64)GetApicId ();
        SmmProfileEntry[CurrentEntryNumber].CpuNum      = (UINT64)CpuIndex;
        SmmProfileEntry[CurrentEntryNumber].Address     = PFAddress;
        SmmProfileEntry[CurrentEntryNumber].Instruction = InstructionAddress;
        SmmProfileEntry[CurrentEntryNumber].SmiCmd      = SmiCommand;
        //
        // Update current entry index and data size in the header.
        //
        mSmmProfileBase->CurDataEntries++;
        mSmmProfileBase->CurDataSize = MultU64x64 (mSmmProfileBase->CurDataEntries, sizeof (SMM_PROFILE_ENTRY));
      }
    }
  }
  //
  // Flush TLB
  //
  CpuFlushTlb ();

  if (mBtsSupported) {
    EnableBTS ();
  }
}

/**
  Replace INT1 exception handler to restore page table to absent/execute-disable state
  in order to trigger page fault again to save SMM profile data.

**/
VOID
InitIdtr (
  VOID
  )
{
  EFI_STATUS  Status;

  Status = SmmRegisterExceptionHandler (&mSmmCpuService, EXCEPT_IA32_DEBUG, DebugExceptionHandler);
  ASSERT_EFI_ERROR (Status);
}