BZ: https://bugzilla.tianocore.org/show_bug.cgi?id=1829
There will be an ASSERT as below if LMCE is supported.
DXE_ASSERT!: [CpuFeaturesDxe]
XXX\UefiCpuPkg\Library\CpuCommonFeaturesLib\MachineCheck.c (342):
ConfigData != ((void *) 0)
The code should get the Config Data, and FeatureControlGetConfigData()
could be used for that.
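A minimal sketch of that direction (the registration call shape follows
RegisterCpuFeaturesLib.h, but the exact arguments are illustrative, not the
final patch):
  //
  // Sketch only: pass FeatureControlGetConfigData as the GetConfigDataFunc so
  // that LmceInitialize() receives a non-NULL ConfigData instead of ASSERTing.
  //
  Status = RegisterCpuFeature (
             "LMCE",
             FeatureControlGetConfigData,   // was NULL before this fix
             LmceSupport,
             LmceInitialize,
             CPU_FEATURE_LMCE,
             CPU_FEATURE_END
             );
  ASSERT_EFI_ERROR (Status);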
This issue has existed since the code was added in the commit below.
Revision: 3d6275c113
Date: 2017/8/4 8:46:41
UefiCpuPkg CpuCommonFeaturesLib: Enable LMCE feature.
The commits below, which moved the code, are also related.
Revision: 0233871442
Date: 2017/9/1 10:12:38
UefiCpuPkg/Lmce.c Remove useless file.
Revision: 306a5bcc6b
Date: 2017/8/17 11:40:38
UefiCpuPkg/CpuCommonFeaturesLib: Merge machine check code to same file.
So, the code may not have been tested at all on a platform
that supports LMCE.
BTW: A typo in LmceInitialize is also fixed.
The typo was introduced by the commit below.
Revision: d28daaddb3
Date: 2018/10/17 9:24:05
UefiCpuPkg/CpuCommonFeaturesLib: Register MSR base on scope Info.
Cc: Laszlo Ersek <lersek@redhat.com>
Cc: Eric Dong <eric.dong@intel.com>
Cc: Ray Ni <ray.ni@intel.com>
Cc: Chandana Kumar <chandana.c.kumar@intel.com>
Cc: Kevin Li <kevin.y.li@intel.com>
Signed-off-by: Star Zeng <star.zeng@intel.com>
Reviewed-by: Eric Dong <eric.dong@intel.com>
BZ: https://bugzilla.tianocore.org/show_bug.cgi?id=1808
In the current code, the values of TopaEntryPtr->Uint64 for the TopaTable
and the values of OutputBaseReg.Uint64 and OutputMaskPtrsReg.Uint64
written to the register table for RTIT_OUTPUT_BASE and RTIT_OUTPUT_MASK_PTRS
are not fully initialized. For example, the reserved bits in
OutputBaseReg.Uint64 are random, which causes a GP fault like the one below
when SetProcessorRegister (in CpuFeaturesInitialize.c) sets registers
based on the register table.
!!!! X64 Exception Type - 0D(#GP - General Protection)
CPU Apic ID - 00000000 !!!!
ExceptionData - 0000000000000000
RIP -0000000064D69576, CS -0000000000000038, RFLAGS -0000000000010246
RAX -000000006B9F1001, RCX -0000000000000560, RDX -0000000000000000
RBX -0000000064EECA18, RSP -000000006CB82BA0, RBP -0000000000000008
RSI -0000000080000000, RDI -0000000000000011
R8 -000000006B9493D0, R9 -0000000000000010, R10 -00000000000000FF
R11 -000000006CB82A50, R12 -0000000064D70F50, R13 -0000000066547050
R14 -0000000064E3E198, R15 -0000000000000000
DS -0000000000000030, ES -0000000000000030, FS -0000000000000030
GS -0000000000000030, SS -0000000000000030
CR0 -0000000080010013, CR2 -0000000000000000, CR3 -000000006C601000
CR4 -0000000000000628, CR8 -0000000000000000
DR0 -0000000000000000, DR1 -0000000000000000, DR2 -0000000000000000
DR3 -0000000000000000, DR6 -00000000FFFF0FF0, DR7 -0000000000000400
GDTR -000000006B8CCF18 0000000000000047, LDTR -0000000000000000
IDTR -000000006687E018 0000000000000FFF, TR -0000000000000000
FXSAVE_STATE -000000006CB82800
Also, the current code gets MSR_IA32_RTIT_CTL, MSR_IA32_RTIT_OUTPUT_BASE and
MSR_IA32_RTIT_OUTPUT_MASK_PTRS in ProcTraceInitialize() and uses their
values for all processors. But ProcTraceInitialize() is only executed
by the BSP, which means the values are only valid for the BSP. As good
practice, the code should get MSR_IA32_RTIT_CTL, MSR_IA32_RTIT_OUTPUT_BASE
and MSR_IA32_RTIT_OUTPUT_MASK_PTRS in ProcTraceSupport (executed by all
processors), and then use them in ProcTraceInitialize() for all
processors. This also resolves the issue that the values of
OutputBaseReg.Uint64 and OutputMaskPtrsReg.Uint64 are not fully
initialized.
For TopaEntryPtr->Uint64, this patch updates the code to initialize it
fully and explicitly with TopaEntryPtr->Uint64 = 0 before updating its
fields.
At the same time, this patch also eliminates the ProcTraceSupported
field in PROC_TRACE_PROCESSOR_DATA and the TopaMemArrayCount field in
PROC_TRACE_DATA.
Cc: Laszlo Ersek <lersek@redhat.com>
Cc: Eric Dong <eric.dong@intel.com>
Cc: Ruiyu Ni <ruiyu.ni@intel.com>
Cc: Chandana Kumar <chandana.c.kumar@intel.com>
Cc: Kevin Li <kevin.y.li@intel.com>
Signed-off-by: Star Zeng <star.zeng@intel.com>
Reviewed-by: Eric Dong <eric.dong@intel.com>
BZ: https://bugzilla.tianocore.org/show_bug.cgi?id=1809
The current code disables TraceEn at the end of ProcTraceInitialize(),
so a lot of memory is allocated even when the ProcTrace feature
is disabled.
This patch updates the code to disable TraceEn and return at the beginning
of ProcTraceInitialize() when the ProcTrace feature is disabled.
Cc: Laszlo Ersek <lersek@redhat.com>
Cc: Eric Dong <eric.dong@intel.com>
Cc: Ruiyu Ni <ruiyu.ni@intel.com>
Cc: Chandana Kumar <chandana.c.kumar@intel.com>
Cc: Kevin Li <kevin.y.li@intel.com>
Signed-off-by: Star Zeng <star.zeng@intel.com>
Reviewed-by: Eric Dong <eric.dong@intel.com>
BZ: https://bugzilla.tianocore.org/show_bug.cgi?id=1679
Checking CpuInfo->CpuIdVersionInfoEcx.Bits.AESNI is enough;
the check of the CPU generation can be removed so that the code
can be reused by more platforms.
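With the generation check dropped, the support routine reduces to roughly
the sketch below (illustrative body, not a quote of the patch):
  //
  // Rely only on the architectural CPUID feature flag.
  //
  return (CpuInfo->CpuIdVersionInfoEcx.Bits.AESNI == 1);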
Cc: Laszlo Ersek <lersek@redhat.com>
Cc: Eric Dong <eric.dong@intel.com>
Cc: Ruiyu Ni <ruiyu.ni@intel.com>
Cc: Chandana Kumar <chandana.c.kumar@intel.com>
Signed-off-by: Star Zeng <star.zeng@intel.com>
Reviewed-by: Ruiyu Ni <ruiyu.ni@intel.com>
BZ: https://bugzilla.tianocore.org/show_bug.cgi?id=1136
CVE: CVE-2018-12182
A customer met a system hang-up during a serial port loopback test in the OS.
It is a corner case that happens with one CPU core doing "out dx,al" and
another CPU core (or cores) doing "rep outs dx,byte ptr [rsi]".
The detailed code flow is as below.
1. Serial port loopback test in OS.
One CPU core: "out dx,al" -> Writing B2h, SMI will happen.
Another CPU core(s): "rep outs dx,byte ptr [rsi]".
2. SMI happens to enter SMM.
"out dx" (SMM_IO_TYPE_OUT_DX) is saved as I/O instruction type in
SMRAM save state for CPU doing "out dx,al".
"rep outs dx" (SMM_IO_TYPE_REP_OUTS) is saved as I/O instruction
type and rsi is save as I/O Memory Address in SMRAM save state for
CPU doing "rep outs dx, byte ptr [rsi]".
NOTE: I/O Memory Address (rsi) is a virtual address mapped by
OS/Virtual Machine.
3. Some SMM code calls EFI_SMM_CPU_PROTOCOL.ReadSaveState() with
EFI_SMM_SAVE_STATE_REGISTER_IO and parse data returned.
For example:
https://github.com/tianocore/edk2/blob/master/QuarkSocPkg/
QuarkNorthCluster/Smm/DxeSmm/QncSmmDispatcher/QNC/QNCSmmSw.c#L76
4. SmmReadSaveState() is executed to read save state for
EFI_SMM_SAVE_STATE_REGISTER_IO.
- The SmmReadSaveState() function in
"UefiCpuPkg/PiSmmCpuDxeSmm/PiSmmCpuDxeSmm.c" calls the
SmmCpuFeaturesReadSaveStateRegister() function, from the platform's
SmmCpuFeaturesLib instance.
- If that platform-specific function returns EFI_UNSUPPORTED, then
PiSmmCpuDxeSmm falls back to the common function
ReadSaveStateRegister(), defined in file
"UefiCpuPkg/PiSmmCpuDxeSmm/SmramSaveState.c".
When the current ReadSaveStateRegister() in
UefiCpuPkg/PiSmmCpuDxeSmm/SmramSaveState.c tries to copy data
from the I/O Memory Address for EFI_SMM_SAVE_STATE_IO_TYPE_REP_PREFIX,
a #PF happens because the SMM page table does not know about and does not
cover this OS/Virtual Machine virtual address.
The same applies to SmmCpuFeaturesReadSaveStateRegister() in a platform-
specific SmmCpuFeaturesLib instance if it has a similar implementation
to read the save state for EFI_SMM_SAVE_STATE_REGISTER_IO with
EFI_SMM_SAVE_STATE_IO_TYPE_REP_PREFIX.
The same applies to "ins", "outs" and "rep ins".
So to fix the problem, this patch updates the code to only support
IN/OUT, but not INS/OUTS/REP INS/REP OUTS, in SmmReadSaveState().
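The restriction can be sketched as below (the SMM_IO_TYPE_* naming follows
the save state handling mentioned above; the exact check is illustrative):
  //
  // Bail out for string/REP I/O so SMM never dereferences the OS-provided
  // I/O Memory Address (e.g. rsi) recorded in the save state.
  //
  if ((IoMisc.Bits.Type == SMM_IO_TYPE_INS)     ||
      (IoMisc.Bits.Type == SMM_IO_TYPE_OUTS)    ||
      (IoMisc.Bits.Type == SMM_IO_TYPE_REP_INS) ||
      (IoMisc.Bits.Type == SMM_IO_TYPE_REP_OUTS)) {
    return EFI_UNSUPPORTED;
  }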
Cc: Eric Dong <eric.dong@intel.com>
Cc: Ray Ni <ray.ni@intel.com>
Cc: Laszlo Ersek <lersek@redhat.com>
Signed-off-by: Star Zeng <star.zeng@intel.com>
Reviewed-by: Laszlo Ersek <lersek@redhat.com>
These files have \r\n line endings, but a few lines use \r\r\n which
is not a valid line ending. These lines were causing problems for git
and other tools.
Signed-off-by: Joe Richey <joerichey@google.com>
Reviewed-by: Eric Dong <eric.dong@intel.com>
Reviewed-by: Ray Ni <ray.ni@intel.com>
REF: https://bugzilla.tianocore.org/show_bug.cgi?id=1464
Currently, Framework compatibility support is not needed, and
PcdFrameworkCompatibilitySupport will be removed from edk2.
So remove the usage of this PCD first.
Cc: Eric Dong <eric.dong@intel.com>
Cc: Ray Ni <ray.ni@intel.com>
Cc: Laszlo Ersek <lersek@redhat.com>
Signed-off-by: Dandan Bi <dandan.bi@intel.com>
Reviewed-by: Laszlo Ersek <lersek@redhat.com>
The CpuMpPei module uses services from the CpuLib class,
but the CpuLib class is missing from the INF file. This
update is required to use the new MpInitLibUp instance that
does not use the CpuLib class.
Cc: Eric Dong <eric.dong@intel.com>
Cc: Ray Ni <ray.ni@intel.com>
Cc: Laszlo Ersek <lersek@redhat.com>
Signed-off-by: Michael D Kinney <michael.d.kinney@intel.com>
Add a new instance of the MpInitLib that is designed for
uniprocessor platforms that require the use of modules
that depend on the MP_SERVICES_PROTOCOL for dispatch
or to retrieve information about the boot processor.
Cc: Eric Dong <eric.dong@intel.com>
Cc: Ray Ni <ray.ni@intel.com>
Cc: Laszlo Ersek <lersek@redhat.com>
Signed-off-by: Michael D Kinney <michael.d.kinney@intel.com>
Reserved6 is changed to Reserved7 because the bit width is changed.
Contributed-under: TianoCore Contribution Agreement 1.1
Signed-off-by: Ray Ni <ray.ni@intel.com>
Reviewed-by: Eric Dong <eric.dong@intel.com>
GetProcessorLocation2ByApicId() extracts the
package/die/tile/module/core/thread ID from the initial APIC ID.
Contributed-under: TianoCore Contribution Agreement 1.1
Signed-off-by: Ray Ni <ray.ni@intel.com>
Reviewed-by: Eric Dong <eric.dong@intel.com>
Cc: Star Zeng <star.zeng@intel.com>
Cc: Zhiqiang Qin <zhiqiang.qin@intel.com>
Leaf 1FH is very similar to leaf 0BH. Both return the CPU topology
information.
Leaf 0BH returns 3-level (Package/Core/Thread) CPU topology info.
Leaf 1FH returns 6-level (Package/Die/Tile/Module/Core/Thread) CPU
topology info.
The logic to enumerate the topology info is the same.
But today's logic to handle 1FH is completely wrong.
The patch combines them together to fix the 1FH issue.
Contributed-under: TianoCore Contribution Agreement 1.1
Signed-off-by: Ray Ni <ray.ni@intel.com>
Reviewed-by: Eric Dong <eric.dong@intel.com>
Cc: Star Zeng <star.zeng@intel.com>
Cc: Zhiqiang Qin <zhiqiang.qin@intel.com>
Per the SDM, CPUID.0BH and CPUID.1FH output the same format of data in
EAX/EBX/ECX/EDX, except that CPUID.1FH reports more level types such as
module, tile and die.
The patch removes the unnecessary duplicated structure definitions
for CPUID.1FH because the structure definitions for CPUID.0BH
can be used for CPUID.1FH.
Contributed-under: TianoCore Contribution Agreement 1.1
Signed-off-by: Ray Ni <ray.ni@intel.com>
Reviewed-by: Eric Dong <eric.dong@intel.com>
Cc: Star Zeng <star.zeng@intel.com>
Cc: Zhiqiang Qin <zhiqiang.qin@intel.com>
Cc: Ray Ni <Ray.ni@intel.com>
Cc: Laszlo Ersek <lersek@redhat.com>
Contributed-under: TianoCore Contribution Agreement 1.1
Signed-off-by: Eric Dong <eric.dong@intel.com>
Reviewed-by: Ray Ni <ray.ni@intel.com>
PcdCpuFeaturesSupport is used to specify the platform policy about
which CPU features the platform supports. This PCD should be used
in IsCpuFeatureSupported only.
Currently, RegisterCpuFeaturesLib uses this PCD as a template to get the
PCD size. Update the code logic to replace it with
PcdCpuFeaturesSetting.
BZ:https://bugzilla.tianocore.org/show_bug.cgi?id=1375
Contributed-under: TianoCore Contribution Agreement 1.1
Signed-off-by: Eric Dong <eric.dong@intel.com>
Reviewed-by: Ray Ni <ray.ni@intel.com>
Merge PcdCpuFeaturesUserConfiguration into PcdCpuFeaturesSetting:
Use PcdCpuFeaturesSetting as input for the user input feature setting.
Use PcdCpuFeaturesSetting as output for the final CPU feature setting.
BZ:https://bugzilla.tianocore.org/show_bug.cgi?id=1375
Contributed-under: TianoCore Contribution Agreement 1.1
Signed-off-by: Eric Dong <eric.dong@intel.com>
Reviewed-by: Ray Ni <ray.ni@intel.com>
BZ: https://bugzilla.tianocore.org/show_bug.cgi?id=1593
For every SMI occurrence, save and restore the CR2 register only when SMM
on-demand paging support is enabled in 64-bit operation mode.
This is not a bug fix but an improvement of the code.
Patch5 is updated with separate functions for Save and Restore of CR2
based on review feedback.
Patch6 - Removed the global Cr2 and used a function parameter instead.
Patch7 - Removed the check of Cr2 against 0 as per feedback.
Patch8 and 9 - Aligned with the EDK2 coding style.
Contributed-under: TianoCore Contribution Agreement 1.1
Signed-off-by: Vanguput Narendra K <narendra.k.vanguput@intel.com>
Cc: Eric Dong <eric.dong@intel.com>
Cc: Ray Ni <ray.ni@intel.com>
Cc: Laszlo Ersek <lersek@redhat.com>
Cc: Yao Jiewen <jiewen.yao@intel.com>
Reviewed-by: Eric Dong <eric.dong@intel.com>
Reviewed-by: Star Zeng <star.zeng@intel.com>
Reviewed-by: Nate DeSimone <nathaniel.l.desimone@intel.com>
Reviewed-by: Ray Ni <ray.ni@intel.com>
Regression-tested-by: Laszlo Ersek <lersek@redhat.com>
V2 changes:
Update the commit message and comments in the code.
When the waking vector buffer allocated by CpuDxe is tested by MemTest86
in MP mode, an error is reported because the same range of memory is
modified by both the CpuDxe driver and MemTest86.
The waking vector buffer is not expected to be tested by MemTest86 if
it is allocated, because MemTest86 only tests free memory. But the
current CpuDxe driver "borrows" the buffer instead of allocating it
for the waking vector (it allocates and frees to get the buffer
pointer, backs up the buffer data before using it and restores it
afterwards). With this implementation, if the borrowed buffer is not
used by any other driver, the MemTest86 tool will treat it as free memory
and test it.
In order to fix the above issue, CpuDxe is changed to allocate the
buffer below 1MB instead of borrowing it. But directly allocating
memory below 1MB causes the LegacyBios driver to fail to start. The
LegacyBios driver allocates the memory range from
"0xA0000 - PcdEbdaReservedMemorySize" to 0xA0000 as Ebda Reserved
Memory. The minimum value for "0xA0000 - PcdEbdaReservedMemorySize"
is 0x88000. If the LegacyBios driver fails to allocate this range, it
asserts.
LegacyBios also reserves the range from 0x60000 to
"0x60000 + PcdOpromReservedMemorySize", which is used as Oprom
Reserved Memory. The maximum value for "0x60000 +
PcdOpromReservedMemorySize" is 0x88000. The LegacyBios driver tries to
allocate this range page (4K size) by page. It just reports a warning
message if some pages are already allocated by others.
Based on the above investigation, one page in the range 0x60000 ~ 0x88000
can be used as the waking vector buffer.
The LegacyBios driver only reports a warning when a page allocation in the
range [0x60000, 0x88000) fails. This library is consumed by the CpuDxe
driver to produce the CPU Arch protocol. The LegacyBios driver depends on
the CPU Arch protocol, which guarantees the allocation below runs earlier
than the LegacyBios driver.
Cc: Ray Ni <ray.ni@intel.com>
Contributed-under: TianoCore Contribution Agreement 1.1
Signed-off-by: Eric Dong <eric.dong@intel.com>
Reviewed-by: Ray Ni <ray.ni@intel.com>
A .nasm file has been added for the X86 arch. The .S assembly code
is not required any more.
https://bugzilla.tianocore.org/show_bug.cgi?id=1594
Cc: Michael D Kinney <michael.d.kinney@intel.com>
Cc: Liming Gao <liming.gao@intel.com>
Contributed-under: TianoCore Contribution Agreement 1.1
Signed-off-by: Shenglei Zhang <shenglei.zhang@intel.com>
Reviewed-by: Eric Dong <eric.dong@intel.com>
Reviewed-by: Liming Gao <liming.gao@intel.com>
A .nasm file has been added for the X86 arch. The .S assembly code
is not required any more.
https://bugzilla.tianocore.org/show_bug.cgi?id=1594
Cc: Michael D Kinney <michael.d.kinney@intel.com>
Cc: Liming Gao <liming.gao@intel.com>
Contributed-under: TianoCore Contribution Agreement 1.1
Signed-off-by: Shenglei Zhang <shenglei.zhang@intel.com>
Reviewed-by: Eric Dong <eric.dong@intel.com>
Reviewed-by: Liming Gao <liming.gao@intel.com>
A .nasm file has been added for the X86 arch. The .S assembly code
is not required any more.
https://bugzilla.tianocore.org/show_bug.cgi?id=1594
Cc: Michael D Kinney <michael.d.kinney@intel.com>
Cc: Liming Gao <liming.gao@intel.com>
Contributed-under: TianoCore Contribution Agreement 1.1
Signed-off-by: Shenglei Zhang <shenglei.zhang@intel.com>
Reviewed-by: Eric Dong <eric.dong@intel.com>
Reviewed-by: Liming Gao <liming.gao@intel.com>
The prompt and help information are missing in UefiCpuPkg.uni.
https://bugzilla.tianocore.org/show_bug.cgi?id=1600
v3: The changes in v1 are duplicated, so update the info.
Cc: Eric Dong <eric.dong@intel.com>
Cc: Ray Ni <ray.ni@intel.com>
Contributed-under: TianoCore Contribution Agreement 1.1
Signed-off-by: Shenglei Zhang <shenglei.zhang@intel.com>
Reviewed-by: Eric Dong <eric.dong@intel.com>
BZ: https://bugzilla.tianocore.org/show_bug.cgi?id=1621
According to the Intel SDM quote below, BIT0 should be treated as the
lock bit, and BIT1 should be treated as the disable(1)/enable(0) bit.
"11b: AES instructions are not available until next
RESET.
Otherwise, AES instructions are available.
If the configuration is not 01b, AES
instructions can be mis-configured if a privileged agent
unintentionally writes 11b"
Cc: Laszlo Ersek <lersek@redhat.com>
Cc: Eric Dong <eric.dong@intel.com>
Cc: Ruiyu Ni <ruiyu.ni@intel.com>
Cc: Chandana Kumar <chandana.c.kumar@intel.com>
Contributed-under: TianoCore Contribution Agreement 1.1
Signed-off-by: Star Zeng <star.zeng@intel.com>
Reviewed-by: Eric Dong <eric.dong@intel.com>
REF: https://bugzilla.tianocore.org/show_bug.cgi?id=1020
We should make sure that the TotalSize of the Microcode is aligned to 4
bytes before calling the CalculateSum32 function.
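A sketch of the check (BaseLib's CalculateSum32() walks the buffer as UINT32
values, so the length must be a multiple of 4; variable names are
illustrative):
  //
  // Treat a non-4-byte-aligned TotalSize as an invalid entry instead of
  // letting CalculateSum32() ASSERT on the unaligned length.
  //
  if (((TotalSize & 0x3) != 0) ||
      (CalculateSum32 ((UINT32 *) MicrocodeEntryPoint, TotalSize) != 0)) {
    CorrectMicrocode = FALSE;   // skip this entry and keep scanning
  }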
Contributed-under: TianoCore Contribution Agreement 1.1
Signed-off-by: Chen A Chen <chen.a.chen@intel.com>
Cc: Ray Ni <ray.ni@intel.com>
Cc: Eric Dong <eric.dong@intel.com>
REF: https://bugzilla.tianocore.org/show_bug.cgi?id=1020
The Microcode region indicated by the MicrocodePatchAddress PCD may contain
more than one Microcode entry. We should save the InCompleteCheckSum32 value
for each payload. Move the logic that calculates InCompleteCheckSum32 from
outside the do-while loop to inside it.
Contributed-under: TianoCore Contribution Agreement 1.1
Signed-off-by: Chen A Chen <chen.a.chen@intel.com>
Cc: Ray Ni <ray.ni@intel.com>
Cc: Eric Dong <eric.dong@intel.com>
Reviewed-by: Ray Ni <ray.ni@intel.com>
REF: https://bugzilla.tianocore.org/show_bug.cgi?id=1576
The root cause of this issue is that the non-stop mode of Heap Guard and
NULL Detection sets the TF bit (single-step) in EFLAGS unconditionally in
the common handler of CpuExceptionHandlerLib.
If PcdCpuSmmStaticPageTable is FALSE, SMM will only create page
tables for memory below 4GB. If SMM tries to access memory above 4GB,
a page fault exception is triggered and the memory to be accessed
is added to the page table so that the SMM code can continue the access.
Because of the above issue, the TF bit is set after the page fault is
handled, and execution then falls into another DEBUG exception. Since the
non-stop mode of Heap Guard and NULL Detection is not enabled, no special
DEBUG exception handler is registered. The default handler just
prints the exception context and goes into a dead loop.
Actually, EFLAGS can be changed in any standard exception handler.
There's no need to do the single-step setup in assembly code. So the fix
is to move the logic to the C code part of the page fault exception handler
so that we can fully validate the configuration and prevent the TF bit
from being set unexpectedly.
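A rough sketch of the idea (the helper name is hypothetical; the real change
lives in the C page fault handler of the exception library):
  //
  // Only arm single-step (TF, RFLAGS bit 8) when non-stop Heap Guard /
  // NULL Detection is actually configured; otherwise leave EFLAGS alone so
  // the SMM on-demand-paging #PF path is not affected.
  //
  if (IsNonStopModeEnabled ()) {    // hypothetical configuration query
    SystemContext.SystemContextX64->Rflags |= BIT8;
  }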
Fixes: dcc026217f16b918bbaf
Test:
- Pass special test of accessing memory beyond 4G in SMM mode
- Boot to OS with Qemu emulator platform (Fedora27, Ubuntu18.04,
Windows7, Windows10)
Cc: Eric Dong <eric.dong@intel.com>
Cc: Laszlo Ersek <lersek@redhat.com>
Cc: Ruiyu Ni <ruiyu.ni@intel.com>
Cc: Star Zeng <star.zeng@intel.com>
Contributed-under: TianoCore Contribution Agreement 1.1
Signed-off-by: Jian J Wang <jian.j.wang@intel.com>
Acked-by: Laszlo Ersek <lersek@redhat.com>
Reviewed-by: Eric Dong <eric.dong@intel.com>
REF: https://bugzilla.tianocore.org/show_bug.cgi?id=1521
We scanned the SMM code with ROPgadget.
http://shell-storm.org/project/ROPgadget/
https://github.com/JonathanSalwan/ROPgadget/tree/master
This tool reports gadgets in the SMM drivers.
This patch enables CET ShadowStack for X86 SMM.
If CET is supported, SMM will enable the CET ShadowStack.
SMM CET will save the OS CET context at SmmEntry and
restore the OS CET context at SmmExit.
Test:
1) test Intel internal platform (x64 only, CET enabled/disabled)
Boot test:
CET supported or not supported CPU
on CET supported platform
CET enabled/disabled
PcdCpuSmmCetEnable enabled/disabled
Single core/Multiple core
PcdCpuSmmStackGuard enabled/disabled
PcdCpuSmmProfileEnable enabled/disabled
PcdCpuSmmStaticPageTable enabled/disabled
CET exception test:
#CF generated with PcdCpuSmmStackGuard enabled/disabled.
Other exception test:
#PF for normal stack overflow
#PF for NX protection
#PF for RO protection
CET env test:
Launch SMM in CET enabled/disabled environment (DXE) - no impact to DXE
The test case can be found at
https://github.com/jyao1/SecurityEx/tree/master/ControlFlowPkg
2) test ovmf (both IA32 and X64 SMM, CET disabled only)
test OvmfIa32/Ovmf3264, with -D SMM_REQUIRE.
qemu-system-x86_64.exe -machine q35,smm=on -smp 4
-serial file:serial.log
-drive if=pflash,format=raw,unit=0,file=OVMF_CODE.fd,readonly=on
-drive if=pflash,format=raw,unit=1,file=OVMF_VARS.fd
QEMU emulator version 3.1.0 (v3.1.0-11736-g7a30e7adb0-dirty)
3) not tested
IA32 CET enabled platform
Cc: Eric Dong <eric.dong@intel.com>
Cc: Ray Ni <ray.ni@intel.com>
Cc: Laszlo Ersek <lersek@redhat.com>
Contributed-under: TianoCore Contribution Agreement 1.1
Signed-off-by: Yao Jiewen <jiewen.yao@intel.com>
Reviewed-by: Ray Ni <ray.ni@intel.com>
Regression-tested-by: Laszlo Ersek <lersek@redhat.com>
REF: https://bugzilla.tianocore.org/show_bug.cgi?id=1521
Add information dump for Control Protection exception.
Cc: Eric Dong <eric.dong@intel.com>
Cc: Ray Ni <ray.ni@intel.com>
Cc: Laszlo Ersek <lersek@redhat.com>
Contributed-under: TianoCore Contribution Agreement 1.1
Signed-off-by: Yao Jiewen <jiewen.yao@intel.com>
Reviewed-by: Ray Ni <ray.ni@intel.com>
Regression-tested-by: Laszlo Ersek <lersek@redhat.com>
REF: https://bugzilla.tianocore.org/show_bug.cgi?id=1020
The following Microcode payload format is defined in the SDM.
Payload: |MicrocodeHeader|MicrocodeBinary|ExtendedHeader|ExtendedTable|.
When we verify the CheckSum32 with the ExtendedTable, we should use the
fields of the ExtendedTable to replace the corresponding fields in the
MicrocodeHeader, and then calculate the CheckSum32 over
MicrocodeHeader+MicrocodeBinary.
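A sketch of that substitution (field names follow CPU_MICROCODE_HEADER and
CPU_MICROCODE_EXTENDED_TABLE; the arithmetic is illustrative, not copied from
the patch):
  //
  // Drop the three per-signature header fields from the full-update sum ...
  //
  InCompleteCheckSum32  = CheckSum32;
  InCompleteCheckSum32 -= MicrocodeEntryPoint->ProcessorSignature.Uint32;
  InCompleteCheckSum32 -= MicrocodeEntryPoint->ProcessorFlags;
  InCompleteCheckSum32 -= MicrocodeEntryPoint->Checksum;
  //
  // ... then add back the fields of each extended signature and expect the
  // 32-bit sum to be zero.
  //
  CheckSum32  = InCompleteCheckSum32;
  CheckSum32 += ExtendedTable->ProcessorSignature.Uint32;
  CheckSum32 += ExtendedTable->ProcessorFlag;
  CheckSum32 += ExtendedTable->Checksum;
  if (CheckSum32 != 0) {
    CorrectMicrocode = FALSE;   // extended signature checksum mismatch
  }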
This patch has already been verified on an ICL platform.
Contributed-under: TianoCore Contribution Agreement 1.1
Signed-off-by: Chen A Chen <chen.a.chen@intel.com>
Cc: Ray Ni <ray.ni@intel.com>
Cc: Eric Dong <eric.dong@intel.com>
Cc: Zhang Chao B <chao.b.zhang@intel.com>
Reviewed-by: Ray Ni <ray.ni@intel.com>
Reviewed-by: Eric Dong <eric.dong@intel.com>
REF: https://bugzilla.tianocore.org/show_bug.cgi?id=1533
When SecCore and PeiCore are in different FVs, the current
implementation still assumes that SecCore and PeiCore are in
the same FV.
To fix this issue, the 2 FVs are passed as input parameters to
FindAndReportEntryPoints (), SecCore and PeiCore are found in their
respective FVs, and correct debug information is reported.
Test: Booted successfully on an internal platform.
Cc: Eric Dong <eric.dong@intel.com>
Cc: Ray Ni <ray.ni@intel.com>
Cc: Laszlo Ersek <lersek@redhat.com>
Contributed-under: TianoCore Contribution Agreement 1.1
Signed-off-by: Chasel Chiu <chasel.chiu@intel.com>
Reviewed-by: Ray Ni <ray.ni@intel.com>
REF: https://bugzilla.tianocore.org/show_bug.cgi?id=1481
Today's MtrrLib contains a bug. For example,
when the original cache setting is WB for [0xF_0000, 0xF_8000) and
a new request sets [0xF_0000, 0xF_4000) to WP,
the cache setting for [0xF_4000, 0xF_8000) is reset to UC.
The reason is that when MtrrLibSetBelow1MBMemoryAttribute() is called,
WorkingFixedSettings doesn't contain the actual MSR value stored in
hardware, but when writing the fixed MTRRs, the code logic assumes
WorkingFixedSettings contains the actual MSR value.
The new fix is to change MtrrLibSetBelow1MBMemoryAttribute() to
calculate the correct ClearMasks[] and OrMasks[], and use them
directly when writing the fixed MTRRs.
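The write path can be sketched as below (the fixed-MTRR table and macro names
are the ones MtrrLib uses internally; treat the loop as illustrative):
  //
  // Read the real fixed-MTRR value from hardware and apply the pre-computed
  // masks, instead of trusting WorkingFixedSettings.
  //
  for (Index = 0; Index < MTRR_NUMBER_OF_FIXED_MTRR; Index++) {
    if ((ClearMasks[Index] != 0) || (OrMasks[Index] != 0)) {
      Msr = AsmReadMsr64 (mMtrrLibFixedMtrrTable[Index].Msr);
      AsmWriteMsr64 (
        mMtrrLibFixedMtrrTable[Index].Msr,
        (Msr & ~ClearMasks[Index]) | OrMasks[Index]
        );
    }
  }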
Contributed-under: TianoCore Contribution Agreement 1.1
Signed-off-by: Ray Ni <ray.ni@intel.com>
Reviewed-by: Eric Dong <eric.dong@intel.com>
Reviewed-by: Chasel Chiu <chasel.chiu@intel.com>
REF: https://bugzilla.tianocore.org/show_bug.cgi?id=1524
The previous commit 373c2c5b88 missed one comment change that should be
fixed.
Cc: Eric Dong <eric.dong@intel.com>
Cc: Ray Ni <ray.ni@intel.com>
Cc: Laszlo Ersek <lersek@redhat.com>
Contributed-under: TianoCore Contribution Agreement 1.1
Signed-off-by: Chasel Chiu <chasel.chiu@intel.com>
Reviewed-by: Star Zeng <star.zeng@intel.com>
REF: https://bugzilla.tianocore.org/show_bug.cgi?id=1524
EFI_PEI_CORE_FV_LOCATION_PPI may be passed by the platform
when PeiCore is not in the BFV, so SecCore has to search for PeiCore
either in the FV location provided by
EFI_PEI_CORE_FV_LOCATION_PPI or in the BFV.
Test: Verified on an internal platform; it boots successfully.
Cc: Eric Dong <eric.dong@intel.com>
Cc: Ray Ni <ray.ni@intel.com>
Cc: Laszlo Ersek <lersek@redhat.com>
Contributed-under: TianoCore Contribution Agreement 1.1
Signed-off-by: Chasel Chiu <chasel.chiu@intel.com>
Reviewed-by: Ray Ni <ray.ni@intel.com>
Reviewed-by: Star Zeng <star.zeng@intel.com>
The AcquireSpinLock function may call GetPerformanceCounter, which
finally calls the PeiServices table. This code may also be used by an AP,
but an AP should not call PeiServices. This patch updates the code to
avoid using the AcquireSpinLock function.
BZ: https://bugzilla.tianocore.org/show_bug.cgi?id=1411
Cc: Ruiyu Ni <ruiyu.ni@intel.com>
Cc: Laszlo Ersek <lersek@redhat.com>
Contributed-under: TianoCore Contribution Agreement 1.1
Signed-off-by: Eric Dong <eric.dong@intel.com>
Reviewed-by: Ray Ni <ray.ni@intel.com>
((Facs->Flags & EFI_ACPI_4_0_OSPM_64BIT_WAKE__F) != 0))
In the above code, Facs->OspmFlags should be used instead.
EFI_ACPI_4_0_OSPM_64BIT_WAKE__F is a bit in the OSPM Enabled Firmware
Control Structure Flags field, not in the Firmware Control Structure
Feature Flags field.
BZ: https://bugzilla.tianocore.org/show_bug.cgi?id=1430
Cc: Aleksiy <oleksiyy@ami.com>
Cc: Ray Ni <ray.ni@intel.com>
Cc: Laszlo Ersek <lersek@redhat.com>
Contributed-under: TianoCore Contribution Agreement 1.1
Signed-off-by: Eric Dong <eric.dong@intel.com>
Reviewed-by: Laszlo Ersek <lersek@redhat.com>
Regression-tested-by: Laszlo Ersek <lersek@redhat.com>
V3:
Define a union to specify the PPI or protocol.
V2:
1. Initialize CpuFeaturesData->MpService in CpuInitDataInitialize
and make this function be called at the beginning of the
initialization.
2. Let all other functions use CpuFeaturesData->MpService instead
of locating the protocol themselves.
V1:
The GetProcessorIndex function calls GetMpPpi to get the MP PPI.
An AP will call the GetProcessorIndex function, which finally lets the AP
call PeiServices.
This patch avoids GetProcessorIndex calling PeiServices.
BZ: https://bugzilla.tianocore.org/show_bug.cgi?id=1411
Contributed-under: TianoCore Contribution Agreement 1.1
Signed-off-by: Eric Dong <eric.dong@intel.com>
Reviewed-by: Ray Ni <ray.ni@intel.com>
Enhance the debug message format to make the messages easier to read.
BZ: https://bugzilla.tianocore.org/show_bug.cgi?id=1411
Contributed-under: TianoCore Contribution Agreement 1.1
Signed-off-by: Eric Dong <eric.dong@intel.com>
Reviewed-by: Ray Ni <ray.ni@intel.com>
REF:https://bugzilla.tianocore.org/show_bug.cgi?id=1091
Previously, when compiling NASM source files, BaseTools did not support
including files outside of the NASM source file directory. As a result, we
duplicated multiple copies of "StuffRsb.inc" files in UefiCpuPkg. Those
INC files contain the common logic to stuff the Return Stack Buffer and
are identical.
After the fix of BZ 1085:
https://bugzilla.tianocore.org/show_bug.cgi?id=1085
The above support was introduced.
Thus, this commit will merge all the StuffRsb.inc files in UefiCpuPkg into
one file. The merged file will be named 'StuffRsbNasm.inc' and be placed
under folder UefiCpuPkg/Include/.
Cc: Ruiyu Ni <ruiyu.ni@intel.com>
Contributed-under: TianoCore Contribution Agreement 1.1
Signed-off-by: Hao Wu <hao.a.wu@intel.com>
Reviewed-by: Eric Dong <eric.dong@intel.com>
Reviewed-by: Laszlo Ersek <lersek@redhat.com>
REF:https://bugzilla.tianocore.org/show_bug.cgi?id=1417
Since the BaseLib API AsmLfence() is an x86-arch-specific API and its use
should be avoided in generic code, this commit replaces the usage of
AsmLfence() with the arch-generic API SpeculationBarrier().
Cc: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Cc: Jiewen Yao <jiewen.yao@intel.com>
Cc: Liming Gao <liming.gao@intel.com>
Cc: Ruiyu Ni <ruiyu.ni@intel.com>
Cc: Laszlo Ersek <lersek@redhat.com>
Contributed-under: TianoCore Contribution Agreement 1.1
Signed-off-by: Hao Wu <hao.a.wu@intel.com>
Reviewed-by: Eric Dong <eric.dong@intel.com>
*Excpetion* should be *Exception*
Contributed-under: TianoCore Contribution Agreement 1.1
Signed-off-by: Mike Maslenkin <mike.maslenkin@gmail.com>
CC: Eric Dong <eric.dong@intel.com>
CC: Laszlo Ersek <lersek@redhat.com>
Reviewed-by: Eric Dong <eric.dong@intel.com>
REF: https://bugzilla.tianocore.org/show_bug.cgi?id=1305
The patch reverts commit 1ed6498c4a:
* UefiCpuPkg/CommonFeature: Skip locking when the feature is disabled
The FEATURE_CONTROL.Lock bit is controlled by the feature
CPU_FEATURE_LOCK_FEATURE_CONTROL_REGISTER. The commit 1ed649 fixed
a bug so that when the feature is disabled, the Lock bit is cleared.
But it's a security hole if the bit is cleared when booting the OS.
We can argue that the platform needs to make sure the value
of PcdCpuFeaturesUserConfiguration is set properly to make
sure the feature CPU_FEATURE_LOCK_FEATURE_CONTROL_REGISTER is enabled.
But it's better to guarantee this in the generic core code.
Contributed-under: TianoCore Contribution Agreement 1.1
Signed-off-by: Ruiyu Ni <ruiyu.ni@intel.com>
Reviewed-by: Eric Dong <eric.dong@intel.com>
Acked-by: Laszlo Ersek <lersek@redhat.com>
Cc: Andrew Fish <afish@apple.com>
Cc: Leif Lindholm <leif.lindholm@linaro.org>
Cc: Michael D Kinney <michael.d.kinney@intel.com>
In the current implementation, core and package level sync use the same
semaphores. Sharing the semaphores may cause a wrong execution order.
For example:
1. Feature A has a CPU_FEATURE_CORE_BEFORE dependency on Feature B.
2. Feature C has a CPU_FEATURE_PACKAGE_AFTER dependency on Feature B.
The expected feature initialization order is A B C:
A ---- (Core Depends) ----> B ---- (Package Depends) ----> C
For a CPU that has 1 package, 2 cores and 4 threads, the feature
initialization order may look like below:
Thread#1         Thread#2         Thread#3    Thread#4
[A.Init]         [A.Init]                     [A.Init]
Release(S1, S2)  Release(S1, S2)              Release(S3, S4)
Wait(S1) * 2     Wait(S2) * 2  <------------------------------- Core sync
[B.Init]         [B.Init]
Release (S1,S2,S3,S4)
Wait (S1) * 4  <----------------------------------------------- Package sync
                                              Wait(S4) * 2  <-- Core sync
                                              [B.Init]
In the above case, for thread #4, when it syncs at core level, Wait(S4) * 2
isn't blocked and [B.Init] runs. But [A.Init] hasn't run yet in thread #3.
That's wrong! Thread #4 should execute [B.Init] after thread #3 executes
[A.Init], because B depends on A at the core level.
The reason for the wrong execution order is that S4 is released in thread #1
by calling Release (S1, S2, S3, S4) and in thread #4 by calling
Release (S3, S4).
To fix this issue, core level sync and package level sync should use separate
semaphores.
In the above example, the S4 released in Release (S1, S2, S3, S4) should not
be the same semaphore as the one in Release (S3, S4).
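The direction of the fix can be sketched as below (field names are
hypothetical; the point is that core sync and package sync no longer share
counters):
  typedef struct {
    //
    // One counter array per sync level, so a package-level Release() can
    // never satisfy a core-level Wait() on another thread.
    //
    UINT32    *CoreSemaphoreCount;      // used only for core-scope sync
    UINT32    *PackageSemaphoreCount;   // used only for package-scope sync
  } CPU_FEATURES_SYNC_DATA;             // hypothetical container name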
Related BZ: https://bugzilla.tianocore.org/show_bug.cgi?id=1311
Cc: Laszlo Ersek <lersek@redhat.com>
Cc: Ruiyu Ni <ruiyu.ni@intel.com>
Contributed-under: TianoCore Contribution Agreement 1.1
Signed-off-by: Eric Dong <eric.dong@intel.com>
Reviewed-by: Ruiyu Ni <ruiyu.ni@intel.com>
Acked-by: Laszlo Ersek <lersek@redhat.com>
In the current implementation, core and package level sync use the same
semaphores. Sharing the semaphores may cause a wrong execution order.
For example:
1. Feature A has a CPU_FEATURE_CORE_BEFORE dependency on Feature B.
2. Feature C has a CPU_FEATURE_PACKAGE_AFTER dependency on Feature B.
The expected feature initialization order is A B C:
A ---- (Core Depends) ----> B ---- (Package Depends) ----> C
For a CPU that has 1 package, 2 cores and 4 threads, the feature
initialization order may look like below:
Thread#1         Thread#2         Thread#3    Thread#4
[A.Init]         [A.Init]                     [A.Init]
Release(S1, S2)  Release(S1, S2)              Release(S3, S4)
Wait(S1) * 2     Wait(S2) * 2  <------------------------------- Core sync
[B.Init]         [B.Init]
Release (S1,S2,S3,S4)
Wait (S1) * 4  <----------------------------------------------- Package sync
                                              Wait(S4) * 2  <-- Core sync
                                              [B.Init]
In the above case, for thread #4, when it syncs at core level, Wait(S4) * 2
isn't blocked and [B.Init] runs. But [A.Init] hasn't run yet in thread #3.
That's wrong! Thread #4 should execute [B.Init] after thread #3 executes
[A.Init], because B depends on A at the core level.
The reason for the wrong execution order is that S4 is released in thread #1
by calling Release (S1, S2, S3, S4) and in thread #4 by calling
Release (S3, S4).
To fix this issue, core level sync and package level sync should use separate
semaphores.
In the above example, the S4 released in Release (S1, S2, S3, S4) should not
be the same semaphore as the one in Release (S3, S4).
Related BZ: https://bugzilla.tianocore.org/show_bug.cgi?id=1311
Cc: Laszlo Ersek <lersek@redhat.com>
Cc: Ruiyu Ni <ruiyu.ni@intel.com>
Contributed-under: TianoCore Contribution Agreement 1.1
Signed-off-by: Eric Dong <eric.dong@intel.com>
Reviewed-by: Ruiyu Ni <ruiyu.ni@intel.com>
Acked-by: Laszlo Ersek <lersek@redhat.com>
V2 changes:
The V1 change caused a regression by changing the feature order.
V2 changes the logic to detect dependencies not only between
neighboring features; it needs to check all features in the list.
V1 Changes:
In the current code logic, a feature's position is only adjusted if the
current CPU feature position does not follow the requested order. For
example, feature A needs to be executed before feature B, but
feature A is registered after feature B. The code will adjust the
position of feature A and move it to just before feature B. If
the position already meets the requirement, the code will not adjust
it.
This logic has an issue when all of the below conditions are met:
1. Feature A has a core or package level dependency on feature B.
2. Feature A is registered before feature B.
3. Other features exist between feature A and B.
The root cause is that the driver ignores the dependency in this case, so
threads may not execute in the dependency order.
Fix this issue by changing the code logic to adjust the positions of CPU
features which have a dependency relationship.
Related BZ: https://bugzilla.tianocore.org/show_bug.cgi?id=1311
Cc: Laszlo Ersek <lersek@redhat.com>
Cc: Ruiyu Ni <ruiyu.ni@intel.com>
Contributed-under: TianoCore Contribution Agreement 1.1
Signed-off-by: Eric Dong <eric.dong@intel.com>
Reviewed-by: Ruiyu Ni <Ruiyu.ni@intel.com>
When static paging is disabled, the page table for memory below 4GB is
created, and the page table for memory above 4GB is created dynamically
in the page fault handler.
Today's implementation only allows SMM access-out to the below types of
memory address, no matter whether static paging is enabled or not:
1. Reserved, runtime and ACPI NVS type
2. MMIO
But certain platform features like RAS may need to access other types
of memory from SMM. Today's code blocks these platforms.
This patch simplifies the policy to only block the access when static paging
is used, so that static paging can be disabled on these platforms
to meet their SMM access-out needs.
Setting PcdCpuSmmStaticPageTable to FALSE disables static paging.
Contributed-under: TianoCore Contribution Agreement 1.1
Signed-off-by: Jiewen Yao <jiewen.yao@intel.com>
Signed-off-by: Ruiyu Ni <ruiyu.ni@intel.com>
Cc: Eric Dong <eric.dong@intel.com>
Reviewed-by: Jiewen Yao <jiewen.yao@intel.com>
Acked-by: Laszlo Ersek <lersek@redhat.com>
REF: https://bugzilla.tianocore.org/show_bug.cgi?id=1305
Today's code unconditionally sets IA32_FEATURE_CONTROL.Lock to 1
no matter whether the feature is enabled or not.
The patch fixes this issue by only setting the Lock bit to 1 when
the feature is enabled.
Contributed-under: TianoCore Contribution Agreement 1.1
Signed-off-by: Ruiyu Ni <ruiyu.ni@intel.com>
Reviewed-by: Eric Dong <eric.dong@intel.com>
Cc: Laszlo Ersek <lersek@redhat.com>
In some special cases, after the BSP sends the INIT-SIPI-SIPI signal,
an AP needs more time to start the AP procedure. In this case
the BSP may think the AP has finished its task, but in fact the AP hasn't
even begun yet.
Roll back the former change to keep the status that is only used
when the AP has really finished its task.
Cc: Ruiyu Ni <ruiyu.ni@intel.com>
Contributed-under: TianoCore Contribution Agreement 1.1
Signed-off-by: Eric Dong <eric.dong@intel.com>
Reviewed-by: Ruiyu Ni <ruiyu.ni@intel.com>
Build UefiCpuPkg with below configuration:
Architecture(s) = IA32
Build target = NOOPT
Toolchain = VS2015x86
Below error info shows up:
DxeRegisterCpuFeaturesLib.lib(CpuFeaturesInitialize.obj) :
error LNK2001: unresolved external symbol __allmul
mDependTypeStr only has 5 valid items; use a UINT32 type cast
to fix this error.
Cc: Dandan Bi <dandan.bi@intel.com>
Cc: Ruiyu Ni <ruiyu.ni@intel.com>
Contributed-under: TianoCore Contribution Agreement 1.1
Signed-off-by: Eric Dong <eric.dong@intel.com>
Reviewed-by: Ruiyu Ni <ruiyu.ni@intel.com>
Index is initialized to MAX_UINT16 as default failure value, which
is what the ASSERT is supposed to test for. The ASSERT condition
however can never return FALSE for INT16 != int, as due to
Integer Promotion[1], Index is converted to int, which can never
result in -1.
Furthermore, Index is used as a for loop index variable in between its
initialization and the ASSERT, so the value is unconditionally
overwritten too.
Fix the ASSERT check to compare Index to its upper boundary, which it
will be equal to if the loop was not broken out of on success.
[1] ISO/IEC 9899:2011, 6.5.9.4
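A sketch of the corrected check (table and variable names are hypothetical;
the point is to compare against the loop bound rather than against a value
the promoted Index can never take):
  for (Index = 0; Index < ARRAY_SIZE (mHypotheticalTable); Index++) {
    if (mHypotheticalTable[Index] == Target) {
      break;                                         // found: Index < bound
    }
  }
  ASSERT (Index < ARRAY_SIZE (mHypotheticalTable));  // instead of != -1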
Contributed-under: TianoCore Contribution Agreement 1.1
Signed-off-by: Marvin Haeuser <Marvin.Haeuser@outlook.com>
Reviewed-by: Eric Dong <eric.dong@intel.com>
Index is initialized to MAX_UINT16 as default failure value, which
is what the ASSERT is supposed to test for. The ASSERT condition
however can never return FALSE for INT16 != int, as due to
Integer Promotion[1], Index is converted to int, which can never
result in -1.
Furthermore, Index is used as a for loop index variable in between its
initialization and the ASSERT, so the value is unconditionally
overwritten too.
Fix the ASSERT check to compare Index to its upper boundary, which it
will be equal to if the loop was not broken out of on success.
[1] ISO/IEC 9899:2011, 6.5.9.4
Contributed-under: TianoCore Contribution Agreement 1.1
Signed-off-by: Marvin Haeuser <Marvin.Haeuser@outlook.com>
Reviewed-by: Eric Dong <eric.dong@intel.com>
The current code assumes only one dependency (before or after) for one
feature. Enhance the code logic to support a feature that has both
dependency types (before and after).
Cc: Ruiyu Ni <ruiyu.ni@intel.com>
Cc: Laszlo Ersek <lersek@redhat.com>
Contributed-under: TianoCore Contribution Agreement 1.1
Signed-off-by: Eric Dong <eric.dong@intel.com>
Reviewed-by: Ruiyu Ni <ruiyu.ni@intel.com>
A local variable initialized inside a called function can't be correctly
detected by the build tool. Add code to explicitly initialize the local
variable before using it.
Cc: Ruiyu Ni <ruiyu.ni@intel.com>
Cc: Laszlo Ersek <lersek@redhat.com>
Cc: Dandan Bi <dandan.bi@intel.com>
Contributed-under: TianoCore Contribution Agreement 1.1
Signed-off-by: Eric Dong <eric.dong@intel.com>
Reviewed-by: Ruiyu Ni <ruiyu.ni@intel.com>
Remove extra white space at the end of line.
Cc: Ruiyu Ni <ruiyu.ni@intel.com>
Cc: Laszlo Ersek <lersek@redhat.com>
Cc: Dandan Bi <dandan.bi@intel.com>
Contributed-under: TianoCore Contribution Agreement 1.1
Signed-off-by: Eric Dong <eric.dong@intel.com>
Reviewed-by: Ruiyu Ni <ruiyu.ni@intel.com>
Changes include:
1. Remove extra white space at the end of line.
2. Add comments for the newly added function parameter.
3. Update the IN OUT tags for function parameters.
Cc: Dandan Bi <dandan.bi@intel.com>
Cc: Ruiyu Ni <ruiyu.ni@intel.com>
Cc: Laszlo Ersek <lersek@redhat.com>
Contributed-under: TianoCore Contribution Agreement 1.1
Signed-off-by: Eric Dong <eric.dong@intel.com>
Reviewed-by: Ruiyu Ni <ruiyu.ni@intel.com>
Remove extra white space at the end of line.
Cc: Ruiyu Ni <ruiyu.ni@intel.com>
Cc: Laszlo Ersek <lersek@redhat.com>
Cc: Dandan Bi <dandan.bi@intel.com>
Contributed-under: TianoCore Contribution Agreement 1.1
Signed-off-by: Eric Dong <eric.dong@intel.com>
Reviewed-by: Ruiyu Ni <ruiyu.ni@intel.com>
A local variable initialized inside a called function can't be correctly
detected by the build tool. Add code to explicitly initialize the local
variable before using it.
Cc: Ruiyu Ni <ruiyu.ni@intel.com>
Cc: Laszlo Ersek <lersek@redhat.com>
Cc: Dandan Bi <dandan.bi@intel.com>
Contributed-under: TianoCore Contribution Agreement 1.1
Signed-off-by: Eric Dong <eric.dong@intel.com>
Reviewed-by: Ruiyu Ni <ruiyu.ni@intel.com>
The freed-memory guard feature causes a recursive call
of InitializePageTablePool(). This is due to the fact that
AllocateAlignedPages() is used to allocate page table pool memory.
This function will most likely call gBS->FreePages to free unaligned
pages and then cause another round of page attribute changes, like
below (freed pages will always be marked not-present if the freed-memory
guard is enabled):
FreePages() <===============|
=> CpuSetMemoryAttributes() |
=> <if out of page table> |
=> InitializePageTablePool() |
=> AllocateAlignedPages() |
=> FreePages() ================|
The solution is to add a global variable as a lock in the page table pool
allocation function and fail any other request while the allocation has
not yet completed.
Since this issue only happens if the freed-memory guard is enabled,
it won't affect CpuSetMemoryAttributes() in a default build of a BIOS.
If the freed-memory guard is enabled, it only affects the pages
freed in AllocateAlignedPages() (they fail to be marked as not-present).
Since those freed pages haven't been used yet (their addresses are not
yet exposed to code outside AllocateAlignedPages), it won't compromise
the freed-memory guard feature.
This change will just fail the CpuSetMemoryAttributes() called from
FreePages(), but it won't fail the FreePages() itself. So the error status
won't be propagated to upper layers of code.
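A sketch of the lock (variable name and function shape are assumptions about
the page table pool code, not a verbatim quote of the patch):
  STATIC BOOLEAN  mPageTablePoolLock = FALSE;

  BOOLEAN
  InitializePageTablePool (
    IN UINTN  PoolPages
    )
  {
    if (mPageTablePoolLock) {
      //
      // Re-entered from FreePages() during AllocateAlignedPages(): refuse
      // the request so the recursion stops; only the guard marking of those
      // freed pages is skipped.
      //
      return FALSE;
    }
    mPageTablePoolLock = TRUE;
    //
    // ... AllocateAlignedPages() and pool bookkeeping ...
    //
    mPageTablePoolLock = FALSE;
    return TRUE;
  }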
Cc: Laszlo Ersek <lersek@redhat.com>
Cc: Star Zeng <star.zeng@intel.com>
Cc: Michael D Kinney <michael.d.kinney@intel.com>
Cc: Jiewen Yao <jiewen.yao@intel.com>
Cc: Ruiyu Ni <ruiyu.ni@intel.com>
Contributed-under: TianoCore Contribution Agreement 1.1
Signed-off-by: Jian J Wang <jian.j.wang@intel.com>
Reviewed-by: Eric Dong <eric.dong@intel.com>
Non-stop mode was introduced / explained in commit 8f2613628a
("MdeModulePkg/MdeModulePkg.dec: add new settings for PCDs",
2018-08-30).
The macro HEAP_GUARD_NONSTOP_MODE was added to CpuDxe in commit
dcc026217f ("UefiCpuPkg/CpuDxe: implement non-stop mode for uefi",
2018-08-30).
Another instance of the macro HEAP_GUARD_NONSTOP_MODE was added to
PiSmmCpuDxeSmm -- with BIT1|BIT0 replaced with BIT3|BIT2 -- in commit
09afd9a42a ("UefiCpuPkg/PiSmmCpuDxeSmm: implement non-stop mode for
SMM", 2018-08-30)
Since the freed-memory guard is for UEFI only, this patch only updates
HEAP_GUARD_NONSTOP_MODE in "UefiCpuPkg/CpuDxe/CpuDxe.h" (adding BIT4).
Cc: Laszlo Ersek <lersek@redhat.com>
Cc: Star Zeng <star.zeng@intel.com>
Cc: Michael D Kinney <michael.d.kinney@intel.com>
Cc: Jiewen Yao <jiewen.yao@intel.com>
Cc: Ruiyu Ni <ruiyu.ni@intel.com>
Contributed-under: TianoCore Contribution Agreement 1.1
Signed-off-by: Jian J Wang <jian.j.wang@intel.com>
Reviewed-by: Eric Dong <eric.dong@intel.com>
Because MSRs have a scope attribute, the driver has no need to set an
MSR on all APs if the MSR scope is core or package type. This patch
updates the code to add MSRs to the register table based on the MSR
scope value.
Cc: Ruiyu Ni <ruiyu.ni@intel.com>
Cc: Laszlo Ersek <lersek@redhat.com>
Contributed-under: TianoCore Contribution Agreement 1.1
Signed-off-by: Eric Dong <eric.dong@intel.com>
Reviewed-by: Ruiyu Ni <ruiyu.ni@intel.com>
V4 changes:
1. Add serial console logs for the different threads when programming the
register table.
2. Check AcpiCpuData before using it to avoid a potential ASSERT.
V3 changes:
1. Use global variables instead of internal functions to return the strings
for the register type and the dependence type.
2. Add comments for some complicated logic.
V1 changes:
Because this driver needs to set the MSRs saved in the normal boot phase,
sync the semaphore logic from the RegisterCpuFeaturesLib code used in the
normal boot phase. For details, see the change below for
RegisterCpuFeaturesLib:
UefiCpuPkg/RegisterCpuFeaturesLib: Add logic to support semaphore type.
Cc: Ruiyu Ni <ruiyu.ni@intel.com>
Cc: Laszlo Ersek <lersek@redhat.com>
Contributed-under: TianoCore Contribution Agreement 1.1
Signed-off-by: Eric Dong <eric.dong@intel.com>
Reviewed-by: Ruiyu Ni <ruiyu.ni@intel.com>
Acked-by: Laszlo Ersek <lersek@redhat.com>
Regression-tested-by: Laszlo Ersek <lersek@redhat.com>
V4 changes include:
1. Add serial debug messages for the different threads when programming the
register table.
V3 changes include:
1. Use global variables instead of internal functions to return the strings
for the register type and the dependence type.
2. Add comments for some complicated logic.
V2 changes include:
1. Add more description for the code parts which need to be easy to
understand.
2. Refine some code based on feedback for the V1 changes.
V1 changes include:
In a system which has multiple cores, the current set-register-value task
costs a huge amount of time. After investigation, the set-MSR task costs
most of that time. The current logic uses a SpinLock to make the set-MSR
task a single-threaded task across all cores. Because MSRs have a scope
attribute, and a GP fault may occur if multiple APs set an MSR at the same
time, the current logic uses the easiest solution (a SpinLock) to avoid
this issue, but it costs a huge amount of time.
In order to fix this performance issue, the new solution sets MSRs based on
their scope attribute. After this, the SpinLock is not needed. Without the
SpinLock, a new issue arises which is caused by MSR dependences. For
example, MSR A depends on MSR B, which means MSR A must be set after MSR B
has been set. Also, MSR B is package scope level and MSR A is thread scope
level. If the system has multiple threads, thread 1 needs to set the thread
level MSRs and thread 2 needs to set the thread and package level MSRs. The
set-MSRs tasks for thread 1 and thread 2 are like below:
          Thread 1    Thread 2
MSR B        N           Y
MSR A        Y           Y
If the driver doesn't control the MSR execution order, thread 1 will
execute MSR A first, but at that time MSR B has not been set yet by
thread 2, and the system may trigger an exception.
In order to fix the above issue, the driver introduces semaphore logic to
control the MSR execution sequence. For the above case, a semaphore is
added between MSR A and B for all threads. A semaphore carries scope info;
the possible scope values are core and package. For each thread, when it
meets a semaphore while setting registers, it will 1) release the semaphore
(+1) for each thread in this core or package (based on the scope info of
this semaphore), and 2) acquire the semaphore (-1) for all the threads in
this core or package (based on the scope info of this semaphore). With
these two steps, the driver can control the MSR sequence. Sample code logic
is like below:
//
// First increase the semaphore count by 1 for the processors in this package.
//
for (ProcessorIndex = 0; ProcessorIndex < PackageThreadsCount; ProcessorIndex++) {
  LibReleaseSemaphore ((UINT32 *) &SemaphorePtr[PackageOffset + ProcessorIndex]);
}
//
// Second, check whether the count has reached the check number.
//
for (ProcessorIndex = 0; ProcessorIndex < ValidApCount; ProcessorIndex++) {
  LibWaitForSemaphore (&SemaphorePtr[ApOffset]);
}
Platform Requirement:
1. This change requires registering the MSR settings based on the MSR scope
info. If MSRs are still registered for all threads, an exception may be
raised.
Known limitation:
1. The current CpuFeatures driver supports a DXE instance and a PEI
instance. But the semaphore logic requires the APs to execute in async
mode, which is not supported by the PEI driver. So the CpuFeatures PEI
instance does not work after this change. We plan to support async mode
for PEI in phase 2 of this task.
Cc: Ruiyu Ni <ruiyu.ni@intel.com>
Cc: Laszlo Ersek <lersek@redhat.com>
Contributed-under: TianoCore Contribution Agreement 1.1
Signed-off-by: Eric Dong <eric.dong@intel.com>
Reviewed-by: Ruiyu Ni <ruiyu.ni@intel.com>
V3 changes:
1. Move the CPU_FEATURE_DEPENDENCE_TYPE definition here from the
RegisterCpuFeaturesLib.h file.
2. Add an Invalid type to REGISTER_TYPE, which will be used in the code.
V2 changes:
1. Add more description about why we do this change.
2. Change the structure field type from a pointer to EFI_PHYSICAL_ADDRESS
because it will be shared between PEI and DXE.
V1 changes:
In order to support the semaphore related logic, add new definitions for it.
In a system which has multiple cores, the current set-register-value task
costs a huge amount of time. After investigation, the set-MSR task costs
most of that time. The current logic uses a SpinLock to make the set-MSR
task a single-threaded task across all cores. Because MSRs have a scope
attribute, and a GP fault may occur if multiple APs set an MSR at the same
time, the current logic uses the easiest solution (a SpinLock) to avoid
this issue, but it costs a huge amount of time.
In order to fix this performance issue, the new solution sets MSRs based on
their scope attribute. After this, the SpinLock is not needed. Without the
SpinLock, a new issue arises which is caused by MSR dependences. For
example, MSR A depends on MSR B, which means MSR A must be set after MSR B
has been set. Also, MSR B is package scope level and MSR A is thread scope
level. If the system has multiple threads, thread 1 needs to set the thread
level MSRs and thread 2 needs to set the thread and package level MSRs. The
set-MSRs tasks for thread 1 and thread 2 are like below:
          Thread 1    Thread 2
MSR B        N           Y
MSR A        Y           Y
If the driver doesn't control the MSR execution order, thread 1 will
execute MSR A first, but at that time MSR B has not been set yet by
thread 2, and the system may trigger an exception.
In order to fix the above issue, the driver introduces semaphore logic to
control the MSR execution sequence. For the above case, a semaphore is
added between MSR A and B for all threads. A semaphore carries scope info;
the possible scope values are core and package. For each thread, when it
meets a semaphore while setting registers, it will 1) release the semaphore
(+1) for each thread in this core or package (based on the scope info of
this semaphore), and 2) acquire the semaphore (-1) for all the threads in
this core or package (based on the scope info of this semaphore). With
these two steps, the driver can control the MSR sequence. Sample code logic
is like below:
//
// First increase the semaphore count by 1 for the processors in this package.
//
for (ProcessorIndex = 0; ProcessorIndex < PackageThreadsCount; ProcessorIndex++) {
  LibReleaseSemaphore ((UINT32 *) &SemaphorePtr[PackageOffset + ProcessorIndex]);
}
//
// Second, check whether the count has reached the check number.
//
for (ProcessorIndex = 0; ProcessorIndex < ValidApCount; ProcessorIndex++) {
  LibWaitForSemaphore (&SemaphorePtr[ApOffset]);
}
Platform Requirement:
1. This change requires registering the MSR settings based on the MSR scope
info. If MSRs are still registered for all threads, an exception may be
raised.
Known limitation:
1. The current CpuFeatures driver supports a DXE instance and a PEI
instance. But the semaphore logic requires the APs to execute in async
mode, which is not supported by the PEI driver. So the CpuFeatures PEI
instance does not work after this change. We plan to support async mode
for PEI in phase 2 of this task.
Cc: Ruiyu Ni <ruiyu.ni@intel.com>
Cc: Laszlo Ersek <lersek@redhat.com>
Contributed-under: TianoCore Contribution Agreement 1.1
Signed-off-by: Eric Dong <eric.dong@intel.com>
Reviewed-by: Ruiyu Ni <ruiyu.ni@intel.com>
Reviewed-by: Laszlo Ersek <lersek@redhat.com>
Regression-tested-by: Laszlo Ersek <lersek@redhat.com>
REF: https://bugzilla.tianocore.org/show_bug.cgi?id=1237
Sometimes the memory is contaminated by random data left from the last
boot (warm reset). The code should not assume the allocated memory is
always filled with zero. This patch adds code to clear the data structure
used for stack switch to prevent such a problem from happening.
Cc: Eric Dong <eric.dong@intel.com>
Cc: Laszlo Ersek <lersek@redhat.com>
Cc: Ruiyu Ni <ruiyu.ni@intel.com>
Contributed-under: TianoCore Contribution Agreement 1.1
Signed-off-by: Jian J Wang <jian.j.wang@intel.com>
Reviewed-by: Eric Dong <eric.dong@intel.com>
Reviewed-by: Laszlo Ersek <lersek@redhat.com>
Reviewed-by: Ruiyu Ni <ruiyu.ni@intel.com>
V5:
1. Add an ASSERT to document the assumption that the environment is 32-bit
mode.
2. Add a description in the INF about this driver's expected result
in different environments.
V4:
Only disable paging when it is enabled.
V3 changes:
No need to change the inf file.
V2 changes:
Only disable paging in 32-bit mode, no matter whether it is enabled or not.
V1 changes:
PEI Stack Guard needs to enable paging. This might cause a #GP if code
tries to write the CR3 register with a PML4 page table while the processor
has PAE paging enabled.
Simply disabling paging before updating CR3 can solve this conflict.
It's a regression caused by change 0a0d5296e4.
BZ: https://bugzilla.tianocore.org/show_bug.cgi?id=1232
Cc: Ruiyu Ni <ruiyu.ni@intel.com>
Cc: Laszlo Ersek <lersek@redhat.com>
Cc: Jian J Wang <jian.j.wang@intel.com>
Contributed-under: TianoCore Contribution Agreement 1.1
Signed-off-by: Eric Dong <eric.dong@intel.com>
Reviewed-by: Laszlo Ersek <lersek@redhat.com>
Reviewed-by: Ruiyu Ni <ruiyu.ni@intel.com>
Regression-tested-by: Laszlo Ersek <lersek@redhat.com>
REF:https://bugzilla.tianocore.org/show_bug.cgi?id=1194
Speculative execution is used by the processor to avoid having to wait for
data to arrive from memory, or for previous operations to finish; the
processor may speculate as to what will be executed.
If the speculation is incorrect, the speculatively executed instructions
might leave hints such as which memory locations have been brought into
cache. Malicious actors can use the bounds check bypass method (code
gadgets with controlled external inputs) to infer data values that have
been used in speculative operations to reveal secrets which should not
otherwise be accessed.
It is possible for SMI handler(s) to call EFI_SMM_CPU_PROTOCOL service
ReadSaveState() and use the content in the 'CommBuffer' (controlled
external inputs) as the 'CpuIndex'. So this commit will insert AsmLfence
API to mitigate the bounds check bypass issue within SmmReadSaveState().
For SmmReadSaveState():
The 'CpuIndex' will be passed into function ReadSaveStateRegister(). And
then into ReadSaveStateRegisterByIndex().
With the call:
ReadSaveStateRegisterByIndex (
CpuIndex,
SMM_SAVE_STATE_REGISTER_IOMISC_INDEX,
sizeof(IoMisc.Uint32),
&IoMisc.Uint32
);
The 'IoMisc' can be a cross boundary access during speculative execution.
Later, 'IoMisc' is used as the index to access buffers 'mSmmCpuIoWidth'
and 'mSmmCpuIoType'. One can observe which part of the content within
those buffers was brought into cache to possibly reveal the value of
'IoMisc'.
Hence, this commit adds an AsmLfence() after the check of 'CpuIndex'
within the function SmmReadSaveState() to prevent the speculative execution.
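The placement can be sketched as below (the surrounding check is paraphrased
from SmmReadSaveState(), so treat it as illustrative):
  //
  // Validate the untrusted CpuIndex first, then fence so speculation cannot
  // run ahead using an out-of-range index.
  //
  if ((CpuIndex >= gSmst->NumberOfCpus) || (Buffer == NULL)) {
    return EFI_INVALID_PARAMETER;
  }
  AsmLfence ();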
A more detailed explanation of the purpose of this commit is under the
'Bounds check bypass mitigation' section of the below link:
https://software.intel.com/security-software-guidance/insights/host-firmware-speculative-execution-side-channel-mitigation
And the document at:
https://software.intel.com/security-software-guidance/api-app/sites/default/files/337879-analyzing-potential-bounds-Check-bypass-vulnerabilities.pdf
Cc: Michael D Kinney <michael.d.kinney@intel.com>
Contributed-under: TianoCore Contribution Agreement 1.1
Signed-off-by: Hao Wu <hao.a.wu@intel.com>
Reviewed-by: Jiewen Yao <jiewen.yao@intel.com>
Reviewed-by: Eric Dong <eric.dong@intel.com>
Acked-by: Laszlo Ersek <lersek@redhat.com>
Regression-tested-by: Laszlo Ersek <lersek@redhat.com>
The PCD below is unused, so it has been removed from the INF.
gUefiCpuPkgTokenSpaceGuid.PcdCpuFeaturesSupport
Cc: Eric Dong <eric.dong@intel.com>
Cc: Laszlo Ersek <lersek@redhat.com>
Contributed-under: TianoCore Contribution Agreement 1.1
Signed-off-by: shenglei <shenglei.zhang@intel.com>
Reviewed-by: Eric Dong <eric.dong@intel.com>
https://bugzilla.tianocore.org/show_bug.cgi?id=967
There was a request to add a library function, GetAcpiTable(), in order
to get an ACPI table using a signature as input.
After evaluation, we found there is much duplicated code to
find an ACPI table by signature in different modules.
This patch updates PiSmmCpuDxeSmm to use the new
EfiLocateFirstAcpiTable() and removes the duplicated code.
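Typical usage after this change looks like the sketch below
(EfiLocateFirstAcpiTable() takes a table signature and returns the first
matching installed ACPI table, or NULL):
  //
  // One call replaces the hand-rolled RSDP/RSDT/XSDT walk.
  //
  Fadt = (EFI_ACPI_2_0_FIXED_ACPI_DESCRIPTION_TABLE *) EfiLocateFirstAcpiTable (
           EFI_ACPI_2_0_FIXED_ACPI_DESCRIPTION_TABLE_SIGNATURE
           );
  if (Fadt == NULL) {
    return;
  }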
Cc: Younas khan <pmdyounaskhan786@gmail.com>
Cc: Michael D Kinney <michael.d.kinney@intel.com>
Cc: Liming Gao <liming.gao@intel.com>
Cc: Jiewen Yao <jiewen.yao@intel.com>
Cc: Ruiyu Ni <ruiyu.ni@intel.com>
Cc: Eric Dong <eric.dong@intel.com>
Cc: Laszlo Ersek <lersek@redhat.com>
Contributed-under: TianoCore Contribution Agreement 1.1
Signed-off-by: Star Zeng <star.zeng@intel.com>
Reviewed-by: Eric Dong <eric.dong@intel.com>
Reviewed-by: Laszlo Ersek <lersek@redhat.com>
V3 changes include:
1. Keep ReservedX unchanged if the bit info for this field did not change.
V2 changes include:
1. Start the X in ReservedX fields from a totally new value if the MSR structure definition changed.
For example, if in the current structure the highest reserved variable is Reserved2, in the new
definition the reserved variables begin with Reserved3.
V1 changes include:
1. Update MSR structure definitions, changing some reserved fields to useful fields:
   1. MSR_XEON_PHI_PKG_CST_CONFIG_CONTROL_REGISTER
   2. MSR_XEON_PHI_SMM_MCA_CAP_REGISTER
2. For the MSR_XEON_PHI_PMG_IO_CAPTURE_BASE_REGISTER structure, expand a field's range.
The old definition is like below:
typedef union {
  ///
  /// Individual bit fields
  ///
  struct {
    ///
    /// [Bits 15:0] LVL_2 Base Address (R/W).
    ///
    UINT32  Lvl2Base:16;
    ///
    /// [Bits 18:16] C-state Range (R/W) Specifies the encoding value of the
    /// maximum C-State code name to be included when IO read to MWAIT
    /// redirection is enabled by MSR_PKG_CST_CONFIG_CONTROL[bit10]: 100b - C4
    /// is the max C-State to include 110b - C6 is the max C-State to include.
    ///
    UINT32  CStateRange:3;
    UINT32  Reserved1:13;
    UINT32  Reserved2:32;
  } Bits;
  ///
  /// All bit fields as a 32-bit value
  ///
  UINT32  Uint32;
  ///
  /// All bit fields as a 64-bit value
  ///
  UINT64  Uint64;
} MSR_XEON_PHI_PMG_IO_CAPTURE_BASE_REGISTER;
This patch makes the below change for this data structure; it expands the
"CStateRange" field width.
Old one:
  UINT32  CStateRange:3;
  UINT32  Reserved1:13;
New one:
  UINT32  CStateRange:7;
  UINT32  Reserved1:9;
Cc: Michael D Kinney <michael.d.kinney@intel.com>
Cc: Ruiyu Ni <ruiyu.ni@intel.com>
Cc: Laszlo Ersek <lersek@redhat.com>
Contributed-under: TianoCore Contribution Agreement 1.1
Signed-off-by: Eric Dong <eric.dong@intel.com>
Reviewed-by: Ruiyu Ni <ruiyu.ni@intel.com>
Acked-by: Laszlo Ersek <lersek@redhat.com>
V3 changes include:
1. Keep ReservedX unchanged if the bit info for this field did not change.
V2 changes include:
1. Start the X in ReservedX fields from a totally new value if the MSR structure definition changed.
For example, if in the current structure the highest reserved variable is Reserved2, in the new
definition the reserved variables begin with Reserved3.
V1 changes include:
1. Change fields which were reserved in the old version: MSR_IA32_RTIT_CTL_REGISTER
Cc: Michael D Kinney <michael.d.kinney@intel.com>
Cc: Ruiyu Ni <ruiyu.ni@intel.com>
Cc: Laszlo Ersek <lersek@redhat.com>
Contributed-under: TianoCore Contribution Agreement 1.1
Signed-off-by: Eric Dong <eric.dong@intel.com>
Reviewed-by: Ruiyu Ni <ruiyu.ni@intel.com>
Acked-by: Laszlo Ersek <lersek@redhat.com>
Changes include:
1. Remove an old MSR which no longer exists in the 2018-05 version of the spec:
   1. MSR_CORE_ROB_CR_BKUPTMPDR6
Cc: Michael D Kinney <michael.d.kinney@intel.com>
Cc: Ruiyu Ni <ruiyu.ni@intel.com>
Cc: Laszlo Ersek <lersek@redhat.com>
Contributed-under: TianoCore Contribution Agreement 1.1
Signed-off-by: Eric Dong <eric.dong@intel.com>
Reviewed-by: Ruiyu Ni <ruiyu.ni@intel.com>
Acked-by: Laszlo Ersek <lersek@redhat.com>
Changes include:
1. Remove an MSR which no longer exists in the 2018-05 version of the spec: MSR_P6_ROB_CR_BKUPTMPDR6.
Cc: Michael D Kinney <michael.d.kinney@intel.com>
Cc: Ruiyu Ni <ruiyu.ni@intel.com>
Cc: Laszlo Ersek <lersek@redhat.com>
Contributed-under: TianoCore Contribution Agreement 1.1
Signed-off-by: Eric Dong <eric.dong@intel.com>
Reviewed-by: Ruiyu Ni <ruiyu.ni@intel.com>
Acked-by: Laszlo Ersek <lersek@redhat.com>
Changes include:
1. Remove an old MSR which no longer exists in the 2018-05 version of the spec:
   1. MSR_CORE2_BBL_CR_CTL3
Cc: Michael D Kinney <michael.d.kinney@intel.com>
Cc: Ruiyu Ni <ruiyu.ni@intel.com>
Cc: Laszlo Ersek <lersek@redhat.com>
Contributed-under: TianoCore Contribution Agreement 1.1
Signed-off-by: Eric Dong <eric.dong@intel.com>
Reviewed-by: Ruiyu Ni <ruiyu.ni@intel.com>
Acked-by: Laszlo Ersek <lersek@redhat.com>
Changes include:
1. Add a new MSR file for the Goldmont Plus microarchitecture.
Cc: Michael D Kinney <michael.d.kinney@intel.com>
Cc: Ruiyu Ni <ruiyu.ni@intel.com>
Cc: Laszlo Ersek <lersek@redhat.com>
Contributed-under: TianoCore Contribution Agreement 1.1
Signed-off-by: Eric Dong <eric.dong@intel.com>
Reviewed-by: Ruiyu Ni <ruiyu.ni@intel.com>
Acked-by: Laszlo Ersek <lersek@redhat.com>
The latest SDM has moved MSR related content from volume 3 chapter 35 to volume 4
chapter 2. The current MSR comments need to be updated to reference the new
chapter info.
Changes include:
1. Update the referenced chapter info for some MSRs.
2. Update the referenced SDM version info.
Cc: Michael D Kinney <michael.d.kinney@intel.com>
Cc: Ruiyu Ni <ruiyu.ni@intel.com>
Cc: Laszlo Ersek <lersek@redhat.com>
Contributed-under: TianoCore Contribution Agreement 1.1
Signed-off-by: Eric Dong <eric.dong@intel.com>
Reviewed-by: Ruiyu Ni <ruiyu.ni@intel.com>
Acked-by: Laszlo Ersek <lersek@redhat.com>
REF: https://bugzilla.tianocore.org/show_bug.cgi?id=1187
The patch reverts 9c8c4478cf
"UefiCpuPkg/MtrrLib: Skip Base MSR access when the pair is invalid".
Microsoft Windows will report an error in event manager if MTRR
usage is different across hibernate, even when the difference is
in a non-valid MTRR pair. This seems like a bug in Windows, but
for compatibility and servicing reasons we think a change in UEFI
would be wise.
A Windows change has already been submitted for the next iteration
(2019 time frame).
Contributed-under: TianoCore Contribution Agreement 1.1
Signed-off-by: Ruiyu Ni <ruiyu.ni@intel.com>
Cc: Michael D Kinney <michael.d.kinney@intel.com>
Reviewed-by: Eric Dong <eric.dong@intel.com>
Cc: Sean Brogan <sean.brogan@microsoft.com>
REF: https://bugzilla.tianocore.org/show_bug.cgi?id=1166
Visual Studio 2012 will complain about an uninitialized variable, StackBase,
in CpuPaging.c. This patch adds code to initialize it to zero and
ASSERT that it is not 0. This is enough, since the uninitialized case can
only happen while retrieving stack memory via gEfiHobMemoryAllocStackGuid,
but this HOB will always be created in advance.
Cc: Dandan Bi <dandan.bi@intel.com>
Cc: Hao A Wu <hao.a.wu@intel.com>
Cc: Eric Dong <eric.dong@intel.com>
Cc: Laszlo Ersek <lersek@redhat.com>
Contributed-under: TianoCore Contribution Agreement 1.1
Signed-off-by: Jian J Wang <jian.j.wang@intel.com>
Reviewed-by: Laszlo Ersek <lersek@redhat.com>
Reviewed-by: Eric Dong <eric.dong@intel.com>
BZ: https://bugzilla.tianocore.org/show_bug.cgi?id=1191
Before commit e21e355e2c, jmp _SmiHandler
is commented out. And in the code below, ASM_PFX(CpuSmmDebugEntry) is moved into rax,
then called. But this code doesn't work with the XCODE5 tool chain, because XCODE5
doesn't generate absolute addresses in the EFI image. So, rax stores a
relative address. Once this logic is moved to another place, it will not work.
; jmp _SmiHandler ; instruction is not needed
...
mov rax, ASM_PFX(CpuSmmDebugEntry)
call rax
Commit e21e355e2c was made to support XCODE5.
A tricky way was selected to fix it. Although the SmiEntry logic is copied to
another place and run there, jmp _SmiHandler is enabled to jump back to the original
code place, then call ASM_PFX(CpuSmmDebugEntry) with the relative address.
mov rax, strict qword 0 ; mov rax, _SmiHandler
_SmiHandlerAbsAddr:
jmp rax
...
call ASM_PFX(CpuSmmDebugEntry)
Now, BZ 1191 raises the issue that SmiHandler should run at the copied address
and can't run at the common address. So, jmp _SmiHandler is required to be removed,
and the code is kept running at the copied address. And, the relative address is
required to be fixed up to the absolute address. The necessary changes should
not affect the behavior of platforms that already consume PiSmmCpuDxeSmm.
OVMF SMM boot to shell with the VS2017, GCC5 and XCODE5 tool chains has been verified.
...
mov rax, strict qword 0 ; call ASM_PFX(CpuSmmDebugEntry)
CpuSmmDebugEntryAbsAddr:
call rax
Contributed-under: TianoCore Contribution Agreement 1.1
Signed-off-by: Liming Gao <liming.gao@intel.com>
Cc: Laszlo Ersek <lersek@redhat.com>
Cc: Eric Dong <eric.dong@intel.com>
Cc: Jiewen Yao <jiewen.yao@intel.com>
Reviewed-by: Laszlo Ersek <lersek@redhat.com>
Tested-by: Laszlo Ersek <lersek@redhat.com>
Reviewed-by: Jiewen Yao <jiewen.yao@intel.com>
Some redundant library classes, PPIs and GUIDs
have been removed from INF, .c and .h files.
v2:
1. Remove ReadOnlyVariable2.h in S3Resume.c, which should have been
deleted in the last version, in which gEfiPeiReadOnlyVariable2PpiGuid
was removed.
2. Remove the library class BaseLib in CpuPageTable.c,
which is included elsewhere.
3. Add back library classes in SecCore.inf which were removed
in the last version.
They are DebugAgentLib and CpuExceptionHandlerLib.
4. Add back two PPIs in SecCore.inf which were removed
in the last version.
They are gEfiSecPlatformInformationPpiGuid and
gEfiSecPlatformInformation2PpiGuid.
https://bugzilla.tianocore.org/show_bug.cgi?id=1043
https://bugzilla.tianocore.org/show_bug.cgi?id=1013
https://bugzilla.tianocore.org/show_bug.cgi?id=1032
https://bugzilla.tianocore.org/show_bug.cgi?id=1016
Cc: Eric Dong <eric.dong@intel.com>
Contributed-under: TianoCore Contribution Agreement 1.1
Signed-off-by: shenglei <shenglei.zhang@intel.com>
Reviewed-by: Eric Dong <eric.dong@intel.com>
BZ#: https://bugzilla.tianocore.org/show_bug.cgi?id=1165
InitSmmS3Cr3 () will update SmmS3ResumeState, so move the call to
it into the else block to keep the logic consistent.
Cc: Star Zeng <star.zeng@intel.com>
Cc: Benjamin You <benjamin.you@intel.com>
Cc: Eric Dong <eric.dong@intel.com>
Cc: Laszlo Ersek <lersek@redhat.com>
Contributed-under: TianoCore Contribution Agreement 1.1
Signed-off-by: Jian J Wang <jian.j.wang@intel.com>
Reviewed-by: Eric Dong <eric.dong@intel.com>
Reviewed-by: Laszlo Ersek <lersek@redhat.com>
BZ#: https://bugzilla.tianocore.org/show_bug.cgi?id=1165
The gEfiAcpiVariableGuid HOB is must-have data for S3 resume if
PcdAcpiS3Enable is set to TRUE. The current code in CpuS3.c doesn't
embody this strong binding between them. An error message and a
CpuDeadLoop are added in this patch to warn platform developers
about it.
Cc: Star Zeng <star.zeng@intel.com>
Cc: Benjamin You <benjamin.you@intel.com>
Cc: Eric Dong <eric.dong@intel.com>
Cc: Laszlo Ersek <lersek@redhat.com>
Contributed-under: TianoCore Contribution Agreement 1.1
Signed-off-by: Jian J Wang <jian.j.wang@intel.com>
Reviewed-by: Eric Dong <eric.dong@intel.com>
Reviewed-by: Laszlo Ersek <lersek@redhat.com>
BZ: https://bugzilla.tianocore.org/show_bug.cgi?id=1164
The left operand is 64-bit but the right operand could be 32-bit.
A typecast is a must because of the '~' operator applied to it.
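For illustration, a minimal sketch of the issue with made-up variable names:
  UINT64  Attributes;
  UINT32  Mask;

  Attributes &= ~Mask;           // wrong: ~Mask is computed as a 32-bit value
                                 // and then zero-extended, clearing bits [63:32]
  Attributes &= ~(UINT64) Mask;  // right: widen first, so the upper 32 bits of
                                 // the complement stay set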
Cc: Hao A Wu <hao.a.wu@intel.com>
Contributed-under: TianoCore Contribution Agreement 1.1
Signed-off-by: Jian J Wang <jian.j.wang@intel.com>
Reviewed-by: Hao Wu <hao.a.wu@intel.com>
BZ#: https://bugzilla.tianocore.org/show_bug.cgi?id=1160
There are two parameters which have different names in the comment and the prototype.
Cc: Dandan Bi <dandan.bi@intel.com>
Contributed-under: TianoCore Contribution Agreement 1.1
Signed-off-by: Jian J Wang <jian.j.wang@intel.com>
Reviewed-by: Dandan Bi <dandan.bi@intel.com>
This feature is the same as the Stack Guard enabled in driver CpuDxe but
applies to the PEI phase. Due to the specialty of PEI module dispatching,
this driver is changed to do the actual initialization in the notify
callback of gEfiPeiMemoryDiscoveredPpiGuid. This lets the
stack guard apply to as many PEI drivers as possible.
To let Stack Guard work, some simple page table management code is
introduced to set up a Guard page at the base of the stack for each processor.
Cc: Eric Dong <eric.dong@intel.com>
Cc: Laszlo Ersek <lersek@redhat.com>
Cc: Ruiyu Ni <ruiyu.ni@intel.com>
Cc: Jiewen Yao <jiewen.yao@intel.com>
Cc: Star Zeng <star.zeng@intel.com>
Cc: "Ware, Ryan R" <ryan.r.ware@intel.com>
Contributed-under: TianoCore Contribution Agreement 1.1
Signed-off-by: Jian J Wang <jian.j.wang@intel.com>
Regression-tested-by: Laszlo Ersek <lersek@redhat.com>
Reviewed-by: Eric Dong <eric.dong@intel.com>
The conflict issues are introduced by the Stack Guard feature enabled for
PEI.
The first is CR0, which should be restored after CR3 and CR4.
Another is TR, which should not be passed from the BSP to the APs during the
init phase.
Cc: Eric Dong <eric.dong@intel.com>
Cc: Laszlo Ersek <lersek@redhat.com>
Cc: Ruiyu Ni <ruiyu.ni@intel.com>
Cc: Jiewen Yao <jiewen.yao@intel.com>
Cc: Star Zeng <star.zeng@intel.com>
Cc: "Ware, Ryan R" <ryan.r.ware@intel.com>
Contributed-under: TianoCore Contribution Agreement 1.1
Signed-off-by: Jian J Wang <jian.j.wang@intel.com>
Regression-tested-by: Laszlo Ersek <lersek@redhat.com>
Reviewed-by: Eric Dong <eric.dong@intel.com>
Stack Guard needs to set up the stack switch capability to allow the exception
handler to be called with a good stack if stack overflow is detected.
This patch updates InitializeCpuExceptionHandlersEx() to allow passing
extra initialization data used to set up exception stack switch for
specified exceptions.
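For illustration, a minimal sketch of a caller; the field names below are
illustrative of the kind of data the new parameter carries (a known-good
stack and the exceptions that should switch to it), not the exact definition
introduced by this series:
  CPU_EXCEPTION_INIT_DATA  EssData;
  EFI_STATUS               Status;

  ZeroMem (&EssData, sizeof (EssData));
  EssData.X64.KnownGoodStackTop          = StackTop;
  EssData.X64.KnownGoodStackSize         = StackSizePerCpu;
  EssData.X64.StackSwitchExceptions      = ExceptionVectorList;
  EssData.X64.StackSwitchExceptionNumber = ExceptionVectorCount;

  Status = InitializeCpuExceptionHandlersEx (NULL, &EssData);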
Cc: Eric Dong <eric.dong@intel.com>
Cc: Laszlo Ersek <lersek@redhat.com>
Cc: Ruiyu Ni <ruiyu.ni@intel.com>
Cc: Jiewen Yao <jiewen.yao@intel.com>
Cc: Star Zeng <star.zeng@intel.com>
Cc: "Ware, Ryan R" <ryan.r.ware@intel.com>
Contributed-under: TianoCore Contribution Agreement 1.1
Signed-off-by: Jian J Wang <jian.j.wang@intel.com>
Regression-tested-by: Laszlo Ersek <lersek@redhat.com>
Reviewed-by: Eric Dong <eric.dong@intel.com>
Fix trailing white spaces and invalid line ending issue.
Cc: Dandan Bi <dandan.bi@intel.com>
Contributed-under: TianoCore Contribution Agreement 1.1
Signed-off-by: Eric Dong <eric.dong@intel.com>
Reviewed-by: Dandan Bi <dandan.bi@intel.com>
When an exception happens on an AP, the system hangs at
GetPeiServicesTablePointer(), complaining that the PeiServices pointer
retrieved from the memory before the IDT is NULL.
Due to the following commit:
c563077a38
* UefiCpuPkg/MpInitLib: Avoid calling PEI services from AP
the IDT used by the APs no longer preserves the PeiServices pointer at the
very beginning.
But the implementation of PeiExceptionHandlerLib still assumes
the PeiServices pointer is there, so the assertion happens.
The patch fixes the exception handler library to not call
PEI services from APs.
The patch duplicates the #0 exception stub header in an allocated
pool, but with an extra 4 bytes/8 bytes to store the exception handler
data which was originally stored in a HOB.
When an AP exception happens, the code gets the exception handler data
from the exception handler for #0.
Contributed-under: TianoCore Contribution Agreement 1.1
Signed-off-by: Ruiyu Ni <ruiyu.ni@intel.com>
Cc: Jian J Wang <jian.j.wang@intel.com>
Reviewed-by: Fan Jeff <vanjeff_919@hotmail.com>
Today's implementation of handling HOOK_BEFORE and HOOK_AFTER is
a bit complex. More comments are better.
Contributed-under: TianoCore Contribution Agreement 1.1
Signed-off-by: Ruiyu Ni <ruiyu.ni@intel.com>
Cc: Fan Jeff <vanjeff_919@hotmail.com>
Reviewed-by: Jian J Wang <jian.j.wang@intel.com>
BZ#1127: https://bugzilla.tianocore.org/show_bug.cgi?id=1127
It's reported that the debug messages in the CpuDxe driver are quite annoying
during boot and shell, and slow down the boot process. To solve this issue,
this patch changes DEBUG_INFO to DEBUG_VERBOSE. On a typical real Intel
platform, at least 16s of boot time can be saved.
Cc: Eric Dong <eric.dong@intel.com>
Cc: Laszlo Ersek <lersek@redhat.com>
Contributed-under: TianoCore Contribution Agreement 1.1
Signed-off-by: Jian J Wang <jian.j.wang@intel.com>
Reviewed-by: Eric Dong <eric.dong@intel.com>
Reviewed-by: Laszlo Ersek <lersek@redhat.com>
Since the SMM profile feature has already implemented a non-stop mode when a
#PF occurs, this patch just makes use of the existing implementation to
accommodate the heap guard and NULL pointer detection features.
Cc: Eric Dong <eric.dong@intel.com>
Cc: Laszlo Ersek <lersek@redhat.com>
Cc: Ruiyu Ni <ruiyu.ni@intel.com>
Contributed-under: TianoCore Contribution Agreement 1.1
Signed-off-by: Jian J Wang <jian.j.wang@intel.com>
Reviewed-by: Eric Dong <eric.dong@intel.com>
Acked-by: Laszlo Ersek <lersek@redhat.com>
Same as the SMM profile feature, a special #PF handler is used to set the page
attribute to 'present', and a special #DB handler to reset it back to
'not-present', right after the instruction causing the #PF has executed.
Since the new #PF handler won't enter a dead loop, the instruction
which caused the #PF will get a chance to re-execute with accessible pages.
The exception message will still be printed out on the debug console so that
the developer/QA can find that a potential heap overflow or null
pointer access occurred.
Cc: Eric Dong <eric.dong@intel.com>
Cc: Laszlo Ersek <lersek@redhat.com>
Cc: Ruiyu Ni <ruiyu.ni@intel.com>
Contributed-under: TianoCore Contribution Agreement 1.1
Signed-off-by: Jian J Wang <jian.j.wang@intel.com>
Reviewed-by: Eric Dong <eric.dong@intel.com>
Acked-by: Laszlo Ersek <lersek@redhat.com>
Once the #PF handler has set the page to be 'present', there should
be a way to reset it to 'not-present'. The 'TF' bit in EFLAGS can be used
for this purpose. The 'TF' bit will be set in the interrupted function context
so that a #DB is triggered once CPU control returns to the
instruction causing the #PF and re-executes it.
This is a necessary step to implement the non-stop mode for the Heap Guard
and NULL Pointer Detection features.
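For illustration, a minimal sketch of the idea inside a #PF handler, assuming
the X64 system context; the real handler also records which page was touched
so the paired #DB handler can reset it and clear TF:
  VOID
  EFIAPI
  NonStopPageFaultHandler (
    IN EFI_EXCEPTION_TYPE  ExceptionType,
    IN EFI_SYSTEM_CONTEXT  SystemContext
    )
  {
    //
    // Make the faulting page temporarily 'present' here (omitted), then set
    // EFLAGS.TF in the interrupted context: when the faulting instruction is
    // re-executed, a #DB fires and its handler sets the page back to
    // 'not-present'.
    //
    SystemContext.SystemContextX64->Rflags |= BIT8;
  }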
Cc: Eric Dong <eric.dong@intel.com>
Cc: Laszlo Ersek <lersek@redhat.com>
Cc: Ruiyu Ni <ruiyu.ni@intel.com>
Contributed-under: TianoCore Contribution Agreement 1.1
Signed-off-by: Jian J Wang <jian.j.wang@intel.com>
Reviewed-by: Eric Dong <eric.dong@intel.com>
Acked-by: Laszlo Ersek <lersek@redhat.com>
REF: https://bugzilla.tianocore.org/show_bug.cgi?id=1093
Return Stack Buffer (RSB) is used to predict the target of RET
instructions. When the RSB underflows, some processors may fall back to
using branch predictors. This might impact software using the retpoline
mitigation strategy on those processors.
This commit will add RSB stuffing logic before returning from SMM (the RSM
instruction) to avoid interfering with non-SMM usage of the retpoline
technique.
After the stuffing, RSB entries will contain a trap like:
@SpecTrap:
pause
lfence
jmp @SpecTrap
A more detailed explanation of the purpose of the commit is under the
'Branch target injection mitigation' section of the below link:
https://software.intel.com/security-software-guidance/insights/host-firmware-speculative-execution-side-channel-mitigation
Please note that this commit requires further actions (BZ 1091) to remove
the duplicated 'StuffRsb.inc' files and merge them into one under a
UefiCpuPkg package-level directory (such as UefiCpuPkg/Include/).
REF: https://bugzilla.tianocore.org/show_bug.cgi?id=1091
Cc: Jiewen Yao <jiewen.yao@intel.com>
Cc: Michael D Kinney <michael.d.kinney@intel.com>
Contributed-under: TianoCore Contribution Agreement 1.1
Signed-off-by: Hao Wu <hao.a.wu@intel.com>
Reviewed-by: Eric Dong <eric.dong@intel.com>
REF: https://bugzilla.tianocore.org/show_bug.cgi?id=1093
Return Stack Buffer (RSB) is used to predict the target of RET
instructions. When the RSB underflows, some processors may fall back to
using branch predictors. This might impact software using the retpoline
mitigation strategy on those processors.
This commit will add RSB stuffing logic before returning from SMM (the RSM
instruction) to avoid interfering with non-SMM usage of the retpoline
technique.
After the stuffing, RSB entries will contain a trap like:
@SpecTrap:
pause
lfence
jmp @SpecTrap
A more detailed explanation of the purpose of the commit is under the
'Branch target injection mitigation' section of the below link:
https://software.intel.com/security-software-guidance/insights/host-firmware-speculative-execution-side-channel-mitigation
Please note that this commit requires further actions (BZ 1091) to remove
the duplicated 'StuffRsb.inc' files and merge them into one under a
UefiCpuPkg package-level directory (such as UefiCpuPkg/Include/).
REF: https://bugzilla.tianocore.org/show_bug.cgi?id=1091
Cc: Jiewen Yao <jiewen.yao@intel.com>
Cc: Michael D Kinney <michael.d.kinney@intel.com>
Contributed-under: TianoCore Contribution Agreement 1.1
Signed-off-by: Hao Wu <hao.a.wu@intel.com>
Acked-by: Laszlo Ersek <lersek@redhat.com>
Regression-tested-by: Laszlo Ersek <lersek@redhat.com>
Reviewed-by: Eric Dong <eric.dong@intel.com>
V1 changes:
> Current code logic can't confirm that the CpuS3DataDxe driver starts before
> the CpuFeaturesDxe driver, so the assumption in CpuFeaturesDxe is not valid.
> Add an implementation for the AllocateAcpiCpuData function to remove this
> assumption.
V2 changes:
> Because the CpuS3Data memory will be copied to SMRAM at the SmmReadyToLock
> point, the memory type no longer needs to be the ACPI NVS type, and the
> address is not limited to below 4G.
> This change removes the limit of ACPI NVS memory type and below 4G.
V3 changes:
> Remove the function definition from the header file.
> Add STATIC to the function implementation.
Passed OS boot and resume from S3 test.
Bugz: https://bugzilla.tianocore.org/show_bug.cgi?id=959
Reported-by: Marvin Häuser <Marvin.Haeuser@outlook.com>
Suggested-by: Fan Jeff <vanjeff_919@hotmail.com>
Cc: Marvin Häuser <Marvin.Haeuser@outlook.com>
Cc: Fan Jeff <vanjeff_919@hotmail.com>
Cc: Laszlo Ersek <lersek@redhat.com>
Cc: Ruiyu Ni <ruiyu.ni@intel.com>
Contributed-under: TianoCore Contribution Agreement 1.1
Signed-off-by: Eric Dong <eric.dong@intel.com>
Reviewed-by: Laszlo Ersek <lersek@redhat.com>
Reviewed-by: Ruiyu Ni <ruiyu.ni@intel.com>
PrepareApStartupVector() stores StackAddress to
"mExchangeInfo->StackStart" (which has type (VOID*)), and
"UefiCpuPkg/PiSmmCpuDxeSmm/X64/MpFuncs.nasm" reads the latter with:
add edi, StackStartAddressLocation
add rax, qword [edi]
mov rsp, rax
mov qword [edi], rax
in long-mode code. So the code can remove the below-4G limitation.
Cc: Laszlo Ersek <lersek@redhat.com>
Cc: Ruiyu Ni <ruiyu.ni@intel.com>
Contributed-under: TianoCore Contribution Agreement 1.1
Signed-off-by: Eric Dong <eric.dong@intel.com>
Reviewed-by: Laszlo Ersek <lersek@redhat.com>
Reviewed-by: Ruiyu Ni <ruiyu.ni@intel.com>
Tested-by: Laszlo Ersek <lersek@redhat.com>
Because the CpuS3Data memory will be copied to SMRAM at the SmmReadyToLock
point, the memory type no longer needs to be the ACPI NVS type, and the
address is not limited to below 4G.
This change removes the limit of ACPI NVS memory type and below 4G.
Passed OS boot and resume from S3 test.
Cc: Marvin Häuser <Marvin.Haeuser@outlook.com>
Cc: Fan Jeff <vanjeff_919@hotmail.com>
Cc: Laszlo Ersek <lersek@redhat.com>
Cc: Ruiyu Ni <ruiyu.ni@intel.com>
Contributed-under: TianoCore Contribution Agreement 1.1
Signed-off-by: Eric Dong <eric.dong@intel.com>
Reviewed-by: Laszlo Ersek <lersek@redhat.com>
Reviewed-by: Ruiyu Ni <ruiyu.ni@intel.com>
Tested-by: Laszlo Ersek <lersek@redhat.com>
The ACPI_CPU_DATA structure was first introduced to save data in the
normal boot phase. This data will also be used in the S3 phase
by one PEI driver. So in the first phase, this data was
defined to use the ACPI NVS memory type and had to be below 4G.
Later, in order to fix a potential security issue, the
PiSmmCpuDxeSmm driver added logic to copy ACPI_CPU_DATA
(except the ResetVector and Stack buffer) to SMRAM at the SMM
ready to lock point. The ResetVector must be below 1M and the Stack
buffer is write-only in the S3 phase, so these two fields are not
copied to SMRAM. Also, the PiSmmCpuDxeSmm driver owns the task
of restoring the CPU settings, and it is an SMM driver.
After the above change, the ACPI NVS memory type and below-4G
limitations are no longer needed.
This change removes the limitations in the comments for
the ACPI_CPU_DATA definition.
Cc: Marvin Häuser <Marvin.Haeuser@outlook.com>
Cc: Fan Jeff <vanjeff_919@hotmail.com>
Cc: Laszlo Ersek <lersek@redhat.com>
Cc: Ruiyu Ni <ruiyu.ni@intel.com>
Contributed-under: TianoCore Contribution Agreement 1.1
Signed-off-by: Eric Dong <eric.dong@intel.com>
Reviewed-by: Laszlo Ersek <lersek@redhat.com>
Reviewed-by: Ruiyu Ni <ruiyu.ni@intel.com>
Tested-by: Laszlo Ersek <lersek@redhat.com>
The current implementation copies the GDT/IDT at the SmmReadyToLock point
from ACPI NVS memory to SMRAM. Later, in the S3 resume phase, it restores
the memory saved in SMRAM to ACPI NVS. It can directly use the GDT/IDT
saved in SMRAM instead of restoring the original ACPI NVS memory.
This patch does this change.
Test done:
OS boot and S3 resume test.
Cc: Laszlo Ersek <lersek@redhat.com>
Cc: Ruiyu Ni <ruiyu.ni@intel.com>
Contributed-under: TianoCore Contribution Agreement 1.1
Signed-off-by: Eric Dong <eric.dong@intel.com>
Reviewed-by: Laszlo Ersek <lersek@redhat.com>
Reviewed-by: Ruiyu Ni <ruiyu.ni@intel.com>
Tested-by: Laszlo Ersek <lersek@redhat.com>
Merge [Sources.Ia32, Sources.X64] to [Sources] after removing IPF. Also
change other similar parts in this file.
Cc: Eric Dong <eric.dong@intel.com>
Contributed-under: TianoCore Contribution Agreement 1.1
Signed-off-by: Chen A Chen <chen.a.chen@intel.com>
Reviewed-by: Eric Dong <eric.dong@intel.com>
Within function GetUefiMemoryAttributesTable(), add a check to avoid
a possible null pointer dereference.
Contributed-under: TianoCore Contribution Agreement 1.1
Signed-off-by: Hao Wu <hao.a.wu@intel.com>
Reviewed-by: Jiewen Yao <jiewen.yao@intel.com>
It treats UEFI runtime pages with the EFI_MEMORY_RO attribute as an
invalid SMM communication buffer.
Contributed-under: TianoCore Contribution Agreement 1.1
Signed-off-by: Jiewen Yao <jiewen.yao@intel.com>
Reviewed-by: Star Zeng <star.zeng@intel.com>
It treats GCD untested memory as an invalid SMM
communication buffer.
Contributed-under: TianoCore Contribution Agreement 1.1
Signed-off-by: Jiewen Yao <jiewen.yao@intel.com>
Reviewed-by: Star Zeng <star.zeng@intel.com>
Based on the UEFI spec requirement, the StartupAllAPs function should not use
APs which have been disabled before. This patch just changes the current code
to follow this rule.
V3 changes:
Only when called from StartupAllAPs will WakeUpAP not wake up the disabled APs;
other cases also need to include the disabled APs, such as CpuDxe driver start
up and the ChangeApLoopCallback function.
WakeUpAP() is called with (Broadcast && WakeUpDisabledAps) from MpInitLibInitialize(), CollectProcessorCount() and MpInitChangeApLoopCallback() only. The first two run before the PPI or Protocol user has a chance to disable any APs. The last one runs in response to the ExitBootServices and LegacyBoot events, after which the MP protocol is unusable. For this reason, it doesn't matter that an originally disabled AP's state is not restored to Disabled, when
WakeUpAP() is called with (Broadcast && WakeUpDisabledAps).
Cc: Laszlo Ersek <lersek@redhat.com>
Cc: Ruiyu Ni <ruiyu.ni@intel.com>
Contributed-under: TianoCore Contribution Agreement 1.1
Signed-off-by: Eric Dong <eric.dong@intel.com>
Reviewed-by: Ruiyu Ni <ruiyu.ni@intel.com>
Reviewed-by: Laszlo Ersek <lersek@redhat.com>
Regression-tested-by: Laszlo Ersek <lersek@redhat.com>
The patch includes below changes:
(1) It removes "volatile" from RunningCount, because only the BSP modifies it.
(2) When we detect a timeout in CheckAllAPs(), and collect the list of failed CPUs, the size of the list is derived from the following difference, before the patch:
StartCount - FinishedCount
where "StartCount" is set by the BSP at startup, and FinishedCount is incremented by the APs themselves.
Here the patch replaces this difference with
StartCount - RunningCount
that is, the difference is no longer calculated from the BSP's startup counter and the AP's shared finish counter, but from the RunningCount measurement that the BSP does itself, in CheckAllAPs().
(3) Finally, the patch changes the meaning of RunningCount. Before the patch, we have:
- StartCount: the number of APs the BSP starts up,
- RunningCount: the number of finished APs that the BSP collected
After the patch, StartCount is removed, and RunningCount is *redefined* as the following difference:
OLD_StartCount - OLD_RunningCount
Giving the number of APs that the BSP started up but hasn't collected yet.
Cc: Laszlo Ersek <lersek@redhat.com>
Cc: Ruiyu Ni <ruiyu.ni@intel.com>
Contributed-under: TianoCore Contribution Agreement 1.1
Signed-off-by: Eric Dong <eric.dong@intel.com>
Reviewed-by: Ruiyu Ni <ruiyu.ni@intel.com>
Reviewed-by: Laszlo Ersek <lersek@redhat.com>
Tested-by: Laszlo Ersek <lersek@redhat.com>
The current CPU state definitions include CpuStateIdle and CpuStateFinished.
After investigation, the current code can use CpuStateIdle to replace
CpuStateFinished. This reduces the number of states and is easier to maintain.
> Before this patch, the state transitions for an AP are:
>
> Idle ----> Ready ----> Busy ----> Finished ----> Idle
> [BSP] [AP] [AP] [BSP]
>
> After the patch, the state transitions for an AP are:
>
> Idle ----> Ready ----> Busy ----> Idle
> [BSP] [AP] [AP]
Cc: Laszlo Ersek <lersek@redhat.com>
Cc: Ruiyu Ni <ruiyu.ni@intel.com>
Contributed-under: TianoCore Contribution Agreement 1.1
Signed-off-by: Eric Dong <eric.dong@intel.com>
Reviewed-by: Ruiyu Ni <ruiyu.ni@intel.com>
Reviewed-by: Laszlo Ersek <lersek@redhat.com>
Tested-by: Laszlo Ersek <lersek@redhat.com>
Currently, the SecPlatformInformation2 PPI is installed when either
there is none present or the present one doesn't lack data.
Update the logic to only install the SecPlatformInformation2 PPI when
it's not already installed so that an up-to-date PPI remains the only
one and unchanged.
Contributed-under: TianoCore Contribution Agreement 1.1
Signed-off-by: Marvin Haeuser <Marvin.Haeuser@outlook.com>
Reviewed-by: Laszlo Ersek <lersek@redhat.com>
Reviewed-by: Eric Dong <eric.dong@intel.com>
The current IsInSmm() method makes use of gEfiSmmBase2ProtocolGuid.InSmm() to
check if the current processor is in SMM mode or not. But this is not correct,
because gEfiSmmBase2ProtocolGuid.InSmm() can only detect if the caller is
running in SMRAM or from an SMM driver. It cannot guarantee that the caller is
running in SMM mode. Because SMM mode loads its own page table, adding
an extra check of the saved DXE page table base address against the current CR3
register value can help to get the correct answer for sure (in SMM mode or
not in SMM mode).
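For illustration, a minimal sketch of the strengthened check, assuming the
DXE CR3 value has been saved earlier into a hypothetical mDxePageTableBase
variable and that mSmmBase2 points at the located SMM Base2 protocol:
  BOOLEAN
  IsInSmm (
    VOID
    )
  {
    BOOLEAN  InSmm;

    InSmm = FALSE;
    if (mSmmBase2 != NULL) {
      mSmmBase2->InSmm (mSmmBase2, &InSmm);
    }
    if (!InSmm) {
      return FALSE;
    }
    //
    // SMM loads its own page table; only report "in SMM mode" when CR3 no
    // longer points at the saved DXE page table root.
    //
    return (BOOLEAN)(AsmReadCr3 () != mDxePageTableBase);
  }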
There are indiscriminate uses of Context.X64 and Context.Ia32 in the code,
which is not a good coding practice and may cause potential issues. In
addition, the related structure type definition is not packed, which is also
a potential issue. This is not covered by this patch but is tracked by the bug below.
https://bugzilla.tianocore.org/show_bug.cgi?id=1039
This is an issue caused by check-in at
2a1408d1d7
Cc: Eric Dong <eric.dong@intel.com>
Cc: Laszlo Ersek <lersek@redhat.com>
Cc: Jiewen Yao <jiewen.yao@intel.com>
Cc: Star Zeng <star.zeng@intel.com>
Contributed-under: TianoCore Contribution Agreement 1.1
Signed-off-by: Jian J Wang <jian.j.wang@intel.com>
Reviewed-by: Eric Dong <eric.dong@intel.com>
Regression-tested-by: Laszlo Ersek <lersek@redhat.com>
The current function has low performance because it calls GetApicId
inside the loop, so it may be called more than once.
The new logic calls GetApicId once and uses this value to search for
the processor.
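For illustration, a minimal sketch of the new search; the structure and field
names here are illustrative, not the exact MpInitLib definitions:
  UINT32  ApicId;
  UINTN   Index;

  ApicId = GetApicId ();                    // read the local APIC ID only once
  for (Index = 0; Index < CpuMpData->CpuCount; Index++) {
    if (CpuMpData->CpuInfoInHob[Index].ApicId == ApicId) {
      return Index;
    }
  }
  return (UINTN)-1;                         // not found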
Cc: Ruiyu Ni <ruiyu.ni@intel.com>
Cc: Jeff Fan <vanjeff_919@hotmail.com>
Cc: Laszlo Ersek <lersek@redhat.com>
Contributed-under: TianoCore Contribution Agreement 1.1
Signed-off-by: Eric Dong <eric.dong@intel.com>
Reviewed-by: Ruiyu Ni <ruiyu.ni@intel.com>
Reviewed-by: Laszlo Ersek <lersek@redhat.com>
When resuming from S3 and the CPU loop mode is MWAIT mode,
if a driver calls the APs to do a task at the EndOfPei point, the
APs can't be woken up and the BIOS hangs at that point.
The root cause is that the PiSmmCpuDxeSmm driver wakes up the APs
in HLT mode during the S3 resume phase to do SMM relocation.
After this task, the PiSmmCpuDxeSmm driver does not restore the AP
context, which makes the wake up buffer saved by the CpuMpPei driver
not work.
The solution for this issue is to let the CpuMpPei driver hook the
S3SmmInitDone PPI notification. In this notify function,
it checks whether the CPU loop mode is not HLT mode. If so, the
CpuMpPei driver will set a flag to force the BSP to use the INIT-SIPI
-SIPI command to wake up the APs.
Cc: Laszlo Ersek <lersek@redhat.com>
Cc: Ruiyu Ni <ruiyu.ni@intel.com>
Contributed-under: TianoCore Contribution Agreement 1.1
Signed-off-by: Eric Dong <eric.dong@intel.com>
Reviewed-by: Ruiyu Ni <ruiyu.ni@intel.com>
The SDM requires that only one thread per core loads the
microcode.
This change implements this requirement.
Cc: Laszlo Ersek <lersek@redhat.com>
Cc: Ruiyu Ni <ruiyu.ni@intel.com>
Contributed-under: TianoCore Contribution Agreement 1.1
Signed-off-by: Eric Dong <eric.dong@intel.com>
Acked-by: Laszlo Ersek <lersek@redhat.com>
Regression-tested-by: Laszlo Ersek <lersek@redhat.com>
Reviewed-by: Ruiyu Ni <ruiyu.ni@intel.com>
Searching for uCode costs much time. If an AP has the same processor type
as the BSP, the AP can use the uCode info saved by the BSP to get better
performance.
This change enables this optimization.
Cc: Laszlo Ersek <lersek@redhat.com>
Cc: Ruiyu Ni <ruiyu.ni@intel.com>
Contributed-under: TianoCore Contribution Agreement 1.1
Signed-off-by: Eric Dong <eric.dong@intel.com>
Acked-by: Laszlo Ersek <lersek@redhat.com>
Regression-tested-by: Laszlo Ersek <lersek@redhat.com>
Reviewed-by: Ruiyu Ni <ruiyu.ni@intel.com>
Reading uCode from memory has better performance than reading it from flash,
but it needs extra effort to let the BSP copy the uCode from flash to
memory. Also, the BSP has already enabled the cache in the SEC phase, so it
takes less time to relocate the uCode from flash to memory. After
verification, if the system has more than one processor, loading the uCode
from memory reduces some time.
This change enables this optimization.
Cc: Laszlo Ersek <lersek@redhat.com>
Cc: Ruiyu Ni <ruiyu.ni@intel.com>
Contributed-under: TianoCore Contribution Agreement 1.1
Signed-off-by: Eric Dong <eric.dong@intel.com>
Reviewed-by: Laszlo Ersek <lersek@redhat.com>
Regression-tested-by: Laszlo Ersek <lersek@redhat.com>
Reviewed-by: Ruiyu Ni <ruiyu.ni@intel.com>
Today's MpInitLib PEI implementation directly calls
PeiServices->GetHobList() from the APs, which may cause a racing issue.
This patch fixes this issue by duplicating the IDT for the APs.
Because the CpuMpData structure is stored just after the IDT, the CpuMpData
address equals IDTR.BASE + IDTR.LIMIT + 1 (see the sketch after the v2 notes below).
v2:
1. Add ALIGN_VALUE() on BufferSize.
2. Add ASSERT() to make sure no memory usage outside of the allocated buffer.
3. Add more comments in InitConfig path when restoring CpuData[0].VolatileRegisters.
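For illustration, a minimal sketch of how an AP can then locate CpuMpData from
its own IDTR, without calling any PEI service:
  IA32_DESCRIPTOR  Idtr;
  CPU_MP_DATA      *CpuMpData;

  AsmReadIdtr (&Idtr);
  //
  // CpuMpData is placed immediately after the duplicated IDT, so it starts
  // at IDTR.BASE + IDTR.LIMIT + 1.
  //
  CpuMpData = (CPU_MP_DATA *) (UINTN) (Idtr.Base + Idtr.Limit + 1);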
Cc: Jeff Fan <vanjeff_919@hotmail.com>
Cc: Eric Dong <eric.dong@intel.com>
Cc: Jiewen Yao <jiewen.yao@intel.com>
Cc: Fish Andrew <afish@apple.com>
Cc: Laszlo Ersek <lersek@redhat.com>
Contributed-under: TianoCore Contribution Agreement 1.1
Signed-off-by: Ruiyu Ni <ruiyu.ni@intel.com>
Reviewed-by: Eric Dong <eric.dong@intel.com>
Acked-by: Laszlo Ersek <lersek@redhat.com>
Tested-by: Laszlo Ersek <lersek@redhat.com>
Removing rules for IPF source files:
* Remove the source files whose path contains "ipf" and which are also listed
in the [Sources.IPF] section of the INF file.
* Remove the source files which are listed in the [Components.IPF] section
of the DSC file and not listed in any other [Components] section.
* Remove the embedded Ipf code for MDE_CPU_IPF.
Removing rules for Inf file:
* Remove IPF from VALID_ARCHITECTURES comments.
* Remove DXE_SAL_DRIVER from LIBRARY_CLASS in [Defines] section.
* Remove the INFs which are only listed in the [Components.IPF] section in the DSC.
* Remove statements from [BuildOptions] that provide IPF specific flags.
* Remove any IPF specific sections.
Removing rules for Dec file:
* Remove [Includes.IPF] section from Dec.
Removing rules for Dsc file:
* Remove IPF from SUPPORTED_ARCHITECTURES in [Defines] section of DSC.
* Remove any IPF specific sections.
* Remove statements from [BuildOptions] that provide IPF specific flags.
Cc: Eric Dong <eric.dong@intel.com>
Cc: Laszlo Ersek <lersek@redhat.com>
Cc: Michael D Kinney <michael.d.kinney@intel.com>
Contributed-under: TianoCore Contribution Agreement 1.1
Signed-off-by: Chen A Chen <chen.a.chen@intel.com>
Reviewed-by: Eric Dong <eric.dong@intel.com>
1. Do not use tab characters
2. No trailing white space in one line
3. All files must end with CRLF
Contributed-under: TianoCore Contribution Agreement 1.1
Signed-off-by: Liming Gao <liming.gao@intel.com>
Replace the old Perf macros with the newly added ones.
Cc: Liming Gao <liming.gao@intel.com>
Cc: Eric Dong <eric.dong@intel.com>
Contributed-under: TianoCore Contribution Agreement 1.1
Signed-off-by: Dandan Bi <dandan.bi@intel.com>
Reviewed-by: Liming Gao <liming.gao@intel.com>
The MdePkg/Library/SmmMemoryAllocationLib, used only by DXE_SMM_DRIVER,
allows freeing memory allocated in DXE (before EndOfDxe). This is done
by checking the memory range and calling gBS services to do the real
operation if the memory to free is out of SMRAM. If some memory related
features, like Heap Guard, are enabled, the gBS interface will turn to
EFI_CPU_ARCH_PROTOCOL.SetMemoryAttributes(), provided by the
DXE driver UefiCpuPkg/CpuDxe, to change memory paging attributes. This
means we have part of the DXE code running in SMM mode in certain
circumstances.
Because the page table in SMM mode is different from the one in DXE mode, and
CpuDxe always uses the current registers (CR0, CR3, etc.) to get memory paging
attributes, it cannot get the correct attributes of DXE memory in SMM
mode from the SMM page table. This will cause incorrect memory manipulations,
like failing to release Guard pages if Heap Guard is enabled.
The solution in this patch is to store the DXE page table information
(e.g. the values of the CR0 and CR3 registers, etc.) in a global variable of
the CpuDxe driver. If CpuDxe detects it's in SMM mode, it will use this global
variable to access the page table instead of the current processor registers.
This avoids retrieving wrong DXE memory paging attributes and changing
SMM page table attributes unexpectedly.
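For illustration, a minimal sketch of the idea with made-up names (the real
CpuDxe code saves more state and refreshes it at appropriate points):
  UINTN  mDxeCr3;            // DXE page table root saved by CpuDxe

  VOID
  SaveDxePageTableContext (
    VOID
    )
  {
    mDxeCr3 = AsmReadCr3 ();
  }

  UINTN
  GetPageTableBase (
    VOID
    )
  {
    //
    // In SMM mode CR3 points at the SMM page table; use the saved DXE value
    // so DXE memory attributes are read from the right paging structures.
    //
    if (IsInSmm ()) {
      return mDxeCr3;
    }
    return AsmReadCr3 ();
  }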
Cc: Eric Dong <eric.dong@intel.com>
Cc: Laszlo Ersek <lersek@redhat.com>
Cc: Jiewen Yao <jiewen.yao@intel.com>
Cc: Ruiyu Ni <ruiyu.ni@intel.com>
Contributed-under: TianoCore Contribution Agreement 1.1
Signed-off-by: Jian J Wang <jian.j.wang@intel.com>
Reviewed-by: Laszlo Ersek <lersek@redhat.com>
Reviewed-by: Eric Dong <eric.dong@intel.com>
Regression-tested-by: Laszlo Ersek <lersek@redhat.com>
On AMD processors the second SendIpi in the SendInitSipiSipi and
SendInitSipiSipiAllExcludingSelf routines is not required, and may cause
undesired side-effects during MP initialization.
This patch leverages the StandardSignatureIsAuthenticAMD check to exclude
the second SendIpi and its associated MicroSecondDelay (200).
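For illustration, a minimal sketch of the pattern; SendIpi() stands for the
internal LocalApicLib helper that writes the ICR, and its parameters here are
illustrative:
  SendIpi (IcrLow.Uint32, ApicId);        // first Start-up IPI, always sent
  if (!StandardSignatureIsAuthenticAMD ()) {
    //
    // Only non-AMD (Intel) processors need the second SIPI and the
    // 200 microsecond delay in front of it.
    //
    MicroSecondDelay (200);
    SendIpi (IcrLow.Uint32, ApicId);
  }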
Contributed-under: TianoCore Contribution Agreement 1.1
Signed-off-by: Leo Duran <leo.duran@amd.com>
Cc: Jordan Justen <jordan.l.justen@intel.com>
Cc: Jeff Fan <jeff.fan@intel.com>
Cc: Liming Gao <liming.gao@intel.com>
Reviewed-by: Eric Dong <eric.dong@intel.com>
Reviewed-by: Laszlo Ersek <lersek@redhat.com>
NASM has replaced ASM and S files.
1. Remove ASM from all modules except for the ones in the ResetVector directory.
The ones in the ResetVector directory are included by Vtf0.nasmb. They are
also NASM style.
2. Remove S files from the drivers only.
3. https://bugzilla.tianocore.org/show_bug.cgi?id=881
After NASM is updated, S files can be removed from the libraries.
Contributed-under: TianoCore Contribution Agreement 1.1
Signed-off-by: Liming Gao <liming.gao@intel.com>
Reviewed-by: Eric Dong <eric.dong@intel.com>
Reviewed-by: Laszlo Ersek <lersek@redhat.com>
According to IA manual:
"Before setting this bit (MSR_IA32_MISC_ENABLE[22]) , BIOS must
execute the CPUID.0H and examine the maximum value returned in
EAX[7:0]. If the maximum value is greater than 2, this bit is
supported."
We need to fix our current detection logic to compare against 2.
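For illustration, the fixed check looks something like this (a sketch, not
the exact CpuCommonFeaturesLib code):
  UINT32  Eax;

  AsmCpuid (CPUID_SIGNATURE, &Eax, NULL, NULL, NULL);
  //
  // MSR_IA32_MISC_ENABLE[22] (Limit CPUID MAXVAL) may only be set when the
  // maximum CPUID leaf reported in EAX[7:0] is greater than 2.
  //
  if ((Eax & 0xFF) > 2) {
    // the feature is supported
  }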
Contributed-under: TianoCore Contribution Agreement 1.1
Signed-off-by: Ruiyu Ni <ruiyu.ni@intel.com>
Reviewed-by: Eric Dong <eric.dong@intel.com>
Cc: Ming Shao <ming.shao@intel.com>
The function SecStartup() is not supposed to return. Hence, add the
NORETURN decorator.
Contributed-under: TianoCore Contribution Agreement 1.1
Signed-off-by: Marvin Haeuser <Marvin.Haeuser@outlook.com>
Reviewed-by: Eric Dong <eric.dong@intel.com>
Reviewed-by: Laszlo Ersek <lersek@redhat.com>
Cc: Eric Dong <eric.dong@intel.com>
Cc: Ruiyu Ni <ruiyu.ni@intel.com>
Cc: Jiewen Yao <jiewen.yao@intel.com>
Contributed-under: TianoCore Contribution Agreement 1.1
Signed-off-by: Star Zeng <star.zeng@intel.com>
Reviewed-by: Eric Dong <eric.dong@intel.com>
NASM introduced FXSAVE / FXRSTOR support in commit 900fa5b26b8f ("NASM
0.98p3-hpa", 2002-04-30), which commit stands for the nasm-0.98p3-hpa
release.
NASM introduced FXSAVE64 / FXRSTOR64 support in commit 3a014348ca15
("insns: add FXSAVE64/FXRSTOR64, drop np prefix", 2010-07-07), which was
part of the "nasm-2.09" release.
Edk2 requires nasm-2.10 or later for use with the GCC toolchain family,
and nasm-2.12.01 or later for use with all other toolchain families.
Replace the binary encoding of the FXSAVE(64)/FXRSTOR(64) instructions
with mnemonics.
I verified that the "Ia32/SmiException.obj", "X64/SmiEntry.obj" and
"X64/SmiException.obj" files are rebuilt after this patch, without any
change in content.
This patch removes the last instructions encoded with DBs from
PiSmmCpuDxeSmm.
Cc: Eric Dong <eric.dong@intel.com>
Cc: Michael D Kinney <michael.d.kinney@intel.com>
Ref: https://bugzilla.tianocore.org/show_bug.cgi?id=866
Contributed-under: TianoCore Contribution Agreement 1.1
Signed-off-by: Laszlo Ersek <lersek@redhat.com>
Reviewed-by: Liming Gao <liming.gao@intel.com>
(1) SmmRelocationSemaphoreComplete32() runs in 32-bit mode, so wrap it in
a (BITS 32 ... BITS 64) bracket.
(2) SmmRelocationSemaphoreComplete32() currently compiles to:
> 000002AE C6050000000001 mov byte [dword 0x0],0x1
> 000002B5 FF2500000000 jmp dword [dword 0x0]
where the first instruction is patched with the contents of
"mRebasedFlag" (so that (*mRebasedFlag) is set to 1), and the second
instruction is patched with the address of
"mSmmRelocationOriginalAddress" (so that we jump to
"mSmmRelocationOriginalAddress").
In its current form the first instruction could not be patched with
PatchInstructionX86(), given that the operand to patch is not encoded
in the trailing bytes of the instruction. Therefore, adopt an
EAX-based version, inspired by both the IA32 and X64 variants of
SmmRelocationSemaphoreComplete():
> 000002AE 50 push eax
> 000002AF B800000000 mov eax,0x0
> 000002B4 C60001 mov byte [eax],0x1
> 000002B7 58 pop eax
> 000002B8 FF2500000000 jmp dword [dword 0x0]
Here both instructions can be patched with PatchInstructionX86(), and
the DBs can be replaced with native NASM syntax.
(3) Turn the "mRebasedFlagAddr32" and "mSmmRelocationOriginalAddressPtr32"
variables into markers that suit PatchInstructionX86().
Cc: Eric Dong <eric.dong@intel.com>
Cc: Michael D Kinney <michael.d.kinney@intel.com>
Ref: https://bugzilla.tianocore.org/show_bug.cgi?id=866
Contributed-under: TianoCore Contribution Agreement 1.1
Signed-off-by: Laszlo Ersek <lersek@redhat.com>
Reviewed-by: Liming Gao <liming.gao@intel.com>
Rename the variable to "gPatchSmmInitStack" so that its association with
PatchInstructionX86() is clear from the declaration, change its type to
X86_ASSEMBLY_PATCH_LABEL, and patch it with PatchInstructionX86(). This
lets us remove the binary (DB) encoding of some instructions in
"SmmInit.nasm".
The size of the patched source operand is (sizeof (UINTN)).
Cc: Eric Dong <eric.dong@intel.com>
Cc: Michael D Kinney <michael.d.kinney@intel.com>
Ref: https://bugzilla.tianocore.org/show_bug.cgi?id=866
Contributed-under: TianoCore Contribution Agreement 1.1
Signed-off-by: Laszlo Ersek <lersek@redhat.com>
Reviewed-by: Liming Gao <liming.gao@intel.com>
The IA32 version of "SmmInit.nasm" does not need "gSmmJmpAddr" at all (its
PiSmmCpuSmmInitFixupAddress() variant doesn't do anything either). We can
simply use the NASM syntax for the following Mixed-Size Jump:
> jmp PROTECT_MODE_CS : dword @32bit
The generated object code for the instruction is unchanged:
> 00000182 66EA5A0000000800 jmp dword 0x8:0x5a
(The NASM manual explains that putting the DWORD prefix after the colon
":" reflects the intent better, since it is the offset that is a DWORD.
Thus, that's what I used. However, both syntaxes are interchangeable,
hence the ndisasm output.)
The X64 version of "SmmInit.nasm" appears to require "gSmmJmpAddr";
however that's accidental, not inherent:
- Bring LONG_MODE_CODE_SEGMENT from
"UefiCpuPkg/PiSmmCpuDxeSmm/PiSmmCpuDxeSmm.h" to "SmmInit.nasm" as
LONG_MODE_CS, same as PROTECT_MODE_CODE_SEGMENT was brought to the IA32
version as PROTECT_MODE_CS earlier.
- Apply the NASM-native Mixed-Size Jump syntax again, but jump to the
fixed zero offset in LONG_MODE_CS. This will produce no relocation
record at all. Add a label after the instruction.
- Modify PiSmmCpuSmmInitFixupAddress() to patch the jump target backwards
from the label. Because we modify the DWORD offset with a DWORD access,
the segment selector is unharmed in the instruction, and we need not set
it from PiCpuSmmEntry().
According to "objdump --reloc", the X64 version undergoes only the
following relocations, after this patch:
> RELOCATION RECORDS FOR [.text]:
> OFFSET TYPE VALUE
> 0000000000000095 R_X86_64_PC32 SmmInitHandler-0x0000000000000004
> 00000000000000e0 R_X86_64_PC32 mRebasedFlag-0x0000000000000004
> 00000000000000ea R_X86_64_PC32 mSmmRelocationOriginalAddress-0x0000000000000004
Therefore the patch does not regress
<https://bugzilla.tianocore.org/show_bug.cgi?id=849> ("Enable XCODE5 tool
chain for UefiCpuPkg with nasm source code").
Cc: Eric Dong <eric.dong@intel.com>
Cc: Michael D Kinney <michael.d.kinney@intel.com>
Ref: https://bugzilla.tianocore.org/show_bug.cgi?id=866
Contributed-under: TianoCore Contribution Agreement 1.1
Signed-off-by: Laszlo Ersek <lersek@redhat.com>
Reviewed-by: Liming Gao <liming.gao@intel.com>
Like "gSmmCr4" in the previous patch, "gSmmCr0" is not only used for
machine code patching, but also as a means to communicate the initial CR0
value from SmmRelocateBases() to InitSmmS3ResumeState(). In other words,
the last four bytes of the "mov eax, Cr0Value" instruction's binary
representation are utilized as normal data too.
In order to get rid of the DB for "mov eax, Cr0Value", we have to split
both roles, patching and data flow. Introduce the "mSmmCr0" global (SMRAM)
variable for the data flow purpose. Rename the "gSmmCr0" variable to
"gPatchSmmCr0" so that its association with PatchInstructionX86() is clear
from the declaration, change its type to X86_ASSEMBLY_PATCH_LABEL, and
patch it with PatchInstructionX86(), to the value now contained in
"mSmmCr0".
This lets us remove the binary (DB) encoding of "mov eax, Cr0Value" in
"SmmInit.nasm".
Cc: Eric Dong <eric.dong@intel.com>
Cc: Michael D Kinney <michael.d.kinney@intel.com>
Ref: https://bugzilla.tianocore.org/show_bug.cgi?id=866
Contributed-under: TianoCore Contribution Agreement 1.1
Signed-off-by: Laszlo Ersek <lersek@redhat.com>
Reviewed-by: Liming Gao <liming.gao@intel.com>