REF: https://bugzilla.tianocore.org/show_bug.cgi?id=3300
The current implementation of the SetStaticPageTable routine in the
PiSmmCpuDxeSmm driver checks a global variable, mPhysicalAddressBits,
and caps any value larger than 39 at 39.
This global variable is used in ConvertMemoryPageAttributes, which backs
SmmSetMemoryAttributes and SmmClearMemoryAttributes. Thus, for a
processor that supports an address width greater than 39 bits, trying to
mark page table regions above 39 bits will always return
EFI_UNSUPPORTED.
This change updates the interface of the SetStaticPageTable function to
take PhysicalAddressBits as an input parameter, in order to avoid
changing/accessing the global variable.
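For illustration, the updated prototype would look like this (a sketch;
the return type and the existing parameter are assumed from context, not
quoted from the patch):

  VOID
  SetStaticPageTable (
    IN UINTN  PageTable,
    IN UINT8  PhysicalAddressBits  // passed in, instead of reading the global
    );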
Cc: Eric Dong <eric.dong@intel.com>
Reviewed-by: Ray Ni <ray.ni@intel.com>
Reviewed-by: Laszlo Ersek <lersek@redhat.com>
Cc: Rahul Kumar <rahul1.kumar@intel.com>
Fixes: 4eee0cc7cc
Signed-off-by: Kun Qin <kuqin12@gmail.com>
In the functions InitGdt(), SmiPFHandler() and Gen4GPageTable(),
CpuIndex * mSmmStackSize is used to get the SMM stack address offset for
each processor in a multi-processor system. This misses the SMM shadow
stack size: each processor uses mSmmStackSize + mSmmShadowStackSize
bytes of the memory.
The offset should therefore be computed as
CpuIndex * (mSmmStackSize + mSmmShadowStackSize). If
mSmmShadowStackSize > 0 and multiple processors are enabled, the current
code computes the wrong offset.
The CET shadow stack feature sets the value of mSmmShadowStackSize.
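A minimal C sketch of the corrected computation (Offset is a stand-in
local; the globals are the ones named above):

  //
  // Each processor's region holds both the regular SMM stack and the
  // shadow stack, so the per-CPU offset must cover both sizes.
  //
  Offset = CpuIndex * (mSmmStackSize + mSmmShadowStackSize);
  // previously (wrong whenever mSmmShadowStackSize > 0):
  // Offset = CpuIndex * mSmmStackSize;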
REF: https://bugzilla.tianocore.org/show_bug.cgi?id=3237
Signed-off-by: Sheng Wei <w.sheng@intel.com>
Cc: Eric Dong <eric.dong@intel.com>
Cc: Ray Ni <ray.ni@intel.com>
Cc: Laszlo Ersek <lersek@redhat.com>
Cc: Rahul Kumar <rahul1.kumar@intel.com>
Cc: Jiewen Yao <jiewen.yao@intel.com>
Cc: Roger Feng <roger.feng@intel.com>
Reviewed-by: Jiewen Yao <jiewen.yao@intel.com>
Reviewed-by: Ray Ni <ray.ni@intel.com>
If the CET shadow stack feature is enabled in SMM and stack switch is
enabled, then when execution transfers from the SMM handler to the SMM
exception handler, the CPU checks whether the SMM exception shadow stack
token's busy bit is cleared or not.
If it is set, a #DF exception is triggered.
If it is not set, the CPU sets the busy bit on SMM exception entry.
So, the busy bit should be cleared on return from the SMM exception back
to the SMM handler. Otherwise, leaving the busy bit at 1 will trigger a
#DF exception on the next SMM exception entry.
Therefore, we use the SAVEPREVSSP, CLRSSBSY and RSTORSSP instructions to
clear the shadow stack token busy bit before the RETF instruction in the
SMM exception handler.
REF: https://bugzilla.tianocore.org/show_bug.cgi?id=3192
Signed-off-by: Sheng Wei <w.sheng@intel.com>
Cc: Eric Dong <eric.dong@intel.com>
Cc: Ray Ni <ray.ni@intel.com>
Cc: Laszlo Ersek <lersek@redhat.com>
Cc: Rahul Kumar <rahul1.kumar@intel.com>
Cc: Jiewen Yao <jiewen.yao@intel.com>
Cc: Roger Feng <roger.feng@intel.com>
Reviewed-by: Jiewen Yao <jiewen.yao@intel.com>
Reviewed-by: Ray Ni <ray.ni@intel.com>
When trying to get the page table base: if mInternalCr3 is zero, the
page table from CR3 is used, and the page table depth is reflected by
the CR4.LA57 bit. If mInternalCr3 is non-zero, the page table from
mInternalCr3 is used, and the page table depth of mInternalCr3 must be
reflected at the same time.
In the X64 case, we use m5LevelPagingNeeded to reflect the depth of the
page table. In the IA32 case, the page table depth information is not
needed.
This patch is a bug fix for enabling the CET feature with 5-level
paging.
The SMM page tables are allocated/initialized in PiCpuSmmEntry().
When CET is enabled, PiCpuSmmEntry() must further modify the attributes
of shadow stack pages. This page table is not set in CR3 in
PiCpuSmmEntry(), so the page table base address is set in mInternalCr3
in order to modify the page table attributes. The CR4.LA57 bit cannot be
used to reflect the page table depth for mInternalCr3.
So we create an architecture-specific implementation, GetPageTable(),
with two output parameters: one outputs the page table address, and the
other reflects whether 5-level paging is in use or not.
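A sketch of the new interface (parameter names are assumptions based on
the description above):

  VOID
  GetPageTable (
    OUT UINTN    *Base,                 // page table base address
    OUT BOOLEAN  *FiveLevels  OPTIONAL  // TRUE if 5-level paging is in use
    );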
REF: https://bugzilla.tianocore.org/show_bug.cgi?id=3015
Signed-off-by: Sheng Wei <w.sheng@intel.com>
Cc: Eric Dong <eric.dong@intel.com>
Cc: Ray Ni <ray.ni@intel.com>
Cc: Laszlo Ersek <lersek@redhat.com>
Cc: Rahul Kumar <rahul1.kumar@intel.com>
Cc: Jiewen Yao <jiewen.yao@intel.com>
Reviewed-by: Laszlo Ersek <lersek@redhat.com>
Reviewed-by: Eric Dong <eric.dong@intel.com>
BZ: https://bugzilla.tianocore.org/show_bug.cgi?id=2198
Typically, an AP is booted using the INIT-SIPI-SIPI sequence. This
sequence is intercepted by the hypervisor, which sets the AP's registers
to the values requested by the sequence. At that point, the hypervisor can
start the AP, which will then begin execution at the appropriate location.
Under SEV-ES, AP booting presents some challenges since the hypervisor is
not allowed to alter the AP's register state. In this situation, we have
to distinguish between the AP's first boot and AP's subsequent boots.
First boot:
Once the AP's register state has been defined (which is before the guest
is first booted) it cannot be altered. Should the hypervisor attempt to
alter the register state, the change would be detected by the hardware
and the VMRUN instruction would fail. Given this, the first boot for the
AP is required to begin execution with this initial register state, which
is typically the reset vector. This prevents the BSP from directing the
AP startup location through the INIT-SIPI-SIPI sequence.
To work around this, the firmware will provide a build-time reserved
area that can be used as the initial IP value. The hypervisor can
extract this location by checking for the SEV-ES reset block GUID,
which must be located 48 bytes from the end of the firmware. The format
of the SEV-ES reset block area is:
0x00 - 0x01 - SEV-ES Reset IP
0x02 - 0x03 - SEV-ES Reset CS Segment Base[31:16]
0x04 - 0x05 - Size of the SEV-ES reset block
0x06 - 0x15 - SEV-ES Reset Block GUID
(00f771de-1a7e-4fcb-890e-68c77e2fb44e)
The total size is 22 bytes. Any expansion to this block must be done
by adding new values before existing values.
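For illustration, the 22-byte layout can be described with a packed C
struct like the following (field names are assumptions, not from the
patch):

  #pragma pack (1)
  typedef struct {
    UINT16    ResetIp;        // 0x00: SEV-ES reset IP
    UINT16    ResetCsBase;    // 0x02: CS segment base[31:16]
    UINT16    Size;           // 0x04: size of the SEV-ES reset block
    EFI_GUID  Guid;           // 0x06: 00f771de-1a7e-4fcb-890e-68c77e2fb44e
  } SEV_ES_RESET_BLOCK;       // total: 22 bytes
  #pragma pack ()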
The hypervisor will use the IP and CS values obtained from the SEV-ES
reset block to set as the AP's initial values. The CS Segment Base
represents the upper 16 bits of the CS segment base and must be left
shifted by 16 bits to form the complete CS segment base value.
Before booting the AP for the first time, the BSP must initialize the
SEV-ES reset area. This consists of programming a FAR JMP instruction
to the contents of a memory location that is also located in the SEV-ES
reset area. The BSP must program the IP and CS values for the FAR JMP
based on values derived from the INIT-SIPI-SIPI sequence.
Subsequent boots:
Again, the hypervisor cannot alter the AP register state, so a method is
required to take the AP out of halt state and redirect it to the desired
IP location. If it is determined that the AP is running in an SEV-ES
guest, then instead of calling CpuSleep(), a VMGEXIT is issued with the
AP Reset Hold exit code (0x80000004). The hypervisor will put the AP in
a halt state, waiting for an INIT-SIPI-SIPI sequence. Once the sequence
is recognized, the hypervisor will resume the AP. At this point the AP
must transition from the current 64-bit long mode down to 16-bit real
mode and begin executing at the location derived from the INIT-SIPI-SIPI
sequence.
Another change is around the area of obtaining the (x2)APIC ID during AP
startup. During AP startup, the AP can't take a #VC exception before the
AP has established a stack. However, the AP stack is set by using the
(x2)APIC ID, which is obtained through CPUID instructions. A CPUID
instruction will cause a #VC, so a different method must be used. The
GHCB protocol supports a method to obtain CPUID information from the
hypervisor through the GHCB MSR. This method does not require a stack,
so it is used to obtain the necessary CPUID information to determine the
(x2)APIC ID.
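A hedged sketch of such a GHCB MSR protocol CPUID request (constants per
the GHCB spec; AsmVmgExit() is assumed available in BaseLib; CPUID
leaf 1 EBX[31:24] is used here for simplicity, though the driver may
query the extended topology leaf instead):

  UINT64  GhcbMsr;
  UINT32  ApicId;

  GhcbMsr = ((UINT64)1 << 32) |   // CPUID function 1
            ((UINT64)1 << 30) |   // request register EBX
            0x004;                // GHCBInfo: CPUID request
  AsmWriteMsr64 (0xC0010130, GhcbMsr);   // GHCB MSR
  AsmVmgExit ();                         // VMGEXIT to the hypervisor
  GhcbMsr = AsmReadMsr64 (0xC0010130);
  if ((GhcbMsr & 0xFFF) == 0x005) {      // GHCBInfo: CPUID response
    ApicId = ((UINT32)(GhcbMsr >> 32)) >> 24;  // initial APIC ID
  }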
The new 16-bit protected mode GDT entry is used in order to transition
from 64-bit long mode down to 16-bit real mode.
A new assembler routine is created that takes the AP from 64-bit long mode
to 16-bit real mode. This is located under 1MB in memory and transitions
from 64-bit long mode to 32-bit compatibility mode to 16-bit protected
mode and finally 16-bit real mode.
Cc: Eric Dong <eric.dong@intel.com>
Cc: Ray Ni <ray.ni@intel.com>
Cc: Laszlo Ersek <lersek@redhat.com>
Reviewed-by: Eric Dong <eric.dong@intel.com>
Signed-off-by: Tom Lendacky <thomas.lendacky@amd.com>
Regression-tested-by: Laszlo Ersek <lersek@redhat.com>
AMD does not support MSR_IA32_MISC_ENABLE. Accessing that register
causes an exception on AMD processors. If Execute Disable is supported,
but the processor is an AMD processor, skip manipulating the
MSR_IA32_MISC_ENABLE[34] XD Disable bit.
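A minimal sketch of the skip, assuming a StandardSignatureIsAuthenticAMD()
helper (the register layout is from MdePkg's ArchitecturalMsr.h):

  MSR_IA32_MISC_ENABLE_REGISTER  MiscEnable;

  if (!StandardSignatureIsAuthenticAMD ()) {
    MiscEnable.Uint64  = AsmReadMsr64 (MSR_IA32_MISC_ENABLE);
    MiscEnable.Bits.XD = 0;    // clear XD Disable (bit 34)
    AsmWriteMsr64 (MSR_IA32_MISC_ENABLE, MiscEnable.Uint64);
  }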
Cc: Eric Dong <eric.dong@intel.com>
Cc: Ray Ni <ray.ni@intel.com>
Cc: Laszlo Ersek <lersek@redhat.com>
Signed-off-by: Garrett Kirkendall <garrett.kirkendall@amd.com>
Message-Id: <20200622131825.1352-5-Garrett.Kirkendall@amd.com>
Reviewed-by: Laszlo Ersek <lersek@redhat.com>
Tested-by: Laszlo Ersek <lersek@redhat.com>
Reviewed-by: Eric Dong <eric.dong@intel.com>
Fix various typos in comments and documentation.
Cc: Eric Dong <eric.dong@intel.com>
Cc: Ray Ni <ray.ni@intel.com>
Cc: Laszlo Ersek <lersek@redhat.com>
Signed-off-by: Antoine Coeur <coeur@gmx.fr>
Reviewed-by: Philippe Mathieu-Daude <philmd@redhat.com>
Reviewed-by: Laszlo Ersek <lersek@redhat.com>
Reviewed-by: Eric Dong <eric.dong@intel.com>
Signed-off-by: Philippe Mathieu-Daude <philmd@redhat.com>
Message-Id: <20200207010831.9046-78-philmd@redhat.com>
In MpLib.c, remove the white space on a new line.
In PageTbl.c and PiSmmCpuDxeSmm.h, update the comment style.
Cc: Eric Dong <eric.dong@intel.com>
Cc: Ray Ni <ray.ni@intel.com>
Cc: Laszlo Ersek <lersek@redhat.com>
Signed-off-by: Shenglei Zhang <shenglei.zhang@intel.com>
Reviewed-by: Eric Dong <eric.dong@intel.com>
Reviewed-by: Laszlo Ersek <lersek@redhat.com>
Today's behavior is to enable 5-level paging when the CPU supports it
(CPUID[7,0].ECX.BIT[16] is set).
The patch changes the behavior to enable 5-level paging only when two
conditions are both met:
1. The CPU supports it;
2. The max physical address bits is bigger than 48.
Because 4-level paging can address physical addresses up to
2^48 - 1, there is no need to enable 5-level paging when the max
physical address bits is <= 48.
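A sketch of the combined condition (a simplification; the surrounding
code and variable naming are assumed):

  CPUID_STRUCTURED_EXTENDED_FEATURE_FLAGS_ECX  EcxFlags;

  AsmCpuidEx (
    CPUID_STRUCTURED_EXTENDED_FEATURE_FLAGS, 0,
    NULL, NULL, &EcxFlags.Uint32, NULL
    );
  //
  // Enable 5-level paging only if the CPU supports it AND more than
  // 48 physical address bits are actually in use.
  //
  Enable5LevelPaging = (BOOLEAN)(EcxFlags.Bits.FiveLevelPage == 1 &&
                                 PhysicalAddressBits > 48);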
Signed-off-by: Ray Ni <ray.ni@intel.com>
Cc: Liming Gao <liming.gao@intel.com>
Reviewed-by: Eric Dong <eric.dong@intel.com>
Cc: Laszlo Ersek <lersek@redhat.com>
Today's behavior is to always restrict access to non-SMRAM regardless
of the value of PcdCpuSmmRestrictedMemoryAccess.
Because RAS components require access to all non-SMRAM memory, the
patch changes the code logic to honor PcdCpuSmmRestrictedMemoryAccess,
so that only when the PCD is TRUE does the restriction take effect and
page table memory also get protected.
Because the IA32 build doesn't reference this PCD, the restriction
always takes effect in the IA32 build.
Signed-off-by: Ray Ni <ray.ni@intel.com>
Reviewed-by: Eric Dong <eric.dong@intel.com>
Cc: Jiewen Yao <jiewen.yao@intel.com>
Reviewed-by: Laszlo Ersek <lersek@redhat.com>
The patch changes the PiSmmCpu driver to consume the PCD
PcdCpuSmmRestrictedMemoryAccess.
Because the behavior controlled by PcdCpuSmmStaticPageTable in the
original code is unchanged after switching to
PcdCpuSmmRestrictedMemoryAccess, the functionality is not impacted by
this patch.
Signed-off-by: Ray Ni <ray.ni@intel.com>
Reviewed-by: Eric Dong <eric.dong@intel.com>
Cc: Jiewen Yao <jiewen.yao@intel.com>
Reviewed-by: Laszlo Ersek <lersek@redhat.com>
Reclaim may free page table pages that are required to handle the
current page fault. This causes a page leak, and, after a sufficient
number of such page fault + reclaim pairs, we run out of reclaimable
pages and hit:
ASSERT (MinAcc != (UINT64)-1);
To remedy this, prevent pages essential to handling the current page
fault:
(1) from being considered as reclaim candidates (first reclaim phase)
(2) from being freed as part of "branch cleanup" (second reclaim phase)
Signed-off-by: Damian Nikodem <damian.nikodem@intel.com>
Cc: Eric Dong <eric.dong@intel.com>
Reviewed-by: Ray Ni <ray.ni@intel.com>
Reviewed-by: Jian J Wang <jian.j.wang@intel.com>
Cc: Laszlo Ersek <lersek@redhat.com>
Cc: Krzysztof Rusocki <krzysztof.rusocki@intel.com>
The pointer Pml5Entry, returned from the call to function
AllocatePageTableMemory(), may be NULL.
So add a check for it.
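A minimal sketch of the added check (ASSERT chosen here for
illustration):

  Pml5Entry = AllocatePageTableMemory (1);
  ASSERT (Pml5Entry != NULL);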
Cc: Eric Dong <eric.dong@intel.com>
Cc: Ray Ni <ray.ni@intel.com>
Cc: Laszlo Ersek <lersek@redhat.com>
Signed-off-by: Shenglei Zhang <shenglei.zhang@intel.com>
Reviewed-by: Ray Ni <ray.ni@intel.com>
Fixes: 4eee0cc7c
Signed-off-by: Ray Ni <ray.ni@intel.com>
Cc: Michael D Kinney <michael.d.kinney@intel.com>
Reviewed-by: Eric Dong <eric.dong@intel.com>
Reviewed-by: Laszlo Ersek <lersek@redhat.com>
REF:https://bugzilla.tianocore.org/show_bug.cgi?id=1946
The patch changes the SMM environment to use 5-level paging when the
CPU supports it.
Signed-off-by: Ray Ni <ray.ni@intel.com>
Cc: Eric Dong <eric.dong@intel.com>
Regression-tested-by: Laszlo Ersek <lersek@redhat.com>
Reviewed-by: Eric Dong <eric.dong@intel.com>
(cherry picked from commit 7365eb2c8c)
This reverts commit 7365eb2c8c.
Commit
7c5010c7f8 MdePkg/BaseLib.h: Update IA32_CR4 structure for 5-level paging
technically breaks the EDKII development process documented in
https://github.com/tianocore/tianocore.github.io/wiki/EDK-II-Development-Process
and Maintainers.txt in EDKII repo root directory.
The violation is that commit 7c5010c7f8 doesn't have a Reviewed-by or
Acked-by from the MdePkg maintainers.
In order to revert 7c5010c7f8, 7365eb2c8 needs to be reverted first;
otherwise, simply reverting 7c5010c7f8 will break the build.
Signed-off-by: Ray Ni <ray.ni@intel.com>
REF:https://bugzilla.tianocore.org/show_bug.cgi?id=1946
The patch changes the SMM environment to use 5-level paging when the
CPU supports it.
Signed-off-by: Ray Ni <ray.ni@intel.com>
Cc: Eric Dong <eric.dong@intel.com>
Regression-tested-by: Laszlo Ersek <lersek@redhat.com>
Reviewed-by: Eric Dong <eric.dong@intel.com>
BZ: https://bugzilla.tianocore.org/show_bug.cgi?id=1593
For every SMI occurrence, save and restore the CR2 register only when
SMM on-demand paging support is enabled in 64-bit operation mode.
This is not a bug fix, but a code improvement.
Patch5 is updated with separate functions for saving and restoring CR2,
based on review feedback.
Patch6 - Removed the global Cr2; used a function parameter instead.
Patch7 - Removed the check of Cr2 against 0, per feedback.
Patch8 and 9 - Aligned with the EDK2 coding style.
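A minimal sketch of the separated helpers described above (a
simplification; the gating flag is assumed to be the static-paging
global):

  VOID
  SaveCr2 (
    OUT UINTN  *Cr2
    )
  {
    if (!mCpuSmmStaticPageTable) {
      *Cr2 = AsmReadCr2 ();    // CR2 only matters with on-demand paging
    }
  }

  VOID
  RestoreCr2 (
    IN UINTN  Cr2
    )
  {
    if (!mCpuSmmStaticPageTable) {
      AsmWriteCr2 (Cr2);
    }
  }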
Contributed-under: TianoCore Contribution Agreement 1.1
Signed-off-by: Vanguput Narendra K <narendra.k.vanguput@intel.com>
Cc: Eric Dong <eric.dong@intel.com>
Cc: Ray Ni <ray.ni@intel.com>
Cc: Laszlo Ersek <lersek@redhat.com>
Cc: Yao Jiewen <jiewen.yao@intel.com>
Reviewed-by: Eric Dong <eric.dong@intel.com>
Reviewed-by: Star Zeng <star.zeng@intel.com>
Reviewed-by: Nate DeSimone <nathaniel.l.desimone@intel.com>
Reviewed-by: Ray Ni <ray.ni@intel.com>
Regression-tested-by: Laszlo Ersek <lersek@redhat.com>
REF: https://bugzilla.tianocore.org/show_bug.cgi?id=1521
We scanned the SMM code with ROPgadget:
http://shell-storm.org/project/ROPgadget/
https://github.com/JonathanSalwan/ROPgadget/tree/master
This tool reports gadgets in the SMM driver.
This patch enables the CET shadow stack for X86 SMM.
If CET is supported, SMM will enable the CET shadow stack.
SMM CET will save the OS CET context at SmmEntry and
restore the OS CET context at SmmExit.
Test:
1) Test on an Intel internal platform (x64 only, CET enabled/disabled)
   Boot test:
     CET supported or not supported CPU
       on CET supported platform
     CET enabled/disabled
     PcdCpuSmmCetEnable enabled/disabled
     Single core/Multiple core
     PcdCpuSmmStackGuard enabled/disabled
     PcdCpuSmmProfileEnable enabled/disabled
     PcdCpuSmmStaticPageTable enabled/disabled
   CET exception test:
     #CF generated with PcdCpuSmmStackGuard enabled/disabled.
   Other exception test:
     #PF for normal stack overflow
     #PF for NX protection
     #PF for RO protection
   CET env test:
     Launch SMM in CET enabled/disabled environment (DXE) - no impact
     to DXE
   The test case can be found at
   https://github.com/jyao1/SecurityEx/tree/master/ControlFlowPkg
2) Test OVMF (both IA32 and X64 SMM, CET disabled only)
   Tested OvmfIa32/Ovmf3264, with -D SMM_REQUIRE.
     qemu-system-x86_64.exe -machine q35,smm=on -smp 4
       -serial file:serial.log
       -drive if=pflash,format=raw,unit=0,file=OVMF_CODE.fd,readonly=on
       -drive if=pflash,format=raw,unit=1,file=OVMF_VARS.fd
   QEMU emulator version 3.1.0 (v3.1.0-11736-g7a30e7adb0-dirty)
3) Not tested:
   IA32 CET enabled platform
Cc: Eric Dong <eric.dong@intel.com>
Cc: Ray Ni <ray.ni@intel.com>
Cc: Laszlo Ersek <lersek@redhat.com>
Contributed-under: TianoCore Contribution Agreement 1.1
Signed-off-by: Yao Jiewen <jiewen.yao@intel.com>
Reviewed-by: Ray Ni <ray.ni@intel.com>
Regression-tested-by: Laszlo Ersek <lersek@redhat.com>
REF:https://bugzilla.tianocore.org/show_bug.cgi?id=1091
Previously, when compiling NASM source files, BaseTools did not support
including files outside of the NASM source file directory. As a result, we
duplicated multiple copies of "StuffRsb.inc" files in UefiCpuPkg. Those
INC files contain the common logic to stuff the Return Stack Buffer and
are identical.
After the fix for BZ 1085:
https://bugzilla.tianocore.org/show_bug.cgi?id=1085
the above support was introduced.
Thus, this commit will merge all the StuffRsb.inc files in UefiCpuPkg into
one file. The merged file will be named 'StuffRsbNasm.inc' and be placed
under folder UefiCpuPkg/Include/.
Cc: Ruiyu Ni <ruiyu.ni@intel.com>
Contributed-under: TianoCore Contribution Agreement 1.1
Signed-off-by: Hao Wu <hao.a.wu@intel.com>
Reviewed-by: Eric Dong <eric.dong@intel.com>
Reviewed-by: Laszlo Ersek <lersek@redhat.com>
When static paging is disabled, the page table for memory below 4GB is
created, and page tables for memory above 4GB are created dynamically
in the page fault handler.
Today's implementation only allows SMM access-out to the below types of
memory address, no matter whether static paging is enabled or not:
1. Reserved, runtime and ACPI NVS types
2. MMIO
But certain platform features like RAS may need to access other types
of memory from SMM. Today's code blocks these platforms.
This patch simplifies the policy to only block when static paging is
used, so that static paging can be disabled on these platforms to meet
their SMM access-out needs.
Setting PcdCpuSmmStaticPageTable to FALSE disables static paging.
Contributed-under: TianoCore Contribution Agreement 1.1
Signed-off-by: Jiewen Yao <jiewen.yao@intel.com>
Signed-off-by: Ruiyu Ni <ruiyu.ni@intel.com>
Cc: Eric Dong <eric.dong@intel.com>
Reviewed-by: Jiewen Yao <jiewen.yao@intel.com>
Acked-by: Laszlo Ersek <lersek@redhat.com>
Index is initialized to MAX_UINT16 as a default failure value, which is
what the ASSERT is supposed to test for. However, the ASSERT condition
can never return FALSE when INT16 != int: due to Integer Promotion[1],
Index is converted to int, which can never compare equal to -1.
Furthermore, Index is used as a for-loop index variable in between its
initialization and the ASSERT, so the value is unconditionally
overwritten too.
Fix the ASSERT check to compare Index to its upper boundary, which it
will equal if the loop was not broken out of on success.
[1] ISO/IEC 9899:2011, 6.5.9.4
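A hedged illustration of the promotion pitfall (not the exact driver
code):

  UINT16  Index = MAX_UINT16;
  //
  // Index is promoted to int (value 65535) before the comparison, and
  // -1 is an int, so the condition is always TRUE: the ASSERT can
  // never fire.
  //
  ASSERT (Index != -1);
  //
  // Fixed check: compare against the loop's upper boundary instead
  // (MaxIndex stands in for the actual boundary).
  //
  ASSERT (Index != MaxIndex);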
Contributed-under: TianoCore Contribution Agreement 1.1
Signed-off-by: Marvin Haeuser <Marvin.Haeuser@outlook.com>
Reviewed-by: Eric Dong <eric.dong@intel.com>
BZ: https://bugzilla.tianocore.org/show_bug.cgi?id=1191
Before commit e21e355e2c, "jmp _SmiHandler" was commented out. In the
code below it, ASM_PFX(CpuSmmDebugEntry) is moved into rax, which is
then called. But this code doesn't work with the XCODE5 tool chain,
because XCODE5 doesn't generate absolute addresses in the EFI image. So
rax stores a relative address, and once this logic is moved to another
place, it no longer works.
; jmp _SmiHandler ; instruction is not needed
...
mov rax, ASM_PFX(CpuSmmDebugEntry)
call rax
Commit e21e355e2c was made to support XCODE5, and a tricky way was
selected to fix it: although the SmiEntry logic is copied to another
place and run there, "jmp _SmiHandler" is enabled to jump back to the
original code location, which then calls ASM_PFX(CpuSmmDebugEntry) with
the relative address.
mov rax, strict qword 0 ; mov rax, _SmiHandler
_SmiHandlerAbsAddr:
jmp rax
...
call ASM_PFX(CpuSmmDebugEntry)
Now, BZ 1191 raises the issue that SmiHandler should run at the copied
address and can't run at the common address. So, "jmp _SmiHandler" is
required to be removed, and the code keeps running at the copied
address. Also, the relative address is required to be fixed up to the
absolute address. The necessary changes should not affect the behavior
of platforms that already consume PiSmmCpuDxeSmm.
OVMF SMM boot to shell with the VS2017, GCC5 and XCODE5 tool chains has
been verified.
...
mov rax, strict qword 0 ; call ASM_PFX(CpuSmmDebugEntry)
CpuSmmDebugEntryAbsAddr:
call rax
Contributed-under: TianoCore Contribution Agreement 1.1
Signed-off-by: Liming Gao <liming.gao@intel.com>
Cc: Laszlo Ersek <lersek@redhat.com>
Cc: Eric Dong <eric.dong@intel.com>
Cc: Jiewen Yao <jiewen.yao@intel.com>
Reviewed-by: Laszlo Ersek <lersek@redhat.com>
Tested-by: Laszlo Ersek <lersek@redhat.com>
Reviewed-by: Jiewen Yao <jiewen.yao@intel.com>
Since the SMM profile feature has already implemented a non-stop mode
when a #PF occurs, this patch just makes use of the existing
implementation to accommodate the heap guard and NULL pointer detection
features.
Cc: Eric Dong <eric.dong@intel.com>
Cc: Laszlo Ersek <lersek@redhat.com>
Cc: Ruiyu Ni <ruiyu.ni@intel.com>
Contributed-under: TianoCore Contribution Agreement 1.1
Signed-off-by: Jian J Wang <jian.j.wang@intel.com>
Reviewed-by: Eric Dong <eric.dong@intel.com>
Acked-by: Laszlo Ersek <lersek@redhat.com>
REF: https://bugzilla.tianocore.org/show_bug.cgi?id=1093
Return Stack Buffer (RSB) is used to predict the target of RET
instructions. When the RSB underflows, some processors may fall back to
using branch predictors. This might impact software using the retpoline
mitigation strategy on those processors.
This commit will add RSB stuffing logic before returning from SMM (the RSM
instruction) to avoid interfering with non-SMM usage of the retpoline
technique.
After the stuffing, RSB entries will contain a trap like:
@SpecTrap:
pause
lfence
jmp @SpecTrap
A more detailed explanation of the purpose of this commit is in the
'Branch target injection mitigation' section of the below link:
https://software.intel.com/security-software-guidance/insights/host-firmware-speculative-execution-side-channel-mitigation
Please note that this commit requires further actions (BZ 1091) to remove
the duplicated 'StuffRsb.inc' files and merge them into one under a
UefiCpuPkg package-level directory (such as UefiCpuPkg/Include/).
REF: https://bugzilla.tianocore.org/show_bug.cgi?id=1091
Cc: Jiewen Yao <jiewen.yao@intel.com>
Cc: Michael D Kinney <michael.d.kinney@intel.com>
Contributed-under: TianoCore Contribution Agreement 1.1
Signed-off-by: Hao Wu <hao.a.wu@intel.com>
Acked-by: Laszlo Ersek <lersek@redhat.com>
Regression-tested-by: Laszlo Ersek <lersek@redhat.com>
Reviewed-by: Eric Dong <eric.dong@intel.com>
1. Do not use tab characters
2. No trailing white space in one line
3. All files must end with CRLF
Contributed-under: TianoCore Contribution Agreement 1.1
Signed-off-by: Liming Gao <liming.gao@intel.com>
NASM introduced FXSAVE / FXRSTOR support in commit 900fa5b26b8f ("NASM
0.98p3-hpa", 2002-04-30), which commit stands for the nasm-0.98p3-hpa
release.
NASM introduced FXSAVE64 / FXRSTOR64 support in commit 3a014348ca15
("insns: add FXSAVE64/FXRSTOR64, drop np prefix", 2010-07-07), which was
part of the "nasm-2.09" release.
Edk2 requires nasm-2.10 or later for use with the GCC toolchain family,
and nasm-2.12.01 or later for use with all other toolchain families.
Replace the binary encoding of the FXSAVE(64)/FXRSTOR(64) instructions
with mnemonics.
I verified that the "Ia32/SmiException.obj", "X64/SmiEntry.obj" and
"X64/SmiException.obj" files are rebuilt after this patch, without any
change in content.
This patch removes the last instructions encoded with DBs from
PiSmmCpuDxeSmm.
Cc: Eric Dong <eric.dong@intel.com>
Cc: Michael D Kinney <michael.d.kinney@intel.com>
Ref: https://bugzilla.tianocore.org/show_bug.cgi?id=866
Contributed-under: TianoCore Contribution Agreement 1.1
Signed-off-by: Laszlo Ersek <lersek@redhat.com>
Reviewed-by: Liming Gao <liming.gao@intel.com>
(1) SmmRelocationSemaphoreComplete32() runs in 32-bit mode, so wrap it in
a (BITS 32 ... BITS 64) bracket.
(2) SmmRelocationSemaphoreComplete32() currently compiles to:
> 000002AE C6050000000001 mov byte [dword 0x0],0x1
> 000002B5 FF2500000000 jmp dword [dword 0x0]
where the first instruction is patched with the contents of
"mRebasedFlag" (so that (*mRebasedFlag) is set to 1), and the second
instruction is patched with the address of
"mSmmRelocationOriginalAddress" (so that we jump to
"mSmmRelocationOriginalAddress").
In its current form the first instruction could not be patched with
PatchInstructionX86(), given that the operand to patch is not encoded
in the trailing bytes of the instruction. Therefore, adopt an
EAX-based version, inspired by both the IA32 and X64 variants of
SmmRelocationSemaphoreComplete():
> 000002AE 50 push eax
> 000002AF B800000000 mov eax,0x0
> 000002B4 C60001 mov byte [eax],0x1
> 000002B7 58 pop eax
> 000002B8 FF2500000000 jmp dword [dword 0x0]
Here both instructions can be patched with PatchInstructionX86(), and
the DBs can be replaced with native NASM syntax.
(3) Turn the "mRebasedFlagAddr32" and "mSmmRelocationOriginalAddressPtr32"
variables into markers that suit PatchInstructionX86().
Cc: Eric Dong <eric.dong@intel.com>
Cc: Michael D Kinney <michael.d.kinney@intel.com>
Ref: https://bugzilla.tianocore.org/show_bug.cgi?id=866
Contributed-under: TianoCore Contribution Agreement 1.1
Signed-off-by: Laszlo Ersek <lersek@redhat.com>
Reviewed-by: Liming Gao <liming.gao@intel.com>
Rename the variable to "gPatchSmmInitStack" so that its association with
PatchInstructionX86() is clear from the declaration, change its type to
X86_ASSEMBLY_PATCH_LABEL, and patch it with PatchInstructionX86(). This
lets us remove the binary (DB) encoding of some instructions in
"SmmInit.nasm".
The size of the patched source operand is (sizeof (UINTN)).
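For illustration, the patching call then takes this shape (the value
expression is assumed; StackAddress is a hypothetical stand-in):

  PatchInstructionX86 (
    gPatchSmmInitStack,
    (UINT64)(UINTN)StackAddress,
    sizeof (UINTN)
    );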
Cc: Eric Dong <eric.dong@intel.com>
Cc: Michael D Kinney <michael.d.kinney@intel.com>
Ref: https://bugzilla.tianocore.org/show_bug.cgi?id=866
Contributed-under: TianoCore Contribution Agreement 1.1
Signed-off-by: Laszlo Ersek <lersek@redhat.com>
Reviewed-by: Liming Gao <liming.gao@intel.com>
The IA32 version of "SmmInit.nasm" does not need "gSmmJmpAddr" at all (its
PiSmmCpuSmmInitFixupAddress() variant doesn't do anything either). We can
simply use the NASM syntax for the following Mixed-Size Jump:
> jmp PROTECT_MODE_CS : dword @32bit
The generated object code for the instruction is unchanged:
> 00000182 66EA5A0000000800 jmp dword 0x8:0x5a
(The NASM manual explains that putting the DWORD prefix after the colon
":" reflects the intent better, since it is the offset that is a DWORD.
Thus, that's what I used. However, both syntaxes are interchangeable,
hence the ndisasm output.)
The X64 version of "SmmInit.nasm" appears to require "gSmmJmpAddr";
however that's accidental, not inherent:
- Bring LONG_MODE_CODE_SEGMENT from
"UefiCpuPkg/PiSmmCpuDxeSmm/PiSmmCpuDxeSmm.h" to "SmmInit.nasm" as
LONG_MODE_CS, same as PROTECT_MODE_CODE_SEGMENT was brought to the IA32
version as PROTECT_MODE_CS earlier.
- Apply the NASM-native Mixed-Size Jump syntax again, but jump to the
fixed zero offset in LONG_MODE_CS. This will produce no relocation
record at all. Add a label after the instruction.
- Modify PiSmmCpuSmmInitFixupAddress() to patch the jump target backwards
from the label. Because we modify the DWORD offset with a DWORD access,
the segment selector is unharmed in the instruction, and we need not set
it from PiCpuSmmEntry().
According to "objdump --reloc", the X64 version undergoes only the
following relocations, after this patch:
> RELOCATION RECORDS FOR [.text]:
> OFFSET TYPE VALUE
> 0000000000000095 R_X86_64_PC32 SmmInitHandler-0x0000000000000004
> 00000000000000e0 R_X86_64_PC32 mRebasedFlag-0x0000000000000004
> 00000000000000ea R_X86_64_PC32 mSmmRelocationOriginalAddress-0x0000000000000004
Therefore the patch does not regress
<https://bugzilla.tianocore.org/show_bug.cgi?id=849> ("Enable XCODE5 tool
chain for UefiCpuPkg with nasm source code").
Cc: Eric Dong <eric.dong@intel.com>
Cc: Michael D Kinney <michael.d.kinney@intel.com>
Ref: https://bugzilla.tianocore.org/show_bug.cgi?id=866
Contributed-under: TianoCore Contribution Agreement 1.1
Signed-off-by: Laszlo Ersek <lersek@redhat.com>
Reviewed-by: Liming Gao <liming.gao@intel.com>
Like "gSmmCr4" in the previous patch, "gSmmCr0" is not only used for
machine code patching, but also as a means to communicate the initial CR0
value from SmmRelocateBases() to InitSmmS3ResumeState(). In other words,
the last four bytes of the "mov eax, Cr0Value" instruction's binary
representation are utilized as normal data too.
In order to get rid of the DB for "mov eax, Cr0Value", we have to split
both roles, patching and data flow. Introduce the "mSmmCr0" global (SMRAM)
variable for the data flow purpose. Rename the "gSmmCr0" variable to
"gPatchSmmCr0" so that its association with PatchInstructionX86() is clear
from the declaration, change its type to X86_ASSEMBLY_PATCH_LABEL, and
patch it with PatchInstructionX86(), to the value now contained in
"mSmmCr0".
This lets us remove the binary (DB) encoding of "mov eax, Cr0Value" in
"SmmInit.nasm".
Cc: Eric Dong <eric.dong@intel.com>
Cc: Michael D Kinney <michael.d.kinney@intel.com>
Ref: https://bugzilla.tianocore.org/show_bug.cgi?id=866
Contributed-under: TianoCore Contribution Agreement 1.1
Signed-off-by: Laszlo Ersek <lersek@redhat.com>
Reviewed-by: Liming Gao <liming.gao@intel.com>
Unlike "gSmmCr3" in the previous patch, "gSmmCr4" is not only used for
machine code patching, but also as a means to communicate the initial CR4
value from SmmRelocateBases() to InitSmmS3ResumeState(). In other words,
the last four bytes of the "mov eax, Cr4Value" instruction's binary
representation are utilized as normal data too.
In order to get rid of the DB for "mov eax, Cr4Value", we have to split
both roles, patching and data flow. Introduce the "mSmmCr4" global (SMRAM)
variable for the data flow purpose. Rename the "gSmmCr4" variable to
"gPatchSmmCr4" so that its association with PatchInstructionX86() is clear
from the declaration, change its type to X86_ASSEMBLY_PATCH_LABEL, and
patch it with PatchInstructionX86(), to the value now contained in
"mSmmCr4".
This lets us remove the binary (DB) encoding of "mov eax, Cr4Value" in
"SmmInit.nasm".
Cc: Eric Dong <eric.dong@intel.com>
Cc: Michael D Kinney <michael.d.kinney@intel.com>
Ref: https://bugzilla.tianocore.org/show_bug.cgi?id=866
Contributed-under: TianoCore Contribution Agreement 1.1
Signed-off-by: Laszlo Ersek <lersek@redhat.com>
Reviewed-by: Liming Gao <liming.gao@intel.com>
Rename the variable to "gPatchSmmCr3" so that its association with
PatchInstructionX86() is clear from the declaration, change its type to
X86_ASSEMBLY_PATCH_LABEL, and patch it with PatchInstructionX86(). This
lets us remove the binary (DB) encoding of some instructions in
"SmmInit.nasm".
Cc: Eric Dong <eric.dong@intel.com>
Cc: Michael D Kinney <michael.d.kinney@intel.com>
Ref: https://bugzilla.tianocore.org/show_bug.cgi?id=866
Contributed-under: TianoCore Contribution Agreement 1.1
Signed-off-by: Laszlo Ersek <lersek@redhat.com>
Reviewed-by: Liming Gao <liming.gao@intel.com>
(This patch is the 64-bit variant of commit e75ee97224,
"UefiCpuPkg/PiSmmCpuDxeSmm: remove unneeded DBs from IA32 SmmStartup()",
2018-01-31.)
The SmmStartup() function executes in SMM, which is very similar to real
mode. Add "BITS 16" before it and "BITS 64" after it (just before the
@LongMode label).
Remove the manual 0x66 operand-size override prefixes, for selecting
32-bit operands -- the sizes of our operands trigger NASM to insert the
prefixes automatically in almost every spot. The one place where we have
to add it back manually is the LGDT instruction. In the LGDT instruction
we also replace the binary 0x2E prefix with the normal NASM syntax for CS
segment override.
The stores to the Control Registers were always 32-bit wide; the source
code only used RAX as source operand because it generated the expected
object code (with NASM compiling the source as if in BITS 64). With BITS
16 added, we can use the actual register width in the source operands
(EAX).
This patch causes NASM to generate byte-identical object code (determined
by disassembling both the pre-patch and post-patch versions, and comparing
the listings), except:
> @@ -231,7 +231,7 @@
> 000001D2 6689D3 mov ebx,edx
> 000001D5 66B800000000 mov eax,0x0
> 000001DB 0F22D8 mov cr3,eax
> -000001DE 662E670F0155F6 o32 lgdt [cs:ebp-0xa]
> +000001DE 2E66670F0155F6 o32 lgdt [cs:ebp-0xa]
> 000001E5 66B800000000 mov eax,0x0
> 000001EB 80CC02 or ah,0x2
> 000001EE 0F22E0 mov cr4,eax
The only difference is the prefix list order, it changes from:
- 0x66, 0x2E, 0x67
to
- 0x2E, 0x66, 0x67
Cc: Eric Dong <eric.dong@intel.com>
Cc: Michael D Kinney <michael.d.kinney@intel.com>
Ref: https://bugzilla.tianocore.org/show_bug.cgi?id=866
Contributed-under: TianoCore Contribution Agreement 1.1
Signed-off-by: Laszlo Ersek <lersek@redhat.com>
Reviewed-by: Liming Gao <liming.gao@intel.com>
"mXdSupported" is a global BOOLEAN variable, initialized to TRUE. The
CheckFeatureSupported() function is executed on all processors (not
concurrently though), called from SmmInitHandler(). If XD support is found
to be missing on any CPU, then "mXdSupported" is set to FALSE, and further
processors omit the check. Afterwards, "mXdSupported" is read by several
assembly and C code locations.
The tricky part is *where* "mXdSupported" is allocated (defined):
- Before commit 717fb60443 ("UefiCpuPkg/PiSmmCpuDxeSmm: Add paging
protection.", 2016-11-17), it used to be a normal global variable,
defined (allocated) in "SmmProfile.c".
- With said commit, we moved the definition (allocation) of "mXdSupported"
into "SmiEntry.nasm". The variable was defined over the last byte of a
"mov al, 1" instruction, so that setting it to FALSE in
CheckFeatureSupported() would patch the instruction to "mov al, 0". The
subsequent conditional jump would change behavior, plus all further read
references to "mXdSupported" (in C and assembly code) would read back
the source (imm8) operand of the patched MOV instruction as data.
This trick required that the MOV instruction be encoded with DB.
In order to get rid of the DB, we have to split both roles: we need a
label for the code patching, and "mXdSupported" has to be defined
(allocated) independently of the code patching. Of course, their values
must always remain in sync.
(1) Reinstate the "mXdSupported" definition and initialization in
"SmmProfile.c" from before commit 717fb60443. Change the assembly
language definition ("global") to a declaration ("extern").
(2) Define the "gPatchXdSupported" label (type X86_ASSEMBLY_PATCH_LABEL)
in "SmiEntry.nasm", and add the C-language declaration to
"SmmProfileInternal.h". Replace the DB with the MOV mnemonic (keeping
the imm8 source operand with value 1).
(3) In CheckFeatureSupported(), whenever "mXdSupported" is set to FALSE,
patch the assembly code in sync, with PatchInstructionX86().
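A one-line sketch of step (3)'s patching call (value FALSE and size 1
follow from the imm8 operand described above):

  PatchInstructionX86 (gPatchXdSupported, FALSE, 1);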
Cc: Eric Dong <eric.dong@intel.com>
Cc: Michael D Kinney <michael.d.kinney@intel.com>
Ref: https://bugzilla.tianocore.org/show_bug.cgi?id=866
Contributed-under: TianoCore Contribution Agreement 1.1
Signed-off-by: Laszlo Ersek <lersek@redhat.com>
Reviewed-by: Liming Gao <liming.gao@intel.com>
Rename the variable to "gPatchSmiCr3" so that its association with
PatchInstructionX86() is clear from the declaration, change its type to
X86_ASSEMBLY_PATCH_LABEL, and patch it with PatchInstructionX86(). This
lets us remove the binary (DB) encoding of some instructions in
"SmiEntry.nasm".
Cc: Eric Dong <eric.dong@intel.com>
Cc: Michael D Kinney <michael.d.kinney@intel.com>
Ref: https://bugzilla.tianocore.org/show_bug.cgi?id=866
Contributed-under: TianoCore Contribution Agreement 1.1
Signed-off-by: Laszlo Ersek <lersek@redhat.com>
Reviewed-by: Liming Gao <liming.gao@intel.com>
Rename the variable to "gPatchSmiStack" so that its association with
PatchInstructionX86() is clear from the declaration. Also change its type
to X86_ASSEMBLY_PATCH_LABEL.
Unlike "gSmbase" in the previous patch, "gSmiStack"'s patched value is
also de-referenced by C code (in other words, it is read back after
patching): the InstallSmiHandler() function stores "CpuIndex" to the given
CPU's SMI stack through "gSmiStack". Introduce the local variable
"CpuSmiStack" in InstallSmiHandler() for calculating the stack location
separately, then use this variable for both patching into the assembly
code, and for storing "CpuIndex" through it.
It's assumed that "volatile" stood in the declaration of "gSmiStack"
because we used to read "gSmiStack" back for de-referencing; with that use
gone, we can remove "volatile" too. (Note that the *target* of the pointer
was never volatile-qualified.)
Finally, replace the binary (DB) encoding of "mov esp, imm32" in
"SmiEntry.nasm".
Cc: Eric Dong <eric.dong@intel.com>
Cc: Michael D Kinney <michael.d.kinney@intel.com>
Ref: https://bugzilla.tianocore.org/show_bug.cgi?id=866
Contributed-under: TianoCore Contribution Agreement 1.1
Signed-off-by: Laszlo Ersek <lersek@redhat.com>
Reviewed-by: Liming Gao <liming.gao@intel.com>
Rename the variable to "gPatchSmbase" so that its association with
PatchInstructionX86() is clear from the declaration, change its type to
X86_ASSEMBLY_PATCH_LABEL, and patch it with PatchInstructionX86(). This
lets us remove the binary (DB) encoding of some instructions in
"SmiEntry.nasm".
Cc: Eric Dong <eric.dong@intel.com>
Cc: Michael D Kinney <michael.d.kinney@intel.com>
Ref: https://bugzilla.tianocore.org/show_bug.cgi?id=866
Contributed-under: TianoCore Contribution Agreement 1.1
Signed-off-by: Laszlo Ersek <lersek@redhat.com>
Reviewed-by: Liming Gao <liming.gao@intel.com>
If PcdDxeNxMemoryProtectionPolicy is set to enable protection for
memory of type EfiBootServicesCode and EfiConventionalMemory, the BIOS
will hang at a page fault exception triggered by PiSmmCpuDxeSmm.
The root cause is that PiSmmCpuDxeSmm accesses the default SMM RAM
starting at 0x30000, which is marked as non-executable, but the NX
feature was not enabled during SMM initialization. Accessing memory
which has invalid attributes set causes a page fault exception. This
patch fixes it by checking the NX capability in CPUID and enabling NXE
in the EFER MSR if it's available.
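A hedged C sketch of the fix (CPUID leaf 0x80000001 EDX bit 20 reports
NX; EFER is MSR 0xC0000080, NXE is bit 11):

  UINT32  RegEdx;

  AsmCpuid (0x80000001, NULL, NULL, NULL, &RegEdx);
  if ((RegEdx & BIT20) != 0) {
    // set EFER.NXE so NX page-table attributes are honored
    AsmWriteMsr64 (0xC0000080, AsmReadMsr64 (0xC0000080) | BIT11);
  }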
Cc: Jiewen Yao <jiewen.yao@intel.com>
Cc: Ruiyu Ni <ruiyu.ni@intel.com>
Cc: Eric Dong <eric.dong@intel.com>
Cc: Laszlo Ersek <lersek@redhat.com>
Contributed-under: TianoCore Contribution Agreement 1.1
Signed-off-by: Jian J Wang <jian.j.wang@intel.com>
Reviewed-by: Eric Dong <eric.dong@intel.com>
https://bugzilla.tianocore.org/show_bug.cgi?id=849
In V2, use "mov rax, strict qword 0" to replace the hard-coded db.
1. Use the lea instruction to get the address instead of the mov
instruction.
2. Use a dummy address as the jmp destination, and add logic to fix up
the address to the absolute address at boot time.
3. In MpFuncs.nasm, use ExchangeInfo to record
InitializeFloatingPointUnits.
This approach is the same as in MpInitLib.
Contributed-under: TianoCore Contribution Agreement 1.1
Signed-off-by: Liming Gao <liming.gao@intel.com>
Cc: Andrew Fish <afish@apple.com>
Cc: Jiewen Yao <jiewen.yao@intel.com>
Cc: Eric Dong <eric.dong@intel.com>
Cc: Laszlo Ersek <lersek@redhat.com>
Cc: Michael Kinney <michael.d.kinney@intel.com>
Reviewed-by: Jiewen Yao <jiewen.yao@intel.com>
When StackGuard is enabled on IA32, a #DF (double fault) exception is
reported instead of a #PF (page fault).
This issue does not exist on X64, or on IA32 without StackGuard.
The fix at e4435f710c was incomplete.
The problem is that AllocateCodePages() is used to allocate the buffer
for the GDT and TSS, so those code pages are set to read-only in
SetMemMapAttributes(). But IA32 Stack Guard needs to use a task switch
to switch stacks, which requires writing to the GDT and TSS, so
AllocateCodePages() cannot be used.
This patch uses AllocatePages() instead of AllocateCodePages() to
allocate the buffer for the GDT and TSS if StackGuard is enabled on
IA32.
Cc: Jiewen Yao <jiewen.yao@intel.com>
Cc: Jian J Wang <jian.j.wang@intel.com>
Cc: Eric Dong <eric.dong@intel.com>
Cc: Laszlo Ersek <lersek@redhat.com>
Contributed-under: TianoCore Contribution Agreement 1.1
Signed-off-by: Star Zeng <star.zeng@intel.com>
Reviewed-by: Jiewen Yao <jiewen.yao@intel.com>
SMM profile and static paging cannot be enabled at the same time; this
patch adds a check and comments to make sure of that.
Similar comments are also added for the case of static paging and
heap guard for SMM.
Cc: Jiewen Yao <jiewen.yao@intel.com>
Cc: Eric Dong <eric.dong@intel.com>
Cc: Ruiyu Ni <ruiyu.ni@intel.com>
Cc: Jian J Wang <jian.j.wang@intel.com>
Cc: Laszlo Ersek <lersek@redhat.com>
Contributed-under: TianoCore Contribution Agreement 1.1
Signed-off-by: Star Zeng <star.zeng@intel.com>
Reviewed-by: Eric Dong <eric.dong@intel.com>
Reviewed-by: Laszlo Ersek <lersek@redhat.com>
Only call DumpCpuContext() in the error case; otherwise there will be
too many debug messages from DumpCpuContext() when the SmmProfile
feature is enabled by setting PcdCpuSmmProfileEnable to TRUE. Those
debug messages are not needed for the SmmProfile feature, as it records
the information to a buffer for further dumping.
Cc: Jiewen Yao <jiewen.yao@intel.com>
Cc: Eric Dong <eric.dong@intel.com>
Cc: Laszlo Ersek <lersek@redhat.com>
Cc: Ruiyu Ni <ruiyu.ni@intel.com>
Contributed-under: TianoCore Contribution Agreement 1.1
Signed-off-by: Star Zeng <star.zeng@intel.com>
Reviewed-by: Jiewen Yao <jiewen.yao@intel.com>
Reviewed-by: Eric Dong <eric.dong@intel.com>
Acked-by: Laszlo Ersek <lersek@redhat.com>
Heap guard makes use of the paging mechanism to implement its
functionality, but there is no protocol or library available to change
page attributes in SMM mode. A new protocol,
gEdkiiSmmMemoryAttributeProtocolGuid, is introduced to make that
possible. This protocol provides three interfaces:
  struct _EDKII_SMM_MEMORY_ATTRIBUTE_PROTOCOL {
    EDKII_SMM_GET_MEMORY_ATTRIBUTES    GetMemoryAttributes;
    EDKII_SMM_SET_MEMORY_ATTRIBUTES    SetMemoryAttributes;
    EDKII_SMM_CLEAR_MEMORY_ATTRIBUTES  ClearMemoryAttributes;
  };
Since the heap guard feature needs to update page attributes, the page
table should not be set to read-only if heap guard is enabled for SMM
mode; otherwise this feature cannot work.
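A hedged usage sketch: heap guard can un-map a guard page through the
new protocol (in this driver, EFI_MEMORY_RP clears the page's present
bit; GuardPageBase is a hypothetical address):

  Status = SmmMemoryAttribute->SetMemoryAttributes (
             SmmMemoryAttribute,
             GuardPageBase,
             SIZE_4KB,
             EFI_MEMORY_RP
             );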
Cc: Eric Dong <eric.dong@intel.com>
Cc: Jiewen Yao <jiewen.yao@intel.com>
Cc: Laszlo Ersek <lersek@redhat.com>
Cc: Ruiyu Ni <ruiyu.ni@intel.com>
Suggested-by: Ayellet Wolman <ayellet.wolman@intel.com>
Contributed-under: TianoCore Contribution Agreement 1.1
Signed-off-by: Jian J Wang <jian.j.wang@intel.com>
Reviewed-by: Jiewen Yao <jiewen.yao@intel.com>
Regression-tested-by: Laszlo Ersek <lersek@redhat.com>
The mechanism behind this is the same as the NULL pointer detection
enabled in the EDK II core. SMM has its own page table, and we have to
disable page 0 again in SMM mode.
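A minimal sketch of the idea (the PCD bit meaning is assumed;
SmmSetMemoryAttributes() is the driver's internal helper):

  if ((PcdGet8 (PcdNullPointerDetectionPropertyMask) & BIT1) != 0) {
    //
    // Mark page 0 not-present in the SMM page table so that NULL
    // dereferences also fault inside SMM.
    //
    SmmSetMemoryAttributes (0, SIZE_4KB, EFI_MEMORY_RP);
  }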
Cc: Star Zeng <star.zeng@intel.com>
Cc: Eric Dong <eric.dong@intel.com>
Cc: Jiewen Yao <jiewen.yao@intel.com>
Cc: Michael Kinney <michael.d.kinney@intel.com>
Cc: Ayellet Wolman <ayellet.wolman@intel.com>
Suggested-by: Ayellet Wolman <ayellet.wolman@intel.com>
Contributed-under: TianoCore Contribution Agreement 1.1
Signed-off-by: Jian J Wang <jian.j.wang@intel.com>
Reviewed-by: Star Zeng <star.zeng@intel.com>
Reviewed-by: Eric Dong <eric.dong@intel.com>