Verifying the microcode signature is an enhancement, not a requirement of the
IA32 SDM. This update dumps a debug message instead of calling ASSERT() if the
updated microcode signature does not match the loaded microcode signature.
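A minimal sketch of the intended behavior (the variable names below are
illustrative, not the exact ones used in the driver):

  //
  // Before: a mismatch halted DEBUG builds via ASSERT().
  // After: report the mismatch and continue, since signature
  // verification is only an enhancement.
  //
  if (CurrentSignature != LoadedSignature) {
    DEBUG ((DEBUG_ERROR,
      "Updated microcode signature 0x%08x does not match loaded microcode signature 0x%08x\n",
      CurrentSignature, LoadedSignature));
  }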
Cc: Michael Kinney <michael.d.kinney@intel.com>
Cc: Feng Tian <feng.tian@intel.com>
Cc: Giri P Mudusuru <giri.p.mudusuru@intel.com>
Contributed-under: TianoCore Contribution Agreement 1.0
Signed-off-by: Jeff Fan <jeff.fan@intel.com>
Reviewed-by: Giri P Mudusuru <giri.p.mudusuru@intel.com>
In practice, there is only one microcode region on the platform. If microcode
has been loaded, its signature will not be zero, which means it was loaded
successfully. We do not need to check the microcode region and load the
microcode again. This update skips checking/loading the microcode if the
current microcode signature is not zero.
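A hedged sketch of the early-exit check, using MdePkg BaseLib helpers (the
surrounding logic is simplified):

  //
  // Read the current microcode signature from IA32_BIOS_SIGN_ID (MSR 0x8B).
  // Per the SDM, write 0 to the MSR and execute CPUID(1) first so the
  // signature field is refreshed.
  //
  AsmWriteMsr64 (0x8B, 0);
  AsmCpuid (1, NULL, NULL, NULL, NULL);
  CurrentRevision = (UINT32) RShiftU64 (AsmReadMsr64 (0x8B), 32);
  if (CurrentRevision != 0) {
    //
    // Microcode was already loaded successfully; skip checking the
    // microcode region and loading it again.
    //
    return;
  }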
Cc: Michael Kinney <michael.d.kinney@intel.com>
Cc: Feng Tian <feng.tian@intel.com>
Cc: Giri P Mudusuru <giri.p.mudusuru@intel.com>
Contributed-under: TianoCore Contribution Agreement 1.0
Signed-off-by: Jeff Fan <jeff.fan@intel.com>
Reviewed-by: Giri P Mudusuru <giri.p.mudusuru@intel.com>
- becasue to because
Cc: Jeff Fan <jeff.fan@intel.com>
Contributed-under: TianoCore Contribution Agreement 1.0
Signed-off-by: Giri P Mudusuru <giri.p.mudusuru@intel.com>
Reviewed-by: Jeff Fan <jeff.fan@intel.com>
Cc: Jiewen Yao <jiewen.yao@intel.com>
Cc: Jeff Fan <jeff.fan@intel.com>
Contributed-under: TianoCore Contribution Agreement 1.0
Signed-off-by: Star Zeng <star.zeng@intel.com>
Reviewed-by: Jiewen Yao <jiewen.yao@intel.com>
Reviewed-by: Jeff Fan <jeff.fan@intel.com>
A number of code locations use
ASSERT_EFI_ERROR (BooleanExpression)
instead of
ASSERT (BooleanExpression)
Fix them.
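For context, ASSERT_EFI_ERROR() takes an EFI_STATUS and asserts that it is not
an error code, while ASSERT() takes a plain BOOLEAN expression, so the two are
not interchangeable. A minimal illustration (SomeFunction is a placeholder):

  //
  // Wrong: the BOOLEAN is silently treated as an EFI_STATUS.
  //   ASSERT_EFI_ERROR (Pointer != NULL);
  //
  // Right:
  //
  ASSERT (Pointer != NULL);

  Status = SomeFunction ();
  ASSERT_EFI_ERROR (Status);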
Cc: Jeff Fan <jeff.fan@intel.com>
Reported-by: Gerd Hoffmann <kraxel@redhat.com>
Contributed-under: TianoCore Contribution Agreement 1.0
Signed-off-by: Laszlo Ersek <lersek@redhat.com>
Reviewed-by: Jeff Fan <jeff.fan@intel.com>
Reviewed-by: Giri P Mudusuru <giri.p.mudusuru@intel.com>
Reviewed-by: Michael Kinney <michael.d.kinney@intel.com>
Use 16-bit and 32-bit assembly code to replace the hard-coded db bytes.
In V2: add 0x67 prefixes to far jumps
Without the a32 modifier under FLAT32_JUMP, and the a16 modifier under
LONG_JUMP, nasm doesn't generate the 0x67 prefixes, and the far jumps
don't work. (For the former, KVM returns an emulation failure. For the
latter, KVM performs a triple fault (guest reboot).) By forcing the 0x67
prefixes we end up with the same machine code as the one open-coded in
"MpFuncs.asm".
This bug breaks S3 resume in the Ia32X64 + SMM_REQUIRE build of OVMF.
Contributed-under: TianoCore Contribution Agreement 1.0
Signed-off-by: Liming Gao <liming.gao@intel.com>
Signed-off-by: Laszlo Ersek <lersek@redhat.com>
The BaseTools/Scripts/ConvertMasmToNasm.py script was used to convert
X64/MpFuncs.asm to X64/MpFuncs.nasm. The result was then manually updated to
pass the build.
Contributed-under: TianoCore Contribution Agreement 1.0
Signed-off-by: Liming Gao <liming.gao@intel.com>
Use 16-bit assembly code to replace the hard-coded db bytes.
In V2:
Add 0x67 prefix to far jump
When we enter protected mode, with the far jump still in big real mode,
the JMP instruction not only needs the 0x66 prefix (for 32-bit operand
size), but also the 0x67 prefix (for 32-bit address size). Use the a32
nasm modifier to enforce this.
This bug breaks S3 resume in the Ia32 + SMM_REQUIRE build of OVMF.
Contributed-under: TianoCore Contribution Agreement 1.0
Signed-off-by: Liming Gao <liming.gao@intel.com>
Signed-off-by: Laszlo Ersek <lersek@redhat.com>
The BaseTools/Scripts/ConvertMasmToNasm.py script was used to convert
Ia32/MpFuncs.asm to Ia32/MpFuncs.nasm.
The result was then manually updated to pass the build.
Contributed-under: TianoCore Contribution Agreement 1.0
Signed-off-by: Liming Gao <liming.gao@intel.com>
The BaseTools/Scripts/ConvertMasmToNasm.py script was used to convert
X64/AsmFuncs.asm to X64/AsmFuncs.nasm.
The o16 prefix was then added manually to specify 16-bit operations.
Contributed-under: TianoCore Contribution Agreement 1.0
Signed-off-by: Liming Gao <liming.gao@intel.com>
The BaseTools/Scripts/ConvertMasmToNasm.py script was used to convert
Ia32/AsmFuncs.asm to Ia32/AsmFuncs.nasm.
The o16 prefix was then added manually to specify 16-bit operations.
Contributed-under: TianoCore Contribution Agreement 1.0
Signed-off-by: Liming Gao <liming.gao@intel.com>
The BaseTools/Scripts/ConvertMasmToNasm.py script was used to convert
X64/ExceptionHandlerAsm.asm to X64/ExceptionHandlerAsm.nasm.
The resulting .nasm file was then manually updated to pass the build.
Contributed-under: TianoCore Contribution Agreement 1.0
Signed-off-by: Liming Gao <liming.gao@intel.com>
The BaseTools/Scripts/ConvertMasmToNasm.py script was used to convert
Ia32/ExceptionHandlerAsm.asm to Ia32/ExceptionHandlerAsm.nasm.
The resulting .nasm file was then manually updated to pass the build.
Contributed-under: TianoCore Contribution Agreement 1.0
Signed-off-by: Liming Gao <liming.gao@intel.com>
The BaseTools/Scripts/ConvertMasmToNasm.py script was used to convert
X64/InitializeFpu.asm to X64/InitializeFpu.nasm.
The .rdata section was then added manually.
Contributed-under: TianoCore Contribution Agreement 1.0
Signed-off-by: Liming Gao <liming.gao@intel.com>
The BaseTools/Scripts/ConvertMasmToNasm.py script was used to convert
Ia32/InitializeFpu.asm to Ia32/InitializeFpu.nasm.
The .rdata section was then added manually.
Contributed-under: TianoCore Contribution Agreement 1.0
Signed-off-by: Liming Gao <liming.gao@intel.com>
The BaseTools/Scripts/ConvertMasmToNasm.py script was used to convert
Ia32/CpuAsm.asm to Ia32/CpuAsm.nasm.
Contributed-under: TianoCore Contribution Agreement 1.0
Signed-off-by: Liming Gao <liming.gao@intel.com>
This patch adds the NORETURN attribute to the function that transfers
to the PEI phase, along with an UNREACHABLE() call at the end to
avoid false warnings.
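A minimal sketch of the pattern (the function name is illustrative; NORETURN
and UNREACHABLE() come from MdePkg's Base.h):

  NORETURN
  VOID
  EFIAPI
  TransferToPeiPhase (
    VOID
    )
  {
    //
    // ... hand control to the PEI phase; execution never returns ...
    //
    // Tell the compiler the end of the function cannot be reached,
    // avoiding false "noreturn function returns" warnings.
    //
    UNREACHABLE ();
  }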
Contributed-under: TianoCore Contribution Agreement 1.0
Signed-off-by: Marvin Haeuser <Marvin.Haeuser@outlook.com>
Reviewed-by: Liming Gao <liming.gao@intel.com>
Currently, if the memory length to be programmed is less than the remaining size
supported by one fixed MTRR, RETURN_UNSUPPORTED is returned. This is not correct.
It is a regression introduced at 07e8892090, when ProgramFixedMtrr() was updated
to remove the loop that calculated the fixed-MTRR mask.
This fix calculates the right offset within the fixed MTRR in addition to the
left offset, so a length smaller than the remaining size supported by the fixed
MTRR can be programmed.
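An illustrative sketch of the left/right offset handling (the variable names
are simplified and are not the exact ones in MtrrLib):

  //
  // Each fixed-range MTRR holds eight type bytes, one per sub-range of
  // SubLength bytes.  LeftByteShift counts the sub-ranges below Base;
  // RightByteShift counts the sub-ranges above Base + Length, so a
  // length smaller than the MTRR's remaining size is still accepted.
  //
  LeftByteShift  = (UINT32) ((Base - RangeBase) / SubLength);
  RightByteShift = 8 - LeftByteShift - (UINT32) (Length / SubLength);

  //
  // Build a mask covering only bytes [LeftByteShift, 8 - RightByteShift)
  // and replicate the cache type into those bytes.
  //
  ClearMask = LShiftU64 (
                RShiftU64 (MAX_UINT64, (LeftByteShift + RightByteShift) * 8),
                LeftByteShift * 8
                );
  OrMask    = MultU64x32 (DivU64x32 (ClearMask, 0xFF), (UINT32) Type);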
Cc: Eric Dong <eric.dong@intel.com>
Cc: Feng Tian <feng.tian@intel.com>
Cc: Michael Kinney <michael.d.kinney@intel.com>
Contributed-under: TianoCore Contribution Agreement 1.0
Signed-off-by: Jeff Fan <jeff.fan@intel.com>
Reviewed-by: Eric Dong <eric.dong@intel.com>
This Feature PCD causes PiSmmCpuDxe to catch SMM stack overflow at
runtime, log a clear error message, and enter a CPU dead loop.
Compared to the chaotic and catastrophic consequences of the stack leaking
into, and corrupting, the SMM page table, a stack guard that is enabled by
default is vastly superior.
We should not require sane platforms to explicitly opt in to this
safeguard; instead, we should require platforms that prefer to live
dangerously to opt out of it.
Stack overflow in SMM might even give rise to security vulnerabilities.
Cc: Jeff Fan <jeff.fan@intel.com>
Cc: Jiewen Yao <jiewen.yao@intel.com>
Cc: Jordan Justen <jordan.l.justen@intel.com>
Cc: Michael D Kinney <michael.d.kinney@intel.com>
Ref: http://thread.gmane.org/gmane.comp.bios.edk2.devel/12864
Ref: https://bugzilla.redhat.com/show_bug.cgi?id=1341733
Contributed-under: TianoCore Contribution Agreement 1.0
Signed-off-by: Laszlo Ersek <lersek@redhat.com>
Reviewed-by: Jiewen Yao <jiewen.yao@intel.com>
Reviewed-by: Jeff Fan <jeff.fan@intel.com>
Reviewed-by: Jordan Justen <jordan.l.justen@intel.com>
This module can be linked by the CpuMpPei driver to handle the reserved vector
list and to provide a spin lock for the BSP/APs to prevent dump messages from
being corrupted.
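A hedged sketch of the serialization this enables (the lock name is
illustrative):

  //
  // Serialize exception dump output so messages from the BSP and APs
  // do not interleave.
  //
  AcquireSpinLock (&mDisplayMessageSpinLock);
  DEBUG ((DEBUG_ERROR, "!!!! Exception Type - %x on CPU %d !!!!\n",
    ExceptionType, CpuIndex));
  //
  // ... dump the register context here ...
  //
  ReleaseSpinLock (&mDisplayMessageSpinLock);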
Cc: Michael Kinney <michael.d.kinney@intel.com>
Cc: Jiewen Yao <jiewen.yao@intel.com>
Cc: Feng Tian <feng.tian@intel.com>
Contributed-under: TianoCore Contribution Agreement 1.0
Signed-off-by: Jeff Fan <jeff.fan@intel.com>
Reviewed-by: Feng Tian <feng.tian@intel.com>
Move some global variables from PeiDxeSmmCpuException.c to DxeCpuException.c
and SmmCpuException.c, and remove some unused global variables.
Cc: Michael Kinney <michael.d.kinney@intel.com>
Cc: Jiewen Yao <jiewen.yao@intel.com>
Cc: Feng Tian <feng.tian@intel.com>
Contributed-under: TianoCore Contribution Agreement 1.0
Signed-off-by: Jeff Fan <jeff.fan@intel.com>
Reviewed-by: Feng Tian <feng.tian@intel.com>
Rename DxeSmmCpuException.c to PeiDxeSmmCpuException.c, which will be used by
PeiCpuExceptionHandlerLib.
Cc: Michael Kinney <michael.d.kinney@intel.com>
Cc: Jiewen Yao <jiewen.yao@intel.com>
Cc: Feng Tian <feng.tian@intel.com>
Contributed-under: TianoCore Contribution Agreement 1.0
Signed-off-by: Jeff Fan <jeff.fan@intel.com>
Reviewed-by: Feng Tian <feng.tian@intel.com>
Update the MSR semaphores to the ones in the allocated aligned semaphore
buffer. If there are not enough MSR semaphores, allocate one more page.
Cc: Michael Kinney <michael.d.kinney@intel.com>
Cc: Feng Tian <feng.tian@intel.com>
Contributed-under: TianoCore Contribution Agreement 1.0
Signed-off-by: Jeff Fan <jeff.fan@intel.com>
Reviewed-by: Feng Tian <feng.tian@intel.com>
Reviewed-by: Michael Kinney <michael.d.kinney@intel.com>
Regression-tested-by: Laszlo Ersek <lersek@redhat.com>
Update each CPU's semaphores to the ones in the allocated aligned semaphore
buffer.
Cc: Michael Kinney <michael.d.kinney@intel.com>
Cc: Feng Tian <feng.tian@intel.com>
Contributed-under: TianoCore Contribution Agreement 1.0
Signed-off-by: Jeff Fan <jeff.fan@intel.com>
Reviewed-by: Feng Tian <feng.tian@intel.com>
Reviewed-by: Michael Kinney <michael.d.kinney@intel.com>
Regression-tested-by: Laszlo Ersek <lersek@redhat.com>
Allocate each CPU's semaphores in the allocated aligned semaphore buffer, and
add them to the semaphores structure.
Cc: Michael Kinney <michael.d.kinney@intel.com>
Cc: Feng Tian <feng.tian@intel.com>
Contributed-under: TianoCore Contribution Agreement 1.0
Signed-off-by: Jeff Fan <jeff.fan@intel.com>
Reviewed-by: Feng Tian <feng.tian@intel.com>
Reviewed-by: Michael Kinney <michael.d.kinney@intel.com>
Regression-tested-by: Laszlo Ersek <lersek@redhat.com>
Update all global semaphores to the ones in the allocated aligned semaphore
buffer.
Cc: Michael Kinney <michael.d.kinney@intel.com>
Cc: Feng Tian <feng.tian@intel.com>
Contributed-under: TianoCore Contribution Agreement 1.0
Signed-off-by: Jeff Fan <jeff.fan@intel.com>
Reviewed-by: Feng Tian <feng.tian@intel.com>
Reviewed-by: Michael Kinney <michael.d.kinney@intel.com>
Regression-tested-by: Laszlo Ersek <lersek@redhat.com>
Move the MP sync data initialization before the page table initialization,
because the page fault spin lock is allocated in InitializeMpSyncData() but
initialized in SmmInitPageTable().
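A simplified sketch of the resulting call order (other setup in between is
omitted):

  //
  // InitializeMpSyncData() allocates the page fault spin lock storage;
  // SmmInitPageTable() initializes that lock, so it must run afterwards.
  //
  InitializeMpSyncData ();
  Cr3 = SmmInitPageTable ();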
Cc: Michael Kinney <michael.d.kinney@intel.com>
Cc: Feng Tian <feng.tian@intel.com>
Contributed-under: TianoCore Contribution Agreement 1.0
Signed-off-by: Jeff Fan <jeff.fan@intel.com>
Reviewed-by: Feng Tian <feng.tian@intel.com>
Reviewed-by: Michael Kinney <michael.d.kinney@intel.com>
Regression-tested-by: Laszlo Ersek <lersek@redhat.com>
Get the semaphore alignment/size requirement and allocate an aligned buffer
for all global spin locks and semaphores.
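A hedged sketch of the allocation (SemaphoreBlock and SemaphoreCount are
illustrative names):

  //
  // Query the processor's optimal spin-lock/semaphore stride, round the
  // total up to whole pages, and zero the buffer; page allocations are
  // already aligned well beyond a cache line.
  //
  SemaphoreSize  = GetSpinLockProperties ();
  TotalSize      = SemaphoreCount * SemaphoreSize;
  SemaphoreBlock = AllocatePages (EFI_SIZE_TO_PAGES (TotalSize));
  ASSERT (SemaphoreBlock != NULL);
  ZeroMem (SemaphoreBlock, TotalSize);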
Cc: Michael Kinney <michael.d.kinney@intel.com>
Cc: Feng Tian <feng.tian@intel.com>
Contributed-under: TianoCore Contribution Agreement 1.0
Signed-off-by: Jeff Fan <jeff.fan@intel.com>
Reviewed-by: Feng Tian <feng.tian@intel.com>
Reviewed-by: Michael Kinney <michael.d.kinney@intel.com>
Regression-tested-by: Laszlo Ersek <lersek@redhat.com>