NASM introduced FXSAVE / FXRSTOR support in commit 900fa5b26b8f ("NASM
0.98p3-hpa", 2002-04-30), which corresponds to the nasm-0.98p3-hpa
release.
NASM introduced FXSAVE64 / FXRSTOR64 support in commit 3a014348ca15
("insns: add FXSAVE64/FXRSTOR64, drop np prefix", 2010-07-07), which was
part of the "nasm-2.09" release.
Edk2 requires nasm-2.10 or later for use with the GCC toolchain family,
and nasm-2.12.01 or later for use with all other toolchain families.
Replace the binary encoding of the FXSAVE(64)/FXRSTOR(64) instructions
with mnemonics.
I verified that the "Ia32/SmiException.obj", "X64/SmiEntry.obj" and
"X64/SmiException.obj" files are rebuilt after this patch, without any
change in content.
This patch removes the last instructions encoded with DBs from
PiSmmCpuDxeSmm.
Cc: Eric Dong <eric.dong@intel.com>
Cc: Michael D Kinney <michael.d.kinney@intel.com>
Ref: https://bugzilla.tianocore.org/show_bug.cgi?id=866
Contributed-under: TianoCore Contribution Agreement 1.1
Signed-off-by: Laszlo Ersek <lersek@redhat.com>
Reviewed-by: Liming Gao <liming.gao@intel.com>
(1) SmmRelocationSemaphoreComplete32() runs in 32-bit mode, so wrap it in
a (BITS 32 ... BITS 64) bracket.
(2) SmmRelocationSemaphoreComplete32() currently compiles to:
> 000002AE C6050000000001 mov byte [dword 0x0],0x1
> 000002B5 FF2500000000 jmp dword [dword 0x0]
where the first instruction is patched with the contents of
"mRebasedFlag" (so that (*mRebasedFlag) is set to 1), and the second
instruction is patched with the address of
"mSmmRelocationOriginalAddress" (so that we jump to
"mSmmRelocationOriginalAddress").
In its current form the first instruction could not be patched with
PatchInstructionX86(), given that the operand to patch is not encoded
in the trailing bytes of the instruction. Therefore, adopt an
EAX-based version, inspired by both the IA32 and X64 variants of
SmmRelocationSemaphoreComplete():
> 000002AE 50 push eax
> 000002AF B800000000 mov eax,0x0
> 000002B4 C60001 mov byte [eax],0x1
> 000002B7 58 pop eax
> 000002B8 FF2500000000 jmp dword [dword 0x0]
Here both instructions can be patched with PatchInstructionX86(), and
the DBs can be replaced with native NASM syntax.
(3) Turn the "mRebasedFlagAddr32" and "mSmmRelocationOriginalAddressPtr32"
variables into markers that suit PatchInstructionX86().
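For illustration only, the C-side patching might then look like the sketch
below (the helper name and the variable types are assumptions; PatchInstructionX86()
and X86_ASSEMBLY_PATCH_LABEL are assumed to come from BaseLib):

  #include <Library/BaseLib.h>                       // PatchInstructionX86(), X86_ASSEMBLY_PATCH_LABEL (assumed)

  extern X86_ASSEMBLY_PATCH_LABEL  mRebasedFlagAddr32;
  extern X86_ASSEMBLY_PATCH_LABEL  mSmmRelocationOriginalAddressPtr32;
  extern BOOLEAN                   *mRebasedFlag;                   // type assumed
  extern UINTN                     mSmmRelocationOriginalAddress;   // type assumed

  VOID
  PatchSemaphoreComplete32 (                          // hypothetical helper
    VOID
    )
  {
    //
    // "mov eax, imm32" gets the value of the mRebasedFlag pointer;
    // "jmp dword [disp32]" gets the address of mSmmRelocationOriginalAddress.
    // Both patched operands are 4 bytes wide in 32-bit code.
    //
    PatchInstructionX86 (mRebasedFlagAddr32, (UINT64)(UINTN)mRebasedFlag, 4);
    PatchInstructionX86 (
      mSmmRelocationOriginalAddressPtr32,
      (UINT64)(UINTN)&mSmmRelocationOriginalAddress,
      4
      );
  }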
Cc: Eric Dong <eric.dong@intel.com>
Cc: Michael D Kinney <michael.d.kinney@intel.com>
Ref: https://bugzilla.tianocore.org/show_bug.cgi?id=866
Contributed-under: TianoCore Contribution Agreement 1.1
Signed-off-by: Laszlo Ersek <lersek@redhat.com>
Reviewed-by: Liming Gao <liming.gao@intel.com>
Rename the variable to "gPatchSmmInitStack" so that its association with
PatchInstructionX86() is clear from the declaration, change its type to
X86_ASSEMBLY_PATCH_LABEL, and patch it with PatchInstructionX86(). This
lets us remove the binary (DB) encoding of some instructions in
"SmmInit.nasm".
The size of the patched source operand is (sizeof (UINTN)).
Cc: Eric Dong <eric.dong@intel.com>
Cc: Michael D Kinney <michael.d.kinney@intel.com>
Ref: https://bugzilla.tianocore.org/show_bug.cgi?id=866
Contributed-under: TianoCore Contribution Agreement 1.1
Signed-off-by: Laszlo Ersek <lersek@redhat.com>
Reviewed-by: Liming Gao <liming.gao@intel.com>
The IA32 version of "SmmInit.nasm" does not need "gSmmJmpAddr" at all (its
PiSmmCpuSmmInitFixupAddress() variant doesn't do anything either). We can
simply use the NASM syntax for the following Mixed-Size Jump:
> jmp PROTECT_MODE_CS : dword @32bit
The generated object code for the instruction is unchanged:
> 00000182 66EA5A0000000800 jmp dword 0x8:0x5a
(The NASM manual explains that putting the DWORD prefix after the colon
":" reflects the intent better, since it is the offset that is a DWORD.
Thus, that's what I used. However, both syntaxes are interchangeable,
hence the ndisasm output.)
The X64 version of "SmmInit.nasm" appears to require "gSmmJmpAddr";
however that's accidental, not inherent:
- Bring LONG_MODE_CODE_SEGMENT from
"UefiCpuPkg/PiSmmCpuDxeSmm/PiSmmCpuDxeSmm.h" to "SmmInit.nasm" as
LONG_MODE_CS, same as PROTECT_MODE_CODE_SEGMENT was brought to the IA32
version as PROTECT_MODE_CS earlier.
- Apply the NASM-native Mixed-Size Jump syntax again, but jump to the
fixed zero offset in LONG_MODE_CS. This will produce no relocation
record at all. Add a label after the instruction.
- Modify PiSmmCpuSmmInitFixupAddress() to patch the jump target backwards
from the label. Because we modify the DWORD offset with a DWORD access,
the segment selector is unharmed in the instruction, and we need not set
it from PiCpuSmmEntry().
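As a rough C rendering of that last point (the real fixup may well remain in
the assembly file; the label names and the offset arithmetic below are
assumptions): the far jump ends with a 2-byte selector preceded by the
4-byte offset, so the offset begins 6 bytes before a label placed right
after the instruction.

  extern UINT8  gPatchLongModeOffsetLabel;   // label right after the far jump (hypothetical name)
  extern UINT8  gLongModeEntry;              // the @LongMode target (hypothetical name)

  VOID
  EFIAPI
  PiSmmCpuSmmInitFixupAddress (
    VOID
    )
  {
    UINT32  *JumpOffset;

    //
    // The DWORD offset field sits 6 bytes before the label (4 offset bytes
    // plus 2 selector bytes); a 4-byte write leaves the selector intact.
    //
    JumpOffset  = (UINT32 *)((UINTN)&gPatchLongModeOffsetLabel - 6);
    *JumpOffset = (UINT32)(UINTN)&gLongModeEntry;
  }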
According to "objdump --reloc", the X64 version undergoes only the
following relocations, after this patch:
> RELOCATION RECORDS FOR [.text]:
> OFFSET TYPE VALUE
> 0000000000000095 R_X86_64_PC32 SmmInitHandler-0x0000000000000004
> 00000000000000e0 R_X86_64_PC32 mRebasedFlag-0x0000000000000004
> 00000000000000ea R_X86_64_PC32 mSmmRelocationOriginalAddress-0x0000000000000004
Therefore the patch does not regress
<https://bugzilla.tianocore.org/show_bug.cgi?id=849> ("Enable XCODE5 tool
chain for UefiCpuPkg with nasm source code").
Cc: Eric Dong <eric.dong@intel.com>
Cc: Michael D Kinney <michael.d.kinney@intel.com>
Ref: https://bugzilla.tianocore.org/show_bug.cgi?id=866
Contributed-under: TianoCore Contribution Agreement 1.1
Signed-off-by: Laszlo Ersek <lersek@redhat.com>
Reviewed-by: Liming Gao <liming.gao@intel.com>
Like "gSmmCr4" in the previous patch, "gSmmCr0" is not only used for
machine code patching, but also as a means to communicate the initial CR0
value from SmmRelocateBases() to InitSmmS3ResumeState(). In other words,
the last four bytes of the "mov eax, Cr0Value" instruction's binary
representation are utilized as normal data too.
In order to get rid of the DB for "mov eax, Cr0Value", we have to split
both roles, patching and data flow. Introduce the "mSmmCr0" global (SMRAM)
variable for the data flow purpose. Rename the "gSmmCr0" variable to
"gPatchSmmCr0" so that its association with PatchInstructionX86() is clear
from the declaration, change its type to X86_ASSEMBLY_PATCH_LABEL, and
patch it with PatchInstructionX86(), to the value now contained in
"mSmmCr0".
This lets us remove the binary (DB) encoding of "mov eax, Cr0Value" in
"SmmInit.nasm".
Cc: Eric Dong <eric.dong@intel.com>
Cc: Michael D Kinney <michael.d.kinney@intel.com>
Ref: https://bugzilla.tianocore.org/show_bug.cgi?id=866
Contributed-under: TianoCore Contribution Agreement 1.1
Signed-off-by: Laszlo Ersek <lersek@redhat.com>
Reviewed-by: Liming Gao <liming.gao@intel.com>
Unlike "gSmmCr3" in the previous patch, "gSmmCr4" is not only used for
machine code patching, but also as a means to communicate the initial CR4
value from SmmRelocateBases() to InitSmmS3ResumeState(). In other words,
the last four bytes of the "mov eax, Cr4Value" instruction's binary
representation are utilized as normal data too.
In order to get rid of the DB for "mov eax, Cr4Value", we have to split
both roles, patching and data flow. Introduce the "mSmmCr4" global (SMRAM)
variable for the data flow purpose. Rename the "gSmmCr4" variable to
"gPatchSmmCr4" so that its association with PatchInstructionX86() is clear
from the declaration, change its type to X86_ASSEMBLY_PATCH_LABEL, and
patch it with PatchInstructionX86(), to the value now contained in
"mSmmCr4".
This lets us remove the binary (DB) encoding of "mov eax, Cr4Value" in
"SmmInit.nasm".
Cc: Eric Dong <eric.dong@intel.com>
Cc: Michael D Kinney <michael.d.kinney@intel.com>
Ref: https://bugzilla.tianocore.org/show_bug.cgi?id=866
Contributed-under: TianoCore Contribution Agreement 1.1
Signed-off-by: Laszlo Ersek <lersek@redhat.com>
Reviewed-by: Liming Gao <liming.gao@intel.com>
Rename the variable to "gPatchSmmCr3" so that its association with
PatchInstructionX86() is clear from the declaration, change its type to
X86_ASSEMBLY_PATCH_LABEL, and patch it with PatchInstructionX86(). This
lets us remove the binary (DB) encoding of some instructions in
"SmmInit.nasm".
Cc: Eric Dong <eric.dong@intel.com>
Cc: Michael D Kinney <michael.d.kinney@intel.com>
Ref: https://bugzilla.tianocore.org/show_bug.cgi?id=866
Contributed-under: TianoCore Contribution Agreement 1.1
Signed-off-by: Laszlo Ersek <lersek@redhat.com>
Reviewed-by: Liming Gao <liming.gao@intel.com>
(This patch is the 64-bit variant of commit e75ee97224,
"UefiCpuPkg/PiSmmCpuDxeSmm: remove unneeded DBs from IA32 SmmStartup()",
2018-01-31.)
The SmmStartup() function executes in SMM, which is very similar to real
mode. Add "BITS 16" before it and "BITS 64" after it (just before the
@LongMode label).
Remove the manual 0x66 operand-size override prefixes, for selecting
32-bit operands -- the sizes of our operands trigger NASM to insert the
prefixes automatically in almost every spot. The one place where we have
to add it back manually is the LGDT instruction. In the LGDT instruction
we also replace the binary 0x2E prefix with the normal NASM syntax for CS
segment override.
The stores to the Control Registers were always 32-bit wide; the source
code only used RAX as source operand because it generated the expected
object code (with NASM compiling the source as if in BITS 64). With BITS
16 added, we can use the actual register width in the source operands
(EAX).
This patch causes NASM to generate byte-identical object code (determined
by disassembling both the pre-patch and post-patch versions, and comparing
the listings), except:
> @@ -231,7 +231,7 @@
> 000001D2 6689D3 mov ebx,edx
> 000001D5 66B800000000 mov eax,0x0
> 000001DB 0F22D8 mov cr3,eax
> -000001DE 662E670F0155F6 o32 lgdt [cs:ebp-0xa]
> +000001DE 2E66670F0155F6 o32 lgdt [cs:ebp-0xa]
> 000001E5 66B800000000 mov eax,0x0
> 000001EB 80CC02 or ah,0x2
> 000001EE 0F22E0 mov cr4,eax
The only difference is the order of the prefix bytes; it changes from:
- 0x66, 0x2E, 0x67
to
- 0x2E, 0x66, 0x67
Cc: Eric Dong <eric.dong@intel.com>
Cc: Michael D Kinney <michael.d.kinney@intel.com>
Ref: https://bugzilla.tianocore.org/show_bug.cgi?id=866
Contributed-under: TianoCore Contribution Agreement 1.1
Signed-off-by: Laszlo Ersek <lersek@redhat.com>
Reviewed-by: Liming Gao <liming.gao@intel.com>
"mXdSupported" is a global BOOLEAN variable, initialized to TRUE. The
CheckFeatureSupported() function is executed on all processors (not
concurrently though), called from SmmInitHandler(). If XD support is found
to be missing on any CPU, then "mXdSupported" is set to FALSE, and further
processors omit the check. Afterwards, "mXdSupported" is read by several
assembly and C code locations.
The tricky part is *where* "mXdSupported" is allocated (defined):
- Before commit 717fb60443 ("UefiCpuPkg/PiSmmCpuDxeSmm: Add paging
protection.", 2016-11-17), it used to be a normal global variable,
defined (allocated) in "SmmProfile.c".
- With said commit, we moved the definition (allocation) of "mXdSupported"
into "SmiEntry.nasm". The variable was defined over the last byte of a
"mov al, 1" instruction, so that setting it to FALSE in
CheckFeatureSupported() would patch the instruction to "mov al, 0". The
subsequent conditional jump would change behavior, plus all further read
references to "mXdSupported" (in C and assembly code) would read back
the source (imm8) operand of the patched MOV instruction as data.
This trick required that the MOV instruction be encoded with DB.
In order to get rid of the DB, we have to split both roles: we need a
label for the code patching, and "mXdSupported" has to be defined
(allocated) independently of the code patching. Of course, their values
must always remain in sync.
(1) Reinstate the "mXdSupported" definition and initialization in
"SmmProfile.c" from before commit 717fb60443. Change the assembly
language definition ("global") to a declaration ("extern").
(2) Define the "gPatchXdSupported" label (type X86_ASSEMBLY_PATCH_LABEL)
in "SmiEntry.nasm", and add the C-language declaration to
"SmmProfileInternal.h". Replace the DB with the MOV mnemonic (keeping
the imm8 source operand with value 1).
(3) In CheckFeatureSupported(), whenever "mXdSupported" is set to FALSE,
patch the assembly code in sync, with PatchInstructionX86().
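A minimal sketch of step (3), assuming the BaseLib patching helper used
elsewhere in this series (the wrapper below is hypothetical; only the two
synchronized statements matter):

  extern X86_ASSEMBLY_PATCH_LABEL  gPatchXdSupported;   // ends the "mov al, imm8" in SmiEntry.nasm
  extern BOOLEAN                   mXdSupported;        // allocated in SmmProfile.c again

  VOID
  ExampleDisableXd (                                    // hypothetical stand-in for the relevant branch
    VOID
    )
  {
    mXdSupported = FALSE;
    PatchInstructionX86 (gPatchXdSupported, FALSE, 1);  // keep the imm8 operand in sync with the BOOLEAN
  }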
Cc: Eric Dong <eric.dong@intel.com>
Cc: Michael D Kinney <michael.d.kinney@intel.com>
Ref: https://bugzilla.tianocore.org/show_bug.cgi?id=866
Contributed-under: TianoCore Contribution Agreement 1.1
Signed-off-by: Laszlo Ersek <lersek@redhat.com>
Reviewed-by: Liming Gao <liming.gao@intel.com>
Rename the variable to "gPatchSmiCr3" so that its association with
PatchInstructionX86() is clear from the declaration, change its type to
X86_ASSEMBLY_PATCH_LABEL, and patch it with PatchInstructionX86(). This
lets us remove the binary (DB) encoding of some instructions in
"SmiEntry.nasm".
Cc: Eric Dong <eric.dong@intel.com>
Cc: Michael D Kinney <michael.d.kinney@intel.com>
Ref: https://bugzilla.tianocore.org/show_bug.cgi?id=866
Contributed-under: TianoCore Contribution Agreement 1.1
Signed-off-by: Laszlo Ersek <lersek@redhat.com>
Reviewed-by: Liming Gao <liming.gao@intel.com>
Rename the variable to "gPatchSmiStack" so that its association with
PatchInstructionX86() is clear from the declaration. Also change its type
to X86_ASSEMBLY_PATCH_LABEL.
Unlike "gSmbase" in the previous patch, "gSmiStack"'s patched value is
also de-referenced by C code (in other words, it is read back after
patching): the InstallSmiHandler() function stores "CpuIndex" to the given
CPU's SMI stack through "gSmiStack". Introduce the local variable
"CpuSmiStack" in InstallSmiHandler() for calculating the stack location
separately, then use this variable for both patching into the assembly
code, and for storing "CpuIndex" through it.
It's assumed that "volatile" stood in the declaration of "gSmiStack"
because we used to read "gSmiStack" back for de-referencing; with that use
gone, we can remove "volatile" too. (Note that the *target* of the pointer
was never volatile-qualified.)
Finally, replace the binary (DB) encoding of "mov esp, imm32" in
"SmiEntry.nasm".
Cc: Eric Dong <eric.dong@intel.com>
Cc: Michael D Kinney <michael.d.kinney@intel.com>
Ref: https://bugzilla.tianocore.org/show_bug.cgi?id=866
Contributed-under: TianoCore Contribution Agreement 1.1
Signed-off-by: Laszlo Ersek <lersek@redhat.com>
Reviewed-by: Liming Gao <liming.gao@intel.com>
Rename the variable to "gPatchSmbase" so that its association with
PatchInstructionX86() is clear from the declaration, change its type to
X86_ASSEMBLY_PATCH_LABEL, and patch it with PatchInstructionX86(). This
lets us remove the binary (DB) encoding of some instructions in
"SmiEntry.nasm".
Cc: Eric Dong <eric.dong@intel.com>
Cc: Michael D Kinney <michael.d.kinney@intel.com>
Ref: https://bugzilla.tianocore.org/show_bug.cgi?id=866
Contributed-under: TianoCore Contribution Agreement 1.1
Signed-off-by: Laszlo Ersek <lersek@redhat.com>
Reviewed-by: Liming Gao <liming.gao@intel.com>
If PcdDxeNxMemoryProtectionPolicy is set to enable protection for memory of
type EfiBootServicesCode and EfiConventionalMemory, the BIOS will hang at a
page fault exception triggered by PiSmmCpuDxeSmm.
The root cause is that PiSmmCpuDxeSmm accesses the default SMM RAM starting
at 0x30000, which is marked as non-executable, but the NX feature is not
enabled during SMM initialization. Accessing memory that has invalid
attributes set causes a page fault exception. This patch fixes it by
checking the NX capability via CPUID and enabling NXE in the EFER MSR if it
is available.
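The check amounts to something like the following sketch (BaseLib
intrinsics; MSR 0xC0000080 is IA32_EFER; the exact placement in the init
path and the max-extended-leaf check are omitted here):

  #include <Library/BaseLib.h>

  VOID
  ExampleEnableNxeIfAvailable (                  // hypothetical helper
    VOID
    )
  {
    UINT32  RegEdx;

    //
    // CPUID leaf 0x80000001: EDX bit 20 reports Execute Disable support.
    //
    AsmCpuid (0x80000001, NULL, NULL, NULL, &RegEdx);
    if ((RegEdx & BIT20) != 0) {
      //
      // Set the NXE bit (bit 11) in the EFER MSR (0xC0000080).
      //
      AsmMsrOr64 (0xC0000080, BIT11);
    }
  }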
Cc: Jiewen Yao <jiewen.yao@intel.com>
Cc: Ruiyu Ni <ruiyu.ni@intel.com>
Cc: Eric Dong <eric.dong@intel.com>
Cc: Laszlo Ersek <lersek@redhat.com>
Contributed-under: TianoCore Contribution Agreement 1.1
Signed-off-by: Jian J Wang <jian.j.wang@intel.com>
Reviewed-by: Eric Dong <eric.dong@intel.com>
https://bugzilla.tianocore.org/show_bug.cgi?id=849
In V2, use "mov rax, strict qword 0" to replace the hard code db.
1. Use lea instruction to get the address instead of mov instruction.
2. Use the dummy address as jmp destination, and add the logic to fix up
the address to the absolute address at boot time.
3. On MpFuncs.nasm, use ExchangeInfo to record InitializeFloatingPointUnits.
This way is same to MpInitLib.
Contributed-under: TianoCore Contribution Agreement 1.1
Signed-off-by: Liming Gao <liming.gao@intel.com>
Cc: Andrew Fish <afish@apple.com>
Cc: Jiewen Yao <jiewen.yao@intel.com>
Cc: Eric Dong <eric.dong@intel.com>
Cc: Laszlo Ersek <lersek@redhat.com>
Cc: Michael Kinney <michael.d.kinney@intel.com>
Reviewed-by: Jiewen Yao <jiewen.yao@intel.com>
When StackGuard is enabled on IA32, a #DF (double fault) exception is
reported instead of a #PF (page fault).
This issue does not exist on X64, or on IA32 without StackGuard.
The fix at e4435f710c was incomplete.
The reason is that AllocateCodePages() is used to allocate the buffer for
the GDT and TSS, and those code pages are set to RO in
SetMemMapAttributes(). But IA32 Stack Guard needs a task switch to switch
stacks, which requires writing to the GDT and TSS, so AllocateCodePages()
cannot be used.
This patch uses AllocatePages() instead of AllocateCodePages() to allocate
the buffer for the GDT and TSS when StackGuard is enabled on IA32.
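In effect, the allocation site changes along these lines (a sketch with
assumed names; AllocateCodePages() is the module's existing helper, and the
source of the stack-guard flag is not shown):

  VOID *
  ExampleAllocateGdtTssBuffer (                 // hypothetical stand-in for the allocation site
    IN UINTN    Pages,
    IN BOOLEAN  StackGuardEnabled               // IA32 stack guard state (source assumed)
    )
  {
    if (StackGuardEnabled) {
      //
      // Must stay writable: the stack-switching task gate writes the TSS
      // and the busy flag in the GDT.
      //
      return AllocatePages (Pages);
    }
    //
    // Otherwise code pages are fine; SetMemMapAttributes() later marks them RO.
    //
    return AllocateCodePages (Pages);
  }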
Cc: Jiewen Yao <jiewen.yao@intel.com>
Cc: Jian J Wang <jian.j.wang@intel.com>
Cc: Eric Dong <eric.dong@intel.com>
Cc: Laszlo Ersek <lersek@redhat.com>
Contributed-under: TianoCore Contribution Agreement 1.1
Signed-off-by: Star Zeng <star.zeng@intel.com>
Reviewed-by: Jiewen Yao <jiewen.yao@intel.com>
SMM profile and static paging cannot be enabled at the same time; this
patch adds a check and comments to make sure of that.
Similar comments are also added for the combination of static paging and
heap guard for SMM.
Cc: Jiewen Yao <jiewen.yao@intel.com>
Cc: Eric Dong <eric.dong@intel.com>
Cc: Ruiyu Ni <ruiyu.ni@intel.com>
Cc: Jian J Wang <jian.j.wang@intel.com>
Cc: Laszlo Ersek <lersek@redhat.com>
Contributed-under: TianoCore Contribution Agreement 1.1
Signed-off-by: Star Zeng <star.zeng@intel.com>
Reviewed-by: Eric Dong <eric.dong@intel.com>
Reviewed-by: Laszlo Ersek <lersek@redhat.com>
Only call DumpCpuContext() in the error case; otherwise there will be too
many debug messages from DumpCpuContext() when the SmmProfile feature is
enabled by setting PcdCpuSmmProfileEnable to TRUE. Those debug messages are
not needed by the SmmProfile feature, as it records the information to a
buffer for later dumping.
Cc: Jiewen Yao <jiewen.yao@intel.com>
Cc: Eric Dong <eric.dong@intel.com>
Cc: Laszlo Ersek <lersek@redhat.com>
Cc: Ruiyu Ni <ruiyu.ni@intel.com>
Contributed-under: TianoCore Contribution Agreement 1.1
Signed-off-by: Star Zeng <star.zeng@intel.com>
Reviewed-by: Jiewen Yao <jiewen.yao@intel.com>
Reviewed-by: Eric Dong <eric.dong@intel.com>
Acked-by: Laszlo Ersek <lersek@redhat.com>
Heap guard makes use of the paging mechanism to implement its
functionality, but there is no protocol or library available to change page
attributes in SMM mode. A new protocol, gEdkiiSmmMemoryAttributeProtocolGuid,
is introduced to make that possible. This protocol provides three interfaces:
struct _EDKII_SMM_MEMORY_ATTRIBUTE_PROTOCOL {
  EDKII_SMM_GET_MEMORY_ATTRIBUTES    GetMemoryAttributes;
  EDKII_SMM_SET_MEMORY_ATTRIBUTES    SetMemoryAttributes;
  EDKII_SMM_CLEAR_MEMORY_ATTRIBUTES  ClearMemoryAttributes;
};
Since the heap guard feature needs to update page attributes, the page
table must not be set to read-only if heap guard is enabled for SMM mode;
otherwise the feature cannot work.
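For example, a consumer such as the heap guard code might make a guard page
inaccessible like this (protocol location and variable names are
assumptions; the prototype follows the interface listing above, and
EFI_MEMORY_RP is just one plausible attribute to pass):

  EFI_STATUS
  ExampleGuardOnePage (
    IN EDKII_SMM_MEMORY_ATTRIBUTE_PROTOCOL  *SmmMemoryAttribute,
    IN EFI_PHYSICAL_ADDRESS                 GuardPageBase
    )
  {
    //
    // Mark the guard page as not present (EFI_MEMORY_RP), so any access to
    // it traps with a page fault.
    //
    return SmmMemoryAttribute->SetMemoryAttributes (
                                 SmmMemoryAttribute,
                                 GuardPageBase,
                                 EFI_PAGE_SIZE,
                                 EFI_MEMORY_RP
                                 );
  }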
Cc: Eric Dong <eric.dong@intel.com>
Cc: Jiewen Yao <jiewen.yao@intel.com>
Cc: Laszlo Ersek <lersek@redhat.com>
Cc: Ruiyu Ni <ruiyu.ni@intel.com>
Suggested-by: Ayellet Wolman <ayellet.wolman@intel.com>
Contributed-under: TianoCore Contribution Agreement 1.1
Signed-off-by: Jian J Wang <jian.j.wang@intel.com>
Reviewed-by: Jiewen Yao <jiewen.yao@intel.com>
Regression-tested-by: Laszlo Ersek <lersek@redhat.com>
The mechanism behind this is the same as the NULL pointer detection enabled
in the EDK II core. SMM has its own page table, so we have to disable page 0
again in SMM mode.
Cc: Star Zeng <star.zeng@intel.com>
Cc: Eric Dong <eric.dong@intel.com>
Cc: Jiewen Yao <jiewen.yao@intel.com>
Cc: Michael Kinney <michael.d.kinney@intel.com>
Cc: Ayellet Wolman <ayellet.wolman@intel.com>
Suggested-by: Ayellet Wolman <ayellet.wolman@intel.com>
Contributed-under: TianoCore Contribution Agreement 1.1
Signed-off-by: Jian J Wang <jian.j.wang@intel.com>
Reviewed-by: Star Zeng <star.zeng@intel.com>
Reviewed-by: Eric Dong <eric.dong@intel.com>
Originally (before 714c260301), mPhysicalAddressBits was only defined in
the X64 PageTbl.c. After 714c260301, mPhysicalAddressBits is also defined
in the Ia32 PageTbl.c, and it is used in ConvertMemoryPageAttributes() for
the address check.
This patch centralizes the mPhysicalAddressBits definition in
PiSmmCpuDxeSmm.c, removing it from the Ia32 and X64 PageTbl.c files.
Cc: Jiewen Yao <jiewen.yao@intel.com>
Cc: Laszlo Ersek <lersek@redhat.com>
Cc: Eric Dong <eric.dong@intel.com>
Suggested-by: Laszlo Ersek <lersek@redhat.com>
Contributed-under: TianoCore Contribution Agreement 1.1
Signed-off-by: Star Zeng <star.zeng@intel.com>
Reviewed-by: Jiewen Yao <jiewen.yao@intel.com>
Reviewed-by: Laszlo Ersek <lersek@redhat.com>
Consume PeCoffSerachImageBase() from PeCoffGetEntrypointLib and
DumpCpuContext() from CpuExceptionHandlerLib, replacing the module's own
implementations.
Cc: Jiewen Yao <jiewen.yao@intel.com>
Cc: Michael Kinney <michael.d.kinney@intel.com>
Cc: Feng Tian <feng.tian@intel.com>
Contributed-under: TianoCore Contribution Agreement 1.0
Signed-off-by: Jeff Fan <jeff.fan@intel.com>
Reviewed-by: Jiewen Yao <jiewen.yao@intel.com>
This PCD holds the address mask for page table entries when memory
encryption is enabled on AMD processors supporting the Secure Encrypted
Virtualization (SEV) feature.
The mask is applied when page table entries are created or modified.
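Conceptually, page table construction then ORs the mask into each new
entry, roughly like the sketch below (the PCD token name and the helper are
assumptions on my part):

  UINT64
  ExampleMakePageTableEntry (                   // hypothetical helper
    IN UINT64  PhysicalAddress,
    IN UINT64  AttributeBits                    // P/RW/etc. bits for the entry
    )
  {
    //
    // The PCD is zero on platforms without SEV, so the OR is a no-op there.
    //
    return PhysicalAddress
           | PcdGet64 (PcdPteMemoryEncryptionAddressOrMask)
           | AttributeBits;
  }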
Cc: Jeff Fan <jeff.fan@intel.com>
Cc: Feng Tian <feng.tian@intel.com>
Cc: Star Zeng <star.zeng@intel.com>
Cc: Laszlo Ersek <lersek@redhat.com>
Cc: Brijesh Singh <brijesh.singh@amd.com>
Contributed-under: TianoCore Contribution Agreement 1.0
Signed-off-by: Leo Duran <leo.duran@amd.com>
Reviewed-by: Jeff Fan <jeff.fan@intel.com>
This patch sets the normal OS buffers (EfiLoaderCode/Data,
EfiBootServicesCode/Data, EfiConventionalMemory, EfiACPIReclaimMemory)
to be not-present after SmmReadyToLock.
Accessing these regions in the OS runtime phase is not a good practice.
Previously, we did a similar check in SmmMemLib to help SMI handlers
perform the check. But if an SMI handler forgets the check, it can still
access these OS regions and introduce risk.
So here we enforce the policy to prevent that from happening.
Cc: Jeff Fan <jeff.fan@intel.com>
Cc: Michael D Kinney <michael.d.kinney@intel.com>
Cc: Laszlo Ersek <lersek@redhat.com>
Contributed-under: TianoCore Contribution Agreement 1.0
Signed-off-by: Jiewen Yao <jiewen.yao@intel.com>
Reviewed-by: Jeff Fan <jeff.fan@intel.com>
Cc: Michael D Kinney <michael.d.kinney@intel.com>
Cc: Jeff Fan <jeff.fan@intel.com>
Contributed-under: TianoCore Contribution Agreement 1.0
Signed-off-by: Feng Tian <feng.tian@intel.com>
Reviewed-by: Michael D Kinney <michael.d.kinney@intel.com>
Reviewed-by: Jeff Fan <jeff.fan@intel.com>
https://bugzilla.tianocore.org/show_bug.cgi?id=277
The MTRR field was removed from the PROCESSOR_SMM_DESCRIPTOR structure in
commit 26ab5ac362.
However, the references to the MTRR field in the assembly files were not
removed. Remove the extern reference to gSmiMtrr and set the Reserved14
field of PROCESSOR_SMM_DESCRIPTOR to 0.
Cc: Jiewen Yao <jiewen.yao@intel.com>
Cc: Jeff Fan <jeff.fan@intel.com>
Cc: Feng Tian <feng.tian@intel.com>
Contributed-under: TianoCore Contribution Agreement 1.0
Signed-off-by: Michael Kinney <michael.d.kinney@intel.com>
Reviewed-by: Jeff Fan <jeff.fan@intel.com>
This patch fixes https://bugzilla.tianocore.org/show_bug.cgi?id=246.
Previously, when an SMM exception happened after EndOfDxe, with StackGuard
enabled on IA32, a #DF (double fault) exception was reported instead of a
#PF (page fault).
The root cause is as follows:
The current EDKII SMM page protection locks the GDT.
If IA32 stack guard is enabled, the page fault handler performs a task
switch. This task switch needs to write the busy flag in the GDT, and write
the TSS. However, the GDT and TSS are locked at that time, so the double
fault happens.
We decided not to lock the GDT when IA32 StackGuard is enabled.
This issue does not exist on X64, or on IA32 without StackGuard.
Cc: Laszlo Ersek <lersek@redhat.com>
Cc: Jeff Fan <jeff.fan@intel.com>
Cc: Michael D Kinney <michael.d.kinney@intel.com>
Contributed-under: TianoCore Contribution Agreement 1.0
Signed-off-by: Jiewen Yao <jiewen.yao@intel.com>
Reviewed-by: Jeff Fan <jeff.fan@intel.com>
Regression-tested-by: Laszlo Ersek <lersek@redhat.com>
This patch fixes the first part of
https://bugzilla.tianocore.org/show_bug.cgi?id=242.
Previously, when an SMM exception happened, "stack overflow" was
misreported.
This patch checks the PF address to determine whether the fault is a stack
overflow or whether it is caused by SMM page protection.
It dumps the exception data, the PF address, and the module that triggered
the issue.
Cc: Laszlo Ersek <lersek@redhat.com>
Cc: Jeff Fan <jeff.fan@intel.com>
Cc: Michael D Kinney <michael.d.kinney@intel.com>
Contributed-under: TianoCore Contribution Agreement 1.0
Signed-off-by: Jiewen Yao <jiewen.yao@intel.com>
Tested-by: Laszlo Ersek <lersek@redhat.com>
Reviewed-by: Laszlo Ersek <lersek@redhat.com>
Reviewed-by: Jeff Fan <jeff.fan@intel.com>
If a platform selects an incorrect instance, the caller can learn this from
the return status and the ASSERT().
Cc: Feng Tian <feng.tian@intel.com>
Cc: Michael D Kinney <michael.d.kinney@intel.com>
Contributed-under: TianoCore Contribution Agreement 1.0
Signed-off-by: Jeff Fan <jeff.fan@intel.com>
Reviewed-by: Michael Kinney <michael.d.kinney@intel.com>
Update TransferApToSafeState() to use UINTN parameters to reduce the
number of type casts required in these calls. Also change the
NumberToFinish parameter from UINT32* to a UINTN NumberToFinishAddress to
resolve issues with conversion from a volatile pointer to a non-volatile
pointer. The assembly code that receives the NumberToFinishAddress value
must treat that memory location as volatile to track the number of APs.
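The resulting prototype would look roughly like this (the first two
parameter names are assumptions; only NumberToFinishAddress comes from the
description above):

  VOID
  TransferApToSafeState (
    IN UINTN  ApHltLoopCode,          // address of the relocated hlt-loop code (name assumed)
    IN UINTN  TopOfStack,             // stack for the AP in the safe code (name assumed)
    IN UINTN  NumberToFinishAddress   // address of the counter; the assembly
                                      // treats this location as volatile
    );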
Cc: Liming Gao <liming.gao@intel.com>
Cc: Laszlo Ersek <lersek@redhat.com>
Cc: Andrew Fish <afish@apple.com>
Cc: Jeff Fan <jeff.fan@intel.com>
Contributed-under: TianoCore Contribution Agreement 1.0
Signed-off-by: Michael Kinney <michael.d.kinney@intel.com>
Reviewed-by: Laszlo Ersek <lersek@redhat.com>
Reviewed-by: Jeff Fan <jeff.fan@intel.com>
PiSmmCpuDxeSmm consumes SmmAttributesTable and sets up the page table:
1) The code region is marked read-only and the data region non-executable,
if the PE image is 4K aligned.
2) Important data structures, such as the GDT/IDT, are set to RO.
3) SmmSaveState is set to non-executable,
and SmmEntrypoint is set to read-only.
4) If static paging is supported, the page table is read-only.
We use the page table to protect other components, and the page table
itself.
If we use dynamic paging, we can still provide *partial* protection,
and hope the page table is not modified by other components.
The XD enabling code is moved to SmiEntry to let NX take effect.
Cc: Jeff Fan <jeff.fan@intel.com>
Cc: Feng Tian <feng.tian@intel.com>
Cc: Star Zeng <star.zeng@intel.com>
Cc: Michael D Kinney <michael.d.kinney@intel.com>
Cc: Laszlo Ersek <lersek@redhat.com>
Contributed-under: TianoCore Contribution Agreement 1.0
Signed-off-by: Jiewen Yao <jiewen.yao@intel.com>
Tested-by: Laszlo Ersek <lersek@redhat.com>
Reviewed-by: Jeff Fan <jeff.fan@intel.com>
Reviewed-by: Michael D Kinney <michael.d.kinney@intel.com>
We put the APs into a hlt-loop in the safe code, but we decrease
mNumberToFinish before the APs enter the safe code. Paolo pointed out this
gap.
This patch moves the mNumberToFinish decrement into the safe code. It makes
sure the BSP waits until all APs are running in the safe code.
https://bugzilla.tianocore.org/show_bug.cgi?id=216
Reported-by: Paolo Bonzini <pbonzini@redhat.com>
Cc: Laszlo Ersek <lersek@redhat.com>
Cc: Paolo Bonzini <pbonzini@redhat.com>
Cc: Jiewen Yao <jiewen.yao@intel.com>
Cc: Michael D Kinney <michael.d.kinney@intel.com>
Contributed-under: TianoCore Contribution Agreement 1.0
Signed-off-by: Jeff Fan <jeff.fan@intel.com>
Reviewed-by: Paolo Bonzini <pbonzini@redhat.com>
Reviewed-by: Jiewen Yao <jiewen.yao@intel.com>
Tested-by: Laszlo Ersek <lersek@redhat.com>
On the S3 path, we may transfer to long mode (if DXE is long mode) to
restore CPU contexts, with CR3 = SmmS3Cr3 (in SMM). APs execute a hlt-loop
after CPU context restoration. Once an NMI or SMI happens, APs may exit the
hlt state and execute the instruction following the HLT instruction. If the
APs are running in long mode, a page table is required to fetch that
instruction. However, CR3 points to a page table in SMRAM, so the APs will
crash.
This fix disables long mode on the APs and transfers to 32-bit protected
mode to execute the hlt-loop. Then CR3 and the page table are no longer
required.
https://bugzilla.tianocore.org/show_bug.cgi?id=216
Reported-by: Laszlo Ersek <lersek@redhat.com>
Analyzed-by: Paolo Bonzini <pbonzini@redhat.com>
Analyzed-by: Laszlo Ersek <lersek@redhat.com>
Cc: Laszlo Ersek <lersek@redhat.com>
Cc: Paolo Bonzini <pbonzini@redhat.com>
Cc: Jiewen Yao <jiewen.yao@intel.com>
Cc: Michael D Kinney <michael.d.kinney@intel.com>
Contributed-under: TianoCore Contribution Agreement 1.0
Signed-off-by: Jeff Fan <jeff.fan@intel.com>
Reviewed-by: Paolo Bonzini <pbonzini@redhat.com>
Reviewed-by: Jiewen Yao <jiewen.yao@intel.com>
Tested-by: Laszlo Ersek <lersek@redhat.com>
On the S3 path, we wake up the APs to restore CPU contexts in the
PiSmmCpuDxeSmm driver. However, after restoring the CPU contexts we park
the APs in a hlt-loop in borrowed space under 1MB.
In case an NMI or SMI happens, the APs may exit the hlt state and execute
the instruction following the HLT instruction. But the code under 1MB is no
longer safe at that time.
This fix allocates one ACPI NVS range to hold the AP hlt-loop code. Once
CPU context restoration is finished, the APs execute from this ACPI NVS
range.
https://bugzilla.tianocore.org/show_bug.cgi?id=216
v2:
1. Make the stack aligned, per Laszlo's comment.
2. Trim trailing whitespace.
3. Update the year in the file header.
Reported-by: Laszlo Ersek <lersek@redhat.com>
Analyzed-by: Paolo Bonzini <pbonzini@redhat.com>
Analyzed-by: Laszlo Ersek <lersek@redhat.com>
Cc: Laszlo Ersek <lersek@redhat.com>
Cc: Paolo Bonzini <pbonzini@redhat.com>
Cc: Jiewen Yao <jiewen.yao@intel.com>
Cc: Michael D Kinney <michael.d.kinney@intel.com>
Contributed-under: TianoCore Contribution Agreement 1.0
Signed-off-by: Jeff Fan <jeff.fan@intel.com>
Reviewed-by: Paolo Bonzini <pbonzini@redhat.com>
Reviewed-by: Jiewen Yao <jiewen.yao@intel.com>
Tested-by: Laszlo Ersek <lersek@redhat.com>
A number of code locations use
ASSERT_EFI_ERROR (BooleanExpression)
instead of
ASSERT (BooleanExpression)
Fix them.
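With a made-up expression, the pattern being fixed looks like this;
ASSERT_EFI_ERROR() expects an EFI_STATUS, not a BOOLEAN:

  ASSERT_EFI_ERROR (Pml4 != NULL);   // wrong: this macro expects an EFI_STATUS
  ASSERT (Pml4 != NULL);             // right: a plain BOOLEAN assertion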
Cc: Jeff Fan <jeff.fan@intel.com>
Reported-by: Gerd Hoffmann <kraxel@redhat.com>
Contributed-under: TianoCore Contribution Agreement 1.0
Signed-off-by: Laszlo Ersek <lersek@redhat.com>
Reviewed-by: Jeff Fan <jeff.fan@intel.com>
Reviewed-by: Giri P Mudusuru <giri.p.mudusuru@intel.com>
Reviewed-by: Michael Kinney <michael.d.kinney@intel.com>
Use 16-bit and 32-bit assembly code to replace the hard-coded DBs.
In V2: add 0x67 prefixes to the far jumps.
Without the a32 modifier under FLAT32_JUMP, and the a16 modifier under
LONG_JUMP, nasm doesn't generate the 0x67 prefixes, and the far jumps
don't work. (For the former, KVM returns an emulation failure. For the
latter, KVM performs a triple fault (guest reboot).) By forcing the 0x67
prefixes we end up with the same machine code as the one open-coded in
"MpFuncs.asm".
This bug breaks S3 resume in the Ia32X64 + SMM_REQUIRE build of OVMF.
Contributed-under: TianoCore Contribution Agreement 1.0
Signed-off-by: Liming Gao <liming.gao@intel.com>
Signed-off-by: Laszlo Ersek <lersek@redhat.com>
The BaseTools/Scripts/ConvertMasmToNasm.py script was used to convert
X64/MpFuncs.asm to X64/MpFuncs.nasm. Then, it was manually updated to pass
the build.
Contributed-under: TianoCore Contribution Agreement 1.0
Signed-off-by: Liming Gao <liming.gao@intel.com>
Update all global semaphores to point into the allocated, aligned
semaphore buffer.
Cc: Michael Kinney <michael.d.kinney@intel.com>
Cc: Feng Tian <feng.tian@intel.com>
Contributed-under: TianoCore Contribution Agreement 1.0
Signed-off-by: Jeff Fan <jeff.fan@intel.com>
Reviewed-by: Feng Tian <feng.tian@intel.com>
Reviewed-by: Michael Kinney <michael.d.kinney@intel.com>
Regression-tested-by: Laszlo Ersek <lersek@redhat.com>
So that we can use write-protection for code later.
This is a REPOST.
It includes suggestions from Michael Kinney <michael.d.kinney@intel.com>:
- "For IA32 assembly, can we combine into a single OR instruction that
sets both page enable and WP?"
- "For X64, does it make sense to use single OR instruction instead of 2
BTS instructions as well?"
Contributed-under: TianoCore Contribution Agreement 1.0
Signed-off-by: "Yao, Jiewen" <jiewen.yao@intel.com>
Suggested-by: Michael Kinney <michael.d.kinney@intel.com>
Reviewed-by: Michael Kinney <michael.d.kinney@intel.com>
Tested-by: Laszlo Ersek <lersek@redhat.com>
Cc: "Fan, Jeff" <jeff.fan@intel.com>
Cc: "Kinney, Michael D" <michael.d.kinney@intel.com>
Cc: "Laszlo Ersek" <lersek@redhat.com>
Cc: "Paolo Bonzini" <pbonzini@redhat.com>
git-svn-id: https://svn.code.sf.net/p/edk2/code/trunk/edk2@19068 6f19259b-4bc3-4df7-8a09-765794883524
So that we can use write-protection for code later.
This is a REPOST.
It includes the bug fix from "Paolo Bonzini" <pbonzini@redhat.com>:
Title: fix generation of 32-bit PAE page tables
"Bits 1 and 2 are reserved in 32-bit PAE Page Directory Pointer Table
Entries (PDPTEs); see Table 4-8 in the SDM. With VMX extended page
tables, the processor notices and fails the VM entry as soon as CR0.PG
is set to 1."
And thanks "Laszlo Ersek" <lersek@redhat.com> to validate the fix.
Contributed-under: TianoCore Contribution Agreement 1.0
Signed-off-by: "Yao, Jiewen" <jiewen.yao@intel.com>
Signed-off-by: "Paolo Bonzini" <pbonzini@redhat.com>
Reviewed-by: Michael Kinney <michael.d.kinney@intel.com>
Tested-by: Laszlo Ersek <lersek@redhat.com>
Cc: "Fan, Jeff" <jeff.fan@intel.com>
Cc: "Kinney, Michael D" <michael.d.kinney@intel.com>
Cc: "Laszlo Ersek" <lersek@redhat.com>
Cc: "Paolo Bonzini" <pbonzini@redhat.com>
git-svn-id: https://svn.code.sf.net/p/edk2/code/trunk/edk2@19067 6f19259b-4bc3-4df7-8a09-765794883524
Always set the RW+P bits in the page table by default,
so that we can use write-protection for code later.
Contributed-under: TianoCore Contribution Agreement 1.0
Signed-off-by: "Yao, Jiewen" <jiewen.yao@intel.com>
Reviewed-by: "Kinney, Michael D" <michael.d.kinney@intel.com>
git-svn-id: https://svn.code.sf.net/p/edk2/code/trunk/edk2@18960 6f19259b-4bc3-4df7-8a09-765794883524
The SmmDebug feature is implemented in ASM, which is not easy to maintain,
so we move it into a C function.
Contributed-under: TianoCore Contribution Agreement 1.0
Signed-off-by: "Yao, Jiewen" <jiewen.yao@intel.com>
Reviewed-by: "Kinney, Michael D" <michael.d.kinney@intel.com>
git-svn-id: https://svn.code.sf.net/p/edk2/code/trunk/edk2@18946 6f19259b-4bc3-4df7-8a09-765794883524
Move GDT initialization from InitializeMpServiceData() to a CPU
architecture specific function.
We create SmmFuncsArch.c to hold the CPU specific functions, so that
EFI_IMAGE_MACHINE_TYPE_SUPPORTED(EFI_IMAGE_MACHINE_X64) can be removed.
For the IA32 version, we always allocate a new page for the GDT entries,
for easy maintenance.
For the X64 version, we fix TssBase in the GDT entry to make sure the TSS
data is correct, and remove the TSS fixup for the GDT from the ASM file.
Contributed-under: TianoCore Contribution Agreement 1.0
Signed-off-by: "Yao, Jiewen" <jiewen.yao@intel.com>
Reviewed-by: "Fan, Jeff" <jeff.fan@intel.com>
git-svn-id: https://svn.code.sf.net/p/edk2/code/trunk/edk2@18937 6f19259b-4bc3-4df7-8a09-765794883524
The TSS segment descriptor should use (SIZE - 1) as its limit, and should
not set the G bit (the highest bit of LimitHigh), because the limit is a
byte count.
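For instance, programming a byte-granular TSS limit looks roughly like the
sketch below (the descriptor layout is spelled out locally and is only
illustrative; IA32_TASK_STATE_SEGMENT from BaseLib is used just for its
size):

  typedef struct {
    UINT16  LimitLow;             // limit bits 15:0
    UINT16  BaseLow;
    UINT8   BaseMid;
    UINT8   Attributes;
    UINT8   LimitHighAndFlags;    // bits 3:0 = limit bits 19:16, bit 7 = G
    UINT8   BaseHigh;
  } EXAMPLE_TSS_DESCRIPTOR;       // illustrative legacy descriptor layout

  VOID
  ExampleSetTssLimit (
    IN OUT EXAMPLE_TSS_DESCRIPTOR  *TssDesc
    )
  {
    //
    // Limit is a byte count, so program (size - 1); the TSS is far smaller
    // than 64 KiB, so the upper limit bits are zero and the G bit stays clear.
    //
    TssDesc->LimitLow          = (UINT16)(sizeof (IA32_TASK_STATE_SEGMENT) - 1);
    TssDesc->LimitHighAndFlags = 0;
  }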
Contributed-under: TianoCore Contribution Agreement 1.0
Signed-off-by: "Yao, Jiewen" <jiewen.yao@intel.com>
Reviewed-by: "Fan, Jeff" <jeff.fan@intel.com>
Reviewed-by: "Kinney, Michael D" <michael.d.kinney@intel.com>
git-svn-id: https://svn.code.sf.net/p/edk2/code/trunk/edk2@18935 6f19259b-4bc3-4df7-8a09-765794883524