Treat UEFI runtime pages that carry the EFI_MEMORY_RO attribute as
invalid SMM communication buffers.
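A minimal sketch of the intended check, assuming the validation walks
cached EFI_MEMORY_DESCRIPTOR entries from the UEFI memory attributes
table (the helper name is illustrative, not the module's actual code):

  BOOLEAN
  IsRuntimeRangeValidForComm (
    IN EFI_MEMORY_DESCRIPTOR  *Descriptor
    )
  {
    //
    // Read-only runtime pages must never back an SMM communication buffer.
    //
    if ((Descriptor->Attribute & EFI_MEMORY_RO) != 0) {
      return FALSE;
    }
    return TRUE;
  }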
Contributed-under: TianoCore Contribution Agreement 1.1
Signed-off-by: Jiewen Yao <jiewen.yao@intel.com>
Reviewed-by: Star Zeng <star.zeng@intel.com>
Rename the variable to "gPatchSmmInitStack" so that its association with
PatchInstructionX86() is clear from the declaration, change its type to
X86_ASSEMBLY_PATCH_LABEL, and patch it with PatchInstructionX86(). This
lets us remove the binary (DB) encoding of some instructions in
"SmmInit.nasm".
The size of the patched source operand is (sizeof (UINTN)).
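A hedged sketch of the resulting call; "StackValue" stands for the
computed initial SMM stack pointer, whose exact expression lives in the
patch:

  extern X86_ASSEMBLY_PATCH_LABEL  gPatchSmmInitStack;

  //
  // Patch the stack-setup instruction in "SmmInit.nasm"; the source
  // operand is (sizeof (UINTN)) bytes wide.
  //
  PatchInstructionX86 (gPatchSmmInitStack, StackValue, sizeof (UINTN));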
Cc: Eric Dong <eric.dong@intel.com>
Cc: Michael D Kinney <michael.d.kinney@intel.com>
Ref: https://bugzilla.tianocore.org/show_bug.cgi?id=866
Contributed-under: TianoCore Contribution Agreement 1.1
Signed-off-by: Laszlo Ersek <lersek@redhat.com>
Reviewed-by: Liming Gao <liming.gao@intel.com>
The IA32 version of "SmmInit.nasm" does not need "gSmmJmpAddr" at all (its
PiSmmCpuSmmInitFixupAddress() variant doesn't do anything either). We can
simply use the NASM syntax for the following Mixed-Size Jump:
> jmp PROTECT_MODE_CS : dword @32bit
The generated object code for the instruction is unchanged:
> 00000182 66EA5A0000000800 jmp dword 0x8:0x5a
(The NASM manual explains that putting the DWORD prefix after the colon
":" reflects the intent better, since it is the offset that is a DWORD.
Thus, that's what I used. However, both syntaxes are interchangeable and
assemble to identical object code, hence the ndisasm output above.)
The X64 version of "SmmInit.nasm" appears to require "gSmmJmpAddr";
however that's accidental, not inherent:
- Bring LONG_MODE_CODE_SEGMENT from
"UefiCpuPkg/PiSmmCpuDxeSmm/PiSmmCpuDxeSmm.h" to "SmmInit.nasm" as
LONG_MODE_CS, same as PROTECT_MODE_CODE_SEGMENT was brought to the IA32
version as PROTECT_MODE_CS earlier.
- Apply the NASM-native Mixed-Size Jump syntax again, but jump to the
fixed zero offset in LONG_MODE_CS. This will produce no relocation
record at all. Add a label after the instruction.
- Modify PiSmmCpuSmmInitFixupAddress() to patch the jump target backwards
from the label (see the sketch after this list). Because we modify the
DWORD offset with a DWORD access, the segment selector in the instruction
is unharmed, and we need not set it from PiCpuSmmEntry().
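A hedged sketch of the backwards patch; "LongModeLabel" and
"LongModeTarget" are illustrative names, not the patch's actual symbols.
The far jump ends with a 4-byte offset followed by the 2-byte
LONG_MODE_CS selector, so the offset starts 6 bytes before the label:

  VOID
  PiSmmCpuSmmInitFixupAddress (
    VOID
    )
  {
    UINT32  *OffsetPtr;

    //
    // Rewrite only the DWORD offset; the selector bytes that follow it
    // stay untouched.
    //
    OffsetPtr  = (UINT32 *)((UINTN)LongModeLabel - 6);
    *OffsetPtr = (UINT32)(UINTN)LongModeTarget;
  }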
According to "objdump --reloc", the X64 version undergoes only the
following relocations, after this patch:
> RELOCATION RECORDS FOR [.text]:
> OFFSET TYPE VALUE
> 0000000000000095 R_X86_64_PC32 SmmInitHandler-0x0000000000000004
> 00000000000000e0 R_X86_64_PC32 mRebasedFlag-0x0000000000000004
> 00000000000000ea R_X86_64_PC32 mSmmRelocationOriginalAddress-0x0000000000000004
Therefore the patch does not regress
<https://bugzilla.tianocore.org/show_bug.cgi?id=849> ("Enable XCODE5 tool
chain for UefiCpuPkg with nasm source code").
Cc: Eric Dong <eric.dong@intel.com>
Cc: Michael D Kinney <michael.d.kinney@intel.com>
Ref: https://bugzilla.tianocore.org/show_bug.cgi?id=866
Contributed-under: TianoCore Contribution Agreement 1.1
Signed-off-by: Laszlo Ersek <lersek@redhat.com>
Reviewed-by: Liming Gao <liming.gao@intel.com>
Like "gSmmCr4" in the previous patch, "gSmmCr0" is not only used for
machine code patching, but also as a means to communicate the initial CR0
value from SmmRelocateBases() to InitSmmS3ResumeState(). In other words,
the last four bytes of the "mov eax, Cr0Value" instruction's binary
representation are utilized as normal data too.
In order to get rid of the DB for "mov eax, Cr0Value", we have to split
both roles, patching and data flow. Introduce the "mSmmCr0" global (SMRAM)
variable for the data flow purpose. Rename the "gSmmCr0" variable to
"gPatchSmmCr0" so that its association with PatchInstructionX86() is clear
from the declaration, change its type to X86_ASSEMBLY_PATCH_LABEL, and
patch it with PatchInstructionX86(), to the value now contained in
"mSmmCr0".
This lets us remove the binary (DB) encoding of "mov eax, Cr0Value" in
"SmmInit.nasm".
Cc: Eric Dong <eric.dong@intel.com>
Cc: Michael D Kinney <michael.d.kinney@intel.com>
Ref: https://bugzilla.tianocore.org/show_bug.cgi?id=866
Contributed-under: TianoCore Contribution Agreement 1.1
Signed-off-by: Laszlo Ersek <lersek@redhat.com>
Reviewed-by: Liming Gao <liming.gao@intel.com>
Unlike "gSmmCr3" in the previous patch, "gSmmCr4" is not only used for
machine code patching, but also as a means to communicate the initial CR4
value from SmmRelocateBases() to InitSmmS3ResumeState(). In other words,
the last four bytes of the "mov eax, Cr4Value" instruction's binary
representation are utilized as normal data too.
In order to get rid of the DB for "mov eax, Cr4Value", we have to split
both roles, patching and data flow. Introduce the "mSmmCr4" global (SMRAM)
variable for the data flow purpose. Rename the "gSmmCr4" variable to
"gPatchSmmCr4" so that its association with PatchInstructionX86() is clear
from the declaration, change its type to X86_ASSEMBLY_PATCH_LABEL, and
patch it with PatchInstructionX86(), to the value now contained in
"mSmmCr4".
This lets us remove the binary (DB) encoding of "mov eax, Cr4Value" in
"SmmInit.nasm".
Cc: Eric Dong <eric.dong@intel.com>
Cc: Michael D Kinney <michael.d.kinney@intel.com>
Ref: https://bugzilla.tianocore.org/show_bug.cgi?id=866
Contributed-under: TianoCore Contribution Agreement 1.1
Signed-off-by: Laszlo Ersek <lersek@redhat.com>
Reviewed-by: Liming Gao <liming.gao@intel.com>
Rename the variable to "gPatchSmmCr3" so that its association with
PatchInstructionX86() is clear from the declaration, change its type to
X86_ASSEMBLY_PATCH_LABEL, and patch it with PatchInstructionX86(). This
lets us remove the binary (DB) encoding of some instructions in
"SmmInit.nasm".
Cc: Eric Dong <eric.dong@intel.com>
Cc: Michael D Kinney <michael.d.kinney@intel.com>
Ref: https://bugzilla.tianocore.org/show_bug.cgi?id=866
Contributed-under: TianoCore Contribution Agreement 1.1
Signed-off-by: Laszlo Ersek <lersek@redhat.com>
Reviewed-by: Liming Gao <liming.gao@intel.com>
https://bugzilla.tianocore.org/show_bug.cgi?id=849
In V2, use "mov rax, strict qword 0" to replace the hard-coded DB.
1. Use an LEA instruction to get the address instead of a MOV instruction.
2. Use a dummy address as the JMP destination, and add logic to fix the
address up to the absolute address at boot time (see the sketch below).
3. In MpFuncs.nasm, use ExchangeInfo to record InitializeFloatingPointUnits,
the same way MpInitLib does.
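A hedged sketch of such a boot-time fixup, assuming a hypothetical label
placed right after the "mov rax, strict qword 0" placeholder; the 8-byte
immediate occupies the last 8 bytes of the instruction:

  UINT64  *ImmediatePtr;

  //
  // "mov rax, strict qword 0" assembles to 48 B8 <imm64>; patch the
  // immediate with the absolute address once it is known at boot.
  // "AfterMovLabel" and "AbsoluteTarget" are illustrative names.
  //
  ImmediatePtr  = (UINT64 *)((UINTN)AfterMovLabel - sizeof (UINT64));
  *ImmediatePtr = (UINT64)(UINTN)AbsoluteTarget;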
Contributed-under: TianoCore Contribution Agreement 1.1
Signed-off-by: Liming Gao <liming.gao@intel.com>
Cc: Andrew Fish <afish@apple.com>
Cc: Jiewen Yao <jiewen.yao@intel.com>
Cc: Eric Dong <eric.dong@intel.com>
Cc: Laszlo Ersek <lersek@redhat.com>
Cc: Michael Kinney <michael.d.kinney@intel.com>
Reviewed-by: Jiewen Yao <jiewen.yao@intel.com>
When StackGuard is enabled on IA32, a #DF (double fault) exception
is reported instead of a #PF (page fault).
This issue does not exist on X64, or on IA32 without StackGuard.
The fix in commit e4435f710c was incomplete: it used AllocateCodePages()
to allocate the buffer for the GDT and TSS, and code pages are set to
read-only in SetMemMapAttributes(). However, IA32 Stack Guard uses a task
switch to switch stacks, and that task switch must write to the GDT and
TSS, so AllocateCodePages() cannot be used for them.
This patch uses AllocatePages() instead of AllocateCodePages() to
allocate the buffer for the GDT and TSS when StackGuard is enabled on
IA32.
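A minimal sketch of the allocation choice in the IA32 code path
(PcdGetBool usage and the "GdtTssPages" size variable are assumptions):

  VOID  *GdtTssBuffer;

  if (PcdGetBool (PcdCpuStackGuard)) {
    //
    // The task switch writes the GDT busy flag and the TSS, so these
    // pages must stay writable: use plain data pages.
    //
    GdtTssBuffer = AllocatePages (GdtTssPages);
  } else {
    GdtTssBuffer = AllocateCodePages (GdtTssPages);
  }
  ASSERT (GdtTssBuffer != NULL);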
Cc: Jiewen Yao <jiewen.yao@intel.com>
Cc: Jian J Wang <jian.j.wang@intel.com>
Cc: Eric Dong <eric.dong@intel.com>
Cc: Laszlo Ersek <lersek@redhat.com>
Contributed-under: TianoCore Contribution Agreement 1.1
Signed-off-by: Star Zeng <star.zeng@intel.com>
Reviewed-by: Jiewen Yao <jiewen.yao@intel.com>
Heap guard makes use of the paging mechanism to implement its
functionality, but there is no protocol or library available to change
page attributes in SMM mode. A new protocol,
gEdkiiSmmMemoryAttributeProtocolGuid, is introduced to make that
possible. The protocol provides three interfaces:
struct _EDKII_SMM_MEMORY_ATTRIBUTE_PROTOCOL {
  EDKII_SMM_GET_MEMORY_ATTRIBUTES    GetMemoryAttributes;
  EDKII_SMM_SET_MEMORY_ATTRIBUTES    SetMemoryAttributes;
  EDKII_SMM_CLEAR_MEMORY_ATTRIBUTES  ClearMemoryAttributes;
};
Since the heap guard feature needs to update page attributes, the page
table must not be set to read-only when heap guard is enabled for SMM
mode; otherwise the feature cannot work.
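A hedged usage sketch, locating the protocol and clearing the read-only
attribute on a range ("BaseAddress" and "Length" stand for
caller-supplied values):

  EDKII_SMM_MEMORY_ATTRIBUTE_PROTOCOL  *MemoryAttribute;
  EFI_STATUS                           Status;

  Status = gSmst->SmmLocateProtocol (
                    &gEdkiiSmmMemoryAttributeProtocolGuid,
                    NULL,
                    (VOID **)&MemoryAttribute
                    );
  if (!EFI_ERROR (Status)) {
    //
    // Drop EFI_MEMORY_RO so heap guard can rewrite the page entries.
    //
    Status = MemoryAttribute->ClearMemoryAttributes (
                                MemoryAttribute,
                                BaseAddress,
                                Length,
                                EFI_MEMORY_RO
                                );
  }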
Cc: Eric Dong <eric.dong@intel.com>
Cc: Jiewen Yao <jiewen.yao@intel.com>
Cc: Laszlo Ersek <lersek@redhat.com>
Cc: Ruiyu Ni <ruiyu.ni@intel.com>
Suggested-by: Ayellet Wolman <ayellet.wolman@intel.com>
Contributed-under: TianoCore Contribution Agreement 1.1
Signed-off-by: Jian J Wang <jian.j.wang@intel.com>
Reviewed-by: Jiewen Yao <jiewen.yao@intel.com>
Regression-tested-by: Laszlo Ersek <lersek@redhat.com>
https://bugzilla.tianocore.org/show_bug.cgi?id=624 reports a
memory protection crash in PiSmmCpuDxeSmm on an Ia32 build with
RAM above 4GB (of which 2GB is placed at 64-bit addresses).
The cause is that UEFI builds identity-mapped page tables, and
>4GB addresses are not supported in an Ia32 build.
This patch captures the PhysicalAddressBits value that is used to build
the page tables in PageTbl.c (Ia32/X64), and uses it in
ConvertMemoryPageAttributes() to check whether an address is supported.
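A hedged sketch of the added range check, assuming the captured width is
stored in a module global ("mPhysicalAddressBits" is an illustrative
name):

  UINT64  MaxAddress;

  MaxAddress = LShiftU64 (1, mPhysicalAddressBits) - 1;
  if ((BaseAddress > MaxAddress) ||
      (Length > MaxAddress - BaseAddress + 1)) {
    //
    // The range cannot be mapped by the page tables we built.
    //
    return RETURN_UNSUPPORTED;
  }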
With this patch, the debug messages look like the following.
UefiMemory protection: 0x0 - 0x9F000 Success
UefiMemory protection: 0x100000 - 0x807000 Success
UefiMemory protection: 0x808000 - 0x810000 Success
UefiMemory protection: 0x818000 - 0x820000 Success
UefiMemory protection: 0x1510000 - 0x7B798000 Success
UefiMemory protection: 0x7B79B000 - 0x7E538000 Success
UefiMemory protection: 0x7E539000 - 0x7E545000 Success
UefiMemory protection: 0x7E55A000 - 0x7E61F000 Success
UefiMemory protection: 0x7E62B000 - 0x7F6AB000 Success
UefiMemory protection: 0x7F703000 - 0x7F70B000 Success
UefiMemory protection: 0x7F70F000 - 0x7F778000 Success
UefiMemory protection: 0x100000000 - 0x180000000 Unsupported
Cc: Jiewen Yao <jiewen.yao@intel.com>
Cc: Laszlo Ersek <lersek@redhat.com>
Cc: Eric Dong <eric.dong@intel.com>
Originally-suggested-by: Jiewen Yao <jiewen.yao@intel.com>
Reported-by: Laszlo Ersek <lersek@redhat.com>
Contributed-under: TianoCore Contribution Agreement 1.1
Signed-off-by: Star Zeng <star.zeng@intel.com>
Reviewed-by: Jiewen Yao <jiewen.yao@intel.com>
Tested-by: Laszlo Ersek <lersek@redhat.com>
Consume PeCoffSearchImageBase() from PeCoffGetEntryPointLib and
DumpCpuContext() from CpuExceptionHandlerLib to replace the module's own
implementations.
Cc: Jiewen Yao <jiewen.yao@intel.com>
Cc: Michael Kinney <michael.d.kinney@intel.com>
Cc: Feng Tian <feng.tian@intel.com>
Contributed-under: TianoCore Contribution Agreement 1.0
Signed-off-by: Jeff Fan <jeff.fan@intel.com>
Reviewed-by: Jiewen Yao <jiewen.yao@intel.com>
This PCD holds the address mask for page table entries when memory
encryption is enabled on AMD processors supporting the Secure Encrypted
Virtualization (SEV) feature.
The mask is applied when page table entries are created or modified.
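A hedged sketch of how the mask gets applied when an entry is written
(PAGING_1G_ADDRESS_MASK_64, IA32_PG_P, and IA32_PG_RW as used inside
PiSmmCpuDxeSmm; the entry expression itself is illustrative):

  UINT64  AddressEncMask;

  //
  // Fetch the SEV encryption mask once; it is OR-ed into every new
  // page table entry alongside the usual attribute bits.
  //
  AddressEncMask = PcdGet64 (PcdPteMemoryEncryptionAddressOrMask) &
                   PAGING_1G_ADDRESS_MASK_64;
  *PageEntry     = PhysicalAddress | AddressEncMask | IA32_PG_P | IA32_PG_RW;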
CC: Jeff Fan <jeff.fan@intel.com>
Cc: Feng Tian <feng.tian@intel.com>
Cc: Star Zeng <star.zeng@intel.com>
Cc: Laszlo Ersek <lersek@redhat.com>
Cc: Brijesh Singh <brijesh.singh@amd.com>
Contributed-under: TianoCore Contribution Agreement 1.0
Signed-off-by: Leo Duran <leo.duran@amd.com>
Reviewed-by: Jeff Fan <jeff.fan@intel.com>
This patch marks the normal OS buffers (EfiLoaderCode/Data,
EfiBootServicesCode/Data, EfiConventionalMemory, EfiACPIReclaimMemory)
as not-present after SmmReadyToLock.
Accessing those regions from SMM during the OS runtime phase is not a
good practice. Previously, we did a similar check in SmmMemLib to help
SMI handlers validate addresses, but if an SMI handler forgets the
check, it can still access these OS regions and introduce risk.
So here we enforce the policy in the page tables to prevent that from
happening.
Cc: Jeff Fan <jeff.fan@intel.com>
Cc: Michael D Kinney <michael.d.kinney@intel.com>
Cc: Laszlo Ersek <lersek@redhat.com>
Contributed-under: TianoCore Contribution Agreement 1.0
Signed-off-by: Jiewen Yao <jiewen.yao@intel.com>
Reviewed-by: Jeff Fan <jeff.fan@intel.com>
This patch fixes https://bugzilla.tianocore.org/show_bug.cgi?id=246
Previously, when an SMM exception happened after EndOfDxe with
StackGuard enabled on IA32, a #DF (double fault) exception was reported
instead of a #PF (page fault).
The root cause is as follows:
the current EDKII SMM page protection locks the GDT.
If IA32 stack guard is enabled, the page fault handler performs a task
switch, and that task switch must write the busy flag in the GDT and
write the TSS. However, the GDT and TSS are locked at that time, so the
double fault happens.
We decided not to lock the GDT when IA32 StackGuard is enabled.
This issue does not exist on X64, or on IA32 without StackGuard.
Cc: Laszlo Ersek <lersek@redhat.com>
Cc: Jeff Fan <jeff.fan@intel.com>
Cc: Michael D Kinney <michael.d.kinney@intel.com>
Contributed-under: TianoCore Contribution Agreement 1.0
Signed-off-by: Jiewen Yao <jiewen.yao@intel.com>
Reviewed-by: Jeff Fan <jeff.fan@intel.com>
Regression-tested-by: Laszlo Ersek <lersek@redhat.com>
https://bugzilla.tianocore.org/show_bug.cgi?id=277
Remove dependency on layout of PROCESSOR_SMM_DESCRIPTOR
everywhere possible. The only exception is the standard
SMI entry handler template that is included with the
PiSmmCpuDxeSmm module. This allows an instance of the
SmmCpuFeaturesLib to provide alternate
PROCESSOR_SMM_DESCRIPTOR structure layouts.
Cc: Jiewen Yao <jiewen.yao@intel.com>
Cc: Jeff Fan <jeff.fan@intel.com>
Cc: Feng Tian <feng.tian@intel.com>
Contributed-under: TianoCore Contribution Agreement 1.0
Signed-off-by: Michael Kinney <michael.d.kinney@intel.com>
Reviewed-by: Jeff Fan <jeff.fan@intel.com>
Reviewed-by: Feng Tian <feng.tian@intel.com>
https://bugzilla.tianocore.org/show_bug.cgi?id=277
All CPUs use the same MTRR settings. Move MTRR settings
from a field in the PROCESSOR_SMM_DESCRIPTOR structure into
a module global variable.
Cc: Jiewen Yao <jiewen.yao@intel.com>
Cc: Jeff Fan <jeff.fan@intel.com>
Cc: Feng Tian <feng.tian@intel.com>
Contributed-under: TianoCore Contribution Agreement 1.0
Signed-off-by: Michael Kinney <michael.d.kinney@intel.com>
Reviewed-by: Jeff Fan <jeff.fan@intel.com>
Reviewed-by: Feng Tian <feng.tian@intel.com>
Update TransferApToSafeState() to use UINTN parameters, which reduces
the number of type casts required at the call sites. Also change the
NumberToFinish parameter from UINT32* to a UINTN NumberToFinishAddress
to resolve issues with converting a volatile pointer to a non-volatile
pointer. The assembly code that receives the NumberToFinishAddress value
must treat that memory location as volatile to track the number of APs.
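A hedged sketch of the reworked function under those parameter types
(close to, but not verbatim, the patch):

  VOID
  TransferApToSafeState (
    IN UINTN  ApHltLoopCode,
    IN UINTN  TopOfStack,
    IN UINTN  NumberToFinishAddress
    )
  {
    //
    // Hand NumberToFinishAddress to the hlt-loop code as plain UINTN
    // context; the assembly side treats the location as volatile.
    //
    SwitchStack (
      (SWITCH_STACK_ENTRY_POINT)ApHltLoopCode,
      (VOID *)NumberToFinishAddress,
      NULL,
      (VOID *)TopOfStack
      );
    //
    // It should never reach here.
    //
    ASSERT (FALSE);
  }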
Cc: Liming Gao <liming.gao@intel.com>
Cc: Laszlo Ersek <lersek@redhat.com>
Cc: Andrew Fish <afish@apple.com>
Cc: Jeff Fan <jeff.fan@intel.com>
Contributed-under: TianoCore Contribution Agreement 1.0
Signed-off-by: Michael Kinney <michael.d.kinney@intel.com>
Reviewed-by: Laszlo Ersek <lersek@redhat.com>
Reviewed-by: Jeff Fan <jeff.fan@intel.com>
PiSmmCpuDxeSmm consumes SmmAttributesTable and sets up the page table:
1) The code region is marked read-only and the data region
non-executable, if the PE image is 4K aligned.
2) Important data structures, such as the GDT/IDT, are set to RO.
3) SmmSaveState is set to non-executable,
and the SMM entry point is set to read-only.
4) If static paging is supported, the page table itself is read-only.
We use the page table to protect other components, and the page table
itself.
If we use dynamic paging, we can still provide *partial* protection,
and hope the page table is not modified by other components.
The XD enabling code is moved to SmiEntry so that NX takes effect.
Cc: Jeff Fan <jeff.fan@intel.com>
Cc: Feng Tian <feng.tian@intel.com>
Cc: Star Zeng <star.zeng@intel.com>
Cc: Michael D Kinney <michael.d.kinney@intel.com>
Cc: Laszlo Ersek <lersek@redhat.com>
Contributed-under: TianoCore Contribution Agreement 1.0
Signed-off-by: Jiewen Yao <jiewen.yao@intel.com>
Tested-by: Laszlo Ersek <lersek@redhat.com>
Reviewed-by: Jeff Fan <jeff.fan@intel.com>
Reviewed-by: Michael D Kinney <michael.d.kinney@intel.com>
We put the APs into a hlt-loop in safe code, but we decremented
mNumberToFinish before the APs entered that safe code; Paolo pointed out
this gap.
This patch moves the mNumberToFinish decrement into the safe code, which
makes sure the BSP waits until all APs are actually running in the safe
code.
https://bugzilla.tianocore.org/show_bug.cgi?id=216
Reported-by: Paolo Bonzini <pbonzini@redhat.com>
Cc: Laszlo Ersek <lersek@redhat.com>
Cc: Paolo Bonzini <pbonzini@redhat.com>
Cc: Jiewen Yao <jiewen.yao@intel.com>
Cc: Michael D Kinney <michael.d.kinney@intel.com>
Contributed-under: TianoCore Contribution Agreement 1.0
Signed-off-by: Jeff Fan <jeff.fan@intel.com>
Reviewed-by: Paolo Bonzini <pbonzini@redhat.com>
Reviewed-by: Jiewen Yao <jiewen.yao@intel.com>
Tested-by: Laszlo Ersek <lersek@redhat.com>
On the S3 path, we wake up the APs to restore CPU context in the
PiSmmCpuDxeSmm driver. However, after the CPU contexts are restored, we
place the APs in a hlt-loop in borrowed space under 1MB.
If an NMI or SMI happens, an AP may exit the hlt state and execute the
instruction following the HLT instruction, but the code under 1MB is no
longer safe at that time.
The fix is to allocate an ACPI NVS range to hold the AP hlt-loop code.
Once CPU context restoration has finished, the APs execute in this ACPI
NVS range.
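A hedged sketch of reserving such a range below 4GB ("ApLoopCodeSize" is
an illustrative name for the size of the copied code):

  EFI_PHYSICAL_ADDRESS  Address;
  EFI_STATUS            Status;

  //
  // ACPI NVS memory survives into OS runtime, so the hlt-loop code
  // copied here stays valid after boot memory is reclaimed.
  //
  Address = BASE_4GB - 1;
  Status  = gBS->AllocatePages (
                   AllocateMaxAddress,
                   EfiACPIMemoryNVS,
                   EFI_SIZE_TO_PAGES (ApLoopCodeSize),
                   &Address
                   );
  ASSERT_EFI_ERROR (Status);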
https://bugzilla.tianocore.org/show_bug.cgi?id=216
v2:
1. Make the stack aligned, per Laszlo's comment.
2. Trim trailing whitespace.
3. Update the year in the file header.
Reported-by: Laszlo Ersek <lersek@redhat.com>
Analyzed-by: Paolo Bonzini <pbonzini@redhat.com>
Analyzed-by: Laszlo Ersek <lersek@redhat.com>
Cc: Laszlo Ersek <lersek@redhat.com>
Cc: Paolo Bonzini <pbonzini@redhat.com>
Cc: Jiewen Yao <jiewen.yao@intel.com>
Cc: Michael D Kinney <michael.d.kinney@intel.com>
Contributed-under: TianoCore Contribution Agreement 1.0
Signed-off-by: Jeff Fan <jeff.fan@intel.com>
Reviewed-by: Paolo Bonzini <pbonzini@redhat.com>
Reviewed-by: Jiewen Yao <jiewen.yao@intel.com>
Tested-by: Laszlo Ersek <lersek@redhat.com>
If PcdAcpiS3Enable is disabled, skip the S3-related logic.
Cc: Jeff Fan <jeff.fan@intel.com>
Cc: Jiewen Yao <jiewen.yao@intel.com>
Cc: Michael Kinney <michael.d.kinney@intel.com>
Cc: Laszlo Ersek <lersek@redhat.com>
Contributed-under: TianoCore Contribution Agreement 1.0
Signed-off-by: Star Zeng <star.zeng@intel.com>
Reviewed-by: Jeff Fan <jeff.fan@intel.com>
Reviewed-by: Laszlo Ersek <lersek@redhat.com>
Tested-by: Laszlo Ersek <lersek@redhat.com>
REGISTER_TYPE in UefiCpuPkg/Include/AcpiCpuData.h defines a MemoryMapped
enum value. However, support for the MemoryMapped enum is missing from
the implementation of SetProcessorRegister(). This patch adds
MemoryMapped support to SetProcessorRegister().
A spin lock is added to avoid potential conflicts when multiple
processors update the same memory space.
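A hedged sketch of the new case inside SetProcessorRegister()'s switch
(the "mMemoryMappedLock" name and field usage mirror the description
above; this is not the patch verbatim):

  case MemoryMapped:
    //
    // Serialize read-modify-write accesses that may target the same
    // memory-mapped register from multiple processors.
    //
    AcquireSpinLock (&mMemoryMappedLock);
    MmioBitFieldWrite32 (
      (UINTN)RegisterTableEntry->Index,
      RegisterTableEntry->ValidBitStart,
      RegisterTableEntry->ValidBitStart + RegisterTableEntry->ValidBitLength - 1,
      (UINT32)RegisterTableEntry->Value
      );
    ReleaseSpinLock (&mMemoryMappedLock);
    break;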
Cc: Michael Kinney <michael.d.kinney@intel.com>
Cc: Feng Tian <feng.tian@intel.com>
Contributed-under: TianoCore Contribution Agreement 1.0
Signed-off-by: Jeff Fan <jeff.fan@intel.com>
Reviewed-by: Feng Tian <feng.tian@intel.com>
Reviewed-by: Michael Kinney <michael.d.kinney@intel.com>
Regression-tested-by: Laszlo Ersek <lersek@redhat.com>
Update the MSR semaphores to use the ones in the allocated, aligned
semaphore buffer. If the buffer does not have enough room for the MSR
semaphores, allocate one more page.
Cc: Michael Kinney <michael.d.kinney@intel.com>
Cc: Feng Tian <feng.tian@intel.com>
Contributed-under: TianoCore Contribution Agreement 1.0
Signed-off-by: Jeff Fan <jeff.fan@intel.com>
Reviewed-by: Feng Tian <feng.tian@intel.com>
Reviewed-by: Michael Kinney <michael.d.kinney@intel.com>
Regression-tested-by: Laszlo Ersek <lersek@redhat.com>
Update the per-CPU semaphores to use the ones in the allocated, aligned
semaphore buffer.
Cc: Michael Kinney <michael.d.kinney@intel.com>
Cc: Feng Tian <feng.tian@intel.com>
Contributed-under: TianoCore Contribution Agreement 1.0
Signed-off-by: Jeff Fan <jeff.fan@intel.com>
Reviewed-by: Feng Tian <feng.tian@intel.com>
Reviewed-by: Michael Kinney <michael.d.kinney@intel.com>
Regression-tested-by: Laszlo Ersek <lersek@redhat.com>
Allocate the per-CPU semaphores in the allocated, aligned semaphore
buffer, and add them to the semaphores structure.
Cc: Michael Kinney <michael.d.kinney@intel.com>
Cc: Feng Tian <feng.tian@intel.com>
Contributed-under: TianoCore Contribution Agreement 1.0
Signed-off-by: Jeff Fan <jeff.fan@intel.com>
Reviewed-by: Feng Tian <feng.tian@intel.com>
Reviewed-by: Michael Kinney <michael.d.kinney@intel.com>
Regression-tested-by: Laszlo Ersek <lersek@redhat.com>
Update all global semaphores to use the ones in the allocated, aligned
semaphore buffer.
Cc: Michael Kinney <michael.d.kinney@intel.com>
Cc: Feng Tian <feng.tian@intel.com>
Contributed-under: TianoCore Contribution Agreement 1.0
Signed-off-by: Jeff Fan <jeff.fan@intel.com>
Reviewed-by: Feng Tian <feng.tian@intel.com>
Reviewed-by: Michael Kinney <michael.d.kinney@intel.com>
Regression-tested-by: Laszlo Ersek <lersek@redhat.com>
Get the semaphore alignment/size requirement and allocate an aligned
buffer for all the global spin locks and semaphores.
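A hedged sketch of the allocation (GetSpinLockProperties() comes from
SynchronizationLib; "NumberOfSemaphores" is an illustrative count):

  UINTN  SemaphoreSize;
  UINTN  TotalSize;
  VOID   *SemaphoreBlock;

  //
  // Each semaphore gets its own properly aligned slot, so the slot size
  // equals the platform's spin lock alignment/size requirement.
  //
  SemaphoreSize  = GetSpinLockProperties ();
  TotalSize      = SemaphoreSize * NumberOfSemaphores;
  SemaphoreBlock = AllocatePages (EFI_SIZE_TO_PAGES (TotalSize));
  ASSERT (SemaphoreBlock != NULL);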
Cc: Michael Kinney <michael.d.kinney@intel.com>
Cc: Feng Tian <feng.tian@intel.com>
Contributed-under: TianoCore Contribution Agreement 1.0
Signed-off-by: Jeff Fan <jeff.fan@intel.com>
Reviewed-by: Feng Tian <feng.tian@intel.com>
Reviewed-by: Michael Kinney <michael.d.kinney@intel.com>
Regression-tested-by: Laszlo Ersek <lersek@redhat.com>
Use the MSR_IA32_MISC_ENABLE definition from UefiCpuPkg/Include and
remove the local definition.
Cc: Michael Kinney <michael.d.kinney@intel.com>
Cc: Jiewen Yao <jiewen.yao@intel.com>
Cc: Feng Tian <feng.tian@intel.com>
Contributed-under: TianoCore Contribution Agreement 1.0
Signed-off-by: Jeff Fan <jeff.fan@intel.com>
Reviewed-by: Jiewen Yao <jiewen.yao@intel.com>
Reviewed-by: Michael Kinney <michael.d.kinney@intel.com>
Regression-tested-by: Laszlo Ersek <lersek@redhat.com>
So that we can use write-protection for code later.
This is REPOST.
It includes the bug fix from "Paolo Bonzini" <pbonzini@redhat.com>:
Title: fix generation of 32-bit PAE page tables
"Bits 1 and 2 are reserved in 32-bit PAE Page Directory Pointer Table
Entries (PDPTEs); see Table 4-8 in the SDM. With VMX extended page
tables, the processor notices and fails the VM entry as soon as CR0.PG
is set to 1."
Thanks to "Laszlo Ersek" <lersek@redhat.com> for validating the fix.
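A hedged one-line sketch of the constraint Paolo describes, writing a
32-bit PAE PDPTE ("Pdpte" and "PageDirectoryAddress" are illustrative;
IA32_PG_P as used in the module):

  //
  // A 32-bit PAE PDPTE may set only the Present bit (plus cache-control
  // bits); bits 1 and 2 (RW, US) are reserved and must stay zero.
  //
  Pdpte[Index] = PageDirectoryAddress | IA32_PG_P;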
Contributed-under: TianoCore Contribution Agreement 1.0
Signed-off-by: "Yao, Jiewen" <jiewen.yao@intel.com>
Signed-off-by: "Paolo Bonzini" <pbonzini@redhat.com>
Reviewed-by: Michael Kinney <michael.d.kinney@intel.com>
Tested-by: Laszlo Ersek <lersek@redhat.com>
Cc: "Fan, Jeff" <jeff.fan@intel.com>
Cc: "Kinney, Michael D" <michael.d.kinney@intel.com>
Cc: "Laszlo Ersek" <lersek@redhat.com>
Cc: "Paolo Bonzini" <pbonzini@redhat.com>
git-svn-id: https://svn.code.sf.net/p/edk2/code/trunk/edk2@19067 6f19259b-4bc3-4df7-8a09-765794883524
Always set the RW and P bits in page table entries by default,
so that we can use write-protection for code later.
Contributed-under: TianoCore Contribution Agreement 1.0
Signed-off-by: "Yao, Jiewen" <jiewen.yao@intel.com>
Reviewed-by: "Kinney, Michael D" <michael.d.kinney@intel.com>
git-svn-id: https://svn.code.sf.net/p/edk2/code/trunk/edk2@18960 6f19259b-4bc3-4df7-8a09-765794883524
Add NULL implementations for 2 new APIs in SmmCpuFeaturesLib.
SmmCpuFeaturesCompleteSmmReadyToLock() is a hook point that allows
CPU-specific code to do more register setup after
the gEfiSmmReadyToLockProtocolGuid notification is completely processed.
Add SmmCpuFeaturesCompleteSmmReadyToLock() to PerformRemainingTasks() and
PerformPreTasks().
SmmCpuFeaturesAllocatePageTableMemory() is an API that allows the CPU
library to allocate memory from a specific region for storing page
tables. All page table allocations will go through
AllocatePageTableMemory().
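A hedged sketch of the NULL instances: the lock hook does nothing, and
returning NULL from the allocator tells the caller to fall back to the
default page allocation.

  VOID
  EFIAPI
  SmmCpuFeaturesCompleteSmmReadyToLock (
    VOID
    )
  {
    //
    // No CPU-specific action in the NULL instance.
    //
  }

  VOID *
  EFIAPI
  SmmCpuFeaturesAllocatePageTableMemory (
    IN UINTN  Pages
    )
  {
    //
    // NULL means "no special region": the caller uses its default
    // allocator instead.
    //
    return NULL;
  }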
Contributed-under: TianoCore Contribution Agreement 1.0
Signed-off-by: "Yao, Jiewen" <jiewen.yao@intel.com>
Reviewed-by: "Kinney, Michael D" <michael.d.kinney@intel.com>
git-svn-id: https://svn.code.sf.net/p/edk2/code/trunk/edk2@18958 6f19259b-4bc3-4df7-8a09-765794883524
In this way, we centralize the silicon configuration in the
PerformRemainingTasks()/PerformPreTasks() functions.
If more features need to be configured, they can be added in
PerformRemainingTasks()/PerformPreTasks() only.
Contributed-under: TianoCore Contribution Agreement 1.0
Signed-off-by: "Yao, Jiewen" <jiewen.yao@intel.com>
Reviewed-by: "Kinney, Michael D" <michael.d.kinney@intel.com>
Reviewed-by: "Laszlo Ersek" <lersek@redhat.com>
git-svn-id: https://svn.code.sf.net/p/edk2/code/trunk/edk2@18938 6f19259b-4bc3-4df7-8a09-765794883524
Move GDT initialization from InitializeMpServiceData() to a CPU
architecture specific function.
We create SmmFuncsArch.c to hold the CPU-specific functions, so that
EFI_IMAGE_MACHINE_TYPE_SUPPORTED(EFI_IMAGE_MACHINE_X64) can be removed.
For the IA32 version, we always allocate a new page for the GDT, for
easy maintenance.
For the X64 version, we fix up TssBase in the GDT entry to make sure the
TSS data is correct, and remove the TSS fixup for the GDT from the ASM
file.
Contributed-under: TianoCore Contribution Agreement 1.0
Signed-off-by: "Yao, Jiewen" <jiewen.yao@intel.com>
Reviewed-by: "Fan, Jeff" <jeff.fan@intel.com>
git-svn-id: https://svn.code.sf.net/p/edk2/code/trunk/edk2@18937 6f19259b-4bc3-4df7-8a09-765794883524
The PiSmmCpuDxeSmm module uses PcdFrameworkCompatibilitySupport to
provide compatibility with the SMM support in the IntelFrameworkPkg.
This change removes the Framework compatibility and requires all SMM
modules that provide SMI handlers to follow the PI Specification.
Cc: Jeff Fan <jeff.fan@intel.com>
Contributed-under: TianoCore Contribution Agreement 1.0
Signed-off-by: Michael Kinney <michael.d.kinney@intel.com>
Reviewed-by: Jeff Fan <jeff.fan@intel.com>
git-svn-id: https://svn.code.sf.net/p/edk2/code/trunk/edk2@18726 6f19259b-4bc3-4df7-8a09-765794883524
The PiSmmCpuDxeSmm module does not use any services from the SmmLib.
This change removes the SmmLib from PiSmmCpuDxeSmm module and also
removes the lib mapping in the UefiCpuPkg DSC file because no other
modules in the UefiCpuPkg use the SmmLib.
Removal of SmmLib is now possible because the only API call to it,
ClearSmi(), was ultimately removed from PiSmmCpuDxeSmm -- see the
"BUGBUG" comment in git commit 529a5a86.
Cc: "Yao, Jiewen" <jiewen.yao@intel.com>
Cc: Jeff Fan <jeff.fan@intel.com>
Contributed-under: TianoCore Contribution Agreement 1.0
Signed-off-by: Michael Kinney <michael.d.kinney@intel.com>
Reviewed-by: "Yao, Jiewen" <jiewen.yao@intel.com>
git-svn-id: https://svn.code.sf.net/p/edk2/code/trunk/edk2@18673 6f19259b-4bc3-4df7-8a09-765794883524
Add module that initializes a CPU for the SMM environment and
installs the first level SMI handler. This module along with the
SMM IPL and SMM Core provide the services required for
DXE_SMM_DRIVERS to register hardware and software SMI handlers.
CPU specific features are abstracted through the SmmCpuFeaturesLib.
Platform specific features are abstracted through the
SmmCpuPlatformHookLib.
Several PCDs are added to enable/disable features and configure
settings for the PiSmmCpuDxeSmm module.
Changes between [PATCH v1] and [PATCH v2]:
1) Swap PTE init order for QEMU compatibility.
The current PTE initialization algorithm works on hardware but breaks
the QEMU emulator. Update the PTE initialization order to be
compatible with both.
2) Update the comment block that describes the 32KB SMBASE alignment
requirement to match the contents of the Intel(R) 64 and IA-32
Architectures Software Developer's Manual.
3) Remove the BUGBUG comment and the call to ClearSmi() that is not
required. SMI should be cleared by the root SMI handler.
[jeff.fan@intel.com: Fix code style issues reported by ECC]
Contributed-under: TianoCore Contribution Agreement 1.0
Signed-off-by: Michael Kinney <michael.d.kinney@intel.com>
Cc: Laszlo Ersek <lersek@redhat.com>
Cc: Paolo Bonzini <pbonzini@redhat.com>
[pbonzini@redhat.com: InitPaging: prepare PT before filling in PDE]
Contributed-under: TianoCore Contribution Agreement 1.0
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Acked-by: Laszlo Ersek <lersek@redhat.com>
Reviewed-by: Jeff Fan <jeff.fan@intel.com>
git-svn-id: https://svn.code.sf.net/p/edk2/code/trunk/edk2@18645 6f19259b-4bc3-4df7-8a09-765794883524