Update TransferApToSafeState() to use UINTN params to reduce the
number of type casts required in these calls. Also change the
NumberToFinish parameter from UINT32* to a UINTN
NumberToFinishAddress to resolve issues with conversion from a
volatile pointer to a non-volatile pointer. The assembly code that
receives the NumberToFinishAddress value must treat that memory
location as volatile to track the number of APs.
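A minimal sketch of the call site in C (the caller name is
illustrative; only TransferApToSafeState() and its parameter types
come from the patch):

  volatile UINT32  mNumberToFinish;

  VOID
  TransferApToSafeState (
    IN UINTN  ApHltLoopCode,
    IN UINTN  TopOfStack,
    IN UINTN  NumberToFinishAddress
    );

  VOID
  PlaceApsInSafeState (
    IN UINTN  ApHltLoopCode,   // address of the safe-state stub
    IN UINTN  TopOfStack
    )
  {
    //
    // Passing the address as a plain UINTN avoids casting away the
    // volatile qualifier; the assembly callee re-forms the pointer
    // and treats the location as volatile.
    //
    TransferApToSafeState (ApHltLoopCode, TopOfStack, (UINTN)&mNumberToFinish);
  }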
Cc: Liming Gao <liming.gao@intel.com>
Cc: Laszlo Ersek <lersek@redhat.com>
Cc: Andrew Fish <afish@apple.com>
Cc: Jeff Fan <jeff.fan@intel.com>
Contributed-under: TianoCore Contribution Agreement 1.0
Signed-off-by: Michael Kinney <michael.d.kinney@intel.com>
Reviewed-by: Laszlo Ersek <lersek@redhat.com>
Reviewed-by: Jeff Fan <jeff.fan@intel.com>
PiSmmCpuDxeSmm consumes the SmmAttributesTable and sets up the page
table:
1) The code region is marked read-only and the data region
non-executable, if the PE image is 4K aligned.
2) Important data structures, such as the GDT and IDT, are set to
read-only.
3) The SmmSaveState is set to non-executable, and the SmmEntrypoint
is set to read-only.
4) If static paging is supported, the page table itself is read-only.
We use the page table to protect the other components, and the page
table itself. If we use dynamic paging, we can still provide
*partial* protection, and hope the page table is not modified by
other components.
The XD enabling code is moved to SmiEntry so that NX takes effect.
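For illustration (hypothetical helpers; BIT1/BIT63 are the MdePkg bit
macros), each protection comes down to one bit in the 4K page table
entry:

  VOID
  SetPageReadOnly (
    IN OUT UINT64  *Pte
    )
  {
    *Pte &= ~(UINT64)BIT1;   // clear RW: writes fault (with CR0.WP set)
  }

  VOID
  SetPageNonExecutable (
    IN OUT UINT64  *Pte
    )
  {
    *Pte |= BIT63;           // set XD: needs EFER.NXE, hence the SmiEntry move
  }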
Cc: Jeff Fan <jeff.fan@intel.com>
Cc: Feng Tian <feng.tian@intel.com>
Cc: Star Zeng <star.zeng@intel.com>
Cc: Michael D Kinney <michael.d.kinney@intel.com>
Cc: Laszlo Ersek <lersek@redhat.com>
Contributed-under: TianoCore Contribution Agreement 1.0
Signed-off-by: Jiewen Yao <jiewen.yao@intel.com>
Tested-by: Laszlo Ersek <lersek@redhat.com>
Reviewed-by: Jeff Fan <jeff.fan@intel.com>
Reviewed-by: Michael D Kinney <michael.d.kinney@intel.com>
We put the APs into a hlt-loop in the safe code, but we decrease
mNumberToFinish before the APs enter the safe code. Paolo pointed out
this gap.
This patch moves the mNumberToFinish decrement into the safe code, so
the BSP is guaranteed to wait until all APs are running in the safe
code.
https://bugzilla.tianocore.org/show_bug.cgi?id=216
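The ordering after the patch, sketched in C (the real AP side is the
assembly stub in the safe code; function names are illustrative):

  volatile UINT32  mNumberToFinish;

  VOID
  ApSafeCode (
    VOID
    )
  {
    InterlockedDecrement (&mNumberToFinish);   // only once inside safe code
    for (;;) {
      CpuSleep ();                             // hlt-loop
    }
  }

  VOID
  BspWaitForAps (
    VOID
    )
  {
    while (mNumberToFinish > 0) {
      CpuPause ();
    }
    //
    // Every AP is now executing in the safe code.
    //
  }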
Reported-by: Paolo Bonzini <pbonzini@redhat.com>
Cc: Laszlo Ersek <lersek@redhat.com>
Cc: Paolo Bonzini <pbonzini@redhat.com>
Cc: Jiewen Yao <jiewen.yao@intel.com>
Cc: Michael D Kinney <michael.d.kinney@intel.com>
Contributed-under: TianoCore Contribution Agreement 1.0
Signed-off-by: Jeff Fan <jeff.fan@intel.com>
Reviewed-by: Paolo Bonzini <pbonzini@redhat.com>
Reviewed-by: Jiewen Yao <jiewen.yao@intel.com>
Tested-by: Laszlo Ersek <lersek@redhat.com>
On the S3 path, the PiSmmCpuDxeSmm driver wakes up the APs to restore
the CPU context. However, after the CPU contexts are restored, we
place the APs in a hlt-loop in borrowed space under 1MB.
If an NMI or SMI happens, the APs may exit the hlt state and execute
the instruction following the HLT instruction, but the code under 1MB
is no longer safe at that time.
This fix allocates an ACPI NVS range to hold the AP hlt-loop code.
Once the CPU contexts have been restored, the APs execute in this
ACPI NVS range.
https://bugzilla.tianocore.org/show_bug.cgi?id=216
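A sketch of the allocation (standard UEFI boot services; the stub
symbols are illustrative):

  extern CONST UINT8  mApHltLoopCode[];     // illustrative stub bytes
  extern CONST UINTN  mApHltLoopCodeSize;

  VOID
  AllocateApLoopCode (
    VOID
    )
  {
    EFI_PHYSICAL_ADDRESS  Address;
    EFI_STATUS            Status;

    //
    // ACPI NVS memory survives S3, so the relocated hlt-loop stub
    // is still intact, and safe, at resume time.
    //
    Address = BASE_4GB - 1;
    Status  = gBS->AllocatePages (
                     AllocateMaxAddress,
                     EfiACPIMemoryNVS,
                     EFI_SIZE_TO_PAGES (mApHltLoopCodeSize),
                     &Address
                     );
    ASSERT_EFI_ERROR (Status);
    CopyMem ((VOID *)(UINTN)Address, mApHltLoopCode, mApHltLoopCodeSize);
  }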
v2:
1. Make stack alignment per Laszlo's comment.
2. Trim whitespace at end of line.
3. Update the year in the file header.
Reported-by: Laszlo Ersek <lersek@redhat.com>
Analyzed-by: Paolo Bonzini <pbonzini@redhat.com>
Analyzed-by: Laszlo Ersek <lersek@redhat.com>
Cc: Laszlo Ersek <lersek@redhat.com>
Cc: Paolo Bonzini <pbonzini@redhat.com>
Cc: Jiewen Yao <jiewen.yao@intel.com>
Cc: Michael D Kinney <michael.d.kinney@intel.com>
Contributed-under: TianoCore Contribution Agreement 1.0
Signed-off-by: Jeff Fan <jeff.fan@intel.com>
Reviewed-by: Paolo Bonzini <pbonzini@redhat.com>
Reviewed-by: Jiewen Yao <jiewen.yao@intel.com>
Tested-by: Laszlo Ersek <lersek@redhat.com>
Commits 28ee581646 and 246cd9085f added these ENDs as part of the
manual conversion from *.asm files. However, the ENDs make no sense
for NASM. Although they don't break the build, NASM complains about
them:
label alone on a line without a colon might be in error
(This NASM warning category dates back to NASM 0.95, commit
6768eb71d8deb.)
Remove the ENDs.
Cc: Jeff Fan <jeff.fan@intel.com>
Cc: Liming Gao <liming.gao@intel.com>
Cc: Michael Kinney <michael.d.kinney@intel.com>
Contributed-under: TianoCore Contribution Agreement 1.0
Signed-off-by: Laszlo Ersek <lersek@redhat.com>
Reviewed-by: Liming Gao <liming.gao@intel.com>
Use 16-bit assembly code to replace the hard-coded db bytes.
In V2:
Add 0x67 prefix to far jump
When we enter protected mode, with the far jump still in big real mode,
the JMP instruction not only needs the 0x66 prefix (for 32-bit operand
size), but also the 0x67 prefix (for 32-bit address size). Use the a32
nasm modifier to enforce this.
This bug breaks S3 resume in the Ia32 + SMM_REQUIRE build of OVMF.
Contributed-under: TianoCore Contribution Agreement 1.0
Signed-off-by: Liming Gao <liming.gao@intel.com>
Signed-off-by: Laszlo Ersek <lersek@redhat.com>
The BaseTools/Scripts/ConvertMasmToNasm.py script was used to convert
Ia32/MpFuncs.asm to Ia32/MpFuncs.nasm. Then, the result was manually
updated to make the build pass.
Contributed-under: TianoCore Contribution Agreement 1.0
Signed-off-by: Liming Gao <liming.gao@intel.com>
Update all global semaphores to point into the allocated, aligned
semaphore buffer.
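The resulting layout, roughly (identifiers are illustrative;
GetSpinLockProperties() reports the required slot size/alignment):

  volatile UINT32  *mSmmCounter;   // illustrative global
  SPIN_LOCK        *mSmmLock;      // illustrative global
  UINTN            SemaphoreSize;
  UINTN            SemaphoreAddr;

  //
  // One aligned buffer; each semaphore occupies its own
  // cache-line-sized slot to avoid false sharing between CPUs.
  //
  SemaphoreSize = GetSpinLockProperties ();
  SemaphoreAddr = (UINTN)AllocatePages (1);

  mSmmCounter    = (volatile UINT32 *)SemaphoreAddr;
  SemaphoreAddr += SemaphoreSize;
  mSmmLock       = (SPIN_LOCK *)SemaphoreAddr;
  SemaphoreAddr += SemaphoreSize;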
Cc: Michael Kinney <michael.d.kinney@intel.com>
Cc: Feng Tian <feng.tian@intel.com>
Contributed-under: TianoCore Contribution Agreement 1.0
Signed-off-by: Jeff Fan <jeff.fan@intel.com>
Reviewed-by: Feng Tian <feng.tian@intel.com>
Reviewed-by: Michael Kinney <michael.d.kinney@intel.com>
Regression-tested-by: Laszlo Ersek <lersek@redhat.com>
So that we can use write-protection for code later.
This is a REPOST.
It includes suggestions from Michael Kinney <michael.d.kinney@intel.com>:
- "For IA32 assembly, can we combine into a single OR instruction that
sets both page enable and WP?"
- "For X64, does it make sense to use single OR instruction instead of 2
BTS instructions as well?"
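In C terms (the actual change is in the SMI entry assembly; BaseLib's
CR0 accessors are shown for illustration):

  //
  // CR0.WP is bit 16 and CR0.PG is bit 31: a single OR sets both in
  // the value written back to CR0, replacing two BTS instructions.
  //
  AsmWriteCr0 (AsmReadCr0 () | BIT16 | BIT31);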
Contributed-under: TianoCore Contribution Agreement 1.0
Signed-off-by: "Yao, Jiewen" <jiewen.yao@intel.com>
Suggested-by: Michael Kinney <michael.d.kinney@intel.com>
Reviewed-by: Michael Kinney <michael.d.kinney@intel.com>
Tested-by: Laszlo Ersek <lersek@redhat.com>
Cc: "Fan, Jeff" <jeff.fan@intel.com>
Cc: "Kinney, Michael D" <michael.d.kinney@intel.com>
Cc: "Laszlo Ersek" <lersek@redhat.com>
Cc: "Paolo Bonzini" <pbonzini@redhat.com>
git-svn-id: https://svn.code.sf.net/p/edk2/code/trunk/edk2@19068 6f19259b-4bc3-4df7-8a09-765794883524
So that we can use write-protection for code later.
This is a REPOST.
It includes the bug fix from "Paolo Bonzini" <pbonzini@redhat.com>:
Title: fix generation of 32-bit PAE page tables
"Bits 1 and 2 are reserved in 32-bit PAE Page Directory Pointer Table
Entries (PDPTEs); see Table 4-8 in the SDM. With VMX extended page
tables, the processor notices and fails the VM entry as soon as CR0.PG
is set to 1."
And thanks to "Laszlo Ersek" <lersek@redhat.com> for validating the fix.
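Illustratively (a hypothetical helper; BIT0/BIT1 are the MdePkg bit
macros):

  UINT64
  MakePdpte (
    IN VOID  *PageDirectory   // 4K-aligned page directory
    )
  {
    //
    // PAE PDPTEs reserve bits 1 and 2: set only the Present bit.
    // Setting RW (BIT1) is the reserved-bit violation described
    // above; VMX fails the VM entry once CR0.PG is set.
    //
    return (UINT64)(UINTN)PageDirectory | BIT0;
  }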
Contributed-under: TianoCore Contribution Agreement 1.0
Signed-off-by: "Yao, Jiewen" <jiewen.yao@intel.com>
Signed-off-by: "Paolo Bonzini" <pbonzini@redhat.com>
Reviewed-by: Michael Kinney <michael.d.kinney@intel.com>
Tested-by: Laszlo Ersek <lersek@redhat.com>
Cc: "Fan, Jeff" <jeff.fan@intel.com>
Cc: "Kinney, Michael D" <michael.d.kinney@intel.com>
Cc: "Laszlo Ersek" <lersek@redhat.com>
Cc: "Paolo Bonzini" <pbonzini@redhat.com>
git-svn-id: https://svn.code.sf.net/p/edk2/code/trunk/edk2@19067 6f19259b-4bc3-4df7-8a09-765794883524
Always set the RW+P bits for page table entries by default, so that
we can use write-protection for code later.
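For example (a hypothetical leaf-entry helper; BIT macros from
MdePkg):

  UINT64
  MakeDefaultEntry (
    IN UINT64  PhysicalAddress   // 4K-aligned
    )
  {
    //
    // Present + Writable by default; write-protection for code is
    // applied later by clearing RW again.
    //
    return PhysicalAddress | BIT0 | BIT1;   // P | RW
  }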
Contributed-under: TianoCore Contribution Agreement 1.0
Signed-off-by: "Yao, Jiewen" <jiewen.yao@intel.com>
Reviewed-by: "Kinney, Michael D" <michael.d.kinney@intel.com>
git-svn-id: https://svn.code.sf.net/p/edk2/code/trunk/edk2@18960 6f19259b-4bc3-4df7-8a09-765794883524
The SmmDebug feature is implemented in ASM, which is not easy to
maintain, so we move it to a C function.
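A sketch of the C shape (function and variable names are
illustrative; the DR accessors are BaseLib functions):

  UINTN  mDr6;
  UINTN  mDr7;

  VOID
  CpuSmmDebugEntry (
    VOID
    )
  {
    mDr6 = AsmReadDr6 ();   // capture debug state at SMI entry
    mDr7 = AsmReadDr7 ();
  }

  VOID
  CpuSmmDebugExit (
    VOID
    )
  {
    AsmWriteDr6 (mDr6);     // restore at SMI exit
    AsmWriteDr7 (mDr7);
  }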
Contributed-under: TianoCore Contribution Agreement 1.0
Signed-off-by: "Yao, Jiewen" <jiewen.yao@intel.com>
Reviewed-by: "Kinney, Michael D" <michael.d.kinney@intel.com>
git-svn-id: https://svn.code.sf.net/p/edk2/code/trunk/edk2@18946 6f19259b-4bc3-4df7-8a09-765794883524
Move GDT initialization from InitializeMpServiceData() to a CPU arch
specific function. We create SmmFuncsArch.c to hold the CPU-specific
functions, so that
EFI_IMAGE_MACHINE_TYPE_SUPPORTED(EFI_IMAGE_MACHINE_X64) can be removed.
For the IA32 version, we always allocate a new page for the GDT, for
easy maintenance.
For the X64 version, we fix up TssBase in the GDT entry to make sure
the TSS data is correct, and remove the TSS fixup for the GDT from
the ASM file.
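A hypothetical C fix-up of the kind that replaces the ASM patching
(assuming the IA32_TSS_DESCRIPTOR bit-field layout from MdePkg's
BaseLib.h):

  VOID
  PatchTssBase (
    IN OUT IA32_TSS_DESCRIPTOR  *Tss,
    IN     UINTN                TssBase
    )
  {
    Tss->Bits.BaseLow  = (UINT32)TssBase & 0xFFFF;
    Tss->Bits.BaseMid  = ((UINT32)TssBase >> 16) & 0xFF;
    Tss->Bits.BaseHigh = ((UINT32)TssBase >> 24) & 0xFF;
  }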
Contributed-under: TianoCore Contribution Agreement 1.0
Signed-off-by: "Yao, Jiewen" <jiewen.yao@intel.com>
Reviewed-by: "Fan, Jeff" <jeff.fan@intel.com>
git-svn-id: https://svn.code.sf.net/p/edk2/code/trunk/edk2@18937 6f19259b-4bc3-4df7-8a09-765794883524
The TSS segment should use (SIZE - 1) as its limit, and should not
set the G bit (the highest bit of LimitHigh), because the limit is a
byte count.
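Concretely (assuming the IA32_TSS_DESCRIPTOR and
IA32_TASK_STATE_SEGMENT types from MdePkg's BaseLib.h; Tss is the
descriptor being built):

  Tss->Bits.LimitLow  = (sizeof (IA32_TASK_STATE_SEGMENT) - 1) & 0xFFFF;
  Tss->Bits.LimitHigh = ((sizeof (IA32_TASK_STATE_SEGMENT) - 1) >> 16) & 0xF;
  Tss->Bits.G         = 0;   // limit is a byte count, not 4K units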
Contributed-under: TianoCore Contribution Agreement 1.0
Signed-off-by: "Yao, Jiewen" <jiewen.yao@intel.com>
Reviewed-by: "Fan, Jeff" <jeff.fan@intel.com>
Reviewed-by: "Kinney, Michael D" <michael.d.kinney@intel.com>
git-svn-id: https://svn.code.sf.net/p/edk2/code/trunk/edk2@18935 6f19259b-4bc3-4df7-8a09-765794883524
Add a module that initializes a CPU for the SMM environment and
installs the first-level SMI handler. This module, along with the SMM
IPL and SMM Core, provides the services required for DXE_SMM_DRIVERS
to register hardware and software SMI handlers.
CPU specific features are abstracted through the SmmCpuFeaturesLib.
Platform specific features are abstracted through the
SmmCpuPlatformHookLib.
Several PCDs are added to enable/disable features and configure
settings for the PiSmmCpuDxeSmm module.
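For example, a DXE_SMM_DRIVER consuming these services registers a
handler through the SMM System Table (handler body and GUID are
illustrative):

  EFI_STATUS
  EFIAPI
  MySmiHandler (
    IN     EFI_HANDLE  DispatchHandle,
    IN     CONST VOID  *Context         OPTIONAL,
    IN OUT VOID        *CommBuffer      OPTIONAL,
    IN OUT UINTN       *CommBufferSize  OPTIONAL
    )
  {
    return EFI_SUCCESS;    // SMI handled
  }

  EFI_STATUS
  RegisterMySmiHandler (
    VOID
    )
  {
    EFI_HANDLE  Handle;

    Handle = NULL;
    return gSmst->SmiHandlerRegister (
                    MySmiHandler,
                    &gMySmiHandlerGuid,   // hypothetical handler GUID
                    &Handle
                    );
  }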
[jeff.fan@intel.com: Fix code style issues reported by ECC]
Contributed-under: TianoCore Contribution Agreement 1.0
Signed-off-by: Michael Kinney <michael.d.kinney@intel.com>
Reviewed-by: Jeff Fan <jeff.fan@intel.com>
git-svn-id: https://svn.code.sf.net/p/edk2/code/trunk/edk2@18646 6f19259b-4bc3-4df7-8a09-765794883524