Searching for the uCode (microcode) patch costs much time. If an AP has the
same processor type as the BSP, the AP can reuse the uCode information
already saved by the BSP to get better performance.
This change enables this optimization.
Cc: Laszlo Ersek <lersek@redhat.com>
Cc: Ruiyu Ni <ruiyu.ni@intel.com>
Contributed-under: TianoCore Contribution Agreement 1.1
Signed-off-by: Eric Dong <eric.dong@intel.com>
Acked-by: Laszlo Ersek <lersek@redhat.com>
Regression-tested-by: Laszlo Ersek <lersek@redhat.com>
Reviewed-by: Ruiyu Ni <ruiyu.ni@intel.com>
Reading uCode from memory has better performance than reading it from
flash, but it needs extra effort to let the BSP copy the uCode from flash
to memory. Also, the BSP has already enabled the cache in the SEC phase,
so the BSP takes less time to relocate the uCode from flash to memory.
After verification, if the system has more than one processor, loading the
uCode from memory reduces the boot time.
This change enables this optimization.
Cc: Laszlo Ersek <lersek@redhat.com>
Cc: Ruiyu Ni <ruiyu.ni@intel.com>
Contributed-under: TianoCore Contribution Agreement 1.1
Signed-off-by: Eric Dong <eric.dong@intel.com>
Reviewed-by: Laszlo Ersek <lersek@redhat.com>
Regression-tested-by: Laszlo Ersek <lersek@redhat.com>
Reviewed-by: Ruiyu Ni <ruiyu.ni@intel.com>
Today's MpInitLib PEI implementation directly calls
PeiServices->GetHobList() from the AP, which may cause a race condition.
This patch fixes the issue by duplicating the IDT for the APs.
Because the CpuMpData structure is stored just after the IDT, the CpuMpData
address equals IDTR.BASE + IDTR.LIMIT + 1.
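A minimal sketch of how an AP can locate the shared data from its IDTR
under this layout (the helper name is illustrative, and VOID * stands in
for the internal CpuMpData type):

  #include <Uefi.h>
  #include <Library/BaseLib.h>

  VOID *
  GetCpuMpDataFromIdtr (
    VOID
    )
  {
    IA32_DESCRIPTOR  Idtr;

    //
    // Read this AP's IDTR; the CpuMpData structure was placed by the BSP
    // immediately after the duplicated IDT.
    //
    AsmReadIdtr (&Idtr);
    return (VOID *) (UINTN) (Idtr.Base + Idtr.Limit + 1);
  }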
v2:
1. Add ALIGN_VALUE() on BufferSize.
2. Add ASSERT() to make sure no memory usage outside of the allocated buffer.
3. Add more comments in InitConfig path when restoring CpuData[0].VolatileRegisters.
Cc: Jeff Fan <vanjeff_919@hotmail.com>
Cc: Eric Dong <eric.dong@intel.com>
Cc: Jiewen Yao <jiewen.yao@intel.com>
Cc: Andrew Fish <afish@apple.com>
Cc: Laszlo Ersek <lersek@redhat.com>
Contributed-under: TianoCore Contribution Agreement 1.1
Signed-off-by: Ruiyu Ni <ruiyu.ni@intel.com>
Reviewed-by: Eric Dong <eric.dong@intel.com>
Acked-by: Laszlo Ersek <lersek@redhat.com>
Tested-by: Laszlo Ersek <lersek@redhat.com>
Removal rules for IPF source files:
* Remove source files whose paths contain "ipf" and that are also listed
in the [Sources.IPF] section of an INF file.
* Remove source files that are listed in the [Components.IPF] section
of a DSC file and not listed in any other [Components] section.
* Remove the embedded IPF code guarded by MDE_CPU_IPF.
Removal rules for INF files:
* Remove IPF from the VALID_ARCHITECTURES comments.
* Remove DXE_SAL_DRIVER from LIBRARY_CLASS in the [Defines] section.
* Remove INF files that are only listed in the [Components.IPF] section of a DSC.
* Remove statements from [BuildOptions] that provide IPF-specific flags.
* Remove any IPF-specific sections.
Removal rules for DEC files:
* Remove the [Includes.IPF] section from the DEC.
Removal rules for DSC files:
* Remove IPF from SUPPORTED_ARCHITECTURES in the [Defines] section of the DSC.
* Remove any IPF-specific sections.
* Remove statements from [BuildOptions] that provide IPF-specific flags.
Cc: Eric Dong <eric.dong@intel.com>
Cc: Laszlo Ersek <lersek@redhat.com>
Cc: Michael D Kinney <michael.d.kinney@intel.com>
Contributed-under: TianoCore Contribution Agreement 1.1
Signed-off-by: Chen A Chen <chen.a.chen@intel.com>
Reviewed-by: Eric Dong <eric.dong@intel.com>
1. Do not use tab characters.
2. No trailing white space at the end of a line.
3. All files must end with CRLF.
Contributed-under: TianoCore Contribution Agreement 1.1
Signed-off-by: Liming Gao <liming.gao@intel.com>
Replace the old Perf macros with the newly added ones.
Cc: Liming Gao <liming.gao@intel.com>
Cc: Eric Dong <eric.dong@intel.com>
Contributed-under: TianoCore Contribution Agreement 1.1
Signed-off-by: Dandan Bi <dandan.bi@intel.com>
Reviewed-by: Liming Gao <liming.gao@intel.com>
The MdePkg/Library/SmmMemoryAllocationLib, used only by DXE_SMM_DRIVER
modules, allows freeing memory allocated in DXE (before EndOfDxe). This is
done by checking the memory range and calling gBS services to do the real
operation if the memory to free is out of SMRAM. If some memory related
features, like Heap Guard, are enabled, the gBS interface will in turn call
EFI_CPU_ARCH_PROTOCOL.SetMemoryAttributes(), provided by the DXE driver
UefiCpuPkg/CpuDxe, to change memory paging attributes. This means we have
part of the DXE code running in SMM mode in certain circumstances.
Because the page table in SMM mode is different from the one in DXE mode,
and CpuDxe always uses the current registers (CR0, CR3, etc.) to get memory
paging attributes, it cannot get the correct attributes of DXE memory in
SMM mode from the SMM page table. This causes incorrect memory
manipulations, such as failing to release Guard pages if Heap Guard is
enabled.
The solution in this patch is to store the DXE page table information
(e.g. the values of the CR0 and CR3 registers) in a global variable of the
CpuDxe driver. If CpuDxe detects that it is running in SMM mode, it uses
this global variable to access the page table instead of the current
processor registers. This avoids retrieving wrong DXE memory paging
attributes and changing SMM page table attributes unexpectedly.
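A minimal sketch of the idea, under stated assumptions (the structure, the
global mDxePagingContext, and the helpers are illustrative names, not the
actual CpuDxe code):

  #include <Uefi.h>
  #include <Library/BaseLib.h>

  typedef struct {
    UINTN  Cr0;
    UINTN  Cr3;
  } DXE_PAGING_CONTEXT;

  DXE_PAGING_CONTEXT  mDxePagingContext;

  VOID
  SaveDxePagingContext (
    VOID
    )
  {
    //
    // Capture the DXE-mode paging registers while still executing in DXE.
    //
    mDxePagingContext.Cr0 = AsmReadCr0 ();
    mDxePagingContext.Cr3 = AsmReadCr3 ();
  }

  UINTN
  GetPageTableBase (
    IN BOOLEAN  InSmm
    )
  {
    //
    // In SMM, walk the saved DXE page table root rather than the current
    // (SMM) CR3.
    //
    return InSmm ? mDxePagingContext.Cr3 : AsmReadCr3 ();
  }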
Cc: Eric Dong <eric.dong@intel.com>
Cc: Laszlo Ersek <lersek@redhat.com>
Cc: Jiewen Yao <jiewen.yao@intel.com>
Cc: Ruiyu Ni <ruiyu.ni@intel.com>
Contributed-under: TianoCore Contribution Agreement 1.1
Signed-off-by: Jian J Wang <jian.j.wang@intel.com>
Reviewed-by: Laszlo Ersek <lersek@redhat.com>
Reviewed-by: Eric Dong <eric.dong@intel.com>
Regression-tested-by: Laszlo Ersek <lersek@redhat.com>
On AMD processors the second SendIpi in the SendInitSipiSipi and
SendInitSipiSipiAllExcludingSelf routines is not required, and may cause
undesired side-effects during MP initialization.
This patch leverages the StandardSignatureIsAuthenticAMD check to exclude
the second SendIpi and its associated MicroSecondDelay (200).
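Roughly, the intent looks like the following sketch (SendIpi() and IcrLow
stand in for the library-internal helper and ICR value; this is not the
exact LocalApicLib code):

  SendIpi (IcrLow.Uint32, ApicId);   // first Start-up IPI, as before
  if (!StandardSignatureIsAuthenticAMD ()) {
    //
    // Only non-AMD processors still get the 200us delay and the second
    // Start-up IPI.
    //
    MicroSecondDelay (200);
    SendIpi (IcrLow.Uint32, ApicId);
  }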
Contributed-under: TianoCore Contribution Agreement 1.1
Signed-off-by: Leo Duran <leo.duran@amd.com>
Cc: Jordan Justen <jordan.l.justen@intel.com>
Cc: Jeff Fan <jeff.fan@intel.com>
Cc: Liming Gao <liming.gao@intel.com>
Reviewed-by: Eric Dong <eric.dong@intel.com>
Reviewed-by: Laszlo Ersek <lersek@redhat.com>
NASM has replaced the ASM and S files.
1. Remove the ASM files from all modules except for the ones in the
ResetVector directory. The ones in the ResetVector directory are included
by Vtf0.nasmb and are also in NASM style.
2. Remove the S files from the drivers only.
3. https://bugzilla.tianocore.org/show_bug.cgi?id=881
After NASM is updated, the S files can be removed from the Library.
Contributed-under: TianoCore Contribution Agreement 1.1
Signed-off-by: Liming Gao <liming.gao@intel.com>
Reviewed-by: Eric Dong <eric.dong@intel.com>
Reviewed-by: Laszlo Ersek <lersek@redhat.com>
According to IA manual:
"Before setting this bit (MSR_IA32_MISC_ENABLE[22]) , BIOS must
execute the CPUID.0H and examine the maximum value returned in
EAX[7:0]. If the maximum value is greater than 2, this bit is
supported."
We need to fix our current detection logic to compare against 2.
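A minimal sketch of the corrected detection (the helper name is
illustrative):

  #include <Uefi.h>
  #include <Library/BaseLib.h>

  BOOLEAN
  IsLimitCpuidMaxvalSettable (
    VOID
    )
  {
    UINT32  MaxCpuidLeaf;

    //
    // CPUID leaf 0 returns the maximum basic leaf in EAX; per the SDM
    // quote above, MSR_IA32_MISC_ENABLE[22] may only be set when
    // EAX[7:0] > 2.
    //
    AsmCpuid (0, &MaxCpuidLeaf, NULL, NULL, NULL);
    return (BOOLEAN) ((MaxCpuidLeaf & 0xFF) > 2);
  }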
Contributed-under: TianoCore Contribution Agreement 1.1
Signed-off-by: Ruiyu Ni <ruiyu.ni@intel.com>
Reviewed-by: Eric Dong <eric.dong@intel.com>
Cc: Ming Shao <ming.shao@intel.com>
The function SecStartup() is not supposed to return. Hence, add the
NORETURN decorator.
Contributed-under: TianoCore Contribution Agreement 1.1
Signed-off-by: Marvin Haeuser <Marvin.Haeuser@outlook.com>
Reviewed-by: Eric Dong <eric.dong@intel.com>
Reviewed-by: Laszlo Ersek <lersek@redhat.com>
Cc: Eric Dong <eric.dong@intel.com>
Cc: Ruiyu Ni <ruiyu.ni@intel.com>
Cc: Jiewen Yao <jiewen.yao@intel.com>
Contributed-under: TianoCore Contribution Agreement 1.1
Signed-off-by: Star Zeng <star.zeng@intel.com>
Reviewed-by: Eric Dong <eric.dong@intel.com>
NASM introduced FXSAVE / FXRSTOR support in commit 900fa5b26b8f ("NASM
0.98p3-hpa", 2002-04-30), which corresponds to the nasm-0.98p3-hpa
release.
NASM introduced FXSAVE64 / FXRSTOR64 support in commit 3a014348ca15
("insns: add FXSAVE64/FXRSTOR64, drop np prefix", 2010-07-07), which was
part of the "nasm-2.09" release.
Edk2 requires nasm-2.10 or later for use with the GCC toolchain family,
and nasm-2.12.01 or later for use with all other toolchain families.
Replace the binary encoding of the FXSAVE(64)/FXRSTOR(64) instructions
with mnemonics.
I verified that the "Ia32/SmiException.obj", "X64/SmiEntry.obj" and
"X64/SmiException.obj" files are rebuilt after this patch, without any
change in content.
This patch removes the last instructions encoded with DBs from
PiSmmCpuDxeSmm.
Cc: Eric Dong <eric.dong@intel.com>
Cc: Michael D Kinney <michael.d.kinney@intel.com>
Ref: https://bugzilla.tianocore.org/show_bug.cgi?id=866
Contributed-under: TianoCore Contribution Agreement 1.1
Signed-off-by: Laszlo Ersek <lersek@redhat.com>
Reviewed-by: Liming Gao <liming.gao@intel.com>
(1) SmmRelocationSemaphoreComplete32() runs in 32-bit mode, so wrap it in
a (BITS 32 ... BITS 64) bracket.
(2) SmmRelocationSemaphoreComplete32() currently compiles to:
> 000002AE C6050000000001 mov byte [dword 0x0],0x1
> 000002B5 FF2500000000 jmp dword [dword 0x0]
where the first instruction is patched with the contents of
"mRebasedFlag" (so that (*mRebasedFlag) is set to 1), and the second
instruction is patched with the address of
"mSmmRelocationOriginalAddress" (so that we jump to
"mSmmRelocationOriginalAddress").
In its current form the first instruction could not be patched with
PatchInstructionX86(), given that the operand to patch is not encoded
in the trailing bytes of the instruction. Therefore, adopt an
EAX-based version, inspired by both the IA32 and X64 variants of
SmmRelocationSemaphoreComplete():
> 000002AE 50 push eax
> 000002AF B800000000 mov eax,0x0
> 000002B4 C60001 mov byte [eax],0x1
> 000002B7 58 pop eax
> 000002B8 FF2500000000 jmp dword [dword 0x0]
Here both instructions can be patched with PatchInstructionX86(), and
the DBs can be replaced with native NASM syntax.
(3) Turn the "mRebasedFlagAddr32" and "mSmmRelocationOriginalAddressPtr32"
variables into markers that suit PatchInstructionX86().
Cc: Eric Dong <eric.dong@intel.com>
Cc: Michael D Kinney <michael.d.kinney@intel.com>
Ref: https://bugzilla.tianocore.org/show_bug.cgi?id=866
Contributed-under: TianoCore Contribution Agreement 1.1
Signed-off-by: Laszlo Ersek <lersek@redhat.com>
Reviewed-by: Liming Gao <liming.gao@intel.com>
Rename the variable to "gPatchSmmInitStack" so that its association with
PatchInstructionX86() is clear from the declaration, change its type to
X86_ASSEMBLY_PATCH_LABEL, and patch it with PatchInstructionX86(). This
lets us remove the binary (DB) encoding of some instructions in
"SmmInit.nasm".
The size of the patched source operand is (sizeof (UINTN)).
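For reference, the patching call would then look roughly like this
(StackValue is a placeholder for the per-CPU SMM init stack address
computed by the C code):

  PatchInstructionX86 (gPatchSmmInitStack, StackValue, sizeof (UINTN));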
Cc: Eric Dong <eric.dong@intel.com>
Cc: Michael D Kinney <michael.d.kinney@intel.com>
Ref: https://bugzilla.tianocore.org/show_bug.cgi?id=866
Contributed-under: TianoCore Contribution Agreement 1.1
Signed-off-by: Laszlo Ersek <lersek@redhat.com>
Reviewed-by: Liming Gao <liming.gao@intel.com>
The IA32 version of "SmmInit.nasm" does not need "gSmmJmpAddr" at all (its
PiSmmCpuSmmInitFixupAddress() variant doesn't do anything either). We can
simply use the NASM syntax for the following Mixed-Size Jump:
> jmp PROTECT_MODE_CS : dword @32bit
The generated object code for the instruction is unchanged:
> 00000182 66EA5A0000000800 jmp dword 0x8:0x5a
(The NASM manual explains that putting the DWORD prefix after the colon
":" reflects the intent better, since it is the offset that is a DWORD.
Thus, that's what I used. However, both syntaxes are interchangeable,
hence the ndisasm output.)
The X64 version of "SmmInit.nasm" appears to require "gSmmJmpAddr";
however that's accidental, not inherent:
- Bring LONG_MODE_CODE_SEGMENT from
"UefiCpuPkg/PiSmmCpuDxeSmm/PiSmmCpuDxeSmm.h" to "SmmInit.nasm" as
LONG_MODE_CS, same as PROTECT_MODE_CODE_SEGMENT was brought to the IA32
version as PROTECT_MODE_CS earlier.
- Apply the NASM-native Mixed-Size Jump syntax again, but jump to the
fixed zero offset in LONG_MODE_CS. This will produce no relocation
record at all. Add a label after the instruction.
- Modify PiSmmCpuSmmInitFixupAddress() to patch the jump target backwards
from the label. Because we modify the DWORD offset with a DWORD access,
the segment selector is unharmed in the instruction, and we need not set
it from PiCpuSmmEntry().
According to "objdump --reloc", the X64 version undergoes only the
following relocations, after this patch:
> RELOCATION RECORDS FOR [.text]:
> OFFSET TYPE VALUE
> 0000000000000095 R_X86_64_PC32 SmmInitHandler-0x0000000000000004
> 00000000000000e0 R_X86_64_PC32 mRebasedFlag-0x0000000000000004
> 00000000000000ea R_X86_64_PC32 mSmmRelocationOriginalAddress-0x0000000000000004
Therefore the patch does not regress
<https://bugzilla.tianocore.org/show_bug.cgi?id=849> ("Enable XCODE5 tool
chain for UefiCpuPkg with nasm source code").
Cc: Eric Dong <eric.dong@intel.com>
Cc: Michael D Kinney <michael.d.kinney@intel.com>
Ref: https://bugzilla.tianocore.org/show_bug.cgi?id=866
Contributed-under: TianoCore Contribution Agreement 1.1
Signed-off-by: Laszlo Ersek <lersek@redhat.com>
Reviewed-by: Liming Gao <liming.gao@intel.com>
Like "gSmmCr4" in the previous patch, "gSmmCr0" is not only used for
machine code patching, but also as a means to communicate the initial CR0
value from SmmRelocateBases() to InitSmmS3ResumeState(). In other words,
the last four bytes of the "mov eax, Cr0Value" instruction's binary
representation are utilized as normal data too.
In order to get rid of the DB for "mov eax, Cr0Value", we have to split
both roles, patching and data flow. Introduce the "mSmmCr0" global (SMRAM)
variable for the data flow purpose. Rename the "gSmmCr0" variable to
"gPatchSmmCr0" so that its association with PatchInstructionX86() is clear
from the declaration, change its type to X86_ASSEMBLY_PATCH_LABEL, and
patch it with PatchInstructionX86(), to the value now contained in
"mSmmCr0".
This lets us remove the binary (DB) encoding of "mov eax, Cr0Value" in
"SmmInit.nasm".
Cc: Eric Dong <eric.dong@intel.com>
Cc: Michael D Kinney <michael.d.kinney@intel.com>
Ref: https://bugzilla.tianocore.org/show_bug.cgi?id=866
Contributed-under: TianoCore Contribution Agreement 1.1
Signed-off-by: Laszlo Ersek <lersek@redhat.com>
Reviewed-by: Liming Gao <liming.gao@intel.com>
Unlike "gSmmCr3" in the previous patch, "gSmmCr4" is not only used for
machine code patching, but also as a means to communicate the initial CR4
value from SmmRelocateBases() to InitSmmS3ResumeState(). In other words,
the last four bytes of the "mov eax, Cr4Value" instruction's binary
representation are utilized as normal data too.
In order to get rid of the DB for "mov eax, Cr4Value", we have to split
both roles, patching and data flow. Introduce the "mSmmCr4" global (SMRAM)
variable for the data flow purpose. Rename the "gSmmCr4" variable to
"gPatchSmmCr4" so that its association with PatchInstructionX86() is clear
from the declaration, change its type to X86_ASSEMBLY_PATCH_LABEL, and
patch it with PatchInstructionX86(), to the value now contained in
"mSmmCr4".
This lets us remove the binary (DB) encoding of "mov eax, Cr4Value" in
"SmmInit.nasm".
Cc: Eric Dong <eric.dong@intel.com>
Cc: Michael D Kinney <michael.d.kinney@intel.com>
Ref: https://bugzilla.tianocore.org/show_bug.cgi?id=866
Contributed-under: TianoCore Contribution Agreement 1.1
Signed-off-by: Laszlo Ersek <lersek@redhat.com>
Reviewed-by: Liming Gao <liming.gao@intel.com>
Rename the variable to "gPatchSmmCr3" so that its association with
PatchInstructionX86() is clear from the declaration, change its type to
X86_ASSEMBLY_PATCH_LABEL, and patch it with PatchInstructionX86(). This
lets us remove the binary (DB) encoding of some instructions in
"SmmInit.nasm".
Cc: Eric Dong <eric.dong@intel.com>
Cc: Michael D Kinney <michael.d.kinney@intel.com>
Ref: https://bugzilla.tianocore.org/show_bug.cgi?id=866
Contributed-under: TianoCore Contribution Agreement 1.1
Signed-off-by: Laszlo Ersek <lersek@redhat.com>
Reviewed-by: Liming Gao <liming.gao@intel.com>
(This patch is the 64-bit variant of commit e75ee97224,
"UefiCpuPkg/PiSmmCpuDxeSmm: remove unneeded DBs from IA32 SmmStartup()",
2018-01-31.)
The SmmStartup() function executes in SMM, which is very similar to real
mode. Add "BITS 16" before it and "BITS 64" after it (just before the
@LongMode label).
Remove the manual 0x66 operand-size override prefixes, for selecting
32-bit operands -- the sizes of our operands trigger NASM to insert the
prefixes automatically in almost every spot. The one place where we have
to add it back manually is the LGDT instruction. In the LGDT instruction
we also replace the binary 0x2E prefix with the normal NASM syntax for CS
segment override.
The stores to the Control Registers were always 32-bit wide; the source
code only used RAX as source operand because it generated the expected
object code (with NASM compiling the source as if in BITS 64). With BITS
16 added, we can use the actual register width in the source operands
(EAX).
This patch causes NASM to generate byte-identical object code (determined
by disassembling both the pre-patch and post-patch versions, and comparing
the listings), except:
> @@ -231,7 +231,7 @@
> 000001D2 6689D3 mov ebx,edx
> 000001D5 66B800000000 mov eax,0x0
> 000001DB 0F22D8 mov cr3,eax
> -000001DE 662E670F0155F6 o32 lgdt [cs:ebp-0xa]
> +000001DE 2E66670F0155F6 o32 lgdt [cs:ebp-0xa]
> 000001E5 66B800000000 mov eax,0x0
> 000001EB 80CC02 or ah,0x2
> 000001EE 0F22E0 mov cr4,eax
The only difference is the prefix list order, it changes from:
- 0x66, 0x2E, 0x67
to
- 0x2E, 0x66, 0x67
Cc: Eric Dong <eric.dong@intel.com>
Cc: Michael D Kinney <michael.d.kinney@intel.com>
Ref: https://bugzilla.tianocore.org/show_bug.cgi?id=866
Contributed-under: TianoCore Contribution Agreement 1.1
Signed-off-by: Laszlo Ersek <lersek@redhat.com>
Reviewed-by: Liming Gao <liming.gao@intel.com>
"mXdSupported" is a global BOOLEAN variable, initialized to TRUE. The
CheckFeatureSupported() function is executed on all processors (not
concurrently though), called from SmmInitHandler(). If XD support is found
to be missing on any CPU, then "mXdSupported" is set to FALSE, and further
processors omit the check. Afterwards, "mXdSupported" is read by several
assembly and C code locations.
The tricky part is *where* "mXdSupported" is allocated (defined):
- Before commit 717fb60443 ("UefiCpuPkg/PiSmmCpuDxeSmm: Add paging
protection.", 2016-11-17), it used to be a normal global variable,
defined (allocated) in "SmmProfile.c".
- With said commit, we moved the definition (allocation) of "mXdSupported"
into "SmiEntry.nasm". The variable was defined over the last byte of a
"mov al, 1" instruction, so that setting it to FALSE in
CheckFeatureSupported() would patch the instruction to "mov al, 0". The
subsequent conditional jump would change behavior, plus all further read
references to "mXdSupported" (in C and assembly code) would read back
the source (imm8) operand of the patched MOV instruction as data.
This trick required that the MOV instruction be encoded with DB.
In order to get rid of the DB, we have to split both roles: we need a
label for the code patching, and "mXdSupported" has to be defined
(allocated) independently of the code patching. Of course, their values
must always remain in sync.
(1) Reinstate the "mXdSupported" definition and initialization in
"SmmProfile.c" from before commit 717fb60443. Change the assembly
language definition ("global") to a declaration ("extern").
(2) Define the "gPatchXdSupported" label (type X86_ASSEMBLY_PATCH_LABEL)
in "SmiEntry.nasm", and add the C-language declaration to
"SmmProfileInternal.h". Replace the DB with the MOV mnemonic (keeping
the imm8 source operand with value 1).
(3) In CheckFeatureSupported(), whenever "mXdSupported" is set to FALSE,
patch the assembly code in sync, with PatchInstructionX86().
Cc: Eric Dong <eric.dong@intel.com>
Cc: Michael D Kinney <michael.d.kinney@intel.com>
Ref: https://bugzilla.tianocore.org/show_bug.cgi?id=866
Contributed-under: TianoCore Contribution Agreement 1.1
Signed-off-by: Laszlo Ersek <lersek@redhat.com>
Reviewed-by: Liming Gao <liming.gao@intel.com>
Rename the variable to "gPatchSmiCr3" so that its association with
PatchInstructionX86() is clear from the declaration, change its type to
X86_ASSEMBLY_PATCH_LABEL, and patch it with PatchInstructionX86(). This
lets us remove the binary (DB) encoding of some instructions in
"SmiEntry.nasm".
Cc: Eric Dong <eric.dong@intel.com>
Cc: Michael D Kinney <michael.d.kinney@intel.com>
Ref: https://bugzilla.tianocore.org/show_bug.cgi?id=866
Contributed-under: TianoCore Contribution Agreement 1.1
Signed-off-by: Laszlo Ersek <lersek@redhat.com>
Reviewed-by: Liming Gao <liming.gao@intel.com>
Rename the variable to "gPatchSmiStack" so that its association with
PatchInstructionX86() is clear from the declaration. Also change its type
to X86_ASSEMBLY_PATCH_LABEL.
Unlike "gSmbase" in the previous patch, "gSmiStack"'s patched value is
also de-referenced by C code (in other words, it is read back after
patching): the InstallSmiHandler() function stores "CpuIndex" to the given
CPU's SMI stack through "gSmiStack". Introduce the local variable
"CpuSmiStack" in InstallSmiHandler() for calculating the stack location
separately, then use this variable for both patching into the assembly
code, and for storing "CpuIndex" through it.
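Sketched with the names from the description above (not the verbatim
InstallSmiHandler() code):

  CpuSmiStack = (UINTN) SmiStack + StackSize - sizeof (UINTN);
  //
  // Store the CPU index at the top of this CPU's SMI stack ...
  //
  *(UINT32 *) CpuSmiStack = (UINT32) CpuIndex;
  //
  // ... and patch the same value into the "mov esp, imm32" in
  // SmiEntry.nasm.
  //
  PatchInstructionX86 (gPatchSmiStack, CpuSmiStack, 4);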
It's assumed that "volatile" stood in the declaration of "gSmiStack"
because we used to read "gSmiStack" back for de-referencing; with that use
gone, we can remove "volatile" too. (Note that the *target* of the pointer
was never volatile-qualified.)
Finally, replace the binary (DB) encoding of "mov esp, imm32" in
"SmiEntry.nasm".
Cc: Eric Dong <eric.dong@intel.com>
Cc: Michael D Kinney <michael.d.kinney@intel.com>
Ref: https://bugzilla.tianocore.org/show_bug.cgi?id=866
Contributed-under: TianoCore Contribution Agreement 1.1
Signed-off-by: Laszlo Ersek <lersek@redhat.com>
Reviewed-by: Liming Gao <liming.gao@intel.com>
Rename the variable to "gPatchSmbase" so that its association with
PatchInstructionX86() is clear from the declaration, change its type to
X86_ASSEMBLY_PATCH_LABEL, and patch it with PatchInstructionX86(). This
lets us remove the binary (DB) encoding of some instructions in
"SmiEntry.nasm".
Cc: Eric Dong <eric.dong@intel.com>
Cc: Michael D Kinney <michael.d.kinney@intel.com>
Ref: https://bugzilla.tianocore.org/show_bug.cgi?id=866
Contributed-under: TianoCore Contribution Agreement 1.1
Signed-off-by: Laszlo Ersek <lersek@redhat.com>
Reviewed-by: Liming Gao <liming.gao@intel.com>
Within function ApWakeupFunction():
When the source level debugger is enabled, AP interrupts will be enabled by
EnableDebugAgent(). Then the AP function will be executed by:
Procedure (Parameter);
After the AP function returns, AP interrupts will be disabled when the
APs are placed in loop mode (both HltLoop and MwaitLoop).
However, at ExitBootServices, ApWakeupFunction() is called with
'Procedure' equal to RelocateApLoop().
(The ExitBootServices callback is registered within InitMpGlobalData().)
RelocateApLoop() never returns, so it has to disable the AP interrupts by
itself. However, we found that interrupts are only disabled for the
HltLoop case, but not for the MwaitLoop case (within file MpFuncs.nasm).
This commit adds the missing disabling of AP interrupts for MwaitLoop.
Also, for X64, this commit disables the interrupts before switching to
32-bit mode.
Cc: Laszlo Ersek <lersek@redhat.com>
Cc: Jian J Wang <jian.j.wang@intel.com>
Cc: Star Zeng <star.zeng@intel.com>
Cc: Eric Dong <eric.dong@intel.com>
Contributed-under: TianoCore Contribution Agreement 1.1
Signed-off-by: Hao Wu <hao.a.wu@intel.com>
Reviewed-by: Ruiyu Ni <ruiyu.ni@intel.com>
Reviewed-by: Jiewen Yao <jiewen.yao@intel.com>
Reviewed-by: Jeff Fan <vanjeff_919@hotmail.com>
FixedPcdGetSize() expands to a macro value, while PcdGetSize() expands to a
global variable or a function call. The usage here needs to access a macro
value, so FixedPcdGetSize() is required.
Contributed-under: TianoCore Contribution Agreement 1.1
Signed-off-by: Liming Gao <liming.gao@intel.com>
Cc: Wang Jian J <jian.j.wang@intel.com>
Cc: Ruiyu Ni <ruiyu.ni@intel.com>
Cc: Michael Kinney <michael.d.kinney@intel.com>
Reviewed-by: Jian J Wang <jian.j.wang@intel.com>
If PcdDxeNxMemoryProtectionPolicy is enabled for the EfiReservedMemoryType
memory type, a #PF will be triggered on each AP after ExitBootServices
in the SCRT test. The root cause is that the AP wakeup code executed at
that time is stored in memory of type EfiReservedMemoryType (referenced by
the global mReservedApLoopFunc), which is marked as non-executable.
This patch fixes the issue by setting the memory of mReservedApLoopFunc to
be executable immediately after allocation.
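A hedged sketch of how the allocated reserved range can be made executable
again via the DXE services table (gDS, from DxeServicesTableLib); the
Address/Size variables are placeholders and the real code may obtain and
combine the attributes differently:

  EFI_GCD_MEMORY_SPACE_DESCRIPTOR  MemDesc;
  EFI_STATUS                       Status;

  Status = gDS->GetMemorySpaceDescriptor (Address, &MemDesc);
  if (!EFI_ERROR (Status) && ((MemDesc.Attributes & EFI_MEMORY_XP) != 0)) {
    //
    // Clear the execute-protection attribute on the AP loop code range.
    //
    gDS->SetMemorySpaceAttributes (
           Address,
           Size,
           MemDesc.Attributes & ~(UINT64) EFI_MEMORY_XP
           );
  }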
Cc: Ruiyu Ni <ruiyu.ni@intel.com>
Cc: Eric Dong <eric.dong@intel.com>
Cc: Laszlo Ersek <lersek@redhat.com>
Contributed-under: TianoCore Contribution Agreement 1.1
Signed-off-by: Jian J Wang <jian.j.wang@intel.com>
Reviewed-by: Laszlo Ersek <lersek@redhat.com>
Boolean values do not need to use explicit comparisons
to TRUE or FALSE.
Cc: Eric Dong <eric.dong@intel.com>
Cc: Laszlo Ersek <lersek@redhat.com>
Cc: Ruiyu Ni <ruiyu.ni@intel.com>
Contributed-under: TianoCore Contribution Agreement 1.1
Signed-off-by: Dandan Bi <dandan.bi@intel.com>
Reviewed-by: Laszlo Ersek <lersek@redhat.com>
Reviewed-by: Ruiyu Ni <ruiyu.ni@intel.com>
This issue was introduced by the following commit, which added stack
switch support on behalf of the Stack Guard feature:
0ff5aa9cae
The field KnownGoodStackTop in CPU_EXCEPTION_INIT_DATA is initialized to
the start address of the array mNewStack. This is wrong. It must be the end
of mNewStack. This patch fixes this mistake.
Cc: Ruiyu Ni <ruiyu.ni@intel.com>
Cc: Eric Dong <eric.dong@intel.com>
Cc: Laszlo Ersek <lersek@redhat.com>
Contributed-under: TianoCore Contribution Agreement 1.1
Signed-off-by: Jian J Wang <jian.j.wang@intel.com>
Reviewed-by: Laszlo Ersek <lersek@redhat.com>
V2: Just update the commit message to reference the hash values of the
new performance infrastructure.
Our new performance infrastructure (edk2 trunk commit hash values:
SHA-1: 73fef64f14 ~ SHA-1: 115eae650b) can support dumping performance
data from the ACPI table in the OS. So we can remove the old perf code
that writes performance data to the OS.
Cc: Eric Dong <eric.dong@intel.com>
Cc: Laszlo Ersek <lersek@redhat.com>
Cc: Liming Gao <liming.gao@intel.com>
Contributed-under: TianoCore Contribution Agreement 1.1
Signed-off-by: Dandan Bi <dandan.bi@intel.com>
Reviewed-by: Laszlo Ersek <lersek@redhat.com>
Reviewed-by: Star Zeng <star.zeng@intel.com>
V2: Just update the commit message.
Add more perf entries to hook BootScriptDonePpi/EndOfPeiPpi/
EndOfS3Resume.
Add the new perf entries with the Identifiers
PERF_INMODULE_START_ID/PERF_INMODULE_END_ID, which are defined
in the new performance infrastructure (edk2 trunk commit hash values:
SHA-1: 73fef64f14 ~ SHA-1: 115eae650b).
PERF_INMODULE_START_ID/PERF_INMODULE_END_ID are general Identifiers
that are used within a module.
Cc: Eric Dong <eric.dong@intel.com>
Cc: Laszlo Ersek <lersek@redhat.com>
Cc: Liming Gao <liming.gao@intel.com>
Contributed-under: TianoCore Contribution Agreement 1.1
Signed-off-by: Dandan Bi <dandan.bi@intel.com>
Reviewed-by: Liming Gao <liming.gao@intel.com>
Acked-by: Laszlo Ersek <lersek@redhat.com>
Today's McaInitialize() doesn't check the State value before initializing
MCi_CTL and MCi_STATUS.
The patch fixes this issue by only initializing the two kinds of
MSRs when State is enabled.
Contributed-under: TianoCore Contribution Agreement 1.1
Signed-off-by: Ruiyu Ni <ruiyu.ni@intel.com>
Reviewed-by: Eric Dong <eric.dong@intel.com>
Today's implementation assumes that only SandyBridge CPUs support
Extended On-Demand Clock Modulation Duty Cycle.
Actually, it is supported when CPUID.06h.EAX[5] == 1.
When the platform requests 50% throttling, the value 1000b should be
set to the low 4 bits of IA32_CLOCK_MODULATION.
But the wrong code sets 1000b into bits [1-3], which causes an assertion.
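A minimal sketch of the capability check involved (the helper name is
illustrative):

  #include <Uefi.h>
  #include <Library/BaseLib.h>

  BOOLEAN
  IsExtendedClockModulationSupported (
    VOID
    )
  {
    UINT32  Eax;

    //
    // CPUID.06h:EAX[5] reports the extended (4-bit) on-demand clock
    // modulation duty cycle; with it, 50% throttling maps to 1000b in the
    // low 4 bits of IA32_CLOCK_MODULATION.
    //
    AsmCpuid (0x06, &Eax, NULL, NULL, NULL);
    return (BOOLEAN) ((Eax & BIT5) != 0);
  }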
Contributed-under: TianoCore Contribution Agreement 1.1
Signed-off-by: Ruiyu Ni <ruiyu.ni@intel.com>
Cc: Jeff Fan <vanjeff_919@hotmail.com>
Reviewed-by: Eric Dong <eric.dong@intel.com>
> v2:
> Reduce the number of pages to update/restore from 3 to 2 because DF
> has no effect in this issue.
The infinite loop is caused by a memory instruction, such as
"rep mov", operating on a memory block that crosses the boundary of
NON-PRESENT pages. Because the address triggering the page fault, set in
CR2, will be in the first page, SmmProfilePFHandler() will only change the
first page into PRESENT. The following page will still be in NON-PRESENT
status.
Since SmmProfilePFHandler() will set up a single-step trap for the
instruction causing the #PF, when the handler returns back to the
instruction and re-executes it, both #DB and #PF will be triggered,
because the instruction wants to access both the first and the second page
but only the first page is PRESENT.
Normally the #DB exception will be handled first and its handler will
change the first page back to NON-PRESENT status. Then the #PF is handled
and its handler will change the first page to PRESENT status again and
set up another single-step for the instruction triggering the #PF. Then
the whole system falls into an infinite loop and the memory operation
never moves on.
This patch fixes the above situation by always changing 2 pages to PRESENT
status instead of just 1 page. Those 2 pages include the page causing the
#PF and the page after it.
Cc: Eric Dong <eric.dong@intel.com>
Cc: Laszlo Ersek <lersek@redhat.com>
Cc: Ruiyu Ni <ruiyu.ni@intel.com>
Cc: Jiewen Yao <jiewen.yao@intel.com>
Contributed-under: TianoCore Contribution Agreement 1.1
Signed-off-by: Jian J Wang <jian.j.wang@intel.com>
Reviewed-by: Jiewen Yao <jiewen.yao@intel.com>
SMM emulation under both KVM and QEMU (TCG) crashes the guest when the
"jz" branch, added in commit d4d87596c1 ("UefiCpuPkg/PiSmmCpuDxeSmm:
Enable NXE if it's supported", 2018-01-18), is taken.
Rework the propagation of CPUID.80000001H:EDX.NX [bit 20] to IA32_EFER.NXE
[bit 11] so that no code is executed conditionally.
Cc: Eric Dong <eric.dong@intel.com>
Cc: Jian J Wang <jian.j.wang@intel.com>
Cc: Jiewen Yao <jiewen.yao@intel.com>
Cc: Paolo Bonzini <pbonzini@redhat.com>
Cc: Ruiyu Ni <ruiyu.ni@intel.com>
Ref: http://mid.mail-archive.com/d6fff558-6c4f-9ca6-74a7-e7cd9d007276@redhat.com
Contributed-under: TianoCore Contribution Agreement 1.1
Signed-off-by: Laszlo Ersek <lersek@redhat.com>
Reviewed-by: Paolo Bonzini <pbonzini@redhat.com>
[lersek@redhat.com: XD -> NX code comment updates from Ray]
Reviewed-by: Ruiyu Ni <ruiyu.ni@intel.com>
[lersek@redhat.com: mark QEMU/TCG as well in the commit message]
The SmmStartup() executes in SMM, which is very similar to real mode. Add
"BITS 16" before it and "BITS 32" after it (just before the @32bit label).
Remove the manual 0x66 operand-size override prefixes, for selecting
32-bit operands -- the sizes of our operands trigger NASM to insert the
prefixes automatically in almost every spot. The one place where we have
to add it back manually is the LGDT instruction. (The 0x67 address-size
override prefix is also auto-generated.)
This patch causes NASM to generate byte-identical object code (determined
by disassembling both the pre-patch and post-patch versions, and comparing
the listings), except:
> @@ -158,7 +158,7 @@
> 00000142 6689D3 mov ebx,edx
> 00000145 66B800000000 mov eax,0x0
> 0000014B 0F22D8 mov cr3,eax
> -0000014E 67662E0F0155F6 o32 lgdt [cs:ebp-0xa]
> +0000014E 2E66670F0155F6 o32 lgdt [cs:ebp-0xa]
> 00000155 66B800000000 mov eax,0x0
> 0000015B 0F22E0 mov cr4,eax
> 0000015E 66B9800000C0 mov ecx,0xc0000080
The only difference is the prefix list order, it changes from:
- 0x67, 0x66, 0x2E
to
- 0x2E, 0x66, 0x67
(0x2E is "CS segment override").
Cc: Eric Dong <eric.dong@intel.com>
Cc: Jian J Wang <jian.j.wang@intel.com>
Cc: Jiewen Yao <jiewen.yao@intel.com>
Cc: Paolo Bonzini <pbonzini@redhat.com>
Cc: Ruiyu Ni <ruiyu.ni@intel.com>
Ref: https://bugzilla.tianocore.org/show_bug.cgi?id=866
Contributed-under: TianoCore Contribution Agreement 1.1
Signed-off-by: Laszlo Ersek <lersek@redhat.com>
Reviewed-by: Paolo Bonzini <pbonzini@redhat.com>
Reviewed-by: Ruiyu Ni <ruiyu.ni@intel.com>
The gSmmCr3, gSmmCr4, gSmmCr0 and gSmmJmpAddr global variables are used
for patching assembly instructions, thus we can't yet remove the DB
encodings for those instructions. At least we should add the intended
meanings in comments.
This patch only changes comments.
Cc: Eric Dong <eric.dong@intel.com>
Cc: Jian J Wang <jian.j.wang@intel.com>
Cc: Jiewen Yao <jiewen.yao@intel.com>
Cc: Paolo Bonzini <pbonzini@redhat.com>
Cc: Ruiyu Ni <ruiyu.ni@intel.com>
Contributed-under: TianoCore Contribution Agreement 1.1
Signed-off-by: Laszlo Ersek <lersek@redhat.com>
Reviewed-by: Paolo Bonzini <pbonzini@redhat.com>
Reviewed-by: Ruiyu Ni <ruiyu.ni@intel.com>
[lersek@redhat.com: adapt commit msg to ongoing PatchAssembly discussion]
The reason for doing this is that we found calling StartupAllAps() to
flush the TLB for all APs in the CpuDxe driver after changing page
attributes takes a long time to complete. If there are many page attribute
update requests, the whole system performance will be slowed down
noticeably, including any shell command and UI operation.
The solution is to remove the flush operation for the APs in the CpuDxe
driver and let each AP flush its TLB after being woken up.
Cc: Ruiyu Ni <ruiyu.ni@intel.com>
Cc: Jiewen Yao <jiewen.yao@intel.com>
Cc: Eric Dong <eric.dong@intel.com>
Cc: Laszlo Ersek <lersek@redhat.com>
Contributed-under: TianoCore Contribution Agreement 1.1
Signed-off-by: Jian J Wang <jian.j.wang@intel.com>
Reviewed-by: Ruiyu Ni <ruiyu.ni@intel.com>
The reason for doing this is that we found calling StartupAllAps() to
flush the TLB for all APs in the CpuDxe driver after changing page
attributes takes a long time to complete. If there are many page attribute
update requests, the whole system performance will be slowed down
noticeably, including any shell command and UI operation.
The solution is to remove the flush operation for the APs in the CpuDxe
driver. Since the TLB is always flushed in HLT loop mode, we just need to
enforce a TLB flush for MWAIT loop mode.
Cc: Ruiyu Ni <ruiyu.ni@intel.com>
Cc: Jiewen Yao <jiewen.yao@intel.com>
Cc: Eric Dong <eric.dong@intel.com>
Cc: Laszlo Ersek <lersek@redhat.com>
Contributed-under: TianoCore Contribution Agreement 1.1
Signed-off-by: Jian J Wang <jian.j.wang@intel.com>
Reviewed-by: Ruiyu Ni <ruiyu.ni@intel.com>
This issue was introduced by a patch at
f32bfe6d06
The above patch misses the case of 64-bit PEI, which links
X64/MpFuncs.nasm instead of Ia32/MpFuncs.nasm. For X64/MpFuncs.nasm,
ExchangeInfo->ModeHighMemory should always be initialized no matter
whether a separate wakeup buffer is allocated or not. Ia32/MpFuncs.nasm
does not need ModeHighMemory during AP init, so the changes made in this
patch should not affect its functionality.
Cc: Ruiyu Ni <ruiyu.ni@intel.com>
Cc: Eric Dong <eric.dong@intel.com>
Cc: Laszlo Ersek <lersek@redhat.com>
Contributed-under: TianoCore Contribution Agreement 1.1
Signed-off-by: Jian J Wang <jian.j.wang@intel.com>
Reviewed-by: Ruiyu Ni <ruiyu.ni@intel.com>
Every processor's StartupApSignal is initialized in
MpInitLibInitialize() before calling CollectProcessorCount().
When SortApicId() is called from CollectProcessorCount(), the AP Index
is re-assigned according to the APIC ID. But SortApicId() forgets to set
the correct StartupApSignal when sorting the APs.
The patch fixes this issue.
Contributed-under: TianoCore Contribution Agreement 1.1
Signed-off-by: Star Zeng <star.zeng@intel.com>
Signed-off-by: Ruiyu Ni <ruiyu.ni@intel.com>
Reviewed-by: Star Zeng <star.zeng@intel.com>
Cc: Chasel Chiu <chasel.chiu@intel.com>
To fix an issue in which enabling the NX feature marks the AP wakeup
buffer as non-executable and fails the AP init, the buffer was split
into two parts: the lower part in memory below 1MB and the higher part
within allocated executable memory (EfiBootServicesCode). But the
address of the higher part memory was stored in the lower part memory,
which is actually shared with legacy components and will be overwritten
by the LegacyBiosDxe driver if CSM is enabled.
This patch fixes the issue by storing the address of the higher part
memory in CpuMpData instead of ExchangeInfo.
Cc: Ruiyu Ni <ruiyu.ni@intel.com>
Cc: Eric Dong <eric.dong@intel.com>
Cc: Laszlo Ersek <lersek@redhat.com>
Contributed-under: TianoCore Contribution Agreement 1.1
Signed-off-by: Jian J Wang <jian.j.wang@intel.com>
Reviewed-by: Laszlo Ersek <lersek@redhat.com>
MtrrSetMemoryAttributesInMtrrSettings() is a batch-set API.
When setting multiple ranges of memory attributes, the single-set
APIs (MtrrSetMemoryAttributeInMtrrSettings and MtrrSetMemoryAttribute)
may fail, while the batch-set API may succeed.
Add comments to recommend that callers use the batch-set API when setting
multiple ranges.
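A hedged usage sketch of the batch-set API, assuming it takes an
MTRR_MEMORY_RANGE array plus a count as its description suggests; the
addresses and types below are made up, and error handling is kept minimal:

  #include <Uefi.h>
  #include <Library/MtrrLib.h>

  VOID
  ExampleBatchSet (
    VOID
    )
  {
    MTRR_SETTINGS      MtrrSettings;
    MTRR_MEMORY_RANGE  Ranges[2];

    Ranges[0].BaseAddress = 0;
    Ranges[0].Length      = SIZE_2GB;
    Ranges[0].Type        = CacheWriteBack;
    Ranges[1].BaseAddress = 0xFE000000;
    Ranges[1].Length      = SIZE_32MB;
    Ranges[1].Type        = CacheUncacheable;

    //
    // Compute both ranges in one pass over a snapshot of the current
    // MTRRs, then apply the result; a loop of single-set calls could fail
    // half-way through.
    //
    MtrrGetAllMtrrs (&MtrrSettings);
    if (!RETURN_ERROR (MtrrSetMemoryAttributesInMtrrSettings (
                         &MtrrSettings, Ranges, ARRAY_SIZE (Ranges)))) {
      MtrrSetAllMtrrs (&MtrrSettings);
    }
  }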
Contributed-under: TianoCore Contribution Agreement 1.1
Signed-off-by: Ruiyu Ni <ruiyu.ni@intel.com>
Reviewed-by: Ming Shao <ming.shao@intel.com>
GetWakeupBuffer() tries to find free memory below 1MB; it checks
whether the memory is already allocated in
CheckOverlapWithAllocatedBuffer(). When there is a memory allocation
HOB (base = 0xff_00000000, size = 0x10000000),
CheckOverlapWithAllocatedBuffer() truncates the base to 0, which causes
it to always return TRUE, so GetWakeupBuffer() fails to find memory
below 1MB.
The patch fixes this issue by using the UINT64 type.
Contributed-under: TianoCore Contribution Agreement 1.1
Signed-off-by: Ruiyu Ni <ruiyu.ni@intel.com>
Cc: Eric Dong <eric.dong@intel.com>
Reviewed-by: Star Zeng <star.zeng@intel.com>
Reviewed-by: Jeff Fan <vanjeff_919@hotmail.com>
If features like memory profile, protection and heap guard are enabled,
many more memory page attribute update actions happen than usual. An
unnecessary sync of the CR0.WP setting among APs then causes worse
performance for memory allocation actions. Removing the call to
SyncMemoryPageAttributesAp() in the functions DisableReadOnlyPageWriteProtect
and EnableReadOnlyPageWriteProtect fixes this problem. In the DEBUG build
case, the boot time improves from 11 minutes to 6 minutes.
Cc: Eric Dong <eric.dong@intel.com>
Cc: Laszlo Ersek <lersek@redhat.com>
Contributed-under: TianoCore Contribution Agreement 1.1
Signed-off-by: Jian J Wang <jian.j.wang@intel.com>
Reviewed-by: Eric Dong <eric.dong@intel.com>
If PcdDxeNxMemoryProtectionPolicy is set to enable protection for memory
of type EfiBootServicesCode and EfiConventionalMemory, the BIOS will hang
at a page fault exception triggered by PiSmmCpuDxeSmm.
The root cause is that PiSmmCpuDxeSmm accesses the default SMM RAM starting
at 0x30000, which is marked as non-executable, but the NX feature was not
enabled during SMM initialization. Accessing memory which has invalid
attributes set causes a page fault exception. This patch fixes it by
checking the NX capability in CPUID and enabling NXE in the EFER MSR if it
is available.
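A minimal sketch of the check-and-enable step (the helper name is
illustrative; 0x80000001 is the extended feature leaf and 0xC0000080 is
IA32_EFER):

  #include <Uefi.h>
  #include <Library/BaseLib.h>

  VOID
  EnableNxeIfAvailable (
    VOID
    )
  {
    UINT32  MaxExtLeaf;
    UINT32  RegEdx;

    AsmCpuid (0x80000000, &MaxExtLeaf, NULL, NULL, NULL);
    if (MaxExtLeaf >= 0x80000001) {
      //
      // CPUID.80000001H:EDX bit 20 reports Execute Disable capability.
      //
      AsmCpuid (0x80000001, NULL, NULL, NULL, &RegEdx);
      if ((RegEdx & BIT20) != 0) {
        //
        // Set IA32_EFER.NXE (bit 11) so NX page-table bits are honored.
        //
        AsmMsrOr64 (0xC0000080, BIT11);
      }
    }
  }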
Cc: Jiewen Yao <jiewen.yao@intel.com>
Cc: Ruiyu Ni <ruiyu.ni@intel.com>
Cc: Eric Dong <eric.dong@intel.com>
Cc: Laszlo Ersek <lersek@redhat.com>
Contributed-under: TianoCore Contribution Agreement 1.1
Signed-off-by: Jian J Wang <jian.j.wang@intel.com>
Reviewed-by: Eric Dong <eric.dong@intel.com>
If PcdDxeNxMemoryProtectionPolicy is set to enable protection for memory
of type EfiBootServicesCode, EfiConventionalMemory and EfiReservedMemoryType,
the BIOS will hang at a page fault exception randomly.
The root cause is that the memory allocation for driver images (actually
a memory type conversion from free memory, type EfiConventionalMemory,
to code memory, type EfiBootServicesCode/EfiRuntimeServicesCode)
will get memory with NX set, because the CpuDxe driver keeps the NX
attribute (of the free memory) in the page directory during page table
splitting, and that attribute then overrides the NX attribute of all of the
directory's entries.
This patch fixes the issue by not inheriting the NX attribute when turning
a page entry into a page directory during a page granularity split.
Cc: Jiewen Yao <jiewen.yao@intel.com>
Cc: Ruiyu Ni <ruiyu.ni@intel.com>
Cc: Eric Dong <eric.dong@intel.com>
Cc: Laszlo Ersek <lersek@redhat.com>
Contributed-under: TianoCore Contribution Agreement 1.1
Signed-off-by: Jian J Wang <jian.j.wang@intel.com>
Reviewed-by: Eric Dong <eric.dong@intel.com>
If PcdDxeNxMemoryProtectionPolicy is set to enable protection for memory
of type EfiBootServicesData and EfiConventionalMemory, the BIOS will reset
after the timer is initialized and started.
The root cause is that the memory used to hold the exception and interrupt
handlers is allocated with type EfiBootServicesData and marked as
non-executable due to the NX feature being enabled. This patch fixes it by
allocating EfiBootServicesCode type memory for those handlers instead.
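A minimal sketch of the allocation change (the helper name and size
parameter are illustrative):

  #include <Uefi.h>
  #include <Library/UefiBootServicesTableLib.h>

  VOID *
  AllocateHandlerCodeBuffer (
    IN UINTN  Size
    )
  {
    EFI_STATUS            Status;
    EFI_PHYSICAL_ADDRESS  Buffer;

    //
    // EfiBootServicesCode stays executable under an NX protection policy
    // that only covers EfiBootServicesData/EfiConventionalMemory.
    //
    Status = gBS->AllocatePages (
                    AllocateAnyPages,
                    EfiBootServicesCode,
                    EFI_SIZE_TO_PAGES (Size),
                    &Buffer
                    );
    if (EFI_ERROR (Status)) {
      return NULL;
    }
    return (VOID *) (UINTN) Buffer;
  }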
Cc: Jiewen Yao <jiewen.yao@intel.com>
Cc: Ruiyu Ni <ruiyu.ni@intel.com>
Cc: Eric Dong <eric.dong@intel.com>
Cc: Laszlo Ersek <lersek@redhat.com>
Contributed-under: TianoCore Contribution Agreement 1.1
Signed-off-by: Jian J Wang <jian.j.wang@intel.com>
Reviewed-by: Eric Dong <eric.dong@intel.com>
If PcdDxeNxMemoryProtectionPolicy is set to enable protection for memory
of type EfiBootServicesCode and EfiConventionalMemory, the BIOS will hang
at a page fault exception during MP initialization.
The root cause is that the AP wake up buffer, which is below 1MB and used
to hold both AP init code and data, is of type EfiConventionalMemory (not
really allocated because of a potential conflict with legacy code), and is
marked as non-executable. During the transition from real address mode
to long mode, the AP init code has to enable paging, which then causes
a page fault in the code itself because it is running in non-executable
memory.
The solution is to split the AP wake up buffer into two parts: the lower
part is still below 1MB and shared with the legacy system; the higher part
is really allocated memory of type EfiBootServicesCode. The init code in
the memory below 1MB will not enable paging but just switch to protected
mode and jump to the higher memory, in which the init code will enable
paging and switch to long mode.
Cc: Jiewen Yao <jiewen.yao@intel.com>
Cc: Ruiyu Ni <ruiyu.ni@intel.com>
Cc: Eric Dong <eric.dong@intel.com>
Cc: Laszlo Ersek <lersek@redhat.com>
Contributed-under: TianoCore Contribution Agreement 1.1
Signed-off-by: Jian J Wang <jian.j.wang@intel.com>
Reviewed-by: Eric Dong <eric.dong@intel.com>
In 32-bit mode, the BIOS will not create page table entries for memory
beyond 4GB and therefore cannot handle attribute change requests for
that memory. But the current CpuDxe doesn't check this situation and still
tries to complete the request, which causes the attributes of an incorrect
memory address to be changed due to the type cast from 64-bit to 32-bit.
This patch fixes the issue by checking the end address of the input
memory block and returning EFI_UNSUPPORTED if it is out of range.
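A minimal sketch of the added guard (an illustrative stand-alone function;
the real CpuDxe code performs this check inside its attribute-update path):

  #include <Uefi.h>

  EFI_STATUS
  CheckAddressRange (
    IN EFI_PHYSICAL_ADDRESS  BaseAddress,
    IN UINT64                Length
    )
  {
    //
    // On a 32-bit build the page table only covers the first 4GB, so an
    // attribute request ending above that cannot be honored.
    //
    if ((sizeof (UINTN) == sizeof (UINT32)) &&
        (BaseAddress + Length > BASE_4GB)) {
      return EFI_UNSUPPORTED;
    }
    return EFI_SUCCESS;
  }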
Cc: Eric Dong <eric.dong@intel.com>
Cc: Laszlo Ersek <lersek@redhat.com>
Cc: Ruiyu Ni <ruiyu.ni@intel.com>
Contributed-under: TianoCore Contribution Agreement 1.1
Signed-off-by: Jian J Wang <jian.j.wang@intel.com>
Reviewed-by: Ruiyu Ni <ruiyu.ni@intel.com>
Commit a2ea6894e6
("UefiCpuPkg/MpInitLib: Fix a bug that AP enters timer INT handler")
masked the interrupts in the AP.
But it didn't unmask the interrupts in the new BSP when a BSP switch
happens.
The patch fixes this issue.
Contributed-under: TianoCore Contribution Agreement 1.1
Signed-off-by: Ruiyu Ni <ruiyu.ni@intel.com>
Reviewed-by: Jeff Fan <vanjeff_919@hotmail.com>
Cc: Eric Dong <eric.dong@intel.com>
https://bugzilla.tianocore.org/show_bug.cgi?id=849
In V2, use "mov rax, strict qword 0" to replace the hard-coded DBs.
1. Use the lea instruction to get the address instead of the mov instruction.
2. Use a dummy address as the jmp destination, and add the logic to fix up
the address to the absolute address at boot time.
3. In MpFuncs.nasm, use ExchangeInfo to record InitializeFloatingPointUnits.
This is the same approach as used in MpInitLib.
Contributed-under: TianoCore Contribution Agreement 1.1
Signed-off-by: Liming Gao <liming.gao@intel.com>
Cc: Andrew Fish <afish@apple.com>
Cc: Jiewen Yao <jiewen.yao@intel.com>
Cc: Eric Dong <eric.dong@intel.com>
Cc: Laszlo Ersek <lersek@redhat.com>
Cc: Michael Kinney <michael.d.kinney@intel.com>
Reviewed-by: Jiewen Yao <jiewen.yao@intel.com>
https://bugzilla.tianocore.org/show_bug.cgi?id=849
In V2, use "mov rax, strict qword 0" to replace the hard-coded DBs.
1. Use the lea instruction to get the address instead of the mov instruction.
2. Use a dummy address as the jmp destination, and add the logic to fix up
the address to the absolute address at boot time.
Contributed-under: TianoCore Contribution Agreement 1.1
Signed-off-by: Liming Gao <liming.gao@intel.com>
Cc: Andrew Fish <afish@apple.com>
Cc: Jiewen Yao <jiewen.yao@intel.com>
Cc: Eric Dong <eric.dong@intel.com>
Cc: Laszlo Ersek <lersek@redhat.com>
Cc: Michael Kinney <michael.d.kinney@intel.com>
Reviewed-by: Jiewen Yao <jiewen.yao@intel.com>
https://bugzilla.tianocore.org/show_bug.cgi?id=849
In V2, use "mov rax, strict qword 0" to replace the hard-coded DBs.
Use a dummy address as the jmp destination, and add the logic to fix up
the address to the absolute address at boot time.
Contributed-under: TianoCore Contribution Agreement 1.1
Signed-off-by: Liming Gao <liming.gao@intel.com>
Cc: Andrew Fish <afish@apple.com>
Cc: Jiewen Yao <jiewen.yao@intel.com>
Cc: Eric Dong <eric.dong@intel.com>
Cc: Laszlo Ersek <lersek@redhat.com>
Cc: Michael Kinney <michael.d.kinney@intel.com>
Reviewed-by: Jiewen Yao <jiewen.yao@intel.com>
Enhance the MCA feature dependency check based on SDM pseudocode example 15-1.
Cc: Eric Dong <eric.dong@intel.com>
Cc: Laszlo Ersek <lersek@redhat.com>
Contributed-under: TianoCore Contribution Agreement 1.1
Signed-off-by: Bell Song <binx.song@intel.com>
Reviewed-by: Eric Dong <eric.dong@intel.com>
AllocateCodePages() is used to allocate the buffer for the IDT range,
and the code pages will be set to RO in SetMemMapAttributes().
Therefore the code that sets the IDT range to RO in PatchGdtIdtMap() is
redundant and can be removed.
Cc: Jiewen Yao <jiewen.yao@intel.com>
Cc: Jian J Wang <jian.j.wang@intel.com>
Cc: Eric Dong <eric.dong@intel.com>
Cc: Laszlo Ersek <lersek@redhat.com>
Contributed-under: TianoCore Contribution Agreement 1.1
Signed-off-by: Star Zeng <star.zeng@intel.com>
Reviewed-by: Jiewen Yao <jiewen.yao@intel.com>
When StackGuard is enabled on IA32, a #DF (double fault) exception
is reported instead of a #PF (page fault).
This issue does not exist on X64, or on IA32 without StackGuard.
The fix at e4435f710c was incomplete.
The reason is that AllocateCodePages() is used to allocate the buffer for
the GDT and TSS, and the code pages will be set to RO in
SetMemMapAttributes(). But IA32 Stack Guard needs to use a task switch to
switch stacks, which needs to write the GDT and TSS, so AllocateCodePages()
cannot be used.
This patch uses AllocatePages() instead of AllocateCodePages() to
allocate the buffer for the GDT and TSS if StackGuard is enabled on IA32.
Cc: Jiewen Yao <jiewen.yao@intel.com>
Cc: Jian J Wang <jian.j.wang@intel.com>
Cc: Eric Dong <eric.dong@intel.com>
Cc: Laszlo Ersek <lersek@redhat.com>
Contributed-under: TianoCore Contribution Agreement 1.1
Signed-off-by: Star Zeng <star.zeng@intel.com>
Reviewed-by: Jiewen Yao <jiewen.yao@intel.com>
0 40 f0 100
+---WT--+--UC--+--WT--+-----WB----+----UC----+
When calculating the shortest path from 0 to 100,
MtrrLibCalculateLeastMtrrs() is called to update
Vertices.Previous.
When calculating the shortest path from 0 to 40,
MtrrLibCalculateLeastMtrrs() is called recursively to update
Vertices.Previous.
The second call corrupts the Previous values that will be used
later.
The patch removes the code that corrupts Previous.
Contributed-under: TianoCore Contribution Agreement 1.1
Signed-off-by: Ruiyu Ni <ruiyu.ni@intel.com>
Cc: Star Zeng <star.zeng@intel.com>
Reviewed-by: Eric Dong <eric.dong@intel.com>
80 A8 B0 B8 C0
+----------WB--------+-UC-+-WT-+-WB-+
For the above memory settings, the current code caused the final MTRR
settings to miss [A8, B0, UC] when the default memory type is UC.
The root cause is that the code only checks the mandatory weight
between A8 and B0, but skips checking the optional weight.
The patch fixes this issue.
Contributed-under: TianoCore Contribution Agreement 1.1
Signed-off-by: Ruiyu Ni <ruiyu.ni@intel.com>
Reviewed-by: Eric Dong <eric.dong@intel.com>
The patch only changes comments and variable names, so it
doesn't impact the functionality.
Contributed-under: TianoCore Contribution Agreement 1.1
Signed-off-by: Ruiyu Ni <ruiyu.ni@intel.com>
Reviewed-by: Eric Dong <eric.dong@intel.com>
Cc: Star Zeng <star.zeng@intel.com>
The *SetMemoryAttribute*() APIs cannot handle a setting request that
looks like <0, MAX_ADDRESS, Type>. The buggy parameter checking
logic returns Unsupported for this case.
The patch fixes the checking logic to handle such a case.
Contributed-under: TianoCore Contribution Agreement 1.1
Signed-off-by: Ruiyu Ni <ruiyu.ni@intel.com>
Reviewed-by: Eric Dong <eric.dong@intel.com>
The code forgot to initialize the optional weight between adjacent
vertices. This caused a wrong MTRR result to be calculated for some
memory settings.
The logic was incorrectly removed when converting from the POC
code. The patch adds back the initialization.
Contributed-under: TianoCore Contribution Agreement 1.1
Signed-off-by: Ruiyu Ni <ruiyu.ni@intel.com>
Reviewed-by: Eric Dong <eric.dong@intel.com>
MtrrSetMemoryAttributesInMtrrSettings() lacked the debug messages for
the memory attribute request and status. The patch moves all debug
messages from MtrrSetMemoryAttributeInMtrrSettings() to
MtrrSetMemoryAttributesInMtrrSettings() and refines the debug messages
to carry more information.
Contributed-under: TianoCore Contribution Agreement 1.1
Signed-off-by: Ruiyu Ni <ruiyu.ni@intel.com>
Reviewed-by: Eric Dong <eric.dong@intel.com>
Cc: Star Zeng <star.zeng@intel.com>
The reason is that the DXE part of initialization will reuse the stack
allocated in the PEI phase, if MP was initialized there. Code is added to
check this situation and use the stack base address saved in the HOB passed
from PEI.
Cc: Jiewen Yao <jiewen.yao@intel.com>
Cc: Eric Dong <eric.dong@intel.com>
Cc: Laszlo Ersek <lersek@redhat.com>
Contributed-under: TianoCore Contribution Agreement 1.1
Signed-off-by: Jian J Wang <jian.j.wang@intel.com>
Reviewed-by: Laszlo Ersek <lersek@redhat.com>
Reviewed-by: Eric Dong <eric.dong@intel.com>
As the name suggests, CpuMpData->CpuInfoInHob[0].ApTopOfStack must be
initialized to the top of the stack. But MpInitLibInitialize() passed the
base address of the stack to InitializeApData(), which is not correct.
Although this stack is not used by the BSP, it should be fixed to avoid
misunderstanding and possible future code changes.
Cc: Jiewen Yao <jiewen.yao@intel.com>
Cc: Eric Dong <eric.dong@intel.com>
Cc: Laszlo Ersek <lersek@redhat.com>
Contributed-under: TianoCore Contribution Agreement 1.1
Signed-off-by: Jian J Wang <jian.j.wang@intel.com>
Reviewed-by: Laszlo Ersek <lersek@redhat.com>
Reviewed-by: Eric Dong <eric.dong@intel.com>
When SourceLevelDebug is enabled, an AP randomly executes the DXECORE
timer handler logic. The root cause is that the interrupts are not
masked in the AP wakeup procedure.
Contributed-under: TianoCore Contribution Agreement 1.1
Signed-off-by: Ruiyu Ni <ruiyu.ni@intel.com>
Reviewed-by: Jeff Fan <vanjeff_919@hotmail.com>
Enhance DumpModuleImageInfo() for page faults with the I/D flag set.
If it is a page fault with I/D set, the (E/R)IP in SystemContext
cannot be used for DumpModuleImageInfo(); instead, the address of the
instruction following the one that triggered this page fault can be found
on the stack via the (E/R)SP in SystemContext.
IA32 SDM:
— I/D flag (bit 4).
This flag is 1 if the access causing the page-fault exception was
an instruction fetch. This flag describes the access causing the
page-fault exception, not the access rights specified by paging.
The idea comes from SmiPFHandler () in
UefiCpuPkg/PiSmmCpuDxeSmm/Ia32/PageTbl.c and
UefiCpuPkg/PiSmmCpuDxeSmm/X64/PageTbl.c.
Cc: Jiewen Yao <jiewen.yao@intel.com>
Cc: Eric Dong <eric.dong@intel.com>
Cc: Laszlo Ersek <lersek@redhat.com>
Contributed-under: TianoCore Contribution Agreement 1.1
Signed-off-by: Star Zeng <star.zeng@intel.com>
Reviewed-by: Jiewen Yao <jiewen.yao@intel.com>
Fix a comment typo for the MtrrLibApplyFixedMtrrs function.
Cc: Eric Dong <eric.dong@intel.com>
Cc: Laszlo Ersek <lersek@redhat.com>
Contributed-under: TianoCore Contribution Agreement 1.1
Signed-off-by: Bell Song <binx.song@intel.com>
Reviewed-by: Eric Dong <eric.dong@intel.com>
Roll back commit 56649f4301.
The original names follow the spec definition.
Contributed-under: TianoCore Contribution Agreement 1.1
Signed-off-by: Jian J Wang <jian.j.wang@intel.com>
Reviewed-by: Liming Gao <liming.gao@intel.com>
With a CPU of the correct model, the current checking logic will
always execute the AsmReadMsr64 operation and only then check
ECX.AESNI[bit 25] == 1. Update the checking logic to check
ECX.AESNI[bit 25] == 1 first and then do the AsmReadMsr64
operation.
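A minimal sketch of the CPUID check that should come first (the helper
name is illustrative; the subsequent MSR read is only performed when this
returns TRUE):

  #include <Uefi.h>
  #include <Library/BaseLib.h>

  BOOLEAN
  IsAesniSupported (
    VOID
    )
  {
    UINT32  RegEcx;

    //
    // CPUID.01H:ECX bit 25 reports AESNI support.
    //
    AsmCpuid (1, NULL, NULL, &RegEcx, NULL);
    return (BOOLEAN) ((RegEcx & BIT25) != 0);
  }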
Cc: Eric Dong <eric.dong@intel.com>
Cc: Laszlo Ersek <lersek@redhat.com>
Contributed-under: TianoCore Contribution Agreement 1.1
Signed-off-by: Bell Song <binx.song@intel.com>
Reviewed-by: Eric Dong <eric.dong@intel.com>
When CpuCommonFeaturesLib uses RegisterCpuFeaturesLib to register
CPU features, CpuFeaturesData->BitMaskSize has already been
initialized. So delete the redundant PcdGetSize (PcdCpuFeaturesSupport)
call in CpuInitDataInitialize.
Cc: Eric Dong <eric.dong@intel.com>
Cc: Laszlo Ersek <lersek@redhat.com>
Contributed-under: TianoCore Contribution Agreement 1.1
Signed-off-by: Bell Song <binx.song@intel.com>
Reviewed-by: Eric Dong <eric.dong@intel.com>
This reverts commit 5c59537c10.
The current code already has the function IsCpuFeatureSupported to do
the feature validation, so this check logic is not needed anymore.
Contributed-under: TianoCore Contribution Agreement 1.1
Signed-off-by: Bell Song <binx.song@intel.com>
Reviewed-by: Eric Dong <eric.dong@intel.com>
Due to the coding style fix of the structure definitions in BaseLib.h, all
code referencing those structures must be updated accordingly.
Cc: Dandan Bi <dandan.bi@intel.com>
Cc: Eric Dong <eric.dong@intel.com>
Cc: Laszlo Ersek <lersek@redhat.com>
Contributed-under: TianoCore Contribution Agreement 1.1
Signed-off-by: Jian J Wang <jian.j.wang@intel.com>
Reviewed-by: Dandan Bi <dandan.bi@intel.com>
Each AP has its own stack for code execution. If PcdCpuStackGuard is
enabled, the page at the bottom of the AP's stack will be disabled (NOT
PRESENT) to catch stack overflow. This requires PcdCpuApStackSize to be
set to a value of more than one page of memory.
Cc: Jiewen Yao <jiewen.yao@intel.com>
Cc: Eric Dong <eric.dong@intel.com>
Cc: Laszlo Ersek <lersek@redhat.com>
Contributed-under: TianoCore Contribution Agreement 1.1
Signed-off-by: Jian J Wang <jian.j.wang@intel.com>
Reviewed-by: Jiewen Yao <jiewen.yao@intel.com>
Change GetSupportPcds and GetConfigurationPcds to be singular
Cc: Eric Dong <eric.dong@intel.com>
Cc: Laszlo Ersek <lersek@redhat.com>
Contributed-under: TianoCore Contribution Agreement 1.1
Signed-off-by: Bell Song <binx.song@intel.com>
Reviewed-by: Eric Dong <eric.dong@intel.com>
Reviewed-by: Laszlo Ersek <lersek@redhat.com>
V2:
Update the function name and add a more detailed description.
V1:
Check and assert invalid RegisterCpuFeature() function parameters.
Cc: Eric Dong <eric.dong@intel.com>
Cc: Laszlo Ersek <lersek@redhat.com>
Contributed-under: TianoCore Contribution Agreement 1.1
Signed-off-by: Bell Song <binx.song@intel.com>
Reviewed-by: Eric Dong <eric.dong@intel.com>
Reviewed-by: Ruiyu Ni <ruiyu.ni@intel.com>
Rename SmmEndOfS3ResumeProtocolGuid to EndOfS3ResumeGuid as the GUID
may be used to install a PPI in the future to notify PEI phase code.
The references in UefiCpuPkg are also updated.
Cc: Jiewen Yao <jiewen.yao@intel.com>
Cc: Eric Dong <eric.dong@intel.com>
Cc: Laszlo Ersek <lersek@redhat.com>
Contributed-under: TianoCore Contribution Agreement 1.1
Signed-off-by: Star Zeng <star.zeng@intel.com>
Reviewed-by: Jiewen Yao <jiewen.yao@intel.com>
Reviewed-by: Laszlo Ersek <lersek@redhat.com>
One of the functionalities of CpuDxe is to update memory paging attributes.
If page table protection is applied, it must be disabled temporarily before
any attribute update and enabled again afterwards.
This patch uses the same approach as DxeIpl to allocate page table memory
from a reserved memory pool, which helps to reduce potential "split"
operations and recursive calls of SetMemorySpaceAttributes().
Laszlo (lersek@redhat.com) did a regression test on the QEMU virtual
platform with an intermediate version of this patch series. The details can
be found at
https://lists.01.org/pipermail/edk2-devel/2017-December/018625.html
There have been a few changes since his testing.
Cc: Jiewen Yao <jiewen.yao@intel.com>
Cc: Star Zeng <star.zeng@intel.com>
Cc: Eric Dong <eric.dong@intel.com>
Cc: Ruiyu Ni <ruiyu.ni@intel.com>
Contributed-under: TianoCore Contribution Agreement 1.1
Signed-off-by: Jian J Wang <jian.j.wang@intel.com>
Reviewed-by: Jiewen Yao <jiewen.yao@intel.com>
When printing the ASCII form of a memory attribute in a debug message,
%s was used, but %a should be used.
The patch additionally changes %x to %r for EFI_STATUS.
The patch doesn't impact the functionality of MtrrLib; it is only a
debug message fix.
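For reference, in EDK2 format strings %a takes a CHAR8 (ASCII) string, %s
takes a CHAR16 (Unicode) string, and %r takes an EFI_STATUS/RETURN_STATUS.
An illustrative usage (the variable names are made up):

  DEBUG ((DEBUG_INFO, "Memory type  = %a\n", MemoryTypeAsciiName));  // CHAR8 * string
  DEBUG ((DEBUG_INFO, "Return value = %r\n", Status));               // EFI_STATUS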
Contributed-under: TianoCore Contribution Agreement 1.1
Signed-off-by: Ruiyu Ni <ruiyu.ni@intel.com>
Reviewed-by: Eric Dong <eric.dong@intel.com>
Cc: Ming Shao <ming.shao@intel.com>
In the current MP implementation, the BSP and APs share the same exception
configuration. The stack switch required by the Stack Guard feature needs the
BSP and APs to have their own configurations. This patch adds code to let the
BSP and APs do the exception handler initialization separately.
Since an AP is not supposed to do memory allocation, all the memory needed to
set up the stack switch is reserved by the BSP and passed to the APs via the
new API
EFI_STATUS
EFIAPI
InitializeCpuExceptionHandlersEx (
  IN EFI_VECTOR_HANDOFF_INFO  *VectorInfo OPTIONAL,
  IN CPU_EXCEPTION_INIT_DATA  *InitData   OPTIONAL
  );
The following two new PCDs are introduced to configure how to set up the new
stack for the specified exception handlers.
gUefiCpuPkgTokenSpaceGuid.PcdCpuStackSwitchExceptionList
gUefiCpuPkgTokenSpaceGuid.PcdCpuKnownGoodStackSize
Cc: Eric Dong <eric.dong@intel.com>
Cc: Laszlo Ersek <lersek@redhat.com>
Cc: Jiewen Yao <jiewen.yao@intel.com>
Cc: Michael Kinney <michael.d.kinney@intel.com>
Suggested-by: Ayellet Wolman <ayellet.wolman@intel.com>
Contributed-under: TianoCore Contribution Agreement 1.1
Signed-off-by: Jian J Wang <jian.j.wang@intel.com>
Reviewed-by: Jeff Fan <vanjeff_919@hotmail.com>
Reviewed-by: Jiewen.yao@intel.com
In the current implementation of the CPU MP service, an AP is initialized with
data copied from the BSP. The stack switch required by the Stack Guard feature
needs a different GDT, IDT table and task gate for each logical processor.
This patch adds GDTR, IDTR and TR to the structure CPU_VOLATILE_REGISTERS and
adds related code to the save and restore methods. This makes sure that any
changes to the GDT, IDT and task gate of an AP will not be overwritten by the
BSP settings.
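An illustrative shape of the change (only Gdtr/Idtr/Tr are taken from the
commit text; the other fields and the exact layout of the real structure may
differ):

typedef struct {
  UINTN            Cr0;
  UINTN            Cr3;
  UINTN            Cr4;
  // ... debug registers and other existing fields elided ...
  IA32_DESCRIPTOR  Gdtr;   // new: saved/restored per processor
  IA32_DESCRIPTOR  Idtr;   // new: saved/restored per processor
  UINT16           Tr;     // new: saved/restored per processor
} CPU_VOLATILE_REGISTERS;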
Cc: Eric Dong <eric.dong@intel.com>
Cc: Laszlo Ersek <lersek@redhat.com>
Cc: Jiewen Yao <jiewen.yao@intel.com>
Suggested-by: Ayellet Wolman <ayellet.wolman@intel.com>
Contributed-under: TianoCore Contribution Agreement 1.1
Signed-off-by: Jian J Wang <jian.j.wang@intel.com>
Reviewed-by: Jeff Fan <vanjeff_919@hotmail.com>
Reviewed-by: Jiewen.yao@intel.com
If Stack Guard is enabled and a stack overflow really happens during boot,
a Page Fault exception will be triggered. Because the stack is exhausted,
the exception handler, which shares the stack with normal UEFI drivers,
cannot be executed and cannot dump the processor information.
Without that information, it's very difficult for BIOS developers to locate
the root cause of the stack overflow. And without a workable stack, the
developer cannot even use single-stepping to debug the UEFI driver with a
JTAG debugger.
In order to make sure the exception handler executes normally after a stack
overflow, we need separate stacks for exception handlers in case the original
stack is unusable.
IA processors allow switching to a new stack while handling interrupts and
exceptions, but X64 and IA32 provide different ways to do it. X64 provides
the interrupt stack table (IST), which allows up to 7 different exceptions to
have a new stack for their handlers. IA32 doesn't have the IST mechanism and
can only use a task gate, since a task switch allows loading a new stack
through its task-state segment (TSS).
The new API, InitializeCpuExceptionHandlersEx, is implemented to complete the
extra initialization for the stack switch of exception handlers. Setting up
the stack switch requires allocating new memory for the new stack, a new GDT
table and a task-state segment, but the initialization method will be called
in different phases which have no consistent way to reserve that memory, so
this new API lets the caller pass in the reserved resources to complete the
extra work. This cannot be done by the original
InitializeCpuExceptionHandlers.
Considering exception handler initialization for the MP situation, this new
API is also necessary, because APs are not supposed to allocate memory. The
memory needed for the stack switch has to be reserved by the BSP before
waking up the APs and then passed to InitializeCpuExceptionHandlersEx
afterwards.
Since the Stack Guard feature is available only in the DXE phase at this
time, the new API is fully implemented for DXE only. Other phases implement a
dummy version which just calls InitializeCpuExceptionHandlers().
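A sketch of how a caller might use the new API (illustrative only; the
members of CPU_EXCEPTION_INIT_DATA that carry the pre-reserved stack/GDT/TSS
buffers are not spelled out here):

  EFI_STATUS               Status;
  CPU_EXCEPTION_INIT_DATA  EssData;

  //
  // The BSP reserves the known-good stacks, GDT and TSS buffers beforehand,
  // fills EssData with them, and then every processor (BSP and each AP)
  // initializes its own exception handlers with the same reserved resources.
  //
  ZeroMem (&EssData, sizeof (EssData));
  // ... populate EssData with the reserved buffers here ...
  Status = InitializeCpuExceptionHandlersEx (NULL, &EssData);
  ASSERT_EFI_ERROR (Status);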
Cc: Jiewen Yao <jiewen.yao@intel.com>
Cc: Eric Dong <eric.dong@intel.com>
Cc: Laszlo Ersek <lersek@redhat.com>
Cc: Michael Kinney <michael.d.kinney@intel.com>
Suggested-by: Ayellet Wolman <ayellet.wolman@intel.com>
Contributed-under: TianoCore Contribution Agreement 1.1
Signed-off-by: Jian J Wang <jian.j.wang@intel.com>
Reviewed-by: Jeff Fan <vanjeff_919@hotmail.com>
Reviewed-by: Jiewen.yao@intel.com
Stack switch is required by the Stack Guard feature. The following two PCDs
are introduced to simplify the resource allocation for initializing the stack
switch.
gUefiCpuPkgTokenSpaceGuid.PcdCpuStackSwitchExceptionList
gUefiCpuPkgTokenSpaceGuid.PcdCpuKnownGoodStackSize
PcdCpuStackSwitchExceptionList is used to specify which exceptions will have
a separate stack for their handlers. For the Stack Guard feature, at least
#PF must be specified.
PcdCpuKnownGoodStackSize is used to specify the size of the known good stack
for an exception handler. The CPU driver or other drivers should use this PCD
to reserve new stack memory for the exceptions specified by the above PCD.
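A hedged sketch of how a driver might consume the two PCDs (assuming the list
PCD stores one byte per exception vector and is read with PcdGetPtr/PcdGetSize;
the actual consumption code may differ):

  UINT8  *ExceptionList;
  UINTN  ExceptionCount;
  UINTN  StackBytesToReserve;

  ExceptionList  = (UINT8 *) PcdGetPtr (PcdCpuStackSwitchExceptionList);
  ExceptionCount = PcdGetSize (PcdCpuStackSwitchExceptionList);
  //
  // Reserve one known-good stack per listed exception (e.g. #PF for Stack Guard).
  //
  StackBytesToReserve = ExceptionCount * PcdGet32 (PcdCpuKnownGoodStackSize);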
Cc: Eric Dong <eric.dong@intel.com>
Cc: Laszlo Ersek <lersek@redhat.com>
Cc: Jiewen Yao <jiewen.yao@intel.com>
Suggested-by: Ayellet Wolman <ayellet.wolman@intel.com>
Contributed-under: TianoCore Contribution Agreement 1.1
Signed-off-by: Jian J Wang <jian.j.wang@intel.com>
Reviewed-by: Jeff Fan <vanjeff_919@hotmail.com>
Reviewed-by: Jiewen.yao@intel.com
SMM profile and static paging cannot be enabled at the same time; this patch
adds a check and comments to make sure of that.
Similar comments are also added for the case of static paging and
heap guard for SMM.
Cc: Jiewen Yao <jiewen.yao@intel.com>
Cc: Eric Dong <eric.dong@intel.com>
Cc: Ruiyu Ni <ruiyu.ni@intel.com>
Cc: Jian J Wang <jian.j.wang@intel.com>
Cc: Laszlo Ersek <lersek@redhat.com>
Contributed-under: TianoCore Contribution Agreement 1.1
Signed-off-by: Star Zeng <star.zeng@intel.com>
Reviewed-by: Eric Dong <eric.dong@intel.com>
Reviewed-by: Laszlo Ersek <lersek@redhat.com>
Only call DumpCpuContext in the error case; otherwise there will be too many
debug messages from DumpCpuContext() when the SmmProfile feature is enabled
by setting PcdCpuSmmProfileEnable to TRUE. Those debug messages are not
needed for the SmmProfile feature, as it records that information into a
buffer for later dumping.
Cc: Jiewen Yao <jiewen.yao@intel.com>
Cc: Eric Dong <eric.dong@intel.com>
Cc: Laszlo Ersek <lersek@redhat.com>
Cc: Ruiyu Ni <ruiyu.ni@intel.com>
Contributed-under: TianoCore Contribution Agreement 1.1
Signed-off-by: Star Zeng <star.zeng@intel.com>
Reviewed-by: Jiewen Yao <jiewen.yao@intel.com>
Reviewed-by: Eric Dong <eric.dong@intel.com>
Acked-by: Laszlo Ersek <lersek@redhat.com>
REF: https://bugzilla.tianocore.org/show_bug.cgi?id=540
To consume FIT table for Microcode update,
UefiCpuPkg/Feature/Capsule/MicrocodeUpdateDxe
needs to be updated to consume
IntelSiliconPkg/Include/IndustryStandard/FirmwareInterfaceTable.h,
but UefiCpuPkg could not depend on IntelSiliconPkg.
Since the Microcode update feature is specific to Intel,
we can first move the Microcode update feature code from
UefiCpuPkg to IntelSiliconPkg [first step], then update
the code to consume FIT table [second step].
This patch series is for the first step.
Note: There is no code change in this patch, just a move.
Next patch will update MicrocodeUpdate to build with the package.
Cc: Jiewen Yao <jiewen.yao@intel.com>
Cc: Michael D Kinney <michael.d.kinney@intel.com>
Cc: Eric Dong <eric.dong@intel.com>
Cc: Laszlo Ersek <lersek@redhat.com>
Regression-tested-by: Yonghong Zhu <yonghong.zhu@intel.com>
Contributed-under: TianoCore Contribution Agreement 1.1
Signed-off-by: Star Zeng <star.zeng@intel.com>
Reviewed-by: Jiewen Yao <jiewen.yao@intel.com>
Reviewed-by: Eric Dong <eric.dong@intel.com>
Acked-by: Laszlo Ersek <lersek@redhat.com>
More than one entry of RT_CODE memory might cause boot problems for some
old OSs. This patch fixes this issue to keep OS compatibility as much
as possible.
For more detailed information, please refer to
https://bugzilla.tianocore.org/show_bug.cgi?id=753
Laszlo did a thorough test on OVMF emulated platform. The details can be found
at
https://bugzilla.tianocore.org/show_bug.cgi?id=753#c10
Cc: Eric Dong <eric.dong@intel.com>
Cc: Jiewen Yao <jiewen.yao@intel.com>
Cc: Star Zeng <star.zeng@intel.com>
Cc: Laszlo Ersek <lersek@redhat.com>
Cc: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Contributed-under: TianoCore Contribution Agreement 1.1
Signed-off-by: Jian J Wang <jian.j.wang@intel.com>
Tested-by: Laszlo Ersek <lersek@redhat.com>
Reviewed-by: Star Zeng <star.zeng@intel.com>
Reviewed-by: Laszlo Ersek <lersek@redhat.com>
"Main.asm" calls TransitionFromReal16To32BitFlat (and does some other
things) before it jumps to the platform's SEC entry point.
TransitionFromReal16To32BitFlat enters big real mode, and sets the DS, ES,
FS, GS, and SS registers to offset ("selector") LINEAR_SEL in the GDT
(defined in "UefiCpuPkg/ResetVector/Vtf0/Ia16/Real16ToFlat32.asm"). The
GDT entry ("segment descriptor") at LINEAR_SEL defines a segment covering
the full 32-bit address space, meant for "read/write data".
Document this fact for all the affected segment registers, as output
parameters for TransitionFromReal16To32BitFlat, saying "Selector allowing
flat access to all addresses".
For 64-bit SEC, "Main.asm" calls Transition32FlatTo64Flat in addition,
between calling TransitionFromReal16To32BitFlat and jumping to the SEC
entry point. Transition32FlatTo64Flat enters long mode. In long mode,
segmentation is largely ignored:
- all segments are considered flat (covering the whole 64-bit address
space),
- with the (possible) exception of FS and GS, whose bases can still be
changed, albeit with new methods, not through the GDT. (Through the
IA32_FS_BASE and IA32_GS_BASE Model Specific Registers, and/or the
WRFSBASE, WRGSBASE and SWAPGS instructions.)
Thus, document the segment registers with the same "Selector allowing flat
access to all addresses" language on the "Main.asm" level too, since that
is valid for both 32-bit and 64-bit modes.
(Technically, "Main.asm" does not return, but RBP/EBP, passed similarly to
the SEC entry point, is already documented as an output parameter.)
Cc: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Cc: Eric Dong <eric.dong@intel.com>
Cc: Jordan Justen <jordan.l.justen@intel.com>
Suggested-by: Jordan Justen <jordan.l.justen@intel.com>
Contributed-under: TianoCore Contribution Agreement 1.1
Signed-off-by: Laszlo Ersek <lersek@redhat.com>
Acked-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Reviewed-by: Jordan Justen <jordan.l.justen@intel.com>
Heap guard makes use of the paging mechanism to implement its functionality,
but there is no protocol or library available to change page attributes in
SMM mode. A new protocol, gEdkiiSmmMemoryAttributeProtocolGuid, is introduced
to make that happen. This protocol provides three interfaces:
struct _EDKII_SMM_MEMORY_ATTRIBUTE_PROTOCOL {
  EDKII_SMM_GET_MEMORY_ATTRIBUTES    GetMemoryAttributes;
  EDKII_SMM_SET_MEMORY_ATTRIBUTES    SetMemoryAttributes;
  EDKII_SMM_CLEAR_MEMORY_ATTRIBUTES  ClearMemoryAttributes;
};
Since the heap guard feature needs to update page attributes, the page table
should not be set to read-only if the heap guard feature is enabled for SMM
mode. Otherwise this feature cannot work.
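An illustrative use of the new protocol from an SMM driver (the parameter
layout is assumed to mirror the DXE memory attribute services, and
GuardPageAddress is a hypothetical variable):

  EFI_STATUS                           Status;
  EDKII_SMM_MEMORY_ATTRIBUTE_PROTOCOL  *MemoryAttribute;

  Status = gSmst->SmmLocateProtocol (
                    &gEdkiiSmmMemoryAttributeProtocolGuid,
                    NULL,
                    (VOID **) &MemoryAttribute
                    );
  if (!EFI_ERROR (Status)) {
    //
    // Mark a guard page as not-present in the SMM page table.
    //
    Status = MemoryAttribute->SetMemoryAttributes (
                                MemoryAttribute,
                                GuardPageAddress,   // hypothetical guard page base
                                EFI_PAGE_SIZE,
                                EFI_MEMORY_RP
                                );
  }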
Cc: Eric Dong <eric.dong@intel.com>
Cc: Jiewen Yao <jiewen.yao@intel.com>
Cc: Laszlo Ersek <lersek@redhat.com>
Cc: Ruiyu Ni <ruiyu.ni@intel.com>
Suggested-by: Ayellet Wolman <ayellet.wolman@intel.com>
Contributed-under: TianoCore Contribution Agreement 1.1
Signed-off-by: Jian J Wang <jian.j.wang@intel.com>
Reviewed-by: Jiewen Yao <jiewen.yao@intel.com>
Regression-tested-by: Laszlo Ersek <lersek@redhat.com>
The heap guard feature will frequently update page attributes. The debug
messages in the CpuDxe driver will slow down boot performance noticeably.
Change the debug level to DEBUG_VERBOSE to reduce the message output for a
normal debug configuration.
Cc: Eric Dong <eric.dong@intel.com>
Cc: Jiewen Yao <jiewen.yao@intel.com>
Suggested-by: Ayellet Wolman <ayellet.wolman@intel.com>
Contributed-under: TianoCore Contribution Agreement 1.1
Signed-off-by: Jian J Wang <jian.j.wang@intel.com>
Reviewed-by: Jiewen Yao <jiewen.yao@intel.com>
Regression-tested-by: Laszlo Ersek <lersek@redhat.com>
For some special platforms (such as OVMF), it is possible that some APs start
up *and finish* before the remaining APs start up *at all*. In this case, the
enhanced solution from commit 0594ec41 does not work as expected.
This change removes the CpuMpData->CpuCount check logic so that the old
solution is still workable if the platform owner sets a long time for
PcdCpuApInitTimeOutInMicroSeconds. It is the platform owner's responsibility
to decide which solution to use.
Cc: Ruiyu Ni <ruiyu.ni@intel.com>
Cc: Laszlo Ersek <lersek@redhat.com>
Cc: Jeff Fan <vanjeff_919@hotmail.com>
Contributed-under: TianoCore Contribution Agreement 1.1
Signed-off-by: Eric Dong <eric.dong@intel.com>
Reviewed-by: Laszlo Ersek <lersek@redhat.com>
Reviewed-by: Jeff Fan <vanjeff_919@hotmail.com>
In the current implementation, CPU initialization can be done in the PEI or
DXE phase. PEI uses CpuFeaturesPei and DXE uses CpuFeaturesDxe.
If the CPU is initialized in the PEI phase, the CpuFeaturesDxe driver will
not be used. This driver installs the gEdkiiCpuFeaturesInitDoneGuid protocol
after it initializes the CPU.
Some drivers depend on this protocol to dispatch themselves. If
CpuFeaturesDxe is not used, these drivers will not be dispatched.
This patch fixes the above issue. If the CPU is initialized in the PEI phase,
it also reports a GUID HOB for CpuFeaturesDxe.
CpuFeaturesDxe checks this HOB first. If it finds this HOB, it just installs
the gEdkiiCpuFeaturesInitDoneGuid protocol; otherwise it also does the CPU
initialization.
Cc: Ruiyu Ni <ruiyu.ni@intel.com>
Cc: Laszlo Ersek <lersek@redhat.com>
Contributed-under: TianoCore Contribution Agreement 1.1
Signed-off-by: Eric Dong <eric.dong@intel.com>
Reviewed-by: Ruiyu Ni <ruiyu.ni@intel.com>
Reviewed-by: Laszlo Ersek <lersek@redhat.com>
The current logic always waits for a specific value to collect the count of
all APs. This logic may cause some platforms to spend too much time waiting
for the timeout.
This patch adds new logic to collect the AP count. It adds a new variable,
NumApsExecuting, to detect whether all APs have finished initialization.
Each AP increments NumApsExecuting when it begins to initialize itself and
decrements it when it finishes the initialization. The BSP uses
NumApsExecuting == 0 to determine when the AP collection process is finished.
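A simplified sketch of the counting scheme (not the literal driver code; the
placement of the counter and the surrounding wake-up handshake are omitted),
using the SynchronizationLib interlocked helpers and CpuPause from BaseLib:

  volatile UINT32  NumApsExecuting;   // shared counter, starts at 0

  // AP side, at the start of its initialization routine:
  InterlockedIncrement (&NumApsExecuting);
  //   ... per-AP initialization ...
  InterlockedDecrement (&NumApsExecuting);

  // BSP side: once APs have started arriving, the collection loop can finish
  // as soon as no AP is still running its initialization code.
  while (NumApsExecuting != 0) {
    CpuPause ();
  }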
Cc: Ruiyu Ni <ruiyu.ni@intel.com>
Contributed-under: TianoCore Contribution Agreement 1.1
Signed-off-by: Eric Dong <eric.dong@intel.com>
Reviewed-by: Ruiyu Ni <ruiyu.ni@intel.com>
Reviewed-by: Jeff Fan <vanjeff_919@hotmail.com>
The original AP index variable name does not express the meaning of the
variable well. Also, this name is better used in a later patch.
So change the variable name for better understanding.
Cc: Ruiyu Ni <ruiyu.ni@intel.com>
Contributed-under: TianoCore Contribution Agreement 1.1
Signed-off-by: Eric Dong <eric.dong@intel.com>
Reviewed-by: Ruiyu Ni <ruiyu.ni@intel.com>
Reviewed-by: Jeff Fan <vanjeff_919@hotmail.com>
ClearMasks and OrMasks are not 8-byte aligned, but SetMem64 requires the
input address to be 8-byte aligned. If the input is not 8-byte aligned, an
assertion is hit.
Use SetMem instead.
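An illustrative before/after of the change (the array names follow the commit
text; for a zero fill, the byte-wise SetMem produces the same result without
the alignment requirement):

  // Before: ASSERTs because ClearMasks/OrMasks are not 8-byte aligned.
  //   SetMem64 (ClearMasks, sizeof (ClearMasks), 0);
  // After:
  SetMem (ClearMasks, sizeof (ClearMasks), 0);
  SetMem (OrMasks,    sizeof (OrMasks),    0);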
Contributed-under: TianoCore Contribution Agreement 1.1
Signed-off-by: Ruiyu Ni <ruiyu.ni@intel.com>
Reviewed-by: Eric Dong <eric.dong@intel.com>
MtrrLibSetBelow1MBMemoryAttribute() may be called multiple times.
It's possible that in a 2nd call, Modified[0] was set to TRUE by the
1st call while ClearMasks[0] and OrMasks[0] are uninitialized in the
2nd call. This causes FixedSettings->Mtrr[0] to be set to random data.
The patch fixes this issue by introducing a local Modified[] array and
only updating FixedSettings->Mtrr[] when LocalModified[i] is TRUE.
Contributed-under: TianoCore Contribution Agreement 1.1
Signed-off-by: Ruiyu Ni <ruiyu.ni@intel.com>
Reviewed-by: Hao A Wu <hao.a.wu@intel.com>
The MicrocodeDetect function is run by every thread, and it uses PcdGet to
read PcdCpuMicrocodePatchAddress and PcdCpuMicrocodePatchRegionSize; if both
PCDs default to dynamic, the system behaves non-deterministically.
By design, UEFI/PI services are single threaded and not re-entrant, so
multiprocessor code should not use UEFI/PI services. Here, the PCD
protocol/PPI is used to access dynamic PCDs, so it would result in
non-deterministic behavior.
This code gets the PCD values on the BSP and saves them in CPU_MP_DATA for
the APs.
https://bugzilla.tianocore.org/show_bug.cgi?id=726
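A hedged sketch of the approach (the CPU_MP_DATA field names below are
illustrative, not necessarily the real ones):

  //
  // BSP side (single threaded, PCD services are safe here): read the dynamic
  // PCDs once and cache the values in the shared CPU_MP_DATA structure.
  //
  CpuMpData->MicrocodePatchAddress    = PcdGet64 (PcdCpuMicrocodePatchAddress);
  CpuMpData->MicrocodePatchRegionSize = PcdGet64 (PcdCpuMicrocodePatchRegionSize);

  //
  // AP side: MicrocodeDetect() reads the cached values from CPU_MP_DATA
  // instead of calling PcdGet*(), which is not safe on an AP.
  //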
Cc: Crystal Lee <CrystalLee@ami.com.tw>
Cc: Ruiyu Ni <ruiyu.ni@intel.com>
Contributed-under: TianoCore Contribution Agreement 1.1
Signed-off-by: Eric Dong <eric.dong@intel.com>
Reviewed-by: Ruiyu Ni <ruiyu.ni@intel.com>
ARRAY_SIZE(Mtrrs->Variables.Mtrr) was used in
MtrrDebugPrintAllMtrrsWorker() to parse the MTRR registers.
Instead, the actual variable MTRR count should be used.
Otherwise, the uninitialized random data in MtrrSetting may cause
MtrrLibSetMemoryType() to hang.
Steven Shi found this bug in QEMU when using the Q35 chipset.
Contributed-under: TianoCore Contribution Agreement 1.1
Signed-off-by: Ruiyu Ni <ruiyu.ni@intel.com>
Reviewed-by: Steven Shi <steven.shi@intel.com>
Cc: Laszlo Ersek <lersek@redhat.com>
The patch optimized the MTRR access code to skip the Base MSR
access when the Mask MSR indicates the pair is invalid.
Contributed-under: TianoCore Contribution Agreement 1.0
Signed-off-by: Ruiyu Ni <ruiyu.ni@intel.com>
Cc: Michael D Kinney <michael.d.kinney@intel.com>
The new algorithm converts the problem of calculating optimal MTRR settings
(using the fewest MTRR registers) into the problem of finding the shortest
path in a graph.
The memory required in an extreme but rare case can be up to 256KB, so using
a local stack buffer is impossible considering the current DxeIpl only
allocates a 128KB stack.
The patch changes the existing MtrrSetMemoryAttributeInMtrrSettings() and
MtrrSetMemoryAttribute() to use a 4-page stack buffer for the calculation.
The two APIs return BUFFER_TOO_SMALL when the buffer is too small for the
calculation.
The patch adds a new API MtrrSetMemoryAttribute*s*InMtrrSettings() to set
multiple-range attributes in one function call.
Since every call to MtrrSetMemoryAttributeInMtrrSettings (without-s) or
MtrrSetMemoryAttribute() requires calculating the MTRRs for the whole
physical memory, combining multiple calls into one API call can
significantly reduce the calculation time.
In theory, if N calls to the without-s API cost N seconds, the new API costs
only 1 second.
The new API uses the buffer supplied by the caller to calculate the MTRRs and
returns BUFFER_TOO_SMALL when the buffer is too small for the calculation.
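A hedged sketch of calling the new batched API (the MTRR_MEMORY_RANGE fields,
cache-type enum names and exact parameter order are assumed from MtrrLib
conventions and may differ from the final prototype):

  MTRR_MEMORY_RANGE  Ranges[2];
  UINT8              Scratch[SIZE_4KB * 4];
  UINTN              ScratchSize;
  RETURN_STATUS      Status;

  Ranges[0].BaseAddress = 0;
  Ranges[0].Length      = SIZE_2GB;
  Ranges[0].Type        = CacheWriteBack;
  Ranges[1].BaseAddress = 0xF0000000;
  Ranges[1].Length      = SIZE_256MB;
  Ranges[1].Type        = CacheUncacheable;

  ScratchSize = sizeof (Scratch);
  //
  // One call calculates MTRRs for both ranges; RETURN_BUFFER_TOO_SMALL is
  // returned (with the required size in ScratchSize) if Scratch is too small.
  //
  Status = MtrrSetMemoryAttributesInMtrrSettings (
             NULL, Scratch, &ScratchSize, Ranges, ARRAY_SIZE (Ranges)
             );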
Test performed:
1. Random test
a. Generate random memory settings, use the new algorithm to
calculate the MTRRs.
b. Read back the MTRRs and check the memory settings match
the desired memory settings.
c. Repeat the above a and b 100000 times.
2. OVMF 32PEI + 64DXE boot to shell.
Contributed-under: TianoCore Contribution Agreement 1.0
Signed-off-by: Ruiyu Ni <ruiyu.ni@intel.com>
Cc: Michael D Kinney <michael.d.kinney@intel.com>
Cc: Eric Dong <eric.dong@intel.com>
Reviewed-by: Jiewen Yao <jiewen.yao@intel.com>
The patch changes MtrrLibLeastAlignment() to
MtrrLibBiggestAlignment() and optimizes the implementation
to be more efficient.
Contributed-under: TianoCore Contribution Agreement 1.0
Signed-off-by: Ruiyu Ni <ruiyu.ni@intel.com>
Cc: Michael D Kinney <michael.d.kinney@intel.com>
Cc: Eric Dong <eric.dong@intel.com>
Reviewed-by: Jiewen Yao <jiewen.yao@intel.com>
The patch replaces some if-checks with assertions because the conditions
they check for are impossible to happen.
Contributed-under: TianoCore Contribution Agreement 1.0
Signed-off-by: Ruiyu Ni <ruiyu.ni@intel.com>
Cc: Michael D Kinney <michael.d.kinney@intel.com>
Cc: Eric Dong <eric.dong@intel.com>
Reviewed-by: Jiewen Yao <jiewen.yao@intel.com>
The current code assumes the Communicate PPI always exists, so it adds an
ASSERT to confirm that. The OVMF platform happens not to have this PPI, so
the ASSERT is triggered. This patch handles the case where the PPI does not
exist.
Cc: Ruiyu Ni <ruiyu.ni@intel.com>
Cc: Jiewen Yao <jiewen.yao@intel.com>
Cc: Laszlo Ersek <lersek@redhat.com>
Contributed-under: TianoCore Contribution Agreement 1.1
Signed-off-by: Eric Dong <eric.dong@intel.com>
Tested-by: Laszlo Ersek <lersek@redhat.com>
Reviewed-by: Laszlo Ersek <lersek@redhat.com>
The driver will send an S3 resume finished event to SmmCore through the
communicate buffer after it signals the EndOfPei event.
V2 Changes:
1. Change structure names to avoid starting them with EFI_.
2. Provide the communication buffer based on the DXE phase bits; the current
implementation checks both the PEI and DXE phases.
V3 Changes:
1. Change structure names for better understanding.
2. Enhance the communication buffer calculation logic to be more accurate.
Cc: Ruiyu Ni <ruiyu.ni@intel.com>
Cc: Jiewen Yao <jiewen.yao@intel.com>
Contributed-under: TianoCore Contribution Agreement 1.1
Signed-off-by: Eric Dong <eric.dong@intel.com>
Reviewed-by: Ruiyu Ni <ruiyu.ni@intel.com>
Reviewed-by: Jiewen Yao <jiewen.yao@intel.com>
The mechanism behind it is the same as the NULL pointer detection enabled in
the EDK II core. SMM has its own page table, and we have to disable page 0
again in SMM mode.
Cc: Star Zeng <star.zeng@intel.com>
Cc: Eric Dong <eric.dong@intel.com>
Cc: Jiewen Yao <jiewen.yao@intel.com>
Cc: Michael Kinney <michael.d.kinney@intel.com>
Cc: Ayellet Wolman <ayellet.wolman@intel.com>
Suggested-by: Ayellet Wolman <ayellet.wolman@intel.com>
Contributed-under: TianoCore Contribution Agreement 1.1
Signed-off-by: Jian J Wang <jian.j.wang@intel.com>
Reviewed-by: Star Zeng <star.zeng@intel.com>
Reviewed-by: Eric Dong <eric.dong@intel.com>
The current code logic does not check the pointer before using it. This may
cause a potential issue; this patch adds code to check it.
Cc: Ruiyu Ni <ruiyu.ni@intel.com>
Contributed-under: TianoCore Contribution Agreement 1.1
Signed-off-by: Eric Dong <eric.dong@intel.com>
Reviewed-by: Ruiyu Ni <ruiyu.ni@intel.com>
Reviewed-by: Hao Wu <hao.a.wu@intel.com>
This patch fixes an assert issue during boot on IA32 platforms
such as OvmfIa32 or Quark. The issue is caused by trying to access the
page table on a platform without a page table. A check is added to
avoid the assert.
Bug tracker: https://bugzilla.tianocore.org/show_bug.cgi?id=724
Cc: Star Zeng <star.zeng@intel.com>
Cc: Jiewen Yao <jiewen.yao@intel.com>
Cc: Michael Kinney <michael.d.kinney@intel.com>
Contributed-under: TianoCore Contribution Agreement 1.1
Signed-off-by: Jian J Wang <jian.j.wang@intel.com>
Reviewed-by: Star Zeng <star.zeng@intel.com>
Reviewed-by: Michael Kinney <michael.d.kinney@intel.com>
Replace hard-coded machine code with equivalent assembly source code.
Changes tested by checking for machine code equivalence by disassembling
the original and changed code.
Contributed-under: TianoCore Contribution Agreement 1.0
Signed-off-by: Chris Ruffin <chris.ruffin@intel.com>
Cc: Jiewen Yao <jiewen.yao@intel.com>
Cc: Michael D Kinney <michael.d.kinney@intel.com>
Reviewed-by: Eric Dong <eric.dong@intel.com>
V2:
Change the function parameters to avoid touching global info in the function.
Enhance the function name to make it more user friendly.
V1:
Refine the code to avoid duplicated code for setting processor registers.
Cc: Jiewen Yao <jiewen.yao@intel.com>
Cc: Ruiyu Ni <ruiyu.ni@intel.com>
Contributed-under: TianoCore Contribution Agreement 1.1
Signed-off-by: Eric Dong <eric.dong@intel.com>
Reviewed-by: Ruiyu Ni <ruiyu.ni@intel.com>
In the S3 resume path, the current implementation does 2 separate
INIT-SIPI-SIPI sequences, which is not necessary. This change combines these
2 INIT-SIPI-SIPI sequences into 1 and adds a CpuPause between them.
Cc: Jiewen Yao <jiewen.yao@intel.com>
Cc: Ruiyu Ni <ruiyu.ni@intel.com>
Contributed-under: TianoCore Contribution Agreement 1.1
Signed-off-by: Eric Dong <eric.dong@intel.com>
Reviewed-by: Ruiyu Ni <ruiyu.ni@intel.com>
The ConfigData parameter initialized in the *GetConfigData function should
not be NULL in the later *Support and *Initialize functions, so just add
ASSERT checks in these functions.
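An illustrative instance of the added check (the same one-line pattern
applies in each *Support and *Initialize function):

  //
  // ConfigData was allocated by the corresponding *GetConfigData function,
  // so it must not be NULL here.
  //
  ASSERT (ConfigData != NULL);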
Cc: Ming Shao <ming.shao@intel.com>
Cc: Ruiyu Ni <ruiyu.ni@intel.com>
Contributed-under: TianoCore Contribution Agreement 1.1
Signed-off-by: Eric Dong <eric.dong@intel.com>
Reviewed-by: Ruiyu Ni <ruiyu.ni@intel.com>
There are uninitialized variable warnings reported by GCC.
This patch fixes them. The original commit is
c1cab54ce5
Cc: Hao Wu <hao.a.wu@intel.com>
Cc: Anthony PERARD <anthony.perard@citrix.com>
Contributed-under: TianoCore Contribution Agreement 1.1
Signed-off-by: Jian J Wang <jian.j.wang@intel.com>
Reviewed-by: Hao Wu <hao.a.wu@intel.com>
From the CpuDxe driver's perspective, it doesn't update the GCD memory
attributes from the current page table setup during its initialization, so
the memory attributes in GCD might not reflect all memory attributes in the
real world.
Cc: Eric Dong <eric.dong@intel.com>
Cc: Jiewen Yao <jiewen.yao@intel.com>
Cc: Star Zeng <star.zeng@intel.com>
Cc: Laszlo Ersek <lersek@redhat.com>
Cc: Michael Kinney <michael.d.kinney@intel.com>
Suggested-by: Jiewen Yao <jiewen.yao@intel.com>
Contributed-under: TianoCore Contribution Agreement 1.1
Signed-off-by: Jian J Wang <jian.j.wang@intel.com>
Reviewed-by: Jiewen Yao <jiewen.yao@intel.com>
"Detect CPU count: %d\n" is an informative message, not an error report.
Set its debug mask to DEBUG_INFO.
Cc: Eric Dong <eric.dong@intel.com>
Contributed-under: TianoCore Contribution Agreement 1.1
Signed-off-by: Laszlo Ersek <lersek@redhat.com>
Reviewed-by: Eric Dong <eric.dong@intel.com>
Merge the code into the MachineCheck.c file and remove this file.
Cc: Ruiyu Ni <ruiyu.ni@intel.com>
Contributed-under: TianoCore Contribution Agreement 1.1
Signed-off-by: Eric Dong <eric.dong@intel.com>
Reviewed-by: Ruiyu Ni <ruiyu.ni@intel.com>
GetProcessorLocationByApicId ()
- Use max possible thread count to decode InitialApicId on AMD processor.
- Clean-up on C Coding standards.
Cc: Jordan Justen <jordan.l.justen@intel.com>
Cc: Jeff Fan <jeff.fan@intel.com>
Cc: Liming Gao <liming.gao@intel.com>
Contributed-under: TianoCore Contribution Agreement 1.0
Signed-off-by: Leo Duran <leo.duran@amd.com>
Reviewed-by: Liming Gao <liming.gao@intel.com>
The PI spec says that if an AP is enabled, then the implementation must
guarantee that a complete initialization sequence is performed on the AP,
so the AP is in a state that is compatible with an MP operating system.
The current implementation just sets the AP to the idle state when enabling
the AP, which does not follow the spec. This patch fixes it.
Cc: Michael Kinney <michael.d.kinney@intel.com>
Cc: Ruiyu Ni <ruiyu.ni@intel.com>
Contributed-under: TianoCore Contribution Agreement 1.1
Signed-off-by: Eric Dong <eric.dong@intel.com>
Reviewed-by: Ruiyu Ni <ruiyu.ni@intel.com>
Originally (before 714c260301),
mPhysicalAddressBits was only defined in the X64 PageTbl.c; after
714c260301, mPhysicalAddressBits is
also defined in the Ia32 PageTbl.c, and mPhysicalAddressBits is used in
ConvertMemoryPageAttributes() for the address check.
This patch centralizes the mPhysicalAddressBits definition in
PiSmmCpuDxeSmm.c, moving it out of the Ia32 and X64 PageTbl.c.
Cc: Jiewen Yao <jiewen.yao@intel.com>
Cc: Laszlo Ersek <lersek@redhat.com>
Cc: Eric Dong <eric.dong@intel.com>
Suggested-by: Laszlo Ersek <lersek@redhat.com>
Contributed-under: TianoCore Contribution Agreement 1.1
Signed-off-by: Star Zeng <star.zeng@intel.com>
Reviewed-by: Jiewen Yao <jiewen.yao@intel.com>
Reviewed-by: Laszlo Ersek <lersek@redhat.com>
The original code for the Local Machine Check Exception feature lives in a
separate file, but features related to the machine check architecture are all
kept in the MachineCheck.c file. This patch moves the LMCE logic into the
same file for easier maintenance.
Cc: Michael Kinney <michael.d.kinney@intel.com>
Cc: Ruiyu Ni <ruiyu.ni@intel.com>
Contributed-under: TianoCore Contribution Agreement 1.1
Signed-off-by: Eric Dong <eric.dong@intel.com>
Reviewed-by: Ruiyu Ni <ruiyu.ni@intel.com>
Add CPUID check to see if the CPU supports the Machine Check
Architecture before accessing the Machine Check Architecture
related MSRs.
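An illustrative form of the check, using the CPUID structures from the
UefiCpuPkg Register headers (the surrounding feature code is omitted):

  CPUID_VERSION_INFO_EDX  Edx;

  AsmCpuid (CPUID_VERSION_INFO, NULL, NULL, NULL, &Edx.Uint32);
  if (Edx.Bits.MCA == 1) {
    //
    // CPUID.01h:EDX[14] (MCA) is set, so IA32_MCG_CAP and the other
    // Machine Check Architecture MSRs can be accessed safely.
    //
  }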
Cc: Michael Kinney <michael.d.kinney@intel.com>
Cc: Ruiyu Ni <ruiyu.ni@intel.com>
Contributed-under: TianoCore Contribution Agreement 1.1
Signed-off-by: Eric Dong <eric.dong@intel.com>
Reviewed-by: Ruiyu Ni <ruiyu.ni@intel.com>
These two definitions are redundant and can be handled by code.
This patch updates them to follow the new code definitions.
V2: Add more comments for the PCDs and keep the .dec and .uni files consistent.
Cc: Michael Kinney <michael.d.kinney@intel.com>
Cc: Ruiyu Ni <ruiyu.ni@intel.com>
Contributed-under: TianoCore Contribution Agreement 1.1
Signed-off-by: Eric Dong <eric.dong@intel.com>
Reviewed-by: Michael Kinney <michael.d.kinney@intel.com>
EnumProcTraceMemDisable/OutputSchemeInvalid are redundant
definitions. These definitions can be handled by other code,
so remove them.
V2: Change the enum member names.
Cc: Michael Kinney <michael.d.kinney@intel.com>
Cc: Ruiyu Ni <ruiyu.ni@intel.com>
Contributed-under: TianoCore Contribution Agreement 1.1
Signed-off-by: Eric Dong <eric.dong@intel.com>
Reviewed-by: Michael Kinney <michael.d.kinney@intel.com>
When updating MSR values, the current code uses BITxx masks to modify them.
Enhance the code to use the corresponding MSR data structures to make it more
user friendly.
V2: Move the MSR definitions to the ArchitecturalMsr.h file.
Use structure members to do the value assignments.
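An illustrative example of the preferred style, using one of the MSR
structures from Register/ArchitecturalMsr.h (the specific MSR shown is only
an example of the pattern, not necessarily one touched by this patch):

  MSR_IA32_FEATURE_CONTROL_REGISTER  FeatureControl;

  FeatureControl.Uint64 = AsmReadMsr64 (MSR_IA32_FEATURE_CONTROL);
  if (FeatureControl.Bits.Lock == 0) {
    FeatureControl.Bits.Lock = 1;   // set via a named bit field, not BIT0
    AsmWriteMsr64 (MSR_IA32_FEATURE_CONTROL, FeatureControl.Uint64);
  }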
Cc: Michael Kinney <michael.d.kinney@intel.com>
Cc: Ruiyu Ni <ruiyu.ni@intel.com>
Contributed-under: TianoCore Contribution Agreement 1.1
Signed-off-by: Eric Dong <eric.dong@intel.com>
Reviewed-by: Michael Kinney <michael.d.kinney@intel.com>
The current timeout calculation logic may overflow if the input timeout value
is too large. This patch fixes this potential overflow issue.
V2: Use a local variable instead of calling GetPerformanceCounterProperties
twice. Also correct some comments.
Cc: Michael Kinney <michael.d.kinney@intel.com>
Cc: Ruiyu Ni <ruiyu.ni@intel.com>
Contributed-under: TianoCore Contribution Agreement 1.1
Signed-off-by: Eric Dong <eric.dong@intel.com>
Reviewed-by: Michael Kinney <michael.d.kinney@intel.com>
https://bugzilla.tianocore.org/show_bug.cgi?id=624 reports
a memory protection crash in PiSmmCpuDxeSmm, in an Ia32 build with
RAM above 4GB (of which 2GB are placed at 64-bit addresses).
It happens because UEFI builds identity-mapped page tables, and
>4G addresses are not supported in an Ia32 build.
This patch gets the PhysicalAddressBits that is used to build the
page tables in PageTbl.c (Ia32/X64), and uses it to check whether an
address is supported or not in ConvertMemoryPageAttributes().
With this patch, the debug messages will be like below.
UefiMemory protection: 0x0 - 0x9F000 Success
UefiMemory protection: 0x100000 - 0x807000 Success
UefiMemory protection: 0x808000 - 0x810000 Success
UefiMemory protection: 0x818000 - 0x820000 Success
UefiMemory protection: 0x1510000 - 0x7B798000 Success
UefiMemory protection: 0x7B79B000 - 0x7E538000 Success
UefiMemory protection: 0x7E539000 - 0x7E545000 Success
UefiMemory protection: 0x7E55A000 - 0x7E61F000 Success
UefiMemory protection: 0x7E62B000 - 0x7F6AB000 Success
UefiMemory protection: 0x7F703000 - 0x7F70B000 Success
UefiMemory protection: 0x7F70F000 - 0x7F778000 Success
UefiMemory protection: 0x100000000 - 0x180000000 Unsupported
Cc: Jiewen Yao <jiewen.yao@intel.com>
Cc: Laszlo Ersek <lersek@redhat.com>
Cc: Eric Dong <eric.dong@intel.com>
Originally-suggested-by: Jiewen Yao <jiewen.yao@intel.com>
Reported-by: Laszlo Ersek <lersek@redhat.com>
Contributed-under: TianoCore Contribution Agreement 1.1
Signed-off-by: Star Zeng <star.zeng@intel.com>
Reviewed-by: Jiewen Yao <jiewen.yao@intel.com>
Tested-by: Laszlo Ersek <lersek@redhat.com>
https://bugzilla.tianocore.org/show_bug.cgi?id=674
Add CPUID check to see if the CPU supports the Machine
Check Architecture before accessing the Machine Check
Architecture related MSRs.
Cc: Eric Dong <eric.dong@intel.com>
Contributed-under: TianoCore Contribution Agreement 1.1
Signed-off-by: Michael Kinney <michael.d.kinney@intel.com>
Reviewed-by: Eric Dong <eric.dong@intel.com>
The current code should allocate a buffer for the pointer, which later gets
its value from the PCD database, but the current code erroneously uses "="
for this case. Use AllocateCopyPool instead to fix it.
V2: enhanced to directly use AllocateCopyPool to get the PCD value.
V3: enhanced to avoid using a local temp variable.
V4: enhanced to keep the functions that get the PCD values.
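An illustrative before/after of the fix (the variable and PCD names are
hypothetical):

  // Before: only makes the pointer alias the PCD database storage.
  //   SettingBuffer = (UINT8 *) PcdGetPtr (PcdCpuFeaturesSetting);
  // After: the driver gets its own writable copy of the PCD data.
  SettingBuffer = AllocateCopyPool (
                    PcdGetSize (PcdCpuFeaturesSetting),
                    PcdGetPtr (PcdCpuFeaturesSetting)
                    );
  ASSERT (SettingBuffer != NULL);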
Cc: Ruiyu Ni <ruiyu.ni@intel.com>
Cc: Shao Ming <ming.shao@intel.com>
Cc: Kinney Michael D <michael.d.kinney@intel.com>
Contributed-under: TianoCore Contribution Agreement 1.1
Signed-off-by: Eric Dong <eric.dong@intel.com>
Reviewed-by: Kinney Michael D <michael.d.kinney@intel.com>
The UefiCpuLib INF file references itself in its [LibraryClasses]
section, which is not necessary. This patch removes it.
Cc: Ruiyu Ni <ruiyu.ni@intel.com>
Cc: Ming Shao <ming.shao@intel.com>
Contributed-under: TianoCore Contribution Agreement 1.1
Signed-off-by: Eric Dong <eric.dong@intel.com>
Reviewed-by: Ruiyu Ni <ruiyu.ni@intel.com>