If PcdDxeNxMemoryProtectionPolicy is set to enable protection for memory
of type EfiBootServicesData and EfiConventionalMemory, the BIOS will reset
after the timer is initialized and started.
The root cause is that the memory used to hold the exception and interrupt
handlers is allocated as EfiBootServicesData and is therefore marked
non-executable once the NX feature is enabled. This patch fixes it by
allocating memory of type EfiBootServicesCode for those handlers instead.
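A minimal sketch of the allocation change (illustrative only; the buffer
size variable is hypothetical, not the library's exact symbol):

  EFI_STATUS            Status;
  EFI_PHYSICAL_ADDRESS  HandlerBuffer;

  //
  // Request executable memory for the handler templates; before the fix
  // this used EfiBootServicesData, which the NX policy marks non-executable.
  //
  Status = gBS->AllocatePages (
                  AllocateAnyPages,
                  EfiBootServicesCode,
                  EFI_SIZE_TO_PAGES (HandlerTemplateSize),  // HandlerTemplateSize: illustrative
                  &HandlerBuffer
                  );
  ASSERT_EFI_ERROR (Status);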
Cc: Jiewen Yao <jiewen.yao@intel.com>
Cc: Ruiyu Ni <ruiyu.ni@intel.com>
Cc: Eric Dong <eric.dong@intel.com>
Cc: Laszlo Ersek <lersek@redhat.com>
Contributed-under: TianoCore Contribution Agreement 1.1
Signed-off-by: Jian J Wang <jian.j.wang@intel.com>
Reviewed-by: Eric Dong <eric.dong@intel.com>
If PcdDxeNxMemoryProtectionPolicy is set to enable protection for memory
of type EfiBootServicesCode and EfiConventionalMemory, the BIOS will hang
on a page fault exception during MP initialization.
The root cause is that the AP wake-up buffer, which is below 1MB and used
to hold both AP init code and data, is of type EfiConventionalMemory (it is
not actually allocated, to avoid potential conflicts with legacy code), and
is therefore marked as non-executable. During the transition from real
address mode to long mode, the AP init code has to enable paging, which
then causes a page fault exception because the code itself is running in
non-executable memory.
The solution is to split the AP wake-up buffer into two parts: the lower
part stays below 1MB and is shared with the legacy system, while the higher
part is actually allocated memory of type EfiBootServicesCode. The init
code in the memory below 1MB does not enable paging; it just switches to
protected mode and jumps to the higher memory, where the init code enables
paging and switches to long mode.
Cc: Jiewen Yao <jiewen.yao@intel.com>
Cc: Ruiyu Ni <ruiyu.ni@intel.com>
Cc: Eric Dong <eric.dong@intel.com>
Cc: Laszlo Ersek <lersek@redhat.com>
Contributed-under: TianoCore Contribution Agreement 1.1
Signed-off-by: Jian J Wang <jian.j.wang@intel.com>
Reviewed-by: Eric Dong <eric.dong@intel.com>
In 32-bit mode, the BIOS does not create page tables for memory beyond
4GB and therefore cannot handle attribute change requests for that memory.
But the current CpuDxe does not check for this situation and still tries to
complete the request, which causes the attributes of the wrong memory
address to be changed due to the truncating cast from 64-bit to 32-bit.
This patch fixes the issue by checking the end address of the input
memory block and returning EFI_UNSUPPORTED if it is out of range.
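A hedged sketch of the added range check, written as an illustrative helper
rather than the exact CpuDxe code:

STATIC
EFI_STATUS
CheckAddressRange (
  IN EFI_PHYSICAL_ADDRESS  BaseAddress,
  IN UINT64                Length
  )
{
  //
  // On IA32 the page table only covers the lower 4GB, so reject any
  // request whose end address lies above it instead of truncating it.
  //
  if ((sizeof (UINTN) == sizeof (UINT32)) &&
      (Length != 0) &&
      (BaseAddress + Length - 1 > MAX_UINT32)) {
    return EFI_UNSUPPORTED;
  }
  return EFI_SUCCESS;
}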
Cc: Eric Dong <eric.dong@intel.com>
Cc: Laszlo Ersek <lersek@redhat.com>
Cc: Ruiyu Ni <ruiyu.ni@intel.com>
Contributed-under: TianoCore Contribution Agreement 1.1
Signed-off-by: Jian J Wang <jian.j.wang@intel.com>
Reviewed-by: Ruiyu Ni <ruiyu.ni@intel.com>
Commit a2ea6894e6
* UefiCpuPkg/MpInitLib: Fix a bug that AP enters timer INT handler
masked the interrupts in the AP, but it did not unmask the interrupts in
the new BSP when a Switch BSP happens.
This patch fixes that issue.
Contributed-under: TianoCore Contribution Agreement 1.1
Signed-off-by: Ruiyu Ni <ruiyu.ni@intel.com>
Reviewed-by: Jeff Fan <vanjeff_919@hotmail.com>
Cc: Eric Dong <eric.dong@intel.com>
https://bugzilla.tianocore.org/show_bug.cgi?id=849
In V2, use "mov rax, strict qword 0" to replace the hard-coded db.
1. Use the lea instruction to get the address instead of the mov instruction.
2. Use a dummy address as the jmp destination, and add logic to fix up
the address to the absolute address at boot time.
3. In MpFuncs.nasm, use ExchangeInfo to record InitializeFloatingPointUnits.
This is the same approach as MpInitLib.
Contributed-under: TianoCore Contribution Agreement 1.1
Signed-off-by: Liming Gao <liming.gao@intel.com>
Cc: Andrew Fish <afish@apple.com>
Cc: Jiewen Yao <jiewen.yao@intel.com>
Cc: Eric Dong <eric.dong@intel.com>
Cc: Laszlo Ersek <lersek@redhat.com>
Cc: Michael Kinney <michael.d.kinney@intel.com>
Reviewed-by: Jiewen Yao <jiewen.yao@intel.com>
https://bugzilla.tianocore.org/show_bug.cgi?id=849
In V2, use "mov rax, strict qword 0" to replace the hard-coded db.
1. Use the lea instruction to get the address instead of the mov instruction.
2. Use a dummy address as the jmp destination, and add logic to fix up
the address to the absolute address at boot time.
Contributed-under: TianoCore Contribution Agreement 1.1
Signed-off-by: Liming Gao <liming.gao@intel.com>
Cc: Andrew Fish <afish@apple.com>
Cc: Jiewen Yao <jiewen.yao@intel.com>
Cc: Eric Dong <eric.dong@intel.com>
Cc: Laszlo Ersek <lersek@redhat.com>
Cc: Michael Kinney <michael.d.kinney@intel.com>
Reviewed-by: Jiewen Yao <jiewen.yao@intel.com>
https://bugzilla.tianocore.org/show_bug.cgi?id=849
In V2, use "mov rax, strict qword 0" to replace the hard-coded db.
Use a dummy address as the jmp destination, and add logic to fix up
the address to the absolute address at boot time.
Contributed-under: TianoCore Contribution Agreement 1.1
Signed-off-by: Liming Gao <liming.gao@intel.com>
Cc: Andrew Fish <afish@apple.com>
Cc: Jiewen Yao <jiewen.yao@intel.com>
Cc: Eric Dong <eric.dong@intel.com>
Cc: Laszlo Ersek <lersek@redhat.com>
Cc: Michael Kinney <michael.d.kinney@intel.com>
Reviewed-by: Jiewen Yao <jiewen.yao@intel.com>
Enhance the MCA feature dependency check based on SDM pseudocode Example 15-1.
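A rough sketch of the dependency check following SDM Example 15-1
(illustrative, not the library's exact code):

STATIC
BOOLEAN
IsMcaSupported (
  VOID
  )
{
  UINT32  RegEdx;

  //
  // Per SDM Example 15-1, the machine-check architecture requires both
  // the MCE flag (CPUID.01H:EDX[7]) and the MCA flag (CPUID.01H:EDX[14]).
  //
  AsmCpuid (CPUID_VERSION_INFO, NULL, NULL, NULL, &RegEdx);
  return (BOOLEAN)(((RegEdx & BIT7) != 0) && ((RegEdx & BIT14) != 0));
}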
Cc: Eric Dong <eric.dong@intel.com>
Cc: Laszlo Ersek <lersek@redhat.com>
Contributed-under: TianoCore Contribution Agreement 1.1
Signed-off-by: Bell Song <binx.song@intel.com>
Reviewed-by: Eric Dong <eric.dong@intel.com>
AllocateCodePages() is used to allocate the buffer for the IDT range,
and those code pages will be set to RO in SetMemMapAttributes(), so the
code that sets the IDT range to RO in PatchGdtIdtMap() is redundant and
can be removed.
Cc: Jiewen Yao <jiewen.yao@intel.com>
Cc: Jian J Wang <jian.j.wang@intel.com>
Cc: Eric Dong <eric.dong@intel.com>
Cc: Laszlo Ersek <lersek@redhat.com>
Contributed-under: TianoCore Contribution Agreement 1.1
Signed-off-by: Star Zeng <star.zeng@intel.com>
Reviewed-by: Jiewen Yao <jiewen.yao@intel.com>
When StackGuard is enabled on IA32, a double fault (#DF) exception is
reported instead of a page fault (#PF).
This issue does not exist on X64, or on IA32 without StackGuard.
The fix at e4435f710c was incomplete.
The reason is that AllocateCodePages() is used to allocate the buffer for
the GDT and TSS, and those code pages will be set to RO in
SetMemMapAttributes(). But IA32 Stack Guard needs to use a task switch to
switch stacks, which requires writing to the GDT and TSS, so
AllocateCodePages() cannot be used.
This patch uses AllocatePages() instead of AllocateCodePages() to allocate
the buffer for the GDT and TSS when StackGuard is enabled on IA32.
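A sketch of the idea only; the feature flag accessor, page count and buffer
names below are illustrative, not the driver's exact symbols:

  //
  // GDT/TSS must stay writable when IA32 Stack Guard relies on task
  // switches, so fall back to ordinary (writable) pages in that case.
  //
  if (FeaturePcdGet (PcdCpuSmmStackGuard) && (sizeof (UINTN) == sizeof (UINT32))) {
    GdtTssBuffer = AllocatePages (GdtTssPageCount);      // stays read-write
  } else {
    GdtTssBuffer = AllocateCodePages (GdtTssPageCount);  // set to RO later
  }
  ASSERT (GdtTssBuffer != NULL);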
Cc: Jiewen Yao <jiewen.yao@intel.com>
Cc: Jian J Wang <jian.j.wang@intel.com>
Cc: Eric Dong <eric.dong@intel.com>
Cc: Laszlo Ersek <lersek@redhat.com>
Contributed-under: TianoCore Contribution Agreement 1.1
Signed-off-by: Star Zeng <star.zeng@intel.com>
Reviewed-by: Jiewen Yao <jiewen.yao@intel.com>
0 40 f0 100
+---WT--+--UC--+--WT--+-----WB----+----UC----+
When calculating the shortest path from 0 to 100,
MtrrLibCalculateLeastMtrrs() is called to update Vertices.Previous.
When calculating the shortest path from 0 to 40,
MtrrLibCalculateLeastMtrrs() is called recursively to update
Vertices.Previous.
The second call corrupts the Previous values that are needed later.
The patch removes the code that corrupts Previous.
Contributed-under: TianoCore Contribution Agreement 1.1
Signed-off-by: Ruiyu Ni <ruiyu.ni@intel.com>
Cc: Star Zeng <star.zeng@intel.com>
Reviewed-by: Eric Dong <eric.dong@intel.com>
80 A8 B0 B8 C0
+----------WB--------+-UC-+-WT-+-WB-+
For the above memory settings, the current code causes the final MTRR
settings to miss [A8, B0, UC] when the default memory type is UC.
The root cause is that the code only checks the mandatory weight between
A8 and B0, but skips checking the optional weight.
The patch fixes this issue.
Contributed-under: TianoCore Contribution Agreement 1.1
Signed-off-by: Ruiyu Ni <ruiyu.ni@intel.com>
Reviewed-by: Eric Dong <eric.dong@intel.com>
The patch only changes comments and a variable name, so it does not
impact functionality.
Contributed-under: TianoCore Contribution Agreement 1.1
Signed-off-by: Ruiyu Ni <ruiyu.ni@intel.com>
Reviewed-by: Eric Dong <eric.dong@intel.com>
Cc: Star Zeng <star.zeng@intel.com>
The *SetMemoryAttribute*() APIs cannot handle a setting request that
looks like <0, MAX_ADDRESS, Type>. The buggy parameter checking logic
returns Unsupported for this case.
The patch fixes the checking logic to handle such a case.
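A hedged sketch of an overflow-safe form of the check (variable names and
the exact return status are illustrative):

  //
  // A request covering <0, MAX_ADDRESS, Type> is legal, so compute the
  // limit without letting BaseAddress + Length wrap around 64 bits.
  //
  if ((Length == 0) || (BaseAddress > MAX_ADDRESS - (Length - 1))) {
    return RETURN_UNSUPPORTED;
  }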
Contributed-under: TianoCore Contribution Agreement 1.1
Signed-off-by: Ruiyu Ni <ruiyu.ni@intel.com>
Reviewed-by: Eric Dong <eric.dong@intel.com>
The code forgot to initialize the optional weight between adjacent
vertices, which caused a wrong MTRR result to be calculated for some
memory settings.
The logic was incorrectly dropped when converting from the POC code.
The patch adds the initialization back.
Contributed-under: TianoCore Contribution Agreement 1.1
Signed-off-by: Ruiyu Ni <ruiyu.ni@intel.com>
Reviewed-by: Eric Dong <eric.dong@intel.com>
MtrrSetMemoryAttributesInMtrrSettings() lacked the debug messages for
the memory attribute request and status. The patch moves all debug
messages from MtrrSetMemoryAttributeInMtrrSettings() to
MtrrSetMemoryAttributesInMtrrSettings() and refines the debug messages
to carry more information.
Contributed-under: TianoCore Contribution Agreement 1.1
Signed-off-by: Ruiyu Ni <ruiyu.ni@intel.com>
Reviewed-by: Eric Dong <eric.dong@intel.com>
Cc: Star Zeng <star.zeng@intel.com>
The reason is that the DXE part of the initialization will reuse the stack
allocated in the PEI phase if MP was initialized there before. Code is
added to check for this situation and to use the stack base address saved
in the HOB passed from PEI.
Cc: Jiewen Yao <jiewen.yao@intel.com>
Cc: Eric Dong <eric.dong@intel.com>
Cc: Laszlo Ersek <lersek@redhat.com>
Contributed-under: TianoCore Contribution Agreement 1.1
Signed-off-by: Jian J Wang <jian.j.wang@intel.com>
Reviewed-by: Laszlo Ersek <lersek@redhat.com>
Reviewed-by: Eric Dong <eric.dong@intel.com>
As the name suggests, CpuMpData->CpuInfoInHob[0].ApTopOfStack must be
initialized to the top of the stack. But MpInitLibInitialize() passed the
base address of the stack to InitializeApData(), which is not correct.
Although this stack is not used by the BSP, it should be fixed to avoid
misunderstanding and possible future code changes.
Cc: Jiewen Yao <jiewen.yao@intel.com>
Cc: Eric Dong <eric.dong@intel.com>
Cc: Laszlo Ersek <lersek@redhat.com>
Contributed-under: TianoCore Contribution Agreement 1.1
Signed-off-by: Jian J Wang <jian.j.wang@intel.com>
Reviewed-by: Laszlo Ersek <lersek@redhat.com>
Reviewed-by: Eric Dong <eric.dong@intel.com>
When SourceLevelDebug is enabled, an AP randomly executes the DXE core
timer handler logic. The root cause is that interrupts are not masked in
the AP wake-up procedure.
Contributed-under: TianoCore Contribution Agreement 1.1
Signed-off-by: Ruiyu Ni <ruiyu.ni@intel.com>
Reviewed-by: Jeff Fan <vanjeff_919@hotmail.com>
Enhance DumpModuleImageInfo() for page faults with the I/D flag set.
For a page fault with I/D set, the (E/R)IP in SystemContext cannot be used
for DumpModuleImageInfo(); instead, the next IP after the IP that triggered
this page fault can be found on the stack via the (E/R)SP in SystemContext.
IA32 SDM:
— I/D flag (bit 4).
This flag is 1 if the access causing the page-fault exception was
an instruction fetch. This flag describes the access causing the
page-fault exception, not the access rights specified by paging.
The idea comes from SmiPFHandler () in
UefiCpuPkg/PiSmmCpuDxeSmm/Ia32/PageTbl.c and
UefiCpuPkg/PiSmmCpuDxeSmm/X64/PageTbl.c.
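A sketch of the approach for X64 (illustrative; the IA32 path is analogous
and DumpModuleImageInfo() is assumed to take an address parameter):

  if ((SystemContext.SystemContextX64->ExceptionData & BIT4) != 0) {
    //
    // I/D flag set: RIP itself points into the faulting page, so use the
    // return address that the faulting CALL left on the stack instead.
    //
    DumpModuleImageInfo (*(UINTN *)(UINTN)SystemContext.SystemContextX64->Rsp);
  } else {
    DumpModuleImageInfo ((UINTN)SystemContext.SystemContextX64->Rip);
  }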
Cc: Jiewen Yao <jiewen.yao@intel.com>
Cc: Eric Dong <eric.dong@intel.com>
Cc: Laszlo Ersek <lersek@redhat.com>
Contributed-under: TianoCore Contribution Agreement 1.1
Signed-off-by: Star Zeng <star.zeng@intel.com>
Reviewed-by: Jiewen Yao <jiewen.yao@intel.com>
Fix comment typo for MtrrLibApplyFixedMtrrs function
Cc: Eric Dong <eric.dong@intel.com>
Cc: Laszlo Ersek <lersek@redhat.com>
Contributed-under: TianoCore Contribution Agreement 1.1
Signed-off-by: Bell Song <binx.song@intel.com>
Reviewed-by: Eric Dong <eric.dong@intel.com>
Roll back commit 56649f4301.
The original names follow the spec definition.
Contributed-under: TianoCore Contribution Agreement 1.1
Signed-off-by: Jian J Wang <jian.j.wang@intel.com>
Reviewed-by: Liming Gao <liming.gao@intel.com>
With a matching CPU model, the current checking logic always performs the
AsmReadMsr64 operation and only then checks ECX.AESNI[bit 25] = 1. Update
the checking logic to check ECX.AESNI[bit 25] = 1 first and only then
perform the AsmReadMsr64 operation.
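A sketch of the reordered check (a fragment; the MSR handling that follows
is omitted and the local variable names are illustrative):

  UINT32  RegEcx;
  UINT64  FeatureConfig;

  //
  // Query CPUID.01H:ECX.AESNI[bit 25] first; only read the MSR when the
  // processor actually reports the AES-NI feature.
  //
  AsmCpuid (CPUID_VERSION_INFO, NULL, NULL, &RegEcx, NULL);
  if ((RegEcx & BIT25) != 0) {
    FeatureConfig = AsmReadMsr64 (MSR_IA32_FEATURE_CONFIG);
    //
    // ... evaluate the lock / AES-NI disable bits in FeatureConfig as before ...
    //
  }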
Cc: Eric Dong <eric.dong@intel.com>
Cc: Laszlo Ersek <lersek@redhat.com>
Contributed-under: TianoCore Contribution Agreement 1.1
Signed-off-by: Bell Song <binx.song@intel.com>
Reviewed-by: Eric Dong <eric.dong@intel.com>
When CpuCommonFeaturesLib uses RegisterCpuFeaturesLib to register CPU
features, CpuFeaturesData->BitMaskSize has already been initialized, so
delete the redundant PcdGetSize (PcdCpuFeaturesSupport) call in
CpuInitDataInitialize.
Cc: Eric Dong <eric.dong@intel.com>
Cc: Laszlo Ersek <lersek@redhat.com>
Contributed-under: TianoCore Contribution Agreement 1.1
Signed-off-by: Bell Song <binx.song@intel.com>
Reviewed-by: Eric Dong <eric.dong@intel.com>
This reverts commit 5c59537c10.
The current code already has the function IsCpuFeatureSupported to do
the feature validation, so this check logic is no longer needed.
Contributed-under: TianoCore Contribution Agreement 1.1
Signed-off-by: Bell Song <binx.song@intel.com>
Reviewed-by: Eric Dong <eric.dong@intel.com>
Due to a coding style fix of the structure definitions in BaseLib.h, all
code referencing those structures must be updated accordingly.
Cc: Dandan Bi <dandan.bi@intel.com>
Cc: Eric Dong <eric.dong@intel.com>
Cc: Laszlo Ersek <lersek@redhat.com>
Contributed-under: TianoCore Contribution Agreement 1.1
Signed-off-by: Jian J Wang <jian.j.wang@intel.com>
Reviewed-by: Dandan Bi <dandan.bi@intel.com>
An AP has its own stack for code execution. If PcdCpuStackGuard is enabled,
the page at the bottom of the AP's stack will be disabled (marked NOT
PRESENT) to catch stack overflows. This requires PcdCpuApStackSize to be
set to more than one page of memory.
Cc: Jiewen Yao <jiewen.yao@intel.com>
Cc: Eric Dong <eric.dong@intel.com>
Cc: Laszlo Ersek <lersek@redhat.com>
Contributed-under: TianoCore Contribution Agreement 1.1
Signed-off-by: Jian J Wang <jian.j.wang@intel.com>
Reviewed-by: Jiewen Yao <jiewen.yao@intel.com>
Change GetSupportPcds and GetConfigurationPcds to be singular
Cc: Eric Dong <eric.dong@intel.com>
Cc: Laszlo Ersek <lersek@redhat.com>
Contributed-under: TianoCore Contribution Agreement 1.1
Signed-off-by: Bell Song <binx.song@intel.com>
Reviewed-by: Eric Dong <eric.dong@intel.com>
Reviewed-by: Laszlo Ersek <lersek@redhat.com>
V2:
Update the function name and add a more detailed description.
V1:
Check and assert on invalid RegisterCpuFeature function parameters.
Cc: Eric Dong <eric.dong@intel.com>
Cc: Laszlo Ersek <lersek@redhat.com>
Contributed-under: TianoCore Contribution Agreement 1.1
Signed-off-by: Bell Song <binx.song@intel.com>
Reviewed-by: Eric Dong <eric.dong@intel.com>
Reviewed-by: Ruiyu Ni <ruiyu.ni@intel.com>
Rename SmmEndOfS3ResumeProtocolGuid to EndOfS3ResumeGuid, as the GUID
may be used to install a PPI in the future to notify PEI phase code.
The references in UefiCpuPkg are also updated.
Cc: Jiewen Yao <jiewen.yao@intel.com>
Cc: Eric Dong <eric.dong@intel.com>
Cc: Laszlo Ersek <lersek@redhat.com>
Contributed-under: TianoCore Contribution Agreement 1.1
Signed-off-by: Star Zeng <star.zeng@intel.com>
Reviewed-by: Jiewen Yao <jiewen.yao@intel.com>
Reviewed-by: Laszlo Ersek <lersek@redhat.com>
One of the functions of CpuDxe is to update memory paging attributes.
If page table protection is applied, it must be disabled temporarily before
any attribute update and enabled again afterwards.
This patch uses the same approach as DxeIpl to allocate page table memory
from a reserved memory pool, which helps to reduce potential "split"
operations and recursive calls of SetMemorySpaceAttributes().
Laszlo (lersek@redhat.com) did a regression test on the QEMU virtual
platform with an intermediate version of this patch series. The details can
be found at
https://lists.01.org/pipermail/edk2-devel/2017-December/018625.html
There are a few changes after his work.
Cc: Jiewen Yao <jiewen.yao@intel.com>
Cc: Star Zeng <star.zeng@intel.com>
Cc: Eric Dong <eric.dong@intel.com>
Cc: Ruiyu Ni <ruiyu.ni@intel.com>
Contributed-under: TianoCore Contribution Agreement 1.1
Signed-off-by: Jian J Wang <jian.j.wang@intel.com>
Reviewed-by: Jiewen Yao <jiewen.yao@intel.com>
When printing the ASCII form of the memory attribute in debug messages,
%s was used, but %a should be used.
The patch additionally changes %x to %r for EFI_STATUS.
The patch does not impact the functionality of MtrrLib; it is just a
debug message fix.
Contributed-under: TianoCore Contribution Agreement 1.1
Signed-off-by: Ruiyu Ni <ruiyu.ni@intel.com>
Reviewed-by: Eric Dong <eric.dong@intel.com>
Cc: Ming Shao <ming.shao@intel.com>
In the current MP implementation, the BSP and APs share the same exception
configuration. The stack switch required by the Stack Guard feature needs
the BSP and APs to have their own configurations. This patch adds code to
have the BSP and APs do exception handler initialization separately.
Since an AP is not supposed to do memory allocation, all memory needed to
set up stack switch is reserved by the BSP and passed to the APs via the
new API
EFI_STATUS
EFIAPI
InitializeCpuExceptionHandlersEx (
IN EFI_VECTOR_HANDOFF_INFO *VectorInfo OPTIONAL,
IN CPU_EXCEPTION_INIT_DATA *InitData OPTIONAL
);
The following two new PCDs are introduced to configure how to set up new
stacks for the specified exception handlers.
gUefiCpuPkgTokenSpaceGuid.PcdCpuStackSwitchExceptionList
gUefiCpuPkgTokenSpaceGuid.PcdCpuKnownGoodStackSize
Cc: Eric Dong <eric.dong@intel.com>
Cc: Laszlo Ersek <lersek@redhat.com>
Cc: Jiewen Yao <jiewen.yao@intel.com>
Cc: Michael Kinney <michael.d.kinney@intel.com>
Suggested-by: Ayellet Wolman <ayellet.wolman@intel.com>
Contributed-under: TianoCore Contribution Agreement 1.1
Signed-off-by: Jian J Wang <jian.j.wang@intel.com>
Reviewed-by: Jeff Fan <vanjeff_919@hotmail.com>
Reviewed-by: Jiewen.yao@intel.com
In the current implementation of the CPU MP service, an AP is initialized
with data copied from the BSP. The stack switch required by the Stack Guard
feature needs different GDTs, IDTs and task gates for each logical
processor. This patch adds GDTR, IDTR and TR to the structure
CPU_VOLATILE_REGISTERS, and the related code to the save and restore
methods. This makes sure that any changes to the GDT, IDT and task gate of
an AP are not overwritten by the BSP settings.
Cc: Eric Dong <eric.dong@intel.com>
Cc: Laszlo Ersek <lersek@redhat.com>
Cc: Jiewen Yao <jiewen.yao@intel.com>
Suggested-by: Ayellet Wolman <ayellet.wolman@intel.com>
Contributed-under: TianoCore Contribution Agreement 1.1
Signed-off-by: Jian J Wang <jian.j.wang@intel.com>
Reviewed-by: Jeff Fan <vanjeff_919@hotmail.com>
Reviewed-by: Jiewen.yao@intel.com
If Stack Guard is enabled and a stack overflow actually happens during
boot, a page fault exception is triggered. Because the stack is exhausted,
the exception handler, which shares the stack with normal UEFI drivers,
cannot be executed and cannot dump the processor information.
Without that information, it is very difficult for BIOS developers to
locate the root cause of the stack overflow. And without a workable stack,
the developer cannot even single-step the UEFI driver with a JTAG debugger.
To make sure the exception handler can execute normally after a stack
overflow, we need separate stacks for exception handlers in case the
normal stack is unusable.
IA processors allow switching to a new stack while handling interrupts and
exceptions, but X64 and IA32 provide different ways to do it. X64 provides
the interrupt stack table (IST), which allows up to 7 different exceptions
to have a new stack for their handlers. IA32 does not have the IST
mechanism and can only use a task gate, since a task switch allows loading
a new stack through its task-state segment (TSS).
The new API, InitializeCpuExceptionHandlersEx, is implemented to complete
the extra initialization for the exception handler stack switch. Setting up
stack switch requires allocating new memory for the new stacks, the new GDT
table and the task-state segments, but the initialization method is called
in different phases that have no consistent way to reserve that memory;
therefore, the new API allows passing in the reserved resources to complete
the extra work. This cannot be done with the original
InitializeCpuExceptionHandlers.
Considering exception handler initialization in the MP situation, this new
API is also necessary because an AP is not supposed to allocate memory, so
the memory needed for stack switch has to be reserved by the BSP before
waking up the APs and then passed to InitializeCpuExceptionHandlersEx
afterwards.
Since the Stack Guard feature is available only in the DXE phase at this
time, the new API is fully implemented for DXE only. Other phases implement
a dummy version which just calls InitializeCpuExceptionHandlers(), as
sketched below.
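A minimal sketch of the dummy instance for the non-DXE phases (the
prototype matches the API quoted earlier; the body simply forwards the
call):

EFI_STATUS
EFIAPI
InitializeCpuExceptionHandlersEx (
  IN EFI_VECTOR_HANDOFF_INFO  *VectorInfo OPTIONAL,
  IN CPU_EXCEPTION_INIT_DATA  *InitData OPTIONAL
  )
{
  //
  // No stack switch support in this phase; fall back to the existing
  // initialization routine.
  //
  return InitializeCpuExceptionHandlers (VectorInfo);
}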
Cc: Jiewen Yao <jiewen.yao@intel.com>
Cc: Eric Dong <eric.dong@intel.com>
Cc: Laszlo Ersek <lersek@redhat.com>
Cc: Michael Kinney <michael.d.kinney@intel.com>
Suggested-by: Ayellet Wolman <ayellet.wolman@intel.com>
Contributed-under: TianoCore Contribution Agreement 1.1
Signed-off-by: Jian J Wang <jian.j.wang@intel.com>
Reviewed-by: Jeff Fan <vanjeff_919@hotmail.com>
Reviewed-by: Jiewen.yao@intel.com
Stack switch is required by the Stack Guard feature. The following two PCDs
are introduced to simplify the resource allocation for initializing stack
switch.
gUefiCpuPkgTokenSpaceGuid.PcdCpuStackSwitchExceptionList
gUefiCpuPkgTokenSpaceGuid.PcdCpuKnownGoodStackSize
PcdCpuStackSwitchExceptionList is used to specify which exceptions will
have a separate stack for their handlers. For the Stack Guard feature, #PF
must be specified at least.
PcdCpuKnownGoodStackSize is used to specify the size of the known good
stack for an exception handler. The CPU driver or other drivers should use
this PCD to reserve new stack memory for the exceptions specified by the
PCD above.
Cc: Eric Dong <eric.dong@intel.com>
Cc: Laszlo Ersek <lersek@redhat.com>
Cc: Jiewen Yao <jiewen.yao@intel.com>
Suggested-by: Ayellet Wolman <ayellet.wolman@intel.com>
Contributed-under: TianoCore Contribution Agreement 1.1
Signed-off-by: Jian J Wang <jian.j.wang@intel.com>
Reviewed-by: Jeff Fan <vanjeff_919@hotmail.com>
Reviewed-by: Jiewen.yao@intel.com
SMM profile and static paging cannot be enabled at the same time; this
patch adds a check and comments to make sure of that, as sketched below.
Similar comments are also added for the combination of static paging and
heap guard in SMM.
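A hedged sketch of such a guard (the PCD names are the real ones, but the
exact accessor macros, form and placement of the check in the driver are
not shown here):

  //
  // SMM profile requires a dynamic page table, so it must not be combined
  // with static paging.
  //
  ASSERT (!(FeaturePcdGet (PcdCpuSmmProfileEnable) &&
            PcdGetBool (PcdCpuSmmStaticPageTable)));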
Cc: Jiewen Yao <jiewen.yao@intel.com>
Cc: Eric Dong <eric.dong@intel.com>
Cc: Ruiyu Ni <ruiyu.ni@intel.com>
Cc: Jian J Wang <jian.j.wang@intel.com>
Cc: Laszlo Ersek <lersek@redhat.com>
Contributed-under: TianoCore Contribution Agreement 1.1
Signed-off-by: Star Zeng <star.zeng@intel.com>
Reviewed-by: Eric Dong <eric.dong@intel.com>
Reviewed-by: Laszlo Ersek <lersek@redhat.com>
Only call DumpCpuContext() in the error case; otherwise there will be too
many debug messages from DumpCpuContext() when the SmmProfile feature is
enabled by setting PcdCpuSmmProfileEnable to TRUE. Those debug messages are
not needed for the SmmProfile feature, as it records that information to a
buffer for later dumping.
Cc: Jiewen Yao <jiewen.yao@intel.com>
Cc: Eric Dong <eric.dong@intel.com>
Cc: Laszlo Ersek <lersek@redhat.com>
Cc: Ruiyu Ni <ruiyu.ni@intel.com>
Contributed-under: TianoCore Contribution Agreement 1.1
Signed-off-by: Star Zeng <star.zeng@intel.com>
Reviewed-by: Jiewen Yao <jiewen.yao@intel.com>
Reviewed-by: Eric Dong <eric.dong@intel.com>
Acked-by: Laszlo Ersek <lersek@redhat.com>
REF: https://bugzilla.tianocore.org/show_bug.cgi?id=540
To consume the FIT table for Microcode update,
UefiCpuPkg/Feature/Capsule/MicrocodeUpdateDxe
needs to be updated to consume
IntelSiliconPkg/Include/IndustryStandard/FirmwareInterfaceTable.h,
but UefiCpuPkg cannot depend on IntelSiliconPkg.
Since the Microcode update feature is specific to Intel,
we can first move the Microcode update feature code from
UefiCpuPkg to IntelSiliconPkg [first step], then update
the code to consume the FIT table [second step].
This patch series is for the first step.
Note: There is no code change in this patch, just a move.
The next patch will update MicrocodeUpdate to build with the package.
Cc: Jiewen Yao <jiewen.yao@intel.com>
Cc: Michael D Kinney <michael.d.kinney@intel.com>
Cc: Eric Dong <eric.dong@intel.com>
Cc: Laszlo Ersek <lersek@redhat.com>
Regression-tested-by: Yonghong Zhu <yonghong.zhu@intel.com>
Contributed-under: TianoCore Contribution Agreement 1.1
Signed-off-by: Star Zeng <star.zeng@intel.com>
Reviewed-by: Jiewen Yao <jiewen.yao@intel.com>
Reviewed-by: Eric Dong <eric.dong@intel.com>
Acked-by: Laszlo Ersek <lersek@redhat.com>
More than one RT_CODE memory entry might cause boot problems for some
old OSes. This patch fixes the issue to keep OS compatibility as much
as possible.
For more detailed information, please refer to
https://bugzilla.tianocore.org/show_bug.cgi?id=753
Laszlo did a thorough test on the OVMF emulated platform. The details can
be found at
https://bugzilla.tianocore.org/show_bug.cgi?id=753#c10
Cc: Eric Dong <eric.dong@intel.com>
Cc: Jiewen Yao <jiewen.yao@intel.com>
Cc: Star Zeng <star.zeng@intel.com>
Cc: Laszlo Ersek <lersek@redhat.com>
Cc: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Contributed-under: TianoCore Contribution Agreement 1.1
Signed-off-by: Jian J Wang <jian.j.wang@intel.com>
Tested-by: Laszlo Ersek <lersek@redhat.com>
Reviewed-by: Star Zeng <star.zeng@intel.com>
Reviewed-by: Laszlo Ersek <lersek@redhat.com>
"Main.asm" calls TransitionFromReal16To32BitFlat (and does some other
things) before it jumps to the platform's SEC entry point.
TransitionFromReal16To32BitFlat enters big real mode, and sets the DS, ES,
FS, GS, and SS registers to offset ("selector") LINEAR_SEL in the GDT
(defined in "UefiCpuPkg/ResetVector/Vtf0/Ia16/Real16ToFlat32.asm"). The
GDT entry ("segment descriptor") at LINEAR_SEL defines a segment covering
the full 32-bit address space, meant for "read/write data".
Document this fact for all the affected segment registers, as output
parameters for TransitionFromReal16To32BitFlat, saying "Selector allowing
flat access to all addresses".
For 64-bit SEC, "Main.asm" calls Transition32FlatTo64Flat in addition,
between calling TransitionFromReal16To32BitFlat and jumping to the SEC
entry point. Transition32FlatTo64Flat enters long mode. In long mode,
segmentation is largely ignored:
- all segments are considered flat (covering the whole 64-bit address
space),
- with the (possible) exception of FS and GS, whose bases can still be
changed, albeit with new methods, not through the GDT. (Through the
IA32_FS_BASE and IA32_GS_BASE Model Specific Registers, and/or the
WRFSBASE, WRGSBASE and SWAPGS instructions.)
Thus, document the segment registers with the same "Selector allowing flat
access to all addresses" language on the "Main.asm" level too, since that
is valid for both 32-bit and 64-bit modes.
(Technically, "Main.asm" does not return, but RBP/EBP, passed similarly to
the SEC entry point, is already documented as an output parameter.)
Cc: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Cc: Eric Dong <eric.dong@intel.com>
Cc: Jordan Justen <jordan.l.justen@intel.com>
Suggested-by: Jordan Justen <jordan.l.justen@intel.com>
Contributed-under: TianoCore Contribution Agreement 1.1
Signed-off-by: Laszlo Ersek <lersek@redhat.com>
Acked-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Reviewed-by: Jordan Justen <jordan.l.justen@intel.com>
Heap guard makes use of the paging mechanism to implement its
functionality, but there is no protocol or library available to change page
attributes in SMM mode. A new protocol, gEdkiiSmmMemoryAttributeProtocolGuid,
is introduced to make that possible. This protocol provides three
interfaces:
struct _EDKII_SMM_MEMORY_ATTRIBUTE_PROTOCOL {
  EDKII_SMM_GET_MEMORY_ATTRIBUTES    GetMemoryAttributes;
  EDKII_SMM_SET_MEMORY_ATTRIBUTES    SetMemoryAttributes;
  EDKII_SMM_CLEAR_MEMORY_ATTRIBUTES  ClearMemoryAttributes;
};
Since the heap guard feature needs to update page attributes, the page
table must not be set to read-only when heap guard is enabled for SMM
mode; otherwise this feature cannot work.
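A hedged usage sketch of the new protocol from an SMM driver (the parameter
list follows the declarations above; the base address and attribute values
are purely illustrative):

  EDKII_SMM_MEMORY_ATTRIBUTE_PROTOCOL  *SmmMemoryAttribute;
  EFI_STATUS                           Status;

  Status = gSmst->SmmLocateProtocol (
                    &gEdkiiSmmMemoryAttributeProtocolGuid,
                    NULL,
                    (VOID **)&SmmMemoryAttribute
                    );
  if (!EFI_ERROR (Status)) {
    //
    // e.g. mark one guard page as not-present around a guarded allocation.
    //
    Status = SmmMemoryAttribute->SetMemoryAttributes (
                                   SmmMemoryAttribute,
                                   GuardPageAddress,     // illustrative
                                   EFI_PAGE_SIZE,
                                   EFI_MEMORY_RP
                                   );
  }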
Cc: Eric Dong <eric.dong@intel.com>
Cc: Jiewen Yao <jiewen.yao@intel.com>
Cc: Laszlo Ersek <lersek@redhat.com>
Cc: Ruiyu Ni <ruiyu.ni@intel.com>
Suggested-by: Ayellet Wolman <ayellet.wolman@intel.com>
Contributed-under: TianoCore Contribution Agreement 1.1
Signed-off-by: Jian J Wang <jian.j.wang@intel.com>
Reviewed-by: Jiewen Yao <jiewen.yao@intel.com>
Regression-tested-by: Laszlo Ersek <lersek@redhat.com>
The heap guard feature frequently updates page attributes, and the debug
messages in the CpuDxe driver slow down boot performance noticeably. Change
the debug level to DEBUG_VERBOSE to reduce the message output for the
normal debug configuration.
Cc: Eric Dong <eric.dong@intel.com>
Cc: Jiewen Yao <jiewen.yao@intel.com>
Suggested-by: Ayellet Wolman <ayellet.wolman@intel.com>
Contributed-under: TianoCore Contribution Agreement 1.1
Signed-off-by: Jian J Wang <jian.j.wang@intel.com>
Reviewed-by: Jiewen Yao <jiewen.yao@intel.com>
Regression-tested-by: Laszlo Ersek <lersek@redhat.com>