The PEI Stack Guard feature needs to enable paging before DxeIpl. This
might cause a #GP fault during the transition from 32-bit PEI to 64-bit
DXE, because the code writes the CR3 register with a PML4 page table
while the processor still has PAE paging enabled.
Simply disabling paging before updating CR3 resolves this conflict.
There is no such issue for 64-bit PEI, so this change applies only to
the 32-bit code.
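A minimal sketch of the idea, using BaseLib CR intrinsics (PageTables is
an illustrative name for the PML4 base; the real DxeIpl code differs):

  UINTN  Cr0;

  Cr0 = AsmReadCr0 ();
  if ((Cr0 & BIT31) != 0) {
    AsmWriteCr0 (Cr0 & ~(UINTN)BIT31);  // clear CR0.PG before touching CR3
  }
  AsmWriteCr3 ((UINTN) PageTables);     // now safe to load the PML4 base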
Cc: Star Zeng <star.zeng@intel.com>
Cc: Ruiyu Ni <ruiyu.ni@intel.com>
Cc: Jiewen Yao <jiewen.yao@intel.com>
Cc: "Ware, Ryan R" <ryan.r.ware@intel.com>
Contributed-under: TianoCore Contribution Agreement 1.1
Signed-off-by: Jian J Wang <jian.j.wang@intel.com>
Regression-tested-by: Laszlo Ersek <lersek@redhat.com>
Reviewed-by: Eric Dong <eric.dong@intel.com>
In V2, remove GetImageReadFunction() and use PeiImageRead() directly.
The copied PeiImageReadForShadow() function doesn't improve boot
performance, so this patch removes the copy logic to simplify the code.
Contributed-under: TianoCore Contribution Agreement 1.1
Signed-off-by: Liming Gao <liming.gao@intel.com>
Reviewed-by: Star Zeng <star.zeng@intel.com>
Removal rules for IPF source files:
* Remove source files whose path contains "ipf" and that are also
listed in the [Sources.IPF] section of an INF file.
* Remove source files that are listed in the [Components.IPF] section
of a DSC file and not listed in any other [Components] section.
* Remove embedded IPF code guarded by MDE_CPU_IPF.
Removal rules for INF files:
* Remove IPF from the VALID_ARCHITECTURES comment.
* Remove DXE_SAL_DRIVER from LIBRARY_CLASS in the [Defines] section.
* Remove INF files that are only listed in the [Components.IPF] section
of a DSC.
* Remove statements from [BuildOptions] that provide IPF-specific flags.
* Remove any IPF-specific sections.
Removal rules for DEC files:
* Remove the [Includes.IPF] section.
Removal rules for DSC files:
* Remove IPF from SUPPORTED_ARCHITECTURES in the [Defines] section.
* Remove any IPF-specific sections.
* Remove statements from [BuildOptions] that provide IPF-specific flags.
Cc: Star Zeng <star.zeng@intel.com>
Cc: Eric Dong <eric.dong@intel.com>
Cc: Michael D Kinney <michael.d.kinney@intel.com>
Contributed-under: TianoCore Contribution Agreement 1.1
Signed-off-by: Chen A Chen <chen.a.chen@intel.com>
Reviewed-by: Star Zeng <star.zeng@intel.com>
fwvol.c(1572) : warning C4701: potentially uninitialized
local variable 'Status' used
The build failure is caused by commit 0e042d0ad7 for
https://bugzilla.tianocore.org/show_bug.cgi?id=1131
This patch initializes Status to fix the build failure.
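A minimal sketch of the fix shape (the initial value chosen here is
illustrative, not necessarily the one used in fwvol.c):

  EFI_STATUS  Status;

  Status = EFI_NOT_FOUND;  // deterministic initial value removes the C4701 path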
Cc: Dandan Bi <dandan.bi@intel.com>
Cc: Liming Gao <liming.gao@intel.com>
Cc: Jian J Wang <jian.j.wang@intel.com>
Contributed-under: TianoCore Contribution Agreement 1.1
Signed-off-by: Star Zeng <star.zeng@intel.com>
Reviewed-by: Dandan Bi <dandan.bi@intel.com>
REF: https://bugzilla.tianocore.org/show_bug.cgi?id=1131
The PI spec and BaseTools support generating multiple FV images in one
FV file.
This patch updates DxeCore to handle that case.
Cc: Liming Gao <liming.gao@intel.com>
Cc: Jiewen Yao <jiewen.yao@intel.com>
Contributed-under: TianoCore Contribution Agreement 1.1
Signed-off-by: Star Zeng <star.zeng@intel.com>
Reviewed-by: Liming Gao <liming.gao@intel.com>
REF: https://bugzilla.tianocore.org/show_bug.cgi?id=1131
The PI spec and BaseTools support generating multiple FV images in one
FV file.
This patch updates PeiCore to handle that case.
Cc: Liming Gao <liming.gao@intel.com>
Cc: Jiewen Yao <jiewen.yao@intel.com>
Contributed-under: TianoCore Contribution Agreement 1.1
Signed-off-by: Star Zeng <star.zeng@intel.com>
Reviewed-by: Liming Gao <liming.gao@intel.com>
Call BS.AllocatePages in a DXE driver and then call SMM FreePages with
the address of the buffer allocated by that DXE driver: SMM FreePages
succeeds and adds a non-SMRAM range into the SMM heap list. This is not
the expected behavior. SMM FreePages should return an error in this
case and not free the pages.
BZ: https://bugzilla.tianocore.org/show_bug.cgi?id=1098
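A sketch of the intended behavior; SmmIsAddressInsideSmram() is a
hypothetical helper standing in for whatever SMRAM range check the core
performs, and the exact error status may differ:

  if (!SmmIsAddressInsideSmram (Memory, EFI_PAGES_TO_SIZE (NumberOfPages))) {
    return EFI_NOT_FOUND;  // never fold non-SMRAM pages into the SMM heap list
  }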
Change-Id: Ie5ffa1ac62c558aa418a8a3d7d0e8158b846e13b
Cc: Star Zeng <star.zeng@intel.com>
Contributed-under: TianoCore Contribution Agreement 1.1
Signed-off-by: Eric Dong <eric.dong@intel.com>
Reviewed-by: Star Zeng <star.zeng@intel.com>
The functions that are never called have been removed.
They are IsImageInsideSmram, FindImageRecord, SmmRemoveImageRecord,
SmmMemoryAttributesTableConsistencyCheck, DumpSmmMemoryMapEntry,
SmmMemoryMapConsistencyCheckRange, SmmMemoryMapConsistencyCheck,
DumpSmmMemoryMap, ClearGuardMapBit, SetGuardMapBit, AdjustMemoryA,
AdjustMemoryS, IsHeadGuard and IsTailGuard.
FindImageRecord() is called by SmmRemoveImageRecord(); however,
nothing calls SmmRemoveImageRecord().
SmmMemoryMapConsistencyCheckRange() is called by
SmmMemoryMapConsistencyCheck(); however, nothing calls
SmmMemoryMapConsistencyCheck().
https://bugzilla.tianocore.org/show_bug.cgi?id=1062
v2: append the following to the commit message.
- FindImageRecord() is called by SmmRemoveImageRecord(); however,
nothing calls SmmRemoveImageRecord().
- SmmMemoryMapConsistencyCheckRange() is called by
SmmMemoryMapConsistencyCheck(); however, nothing calls
SmmMemoryMapConsistencyCheck().
Cc: Star Zeng <star.zeng@intel.com>
Cc: Eric Dong <eric.dong@intel.com>
Contributed-under: TianoCore Contribution Agreement 1.1
Signed-off-by: shenglei <shenglei.zhang@intel.com>
Reviewed-by: Jian J Wang <jian.j.wang@intel.com>
Reviewed-by: Laszlo Ersek <lersek@redhat.com>
Reviewed-by: Star Zeng <star.zeng@intel.com>
The functions that are never called have been removed.
They are ClearGuardMapBit, SetGuardMapBit, IsHeadGuard,
IsTailGuard and CoreEfiNotAvailableYetArg0.
https://bugzilla.tianocore.org/show_bug.cgi?id=1062
Cc: Star Zeng <star.zeng@intel.com>
Cc: Eric Dong <eric.dong@intel.com>
Contributed-under: TianoCore Contribution Agreement 1.1
Signed-off-by: shenglei <shenglei.zhang@intel.com>
Reviewed-by: Ruiyu Ni <ruiyu.ni@intel.com>
Reviewed-by: Jian J Wang <jian.j.wang@intel.com>
Reviewed-by: Laszlo Ersek <lersek@redhat.com>
Reviewed-by: Star Zeng <star.zeng@intel.com>
We want to provide precise info in the MemAttribTable to both the OS
and SMM, and SMM only gets the info at EndOfDxe.
So we do not update the RtCode entry in EndOfDxe.
The impact is that if a 3rd-party OPROM is runtime, it cannot be
executed in the UEFI runtime phase.
Currently, we do not see a compatibility issue, because the only
runtime OPROM we have found so far is UNDI, and a UEFI OS will not use
the UNDI interface at OS runtime.
Contributed-under: TianoCore Contribution Agreement 1.1
Signed-off-by: Jiewen Yao <jiewen.yao@intel.com>
Reviewed-by: Star Zeng <star.zeng@intel.com>
So that SMM can consume it to set page protection for the UEFI runtime
pages with the EFI_MEMORY_RO attribute.
Contributed-under: TianoCore Contribution Agreement 1.1
Signed-off-by: Jiewen Yao <jiewen.yao@intel.com>
Reviewed-by: Star Zeng <star.zeng@intel.com>
Add an example case for the usage of
PERF_EVENT_SIGNAL_BEGIN/PERF_EVENT_SIGNAL_END
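A representative usage shape, assuming the macros wrap the signaling of
an event group (the GUID and the signaling call below are illustrative):

  PERF_EVENT_SIGNAL_BEGIN (&gEfiEventReadyToBootGuid);
  EfiSignalEventReadyToBoot ();
  PERF_EVENT_SIGNAL_END (&gEfiEventReadyToBootGuid);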
Cc: Liming Gao <liming.gao@intel.com>
Cc: Star Zeng <star.zeng@intel.com>
Contributed-under: TianoCore Contribution Agreement 1.1
Signed-off-by: Dandan Bi <dandan.bi@intel.com>
Reviewed-by: Liming Gao <liming.gao@intel.com>
The current code assumes the PpiDescriptor and the Ppi it points to are
in the same range (heap/stack/hole).
This patch removes that assumption.
The descriptor needs to be converted first; this is also handled by
this patch.
Cc: Liming Gao <liming.gao@intel.com>
Cc: Qing Huang <qing.huang@intel.com>
Contributed-under: TianoCore Contribution Agreement 1.1
Signed-off-by: Star Zeng <star.zeng@intel.com>
Reviewed-by: Liming Gao <liming.gao@intel.com>
The perf measurement entry in the SmmEntryPoint function doesn't have
significant meaning, so remove it now.
Cc: Liming Gao <liming.gao@intel.com>
Cc: Star Zeng <star.zeng@intel.com>
Contributed-under: TianoCore Contribution Agreement 1.1
Signed-off-by: Dandan Bi <dandan.bi@intel.com>
Reviewed-by: Liming Gao <liming.gao@intel.com>
1. Do not use tab characters
2. No trailing white space at the end of any line
3. All files must end with CRLF
Contributed-under: TianoCore Contribution Agreement 1.1
Signed-off-by: Liming Gao <liming.gao@intel.com>
Reviewed-by: Star Zeng <star.zeng@intel.com>
Replace old Perf macros with the new added ones.
Cc: Liming Gao <liming.gao@intel.com>
Cc: Star Zeng <star.zeng@intel.com>
Cc: Eric Dong <eric.dong@intel.com>
Contributed-under: TianoCore Contribution Agreement 1.1
Signed-off-by: Dandan Bi <dandan.bi@intel.com>
Reviewed-by: Liming Gao <liming.gao@intel.com>
The CpuDxe driver has been updated to be able to access the DXE page
table in SMM mode, which means Heap Guard can get correct memory paging
attributes in either environment. It's no longer necessary to exclude
SMM when detecting Heap Guard feature support.
Cc: Star Zeng <star.zeng@intel.com>
Cc: Eric Dong <eric.dong@intel.com>
Cc: Jiewen Yao <jiewen.yao@intel.com>
Cc: Ruiyu Ni <ruiyu.ni@intel.com>
Contributed-under: TianoCore Contribution Agreement 1.1
Signed-off-by: Jian J Wang <jian.j.wang@intel.com>
Reviewed-by: Star Zeng <star.zeng@intel.com>
NASM has replaced the ASM and S files.
1. Remove ASM files from all modules.
2. Remove S files from the drivers only.
3. Per https://bugzilla.tianocore.org/show_bug.cgi?id=881, once NASM is
updated, S files can also be removed from the libraries.
Contributed-under: TianoCore Contribution Agreement 1.1
Signed-off-by: Liming Gao <liming.gao@intel.com>
Reviewed-by: Star Zeng <star.zeng@intel.com>
Until now, possible errors returned from processing the boot firmware
volume were not checked, which could cause misbehavior in later boot
stages. Add the relevant assert.
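A sketch of the shape of the check (the processing call and variable
names are illustrative, not the literal PeiCore code):

  Status = FvPpi->ProcessVolume (
                    FvPpi,
                    SecCoreData->BootFirmwareVolumeBase,
                    (UINTN) FvHeader->FvLength,
                    &FvHandle
                    );
  ASSERT_EFI_ERROR (Status);  // a bad BFV handle would break all later dispatch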
Contributed-under: TianoCore Contribution Agreement 1.1
Signed-off-by: Marcin Wojtas <mw@semihalf.com>
Signed-off-by: Jan Dabros <jsd@semihalf.com>
Reviewed-by: Liming Gao <liming.gao@intel.com>
Reviewed-by: Star Zeng <star.zeng@intel.com>
A PEIM in any cached FV image may be in "registered for shadow" status.
The current logic, which only tracks CurrentPeimFvCount, is not enough
to cover this.
Contributed-under: TianoCore Contribution Agreement 1.1
Signed-off-by: Liming Gao <liming.gao@intel.com>
Cc: Star Zeng <star.zeng@intel.com>
Reviewed-by: Star Zeng <star.zeng@intel.com>
This patch fixes an issue introduced by commits
5b91bf82c6
and
0c9f2cb10b
This issue only happens if PcdDxeNxMemoryProtectionPolicy is enabled
for reserved memory, which marks SMM RAM as NX (non-executable) during
DXE core initialization. The SMM IPL driver will then unset the NX
attribute for SMM RAM to allow loading and running the SMM core and
drivers.
But the above commits cause that unset operation of the NX attribute to
fail, because SMM RAM has a zero cache attribute (MRC code always
assigns a 0 attribute to reserved memory), which makes the GCD internal
method ConverToCpuArchAttributes() return a 0 attribute; that is taken
as an invalid CPU paging attribute, so the call to
gCpu->SetMemoryAttributes() is skipped.
The solution is to make use of existing functionality in PiSmmIpl to
make sure one cache attribute is set for SMM RAM. For performance
reasons, PiSmmIpl always tries to set SMM RAM to write-back. But there
is a hole in that code: setting the write-back attribute fails because
the region has no corresponding cache capabilities. This patch adds the
necessary cache capabilities before setting the corresponding
attributes.
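A simplified sketch of the approach (descriptor retrieval and status
handling are trimmed; Base and Size stand for the SMRAM range being
configured):

  Status = gDS->SetMemorySpaceCapabilities (
                  Base,
                  Size,
                  MemDesc.Capabilities | EFI_MEMORY_WB | EFI_MEMORY_WT | EFI_MEMORY_UC
                  );
  if (!EFI_ERROR (Status)) {
    gDS->SetMemorySpaceAttributes (Base, Size, EFI_MEMORY_WB);
  }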
Cc: Star Zeng <star.zeng@intel.com>
Cc: Eric Dong <eric.dong@intel.com>
Cc: Jiewen Yao <jiewen.yao@intel.com>
Cc: Ruiyu Ni <ruiyu.ni@intel.com>
Cc: Michael D Kinney <michael.d.kinney@intel.com>
Contributed-under: TianoCore Contribution Agreement 1.1
Signed-off-by: Jian J Wang <jian.j.wang@intel.com>
Reviewed-by: Star Zeng <star.zeng@intel.com>
The Heap Guard feature needs enough memory and paging to work;
otherwise calling SetMemoryAttributes() to change page attributes will
fail. This patch adds the necessary check of the result of calling
SetMemoryAttributes(), which helps users debug problems when enabling
this feature.
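A sketch of the kind of check being added (the protocol call, address
and attribute below are illustrative):

  Status = gCpu->SetMemoryAttributes (gCpu, BaseAddress, EFI_PAGES_TO_SIZE (1), EFI_MEMORY_RP);
  if (EFI_ERROR (Status)) {
    DEBUG ((DEBUG_ERROR, "Heap Guard: failed to mark Guard page not-present: %r\n", Status));
  }
  ASSERT_EFI_ERROR (Status);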
Cc: Star Zeng <star.zeng@intel.com>
Cc: Eric Dong <eric.dong@intel.com>
Cc: Jiewen Yao <jiewen.yao@intel.com>
Cc: Ruiyu Ni <ruiyu.ni@intel.com>
Contributed-under: TianoCore Contribution Agreement 1.1
Signed-off-by: Jian J Wang <jian.j.wang@intel.com>
Reviewed-by: Star Zeng <star.zeng@intel.com>
The Heap Guard feature needs enough memory and paging to work;
otherwise calling SetMemoryAttributes() to change page attributes will
fail. This patch adds the necessary check of the result of calling
SetMemoryAttributes(), which helps users debug problems when enabling
this feature.
Cc: Star Zeng <star.zeng@intel.com>
Cc: Eric Dong <eric.dong@intel.com>
Cc: Jiewen Yao <jiewen.yao@intel.com>
Cc: Ruiyu Ni <ruiyu.ni@intel.com>
Contributed-under: TianoCore Contribution Agreement 1.1
Signed-off-by: Jian J Wang <jian.j.wang@intel.com>
Reviewed-by: Star Zeng <star.zeng@intel.com>
It is caused by commit 0c9f2cb10b
and is a false positive.
Initialize CpuArchAttributes to suppress incorrect
compiler/analyzer warnings.
Cc: Dandan Bi <dandan.bi@intel.com>
Cc: Jiewen Yao <jiewen.yao@intel.com>
Cc: Michael D Kinney <michael.d.kinney@intel.com>
Contributed-under: TianoCore Contribution Agreement 1.1
Signed-off-by: Star Zeng <star.zeng@intel.com>
Reviewed-by: Dandan Bi <dandan.bi@intel.com>
This patch fixes an issue with VlvTbltDevicePkg introduced
by commit 5b91bf82c6.
The history is as follows.
To support the heap guard feature, commit 14dde9e903
added support for SetMemorySpaceAttributes() to handle page attributes,
but after that, a combination of CPU arch attributes and other
attributes (for example, UC + RUNTIME) was no longer allowed. That was
a regression.
Then commit 5b91bf82c6 fixed the regression,
and we thought 0 CPU arch attributes could be used to clear the CPU
arch attributes, so a value of 0 was allowed to be sent to
gCpu->SetMemoryAttributes().
But some CPU driver implementations may return an error for 0 CPU arch
attributes. That breaks the case where a caller calls
SetMemorySpaceAttributes() with only non-CPU-arch attributes (for
example, RUNTIME) and has no intention of clearing the CPU arch
attributes.
This patch filters out the call to gCpu->SetMemoryAttributes()
when the requested CPU arch attributes are 0. It also removes the
#define INVALID_CPU_ARCH_ATTRIBUTES that is no longer used.
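A sketch of the filtering (the GCD helper's signature and surrounding
code are simplified):

  CpuArchAttributes = ConverToCpuArchAttributes (Attributes);
  if (CpuArchAttributes != 0) {
    Status = gCpu->SetMemoryAttributes (
                     gCpu,
                     BaseAddress,
                     Length,
                     CpuArchAttributes
                     );
  }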
Cc: Heyi Guo <heyi.guo@linaro.org>
Cc: Yi Li <phoenix.liyi@huawei.com>
Cc: Renhao Liang <liangrenhao@huawei.com>
Cc: Star Zeng <star.zeng@intel.com>
Cc: Eric Dong <eric.dong@intel.com>
Cc: Liming Gao <liming.gao@intel.com>
Cc: Jian J Wang <jian.j.wang@intel.com>
Cc: Ruiyu Ni <ruiyu.ni@intel.com>
Signed-off-by: Michael D Kinney <michael.d.kinney@intel.com>
Signed-off-by: Star Zeng <star.zeng@intel.com>
Contributed-under: TianoCore Contribution Agreement 1.1
Reviewed-by: Jiewen Yao <jiewen.yao@intel.com>
Reviewed-by: Michael D Kinney <michael.d.kinney@intel.com>
For gDS->SetMemorySpaceAttributes(), when a caller passes a combined
memory attribute including a CPU arch attribute and other attributes,
like EFI_MEMORY_RUNTIME, ConverToCpuArchAttributes() returns
INVALID_CPU_ARCH_ATTRIBUTES and the page/cache attributes of the
specified memory space are not set.
We don't see any reason to forbid combining CPU arch attributes and
non-CPU-arch attributes when calling gDS->SetMemorySpaceAttributes(),
so we remove the check code in ConverToCpuArchAttributes(); the
remaining code is enough to grab the bits of interest for
Cpu->SetMemoryAttributes().
Contributed-under: TianoCore Contribution Agreement 1.1
Signed-off-by: Heyi Guo <heyi.guo@linaro.org>
Signed-off-by: Yi Li <phoenix.liyi@huawei.com>
Signed-off-by: Renhao Liang <liangrenhao@huawei.com>
Reviewed-by: Star Zeng <star.zeng@intel.com>
Cc: Star Zeng <star.zeng@intel.com>
Cc: Eric Dong <eric.dong@intel.com>
Cc: Michael D Kinney <michael.d.kinney@intel.com>
Cc: Liming Gao <liming.gao@intel.com>
Cc: Jian J Wang <jian.j.wang@intel.com>
Cc: Ruiyu Ni <ruiyu.ni@intel.com>
Within the function CoreExitBootServices(), this commit moves the call
to:
MemoryProtectionExitBootServicesCallback();
before:
SaveAndSetDebugTimerInterrupt (FALSE);
and
gCpu->DisableInterrupt (gCpu);
The reason is that, within MemoryProtectionExitBootServicesCallback(),
APIs like RaiseTpl and RestoreTpl may be called. An example is:
DebugLib (using PeiDxeDebugLibReportStatusCode instance)
|
v
ReportStatusCodeLib (using DxeReportStatusCodeLib instance)
|
v
Raise/RestoreTpl
Calling the Raise/RestoreTpl APIs will re-enable BSP interrupts. Hence,
this commit refines the calling sequence to ensure BSP interrupts are
disabled when leaving CoreExitBootServices().
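The resulting order inside CoreExitBootServices() is, in a simplified
excerpt of the calls named above:

  MemoryProtectionExitBootServicesCallback ();  // may Raise/RestoreTpl via DebugLib
  SaveAndSetDebugTimerInterrupt (FALSE);        // then quiesce the debug timer interrupt
  gCpu->DisableInterrupt (gCpu);                // finally disable BSP interrupts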
Cc: Jian J Wang <jian.j.wang@intel.com>
Cc: Star Zeng <star.zeng@intel.com>
Cc: Eric Dong <eric.dong@intel.com>
Contributed-under: TianoCore Contribution Agreement 1.1
Signed-off-by: Hao Wu <hao.a.wu@intel.com>
Reviewed-by: Jiewen Yao <jiewen.yao@intel.com>
Reviewed-by: Ruiyu Ni <ruiyu.ni@intel.com>
The SMM core adds a HEADER before each allocated pool buffer and cleans
up this header once the buffer is freed. If a block of allocated pool
is marked as read-only after allocation (the EfiRuntimeServicesCode
type of pool in SMM is always marked as read-only), a #PF exception
will be triggered when the pool is freed.
Normally, EfiRuntimeServicesCode pool should not be freed in the real
world. But some test suites actually free all types of memory for
functionality and conformance testing purposes, so this issue should be
fixed anyway.
Cc: Star Zeng <star.zeng@intel.com>
Cc: Eric Dong <eric.dong@intel.com>
Cc: Jiewen Yao <jiewen.yao@intel.com>
Cc: Ruiyu Ni <ruiyu.ni@intel.com>
Contributed-under: TianoCore Contribution Agreement 1.1
Signed-off-by: Jian J Wang <jian.j.wang@intel.com>
Reviewed-by: Ruiyu Ni <ruiyu.ni@intel.com>
If the given address is on a 64K boundary and the requested bit number
is 64, SetBits(), ClearBits() and GetBits() will all hit an ASSERT
while trying to do a 64-bit shift, which is not allowed by LShiftU64()
and RShiftU64(). This patch fixes the issue by turning the bit
operations into whole-integer operations in that situation.
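A sketch of the fix pattern for the set case (variable names are
illustrative); the clear and get cases are analogous:

  if (BitNumber == 64) {
    *BitMap |= MAX_UINT64;  // whole-integer operation, no 64-bit shift needed
  } else {
    *BitMap |= LShiftU64 (LShiftU64 (1, BitNumber) - 1, StartBit);
  }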
Cc: Star Zeng <star.zeng@intel.com>
Cc: Eric Dong <eric.dong@intel.com>
Cc: Jiewen Yao <jiewen.yao@intel.com>
Cc: Ruiyu Ni <ruiyu.ni@intel.com>
Contributed-under: TianoCore Contribution Agreement 1.1
Signed-off-by: Jian J Wang <jian.j.wang@intel.com>
Reviewed-by: Ruiyu Ni <ruiyu.ni@intel.com>
If the given address is on a 64K boundary and the requested bit number
is 64, SetBits(), ClearBits() and GetBits() will all hit an ASSERT
while trying to do a 64-bit shift, which is not allowed by LShiftU64()
and RShiftU64(). This patch fixes the issue by turning the bit
operations into whole-integer operations in that situation.
Cc: Star Zeng <star.zeng@intel.com>
Cc: Eric Dong <eric.dong@intel.com>
Cc: Jiewen Yao <jiewen.yao@intel.com>
Cc: Ruiyu Ni <ruiyu.ni@intel.com>
Contributed-under: TianoCore Contribution Agreement 1.1
Signed-off-by: Jian J Wang <jian.j.wang@intel.com>
Reviewed-by: Ruiyu Ni <ruiyu.ni@intel.com>
Due to the fact that HeapGuard needs CpuArchProtocol to update page
attributes, the feature is normally enabled after CpuArchProtocol is
installed. Since some drivers are loaded before CpuArchProtocol, they
cannot make use of the HeapGuard feature to detect potential issues.
This patch fixes the above situation by updating the DXE core to skip
the NULL check against the global gCpu in IsMemoryTypeToGuard(), and by
adding a NULL check against gCpu in SetGuardPage() and UnsetGuardPage()
to make sure that they can be called but do nothing. This allows
HeapGuard to record all guarded memory without setting the related
Guard pages to not-present.
Once the CpuArchProtocol is installed, a protocol notify is called to
complete the work of setting Guard pages to not-present.
Please note that the above changes would cause a #PF in the GCD code
during cleanup of map entries, which is initiated by the CpuDxe driver
to update the real MTRR and paging attributes back into GCD. During
that time, CpuDxe doesn't allow GCD to update memory attributes, so no
Guard page can be unset. As a result, Guarded memory could not be freed
during memory map cleanup.
The solution is to avoid allocating guarded memory for memory map
entries in the GCD code. This is done by setting the global mOnGuarding
to TRUE before memory allocation and setting it back to FALSE
afterwards in the GCD function CoreAllocateGcdMapEntry().
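A simplified sketch of the two pieces (the page-table update and the
GCD allocation details are trimmed):

  //
  // In SetGuardPage()/UnsetGuardPage(): bail out quietly before the
  // CpuArchProtocol is available; the pages are marked not-present
  // later from the protocol notify.
  //
  if (gCpu == NULL) {
    return;
  }

  //
  // In CoreAllocateGcdMapEntry(): never guard GCD's own map entries.
  //
  mOnGuarding = TRUE;
  Entry = AllocatePool (sizeof (EFI_GCD_MAP_ENTRY));
  mOnGuarding = FALSE;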
Cc: Star Zeng <star.zeng@intel.com>
Cc: Eric Dong <eric.dong@intel.com>
Cc: Jiewen Yao <jiewen.yao@intel.com>
Contributed-under: TianoCore Contribution Agreement 1.1
Signed-off-by: Jian J Wang <jian.j.wang@intel.com>
Reviewed-by: Ruiyu Ni <ruiyu.ni@intel.com>
This patch fixes the same Heap Guard issues that are fixed for the DXE
core in another patch.
Cc: Star Zeng <star.zeng@intel.com>
Cc: Eric Dong <eric.dong@intel.com>
Cc: Jiewen Yao <jiewen.yao@intel.com>
Contributed-under: TianoCore Contribution Agreement 1.1
Signed-off-by: Jian J Wang <jian.j.wang@intel.com>
Reviewed-by: Jiewen Yao <jiewen.yao@intel.com>
There are two ASSERT issues that are triggered by the Windows 10 boot
loader.
The first is caused by allocating memory for heap guard while another
memory allocation is in progress, which is not allowed in the DXE core.
Avoiding re-entry of memory allocation was already considered in the
heap guard feature, but there is a hole in the code of the function
FindGuardedMemoryMap(). The fix is to add the AllocMapUnit parameter to
the while() condition, which prevents memory allocation from happening
during the Guard page check operation.
The second is caused by the core trying to allocate page 0 with a Guard
page, which makes the start address roll back to the end of the
supported system address space. Per the requirements of heap guard, the
fix is to simply skip the free memory at page 0 and let the core
continue searching for free memory after it.
Cc: Star Zeng <star.zeng@intel.com>
Cc: Eric Dong <eric.dong@intel.com>
Cc: Jiewen Yao <jiewen.yao@intel.com>
Contributed-under: TianoCore Contribution Agreement 1.1
Signed-off-by: Jian J Wang <jian.j.wang@intel.com>
Reviewed-by: Jiewen Yao <jiewen.yao@intel.com>
The root cause is an unnecessary check of the Size parameter in the
function AdjustMemoryS(). It causes a standalone free page (which
happens to have Guard pages around it) in the free memory list to be
unallocatable, even if the requested memory size is less than a page.
  //
  // At least one more page needed for Guard page.
  //
  if (Size < (SizeRequested + EFI_PAGES_TO_SIZE (1))) {
    return 0;
  }
The code that follows in the same function actually covers the above
check implicitly. So the fix is simply to remove the check.
Cc: Star Zeng <star.zeng@intel.com>
Cc: Eric Dong <eric.dong@intel.com>
Cc: Jiewen Yao <jiewen.yao@intel.com>
Contributed-under: TianoCore Contribution Agreement 1.1
Signed-off-by: Jian J Wang <jian.j.wang@intel.com>
Reviewed-by: Jiewen Yao <jiewen.yao@intel.com>
If enabled, the NX memory protection feature will mark some types of
active memory as NX (non-executable), which includes the first page of
the stack. This will overwrite the attributes of the first page of the
stack if the stack guard feature is also enabled.
The solution is to override the attribute setting for the first page of
the stack by adding back the 'EFI_MEMORY_RP' attribute when the stack
guard feature is enabled.
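A sketch of the override, assuming the DXE core's
SetUefiImageMemoryAttributes() helper and the PcdCpuStackGuard PCD (the
stack page variable name is illustrative):

  if (PcdGetBool (PcdCpuStackGuard)) {
    SetUefiImageMemoryAttributes (
      StackGuardPageBase,          // first page of the stack
      EFI_PAGES_TO_SIZE (1),
      EFI_MEMORY_RP | Attributes   // keep it not-present on top of the NX policy
      );
  }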
Cc: Star Zeng <star.zeng@intel.com>
Cc: Eric Dong <eric.dong@intel.com>
Contributed-under: TianoCore Contribution Agreement 1.1
Signed-off-by: Hao Wu <hao.a.wu@intel.com>
Reviewed-by: Jiewen Yao <jiewen.yao@intel.com>
Reviewed-by: Jian J Wang <jian.j.wang@intel.com>
Reviewed-by: Ruiyu Ni <ruiyu.ni@intel.com>
The commit rewrites the logic in the function
InitializeDxeNxMemoryProtectionPolicy() for handling the first page
(page 0) when the NULL pointer detection feature is enabled.
Instead of skipping page 0 when setting attributes, the code now
overrides the attribute setting of page 0 by adding the
'EFI_MEMORY_RP' attribute.
The purpose is to make other special handling of pages easier (e.g. the
first page of the stack when the stack guard feature is enabled).
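A sketch of the rewritten handling of page 0 (the memory map walk
around it is omitted):

  if ((MemoryMapEntry->PhysicalStart == 0) &&
      (PcdGet8 (PcdNullPointerDetectionPropertyMask) != 0)) {
    //
    // Apply the NX policy attributes but keep page 0 not-present so
    // NULL pointer detection keeps working.
    //
    SetUefiImageMemoryAttributes (0, EFI_PAGES_TO_SIZE (1), EFI_MEMORY_RP | Attributes);
  }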
Cc: Star Zeng <star.zeng@intel.com>
Cc: Eric Dong <eric.dong@intel.com>
Contributed-under: TianoCore Contribution Agreement 1.1
Signed-off-by: Hao Wu <hao.a.wu@intel.com>
Reviewed-by: Jiewen Yao <jiewen.yao@intel.com>
Reviewed-by: Jian J Wang <jian.j.wang@intel.com>
Reviewed-by: Ruiyu Ni <ruiyu.ni@intel.com>
If a PEIM image address doesn't meet its section alignment requirement,
it will fail to load. PeiCore adds a debug message to report this.
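A sketch of the kind of diagnostic being added (format string and
variable names are illustrative):

  DEBUG ((
    DEBUG_ERROR,
    "PEIM image at 0x%lx does not satisfy its section alignment of 0x%x bytes.\n",
    ImageAddress,
    SectionAlignment
    ));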
Contributed-under: TianoCore Contribution Agreement 1.1
Signed-off-by: Liming Gao <liming.gao@intel.com>
Cc: Star Zeng <star.zeng@intel.com>
Reviewed-by: Star Zeng <star.zeng@intel.com>
Update PEI Service ResetSystem() to always attempt to use
the Reset2 PPI before looking for the Reset PPI.
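A sketch of the lookup order (error handling and the reset parameters
are simplified):

  Status = PeiServicesLocatePpi (&gEfiPeiReset2PpiGuid, 0, NULL, (VOID **) &Reset2Ppi);
  if (!EFI_ERROR (Status)) {
    Reset2Ppi->ResetSystem (EfiResetCold, EFI_SUCCESS, 0, NULL);
  }

  Status = PeiServicesLocatePpi (&gEfiPeiResetPpiGuid, 0, NULL, (VOID **) &ResetPpi);
  if (!EFI_ERROR (Status)) {
    return ResetPpi->ResetSystem (GetPeiServicesTablePointer ());
  }
  return EFI_NOT_AVAILABLE_YET;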
Cc: Liming Gao <liming.gao@intel.com>
Reviewed-by: Ruiyu Ni <ruiyu.ni@intel.com>
Cc: Star Zeng <star.zeng@intel.com>
Contributed-under: TianoCore Contribution Agreement 1.1
Signed-off-by: Michael D Kinney <michael.d.kinney@intel.com>
Reviewed-by: Star Zeng <star.zeng@intel.com>
The Heap Guard feature wrapped SmmInternalFreePagesEx with
SmmInternalFreePagesExWithGuard but didn't add the necessary parameter
checks. This patch fixes that.
Cc: Star Zeng <star.zeng@intel.com>
Cc: Eric Dong <eric.dong@intel.com>
Contributed-under: TianoCore Contribution Agreement 1.1
Signed-off-by: Jian J Wang <jian.j.wang@intel.com>
Reviewed-by: Star Zeng <star.zeng@intel.com>
SmiHandlerUnRegister() validates the DispatchHandle by checking whether
its first 32 bits match a certain signature (SMI_HANDLER_SIGNATURE).
But if a caller calls SmiHandlerUnRegister() twice and the memory freed
by the first call still contains the signature, the second call may
hang.
The patch fixes this issue by locating the DispatchHandle among all
registered SMI handlers, instead of checking the signature.
Contributed-under: TianoCore Contribution Agreement 1.1
Signed-off-by: Ruiyu Ni <ruiyu.ni@intel.com>
Cc: Jiewen Yao <jiewen.yao@intel.com>
Reviewed-by: Star Zeng <star.zeng@intel.com>
Consider the following scenario (both NX memory protection and heap
guard are enabled):
1. Allocate 3 pages. The attributes of adjacent memory pages will be
|NOT-PRESENT| present | present | present |NOT-PRESENT|
2. Free the middle page. The attributes of adjacent memory pages should be
|NOT-PRESENT| present |NOT-PRESENT| present |NOT-PRESENT|
But the NX feature will overwrite the attributes of the middle page, so
it still looks like below, which is wrong.
|NOT-PRESENT| present | PRESENT | present |NOT-PRESENT|
The solution is to check whether the first and/or last page of a memory
block about to be marked as NX is a Guard page, and skip it if so.
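A sketch of the check, assuming the HeapGuard helper IsGuardPage() is
available to the NX policy code (the loop context is omitted):

  if (IsGuardPage (MemoryBlockStart)) {
    MemoryBlockStart += EFI_PAGE_SIZE;  // first page is a Guard page: leave it not-present
  }
  if (IsGuardPage (MemoryBlockEnd - EFI_PAGE_SIZE)) {
    MemoryBlockEnd -= EFI_PAGE_SIZE;    // last page is a Guard page: leave it not-present
  }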
Cc: Star Zeng <star.zeng@intel.com>
Cc: Eric Dong <eric.dong@intel.com>
Cc: Jiewen Yao <jiewen.yao@intel.com>
Cc: Ruiyu Ni <ruiyu.ni@intel.com>
Contributed-under: TianoCore Contribution Agreement 1.1
Signed-off-by: Jian J Wang <jian.j.wang@intel.com>
Reviewed-by: Ruiyu Ni <ruiyu.ni@intel.com>
If enabled, the NX memory protection feature will mark all free memory
as NX (non-executable), including page 0. This will overwrite the
attributes of page 0 if the NULL pointer detection feature is also
enabled, compromising its functionality. The solution is to skip
setting the NX attribute on page 0 if NULL pointer detection is
enabled.
Cc: Star Zeng <star.zeng@intel.com>
Cc: Eric Dong <eric.dong@intel.com>
Cc: Jiewen Yao <jiewen.yao@intel.com>
Cc: Ruiyu Ni <ruiyu.ni@intel.com>
Contributed-under: TianoCore Contribution Agreement 1.1
Signed-off-by: Jian J Wang <jian.j.wang@intel.com>
Reviewed-by: Ruiyu Ni <ruiyu.ni@intel.com>
This issue is a regression caused by the patch at
425d25699b
That fix didn't take the case of freeing zero pages into account, which
still needs to call UnsetGuardPage() even though no memory needs to be
freed.
The fix is simply moving the call to UnsetGuardPage() to the place
right after the call to AdjustMemoryF().
Cc: Star Zeng <star.zeng@intel.com>
Cc: Eric Dong <eric.dong@intel.com>
Contributed-under: TianoCore Contribution Agreement 1.1
Signed-off-by: Jian J Wang <jian.j.wang@intel.com>
Reviewed-by: Ruiyu Ni <ruiyu.ni@intel.com>
"Entry->Link.ForwardLink = NULL;" is present in RemoveMemoryMapEntry()
for DxeCore, that is correct.
"Entry->Link.ForwardLink = NULL;" is absent in RemoveOldEntry()
for PiSmmCore, that is incorrect.
Without this fix, when FromStack in Entry is TRUE,
the "InsertTailList (&mMapStack[mMapDepth].Link, &Entry->Link);" in the
subsequent call to CoreFreeMemoryMapStack() will fail, as the entry
at mMapStack[mMapDepth] has actually been removed from the list.
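A sketch of the fix in RemoveOldEntry() (surrounding code omitted):

  RemoveEntryList (&Entry->Link);
  Entry->Link.ForwardLink = NULL;  // mirror DxeCore's RemoveMemoryMapEntry()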
Cc: Jiewen Yao <jiewen.yao@intel.com>
Contributed-under: TianoCore Contribution Agreement 1.1
Signed-off-by: Star Zeng <star.zeng@intel.com>
Reviewed-by: Jiewen Yao <jiewen.yao@intel.com>
This hole causes random page faults. The root cause is that a Guard
page, which has just been freed back to the page pool but whose
not-present attribute has not yet been cleared, can be allocated right
away by the internal function CoreFreeMemoryMapStack(). The solution is
to clear the not-present attribute of a freed Guard page before doing
any free operations, instead of after them.
The reason we didn't do this before is the fact that manipulating page
attributes might trigger a memory allocation, which would cause a dead
lock inside a memory allocation/free operation. So we always set or
unset Guard pages outside the memory lock. After a thorough analysis,
we believe clearing a Guard page will not cause a memory allocation,
because the memory we are about to manipulate was certainly manipulated
before. Therefore no memory allocation should occur in this situation.
Since we now clear the Guard page not-present attribute before freeing
instead of after, the debug code that clears freed memory can be
restored to its original form (i.e. no checking for and bypassing of
Guard pages).
Cc: Ruiyu Ni <ruiyu.ni@intel.com>
Cc: Eric Dong <eric.dong@intel.com>
Cc: Star Zeng <star.zeng@intel.com>
Contributed-under: TianoCore Contribution Agreement 1.1
Signed-off-by: Jian J Wang <jian.j.wang@intel.com>
Reviewed-by: Star Zeng <star.zeng@intel.com>
Reviewed-by: Ruiyu Ni <ruiyu.ni@intel.com>