REF: https://bugzilla.tianocore.org/show_bug.cgi?id=864
REF: CVE-2018-3630
To follow the PI spec, ensure the FfsFileHeader is 8-byte aligned.
For the integrity of the FV (especially a non-memory-mapped FV) layout,
let CachedFv point to the FV beginning rather than (FV + FV header).
The current code only handles the (FwVolHeader->ExtHeaderOffset != 0) path;
update it to also handle the (FwVolHeader->ExtHeaderOffset == 0) path.
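A minimal sketch (not the exact patch) of the intended handling for both paths,
assuming the standard PI structure and macro names:

  if (FwVolHeader->ExtHeaderOffset != 0) {
    //
    // FFS files begin after the FV extended header.
    //
    FwVolExtHeader = (EFI_FIRMWARE_VOLUME_EXT_HEADER *)((UINT8 *)FwVolHeader + FwVolHeader->ExtHeaderOffset);
    FfsFileHeader  = (EFI_FFS_FILE_HEADER *)((UINT8 *)FwVolExtHeader + FwVolExtHeader->ExtHeaderSize);
  } else {
    //
    // FFS files begin right after the FV header.
    //
    FfsFileHeader  = (EFI_FFS_FILE_HEADER *)((UINT8 *)FwVolHeader + FwVolHeader->HeaderLength);
  }
  //
  // The PI spec requires FFS file headers to be 8-byte aligned.
  //
  FfsFileHeader = (EFI_FFS_FILE_HEADER *)ALIGN_POINTER (FfsFileHeader, 8);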
Cc: Jiewen Yao <jiewen.yao@intel.com>
Cc: Liming Gao <liming.gao@intel.com>
Cc: Jian J Wang <jian.j.wang@intel.com>
Cc: Hao Wu <hao.a.wu@intel.com>
Contributed-under: TianoCore Contribution Agreement 1.1
Signed-off-by: Star Zeng <star.zeng@intel.com>
Reviewed-by: Jian J Wang <jian.j.wang@intel.com>
REF: https://bugzilla.tianocore.org/show_bug.cgi?id=1524
When shadowing PeiCore, the EFI_PEI_CORE_FV_LOCATION_PPI
should be checked to see whether PeiCore is located outside the BFV;
otherwise, PeiCore is shadowed from the BFV as before.
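A minimal sketch of the check, with the PPI GUID and field names
(gEfiPeiCoreFvLocationPpiGuid, PeiCoreFvLocation) assumed for illustration:

  Status = PeiServicesLocatePpi (
             &gEfiPeiCoreFvLocationPpiGuid,
             0,
             NULL,
             (VOID **)&PeiCoreFvLocationPpi
             );
  if (!EFI_ERROR (Status) &&
      (PeiCoreFvLocationPpi->PeiCoreFvLocation != SecCoreData->BootFirmwareVolumeBase)) {
    //
    // PeiCore is not in the BFV; shadow it from the FV reported by the PPI.
    //
  } else {
    //
    // Fall back to shadowing PeiCore from the BFV.
    //
  }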
Test: Verified on internal platform and booting successfully.
Cc: Jian J Wang <jian.j.wang@intel.com>
Cc: Hao Wu <hao.a.wu@intel.com>
Cc: Ray Ni <ray.ni@intel.com>
Cc: Star Zeng <star.zeng@intel.com>
Cc: Liming Gao <liming.gao@intel.com>
Contributed-under: TianoCore Contribution Agreement 1.1
Signed-off-by: Chasel Chiu <chasel.chiu@intel.com>
Reviewed-by: Star Zeng <star.zeng@intel.com>
Reviewed-by: Jian J Wang <jian.j.wang@intel.com>
Reviewed-by: Ray Ni <ray.ni@intel.com>
Take MAX_ALLOC_ADDRESS into account in the implementation of the
page allocation routines, so that they will only return memory
that is addressable by the CPU at boot time, even if more memory
is available in the GCD memory map.
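A minimal sketch (assumed variable names) of the clamping idea used by the
allocation routines:

  //
  // Never search above what the CPU can address at boot time, even if the
  // GCD memory map describes more memory.
  //
  if (MaxAddress > MAX_ALLOC_ADDRESS) {
    MaxAddress = MAX_ALLOC_ADDRESS;
  }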
Contributed-under: TianoCore Contribution Agreement 1.1
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Reviewed-by: Jian J Wang <jian.j.wang@intel.com>
Update the GCD memory map initialization code so it disregards
memory that is not addressable by the CPU at boot time. This
only affects the first memory descriptor that is added; other
memory descriptors describing memory ranges that may be accessible
to the CPU only when executing under the OS are still permitted.
Contributed-under: TianoCore Contribution Agreement 1.1
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Reviewed-by: Jian J Wang <jian.j.wang@intel.com>
REF: https://bugzilla.tianocore.org/show_bug.cgi?id=1405
The background is as follows.
Problem:
With static configuration from the PCDs, a binary PeiCore (for example,
in an FSP binary with dispatch mode) cannot predict how many FVs,
files or PPIs different platforms will need.
Burden:
Platform developers need to configure the PCDs accordingly for different
platforms.
To solve the problem and remove the burden, we can update the code to
remove the use of PcdPeiCoreMaxFvSupported, PcdPeiCoreMaxPeimPerFv
and PcdPeiCoreMaxPpiSupported by extending the buffers dynamically for FV,
file and PPI management.
This patch removes the use of PcdPeiCoreMaxPpiSupported in PeiCore.
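A minimal sketch (hypothetical field names) of the dynamic growth idea, shown
for the PPI list; the FV and file lists are handled the same way in the
related patches:

  if (PpiData->PpiList.CurrentCount >= PpiData->PpiList.MaxCount) {
    //
    // Double the descriptor array instead of relying on a fixed
    // PcdPeiCoreMaxPpiSupported-sized array.
    //
    NewCount = PpiData->PpiList.MaxCount * 2;
    NewPtrs  = AllocateZeroPool (NewCount * sizeof (PEI_PPI_LIST_POINTERS));
    ASSERT (NewPtrs != NULL);
    CopyMem (NewPtrs, PpiData->PpiList.PpiPtrs,
             PpiData->PpiList.MaxCount * sizeof (PEI_PPI_LIST_POINTERS));
    PpiData->PpiList.PpiPtrs  = NewPtrs;
    PpiData->PpiList.MaxCount = NewCount;
  }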
Cc: Jian J Wang <jian.j.wang@intel.com>
Cc: Hao Wu <hao.a.wu@intel.com>
Cc: Liming Gao <liming.gao@intel.com>
Cc: Ruiyu Ni <ruiyu.ni@intel.com>
Cc: Michael D Kinney <michael.d.kinney@intel.com>
Cc: Nate DeSimone <nathaniel.l.desimone@intel.com>
Cc: Chasel Chiu <chasel.chiu@intel.com>
Contributed-under: TianoCore Contribution Agreement 1.1
Signed-off-by: Star Zeng <star.zeng@intel.com>
Reviewed-by: Chasel Chiu <chasel.chiu@intel.com>
Reviewed-by: Jian J Wang <jian.j.wang@intel.com>
REF: https://bugzilla.tianocore.org/show_bug.cgi?id=1405
The background is as follows.
Problem:
With static configuration from the PCDs, a binary PeiCore (for example,
in an FSP binary with dispatch mode) cannot predict how many FVs,
files or PPIs different platforms will need.
Burden:
Platform developers need to configure the PCDs accordingly for different
platforms.
To solve the problem and remove the burden, we can update PeiCore to
remove the use of PcdPeiCoreMaxFvSupported, PcdPeiCoreMaxPeimPerFv
and PcdPeiCoreMaxPpiSupported by extending the buffers dynamically for FV,
file and PPI management.
This patch removes the use of PcdPeiCoreMaxFvSupported in PeiCore.
Cc: Jian J Wang <jian.j.wang@intel.com>
Cc: Hao Wu <hao.a.wu@intel.com>
Cc: Liming Gao <liming.gao@intel.com>
Cc: Ruiyu Ni <ruiyu.ni@intel.com>
Cc: Michael D Kinney <michael.d.kinney@intel.com>
Cc: Nate DeSimone <nathaniel.l.desimone@intel.com>
Cc: Chasel Chiu <chasel.chiu@intel.com>
Contributed-under: TianoCore Contribution Agreement 1.1
Signed-off-by: Star Zeng <star.zeng@intel.com>
Reviewed-by: Chasel Chiu <chasel.chiu@intel.com>
Reviewed-by: Jian J Wang <jian.j.wang@intel.com>
REF: https://bugzilla.tianocore.org/show_bug.cgi?id=1405
The background is as follows.
Problem:
With static configuration from the PCDs, a binary PeiCore (for example,
in an FSP binary with dispatch mode) cannot predict how many FVs,
files or PPIs different platforms will need.
Burden:
Platform developers need to configure the PCDs accordingly for different
platforms.
To solve the problem and remove the burden, we can update the code to
remove the use of PcdPeiCoreMaxFvSupported, PcdPeiCoreMaxPeimPerFv
and PcdPeiCoreMaxPpiSupported by extending the buffers dynamically for FV,
file and PPI management.
This patch removes the use of PcdPeiCoreMaxPeimPerFv in PeiCore.
Cc: Jian J Wang <jian.j.wang@intel.com>
Cc: Hao Wu <hao.a.wu@intel.com>
Cc: Liming Gao <liming.gao@intel.com>
Cc: Ruiyu Ni <ruiyu.ni@intel.com>
Cc: Michael D Kinney <michael.d.kinney@intel.com>
Cc: Nate DeSimone <nathaniel.l.desimone@intel.com>
Cc: Chasel Chiu <chasel.chiu@intel.com>
Contributed-under: TianoCore Contribution Agreement 1.1
Signed-off-by: Star Zeng <star.zeng@intel.com>
Reviewed-by: Jian J Wang <jian.j.wang@intel.com>
Reviewed-by: Chasel Chiu <chasel.chiu@intel.com>
REF: https://bugzilla.tianocore.org/show_bug.cgi?id=1295
This issue originates from the following patch, which allows paging to be
enabled if PcdImageProtectionPolicy and PcdDxeNxMemoryProtectionPolicy
(in addition to PcdSetNxForStack) are set to enable the related features.
5267926134
Due to the above change, PcdImageProtectionPolicy is set to 0 by
default on many platforms, which, in turn, causes the following code in
MdeModulePkg\Core\Dxe\Misc\MemoryProtection.c to fail to create the
notify event for CpuArchProtocol.
1138: if (mImageProtectionPolicy != 0 ||
PcdGet64 (PcdDxeNxMemoryProtectionPolicy) != 0) {
1139: Status = CoreCreateEvent (
...
1142: MemoryProtectionCpuArchProtocolNotify,
...
1145: );
Then the following call flow won't be executed, and Guard pages will
eventually not be set as not-present in SetAllGuardPages().
MemoryProtectionCpuArchProtocolNotify()
=> HeapGuardCpuArchProtocolNotify()
=> SetAllGuardPages()
The solution is to remove the if(...) statement so that the notify
event is always created and registered. This won't cause
unnecessary code execution because, in the notify event handler,
the related PCDs, namely
PcdImageProtectionPolicy and
PcdDxeNxMemoryProtectionPolicy,
are checked again before doing the related work.
Cc: Star Zeng <star.zeng@intel.com>
Cc: Jiewen Yao <jiewen.yao@intel.com>
Cc: Ruiyu Ni <ruiyu.ni@intel.com>
Cc: Leif Lindholm <leif.lindholm@linaro.org>
Contributed-under: TianoCore Contribution Agreement 1.1
Signed-off-by: Jian J Wang <jian.j.wang@intel.com>
Reviewed-by: Star Zeng <star.zeng@intel.com>
At the end of MemoryProtectionCpuArchProtocolNotify there is cleanup
code to free resources. But at lines 978, 994 and 1005 the function returns
directly. This patch uses "goto" instead of "return" to make sure the
resources are freed before exit.
1029: CoreCloseEvent (Event);
1030: return;
There's another memory leak after calling gBS->LocateHandleBuffer() in
the same function:
Status = gBS->LocateHandleBuffer (
ByProtocol,
&gEfiLoadedImageProtocolGuid,
NULL,
&NoHandles,
&HandleBuffer
);
HandleBuffer is allocated by the above call but never freed. This patch
also adds code to free it.
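A minimal sketch (not the exact patch) of the common-exit pattern being applied:

  Status = gBS->LocateHandleBuffer (
                  ByProtocol,
                  &gEfiLoadedImageProtocolGuid,
                  NULL,
                  &NoHandles,
                  &HandleBuffer
                  );
  if (EFI_ERROR (Status)) {
    goto Done;
  }

  // ... process HandleBuffer ...

Done:
  if (HandleBuffer != NULL) {
    FreePool (HandleBuffer);
  }
  CoreCloseEvent (Event);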
Cc: Star Zeng <star.zeng@intel.com>
Cc: Jiewen Yao <jiewen.yao@intel.com>
Cc: Ruiyu Ni <ruiyu.ni@intel.com>
Cc: Leif Lindholm <leif.lindholm@linaro.org>
Contributed-under: TianoCore Contribution Agreement 1.1
Signed-off-by: Jian J Wang <jian.j.wang@intel.com>
Reviewed-by: Star Zeng <star.zeng@intel.com>
Today's PiSmmIpl implementation initially sets SMRAM to WB to speed
up the SMM core/modules loading before SMM CPU driver runs.
When the SMM CPU driver runs, PiSmmIpl resets the SMRAM to UC. This is done
in SmmIplDxeDispatchEventNotify(). COMM_BUFFER_SMM_DISPATCH_RESTART
is returned from the SMM core to indicate that the SMM CPU driver has just
been dispatched.
Since the SMRR is now widely used to control the SMRAM cache setting,
it is no longer necessary to reset the SMRAM to UC.
Contributed-under: TianoCore Contribution Agreement 1.1
Signed-off-by: Ruiyu Ni <ruiyu.ni@intel.com>
Reviewed-by: Jiewen Yao <jiewen.yao@intel.com>
Cc: Michael Kinney <michael.d.kinney@intel.com>
REF: https://bugzilla.tianocore.org/show_bug.cgi?id=1286
This issue is introduced by bb685071c2.
The *MemorySpaceMap assigned a NULL value (line 1710) might be
accessed (lines 1726/1730) without any sanity check. Although this won't
happen in practice because of line 1722, we still need to add a check
against NULL to make the static code analyzer happy.
1710 *MemorySpaceMap = NULL;
.... ...
1722 if (DescriptorCount == *NumberOfDescriptors) {
.... ...
1726 Descriptor = *MemorySpaceMap;
.... ...
1730 BuildMemoryDescriptor (Descriptor, Entry);
Tests:
Pass build and boot to shell.
Cc: Hao Wu <hao.a.wu@intel.com>
Cc: Star Zeng <star.zeng@intel.com>
Contributed-under: TianoCore Contribution Agreement 1.1
Signed-off-by: Jian J Wang <jian.j.wang@intel.com>
Reviewed-by: Hao Wu <hao.a.wu@intel.com>
REF: https://bugzilla.tianocore.org/show_bug.cgi?id=1277
The failure is caused by a data type conversion between UINTN and UINT64,
which was checked in at 63ebde8ef6.
Cc: Star Zeng <star.zeng@intel.com>
Cc: Liming Gao <liming.gao@intel.com>
Contributed-under: TianoCore Contribution Agreement 1.1
Signed-off-by: Jian J Wang <jian.j.wang@intel.com>
Reviewed-by: Star Zeng <star.zeng@intel.com>
Freed-memory guard is used to detect UAF (Use-After-Free) memory issues,
i.e. illegal accesses to memory which has been freed. The principle
behind it is similar to the pool guard feature: we turn all pool
memory allocations into page allocations and mark them as not-present
once they are freed.
This also implies that, once a page is allocated and freed, it cannot
be re-allocated. This brings another issue: there's a
risk that memory space will be exhausted. To address it, the memory
service adds logic to put part (at most 64 pages at a time) of the freed pages
back into the page pool, so that the memory service still has memory
to allocate once all memory space has been allocated. This is
called memory promotion. The promoted pages are always the oldest
pages that have been freed.
This feature brings another problem: the number of memory map descriptors
increases enormously (200+ -> 2000+). One of the changes in this patch
updates MergeMemoryMap() in file PropertiesTable.c to allow merging
freed pages back into the memory map. Now the number stays at around
510.
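A minimal sketch (hypothetical helper name MarkFreedPages) of the core idea:
freed pages stay reserved and are made not-present so any later access faults
immediately.

  //
  // Record the freed pages in the guarded-free bookkeeping, then mark them
  // not-present so a use-after-free triggers a #PF.
  //
  MarkFreedPages (BaseAddress, Pages);
  gCpu->SetMemoryAttributes (
          gCpu,
          BaseAddress,
          EFI_PAGES_TO_SIZE (Pages),
          EFI_MEMORY_RP
          );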
Cc: Star Zeng <star.zeng@intel.com>
Cc: Michael D Kinney <michael.d.kinney@intel.com>
Cc: Jiewen Yao <jiewen.yao@intel.com>
Cc: Ruiyu Ni <ruiyu.ni@intel.com>
Cc: Laszlo Ersek <lersek@redhat.com>
Contributed-under: TianoCore Contribution Agreement 1.1
Signed-off-by: Jian J Wang <jian.j.wang@intel.com>
Reviewed-by: Star Zeng <star.zeng@intel.com>
This issue is hidden in the current code but exposed by the introduction
of the freed-memory guard feature, due to the fact that the feature
turns all pool allocations into page allocations.
The solution is to move the memory allocation in CoreGetMemorySpaceMap()
out of the GCD memory map lock.
CoreDumpGcdMemorySpaceMap()
=> CoreGetMemorySpaceMap()
=> CoreAcquireGcdMemoryLock () *
AllocatePool()
=> InternalAllocatePool()
=> CoreAllocatePool()
=> CoreAllocatePoolI()
=> CoreAllocatePoolPagesI()
=> CoreAllocatePoolPages()
=> FindFreePages()
=> PromoteMemoryResource()
=> CoreAcquireGcdMemoryLock() **
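A minimal sketch (hypothetical helper CountGcdMemoryEntries, and without the
retry loop of the real fix) of how the allocation can be moved outside the lock:

  CoreAcquireGcdMemoryLock ();
  DescriptorCount = CountGcdMemoryEntries ();
  CoreReleaseGcdMemoryLock ();

  //
  // Allocate while the lock is NOT held, so a memory promotion inside
  // AllocatePool() cannot try to re-acquire the GCD memory lock.
  //
  Buffer = AllocatePool (DescriptorCount * sizeof (EFI_GCD_MEMORY_SPACE_DESCRIPTOR));

  CoreAcquireGcdMemoryLock ();
  // Re-check the count and fill in the descriptors here.
  CoreReleaseGcdMemoryLock ();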
Cc: Star Zeng <star.zeng@intel.com>
Cc: Michael D Kinney <michael.d.kinney@intel.com>
Cc: Jiewen Yao <jiewen.yao@intel.com>
Cc: Ruiyu Ni <ruiyu.ni@intel.com>
Cc: Laszlo Ersek <lersek@redhat.com>
Contributed-under: TianoCore Contribution Agreement 1.1
Signed-off-by: Jian J Wang <jian.j.wang@intel.com>
Reviewed-by: Star Zeng <star.zeng@intel.com>
BZ#1116: https://bugzilla.tianocore.org/show_bug.cgi?id=1116
Currently IA32_EFER.NXE is only set according to PcdSetNxForStack. This
confuses developers because the following two other PCDs also need NXE
to be set, but it is not:
PcdDxeNxMemoryProtectionPolicy
PcdImageProtectionPolicy
This patch solves the issue by adding logic to enable IA32_EFER.NXE
if any of those PCDs has anything enabled.
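A minimal sketch (assumed PCD accessors and helper name) of the combined
condition:

  if (PcdGetBool (PcdSetNxForStack) ||
      (PcdGet64 (PcdDxeNxMemoryProtectionPolicy) != 0) ||
      (PcdGet32 (PcdImageProtectionPolicy) != 0)) {
    //
    // At least one NX-related policy is active, so IA32_EFER.NXE must be set.
    //
    EnableExecuteDisableBit ();
  }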
Cc: Star Zeng <star.zeng@intel.com>
Cc: Laszlo Ersek <lersek@redhat.com>
Cc: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Cc: Ruiyu Ni <ruiyu.ni@intel.com>
Cc: Jiewen Yao <jiewen.yao@intel.com>
Contributed-under: TianoCore Contribution Agreement 1.1
Signed-off-by: Jian J Wang <jian.j.wang@intel.com>
Reviewed-by: Star Zeng <star.zeng@intel.com>
Reviewed-by: Laszlo Ersek <lersek@redhat.com>
Now that Itanium support has been dropped, we can remove the various
occurrences of the ELILO on Itanium PE/COFF header workaround.
Link: https://bugzilla.tianocore.org/show_bug.cgi?id=816
Contributed-under: TianoCore Contribution Agreement 1.1
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Reviewed-by: Jiewen Yao <jiewen.yao@intel.com>
Reviewed-by: Laszlo Ersek <lersek@redhat.com>
Reviewed-by: Star Zeng <star.zeng@intel.com>
PEI Stack Guard needs to enable paging before DxeIpl. This might cause
a #GP in the transition from 32-bit PEI to 64-bit DXE, because the code
tries to write the CR3 register with a PML4 page table while the processor
still has PAE paging enabled.
Simply disabling paging before updating CR3 solves this conflict.
There's no such issue for 64-bit PEI so this change applies only to
32-bit code.
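A minimal sketch (32-bit path only, assuming the BaseLib CR register helpers
and a PageTables variable) of the idea:

  //
  // Paging might already be enabled (PEI Stack Guard). Disable it before
  // loading the PML4 address into CR3 for the 64-bit DXE transition.
  //
  AsmWriteCr0 (AsmReadCr0 () & (~BIT31));
  AsmWriteCr3 ((UINTN)PageTables);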
Cc: Star Zeng <star.zeng@intel.com>
Cc: Ruiyu Ni <ruiyu.ni@intel.com>
Cc: Jiewen Yao <jiewen.yao@intel.com>
Cc: "Ware, Ryan R" <ryan.r.ware@intel.com>
Contributed-under: TianoCore Contribution Agreement 1.1
Signed-off-by: Jian J Wang <jian.j.wang@intel.com>
Regression-tested-by: Laszlo Ersek <lersek@redhat.com>
Reviewed-by: Eric Dong <eric.dong@intel.com>
In V2, remove GetImageReadFunction() and directly use PeiImageRead().
The copied PeiImageReadForShadow() function doesn't improve boot performance.
This patch removes the copy logic to simplify the code.
Contributed-under: TianoCore Contribution Agreement 1.1
Signed-off-by: Liming Gao <liming.gao@intel.com>
Reviewed-by: Star Zeng <star.zeng@intel.com>
Removing rules for Ipf sources file:
* Remove the source file which path with "ipf" and also listed in
[Sources.IPF] section of INF file.
* Remove the source file which listed in [Components.IPF] section
of DSC file and not listed in any other [Components] section.
* Remove the embedded Ipf code for MDE_CPU_IPF.
Removing rules for Inf file:
* Remove IPF from VALID_ARCHITECTURES comments.
* Remove DXE_SAL_DRIVER from LIBRARY_CLASS in [Defines] section.
* Remove the INF which only listed in [Components.IPF] section in DSC.
* Remove statements from [BuildOptions] that provide IPF specific flags.
* Remove any IPF specific sections.
Removing rules for Dec file:
* Remove [Includes.IPF] section from Dec.
Removing rules for Dsc file:
* Remove IPF from SUPPORTED_ARCHITECTURES in [Defines] section of DSC.
* Remove any IPF specific sections.
* Remove statements from [BuildOptions] that provide IPF specific flags.
Cc: Star Zeng <star.zeng@intel.com>
Cc: Eric Dong <eric.dong@intel.com>
Cc: Michael D Kinney <michael.d.kinney@intel.com>
Contributed-under: TianoCore Contribution Agreement 1.1
Signed-off-by: Chen A Chen <chen.a.chen@intel.com>
Reviewed-by: Star Zeng <star.zeng@intel.com>
fwvol.c(1572) : warning C4701: potentially uninitialized
local variable 'Status' used
The build failure is caused by
0e042d0ad7 for
https://bugzilla.tianocore.org/show_bug.cgi?id=1131
This patch initializes Status to fix the build failure.
Cc: Dandan Bi <dandan.bi@intel.com>
Cc: Liming Gao <liming.gao@intel.com>
Cc: Jian J Wang <jian.j.wang@intel.com>
Contributed-under: TianoCore Contribution Agreement 1.1
Signed-off-by: Star Zeng <star.zeng@intel.com>
Reviewed-by: Dandan Bi <dandan.bi@intel.com>
REF: https://bugzilla.tianocore.org/show_bug.cgi?id=1131
The PI spec and BaseTools support generating multiple FV images
in one FV file.
This patch updates DxeCore to handle that case.
Cc: Liming Gao <liming.gao@intel.com>
Cc: Jiewen Yao <jiewen.yao@intel.com>
Contributed-under: TianoCore Contribution Agreement 1.1
Signed-off-by: Star Zeng <star.zeng@intel.com>
Reviewed-by: Liming Gao <liming.gao@intel.com>
REF: https://bugzilla.tianocore.org/show_bug.cgi?id=1131
The PI spec and BaseTools support generating multiple FV images
in one FV file.
This patch updates PeiCore to handle that case.
Cc: Liming Gao <liming.gao@intel.com>
Cc: Jiewen Yao <jiewen.yao@intel.com>
Contributed-under: TianoCore Contribution Agreement 1.1
Signed-off-by: Star Zeng <star.zeng@intel.com>
Reviewed-by: Liming Gao <liming.gao@intel.com>
Calling BS.AllocatePages in a DXE driver and then calling SMM FreePages with
the address of the buffer allocated by the DXE driver succeeds and adds a
non-SMRAM range into the SMM heap list. This is not expected behavior:
SMM FreePages should return an error for this case and not free the pages.
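A minimal sketch (hypothetical helper SmmIsSmramAddress) of the intended check
in the SMM free path:

  //
  // Refuse to free pages that do not belong to any SMRAM region managed
  // by the SMM heap.
  //
  if (!SmmIsSmramAddress (Memory)) {
    return EFI_NOT_FOUND;
  }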
BZ: https://bugzilla.tianocore.org/show_bug.cgi?id=1098
Change-Id: Ie5ffa1ac62c558aa418a8a3d7d0e8158b846e13b
Cc: Star Zeng <star.zeng@intel.com>
Contributed-under: TianoCore Contribution Agreement 1.1
Signed-off-by: Eric Dong <eric.dong@intel.com>
Reviewed-by: Star Zeng <star.zeng@intel.com>
The functions that are never called have been removed.
They are IsImageInsideSmram, FindImageRecord, SmmRemoveImageRecord,
SmmMemoryAttributesTableConsistencyCheck, DumpSmmMemoryMapEntry,
SmmMemoryMapConsistencyCheckRange, SmmMemoryMapConsistencyCheck,
DumpSmmMemoryMap, ClearGuardMapBit, SetGuardMapBit, AdjustMemoryA,
AdjustMemoryS, IsHeadGuard and IsTailGuard.
FindImageRecord() is called by SmmRemoveImageRecord(); however,
nothing calls SmmRemoveImageRecord().
SmmMemoryMapConsistencyCheckRange() is called by
SmmMemoryMapConsistencyCheck(); however, nothing calls
SmmMemoryMapConsistencyCheck().
https://bugzilla.tianocore.org/show_bug.cgi?id=1062
v2: append the following to the commit message.
- FindImageRecord() is called by SmmRemoveImageRecord(); however,
nothing calls SmmRemoveImageRecord().
- SmmMemoryMapConsistencyCheckRange() is called by
SmmMemoryMapConsistencyCheck(); however, nothing calls
SmmMemoryMapConsistencyCheck().
Cc: Star Zeng <star.zeng@intel.com>
Cc: Eric Dong <eric.dong@intel.com>
Contributed-under: TianoCore Contribution Agreement 1.1
Signed-off-by: shenglei <shenglei.zhang@intel.com>
Reviewed-by: Jian J Wang <jian.j.wang@intel.com>
Reviewed-by: Laszlo Ersek <lersek@redhat.com>
Reviewed-by: Star Zeng <star.zeng@intel.com>
The functions that are never called have been removed.
They are ClearGuardMapBit, SetGuardMapBit, IsHeadGuard,
IsTailGuard and CoreEfiNotAvailableYetArg0.
https://bugzilla.tianocore.org/show_bug.cgi?id=1062
Cc: Star Zeng <star.zeng@intel.com>
Cc: Eric Dong <eric.dong@intel.com>
Contributed-under: TianoCore Contribution Agreement 1.1
Signed-off-by: shenglei <shenglei.zhang@intel.com>
Reviewed-by: Ruiyu Ni <ruiyu.ni@intel.com>
Reviewed-by: Jian J Wang <jian.j.wang@intel.com>
Reviewed-by: Laszlo Ersek <lersek@redhat.com>
Reviewed-by: Star Zeng <star.zeng@intel.com>
We want to provide precise info in the MemAttribTable
to both the OS and SMM, and SMM only gets the info at EndOfDxe.
So we do not update the RtCode entry at EndOfDxe.
The impact is that if a 3rd-party OPROM is runtime, it cannot be executed
in the UEFI runtime phase.
Currently, we do not see a compatibility issue, because the only runtime
OPROM we found before is UNDI, and a UEFI OS will not use the UNDI interface
in the OS.
Contributed-under: TianoCore Contribution Agreement 1.1
Signed-off-by: Jiewen Yao <jiewen.yao@intel.com>
Reviewed-by: Star Zeng <star.zeng@intel.com>
So that SMM can consume it to set page protection for
the UEFI runtime page with the EFI_MEMORY_RO attribute.
Contributed-under: TianoCore Contribution Agreement 1.1
Signed-off-by: Jiewen Yao <jiewen.yao@intel.com>
Reviewed-by: Star Zeng <star.zeng@intel.com>
Add an example case for the usage of
PERF_EVENT_SIGNAL_BEGIN/PERF_EVENT_SIGNAL_END
Cc: Liming Gao <liming.gao@intel.com>
Cc: Star Zeng <star.zeng@intel.com>
Contributed-under: TianoCore Contribution Agreement 1.1
Signed-off-by: Dandan Bi <dandan.bi@intel.com>
Reviewed-by: Liming Gao <liming.gao@intel.com>
The current code assumes PpiDescriptor and Ppi are in the same range
(heap/stack/hole).
This patch removes that assumption.
The descriptor needs to be converted first; that is also handled by this patch.
Cc: Liming Gao <liming.gao@intel.com>
Cc: Qing Huang <qing.huang@intel.com>
Contributed-under: TianoCore Contribution Agreement 1.1
Signed-off-by: Star Zeng <star.zeng@intel.com>
Reviewed-by: Liming Gao <liming.gao@intel.com>
The perf measurement entry in the SmmEntryPoint function
doesn't have significant meaning, so remove it now.
Cc: Liming Gao <liming.gao@intel.com>
Cc: Star Zeng <star.zeng@intel.com>
Contributed-under: TianoCore Contribution Agreement 1.1
Signed-off-by: Dandan Bi <dandan.bi@intel.com>
Reviewed-by: Liming Gao <liming.gao@intel.com>
1. Do not use tab characters
2. No trailing white space in one line
3. All files must end with CRLF
Contributed-under: TianoCore Contribution Agreement 1.1
Signed-off-by: Liming Gao <liming.gao@intel.com>
Reviewed-by: Star Zeng <star.zeng@intel.com>
Replace old Perf macros with the new added ones.
Cc: Liming Gao <liming.gao@intel.com>
Cc: Star Zeng <star.zeng@intel.com>
Cc: Eric Dong <eric.dong@intel.com>
Contributed-under: TianoCore Contribution Agreement 1.1
Signed-off-by: Dandan Bi <dandan.bi@intel.com>
Reviewed-by: Liming Gao <liming.gao@intel.com>
The CpuDxe driver has been updated to be able to access the DXE page table in
SMM mode, which means Heap Guard can get correct memory paging attributes in
that environment. It is no longer necessary to exclude SMM from detecting Heap
Guard feature support.
Cc: Star Zeng <star.zeng@intel.com>
Cc: Eric Dong <eric.dong@intel.com>
Cc: Jiewen Yao <jiewen.yao@intel.com>
Cc: Ruiyu Ni <ruiyu.ni@intel.com>
Contributed-under: TianoCore Contribution Agreement 1.1
Signed-off-by: Jian J Wang <jian.j.wang@intel.com>
Reviewed-by: Star Zeng <star.zeng@intel.com>
NASM has replaced ASM and S files.
1. Remove ASM from all modules.
2. Remove S files from the drivers only.
3. https://bugzilla.tianocore.org/show_bug.cgi?id=881
After NASM is updated, S files can be removed from Library.
Contributed-under: TianoCore Contribution Agreement 1.1
Signed-off-by: Liming Gao <liming.gao@intel.com>
Reviewed-by: Star Zeng <star.zeng@intel.com>
Until now the possible errors returned from processing the
boot firmware volume were not checked, which could cause
misbehavior in later boot stages. Add a relevant assert.
Contributed-under: TianoCore Contribution Agreement 1.1
Signed-off-by: Marcin Wojtas <mw@semihalf.com>
Signed-off-by: Jan Dabros <jsd@semihalf.com>
Reviewed-by: Liming Gao <liming.gao@intel.com>
Reviewed-by: Star Zeng <star.zeng@intel.com>
A PEIM in any cached FV image may be in registered-for-shadow status.
The current logic based only on CurrentPeimFvCount is not enough.
Contributed-under: TianoCore Contribution Agreement 1.1
Signed-off-by: Liming Gao <liming.gao@intel.com>
Cc: Star Zeng <star.zeng@intel.com>
Reviewed-by: Star Zeng <star.zeng@intel.com>
This patch fixes an issue introduced by commits
5b91bf82c6
and
0c9f2cb10b
This issue only happens if PcdDxeNxMemoryProtectionPolicy is
enabled for reserved memory, which marks SMM RAM as NX (non-
executable) during DXE core initialization. The SMM IPL driver then
unsets the NX attribute for SMM RAM to allow loading and running
the SMM core/drivers.
But the above commits cause the unset operation of the NX attribute
to fail, due to the fact that SMM RAM has a zero cache attribute (MRC code
always sets a 0 attribute for reserved memory). This causes the GCD internal
method ConverToCpuArchAttributes() to return a 0 attribute, which is
taken as an invalid CPU paging attribute, so the call to
gCpu->SetMemoryAttributes() is skipped.
The solution is to make use of existing functionality in PiSmmIpl
to make sure one cache attribute is set for SMM RAM. For performance
reasons, PiSmmIpl always tries to set SMM RAM to write-back.
But there is a hole in the code: setting the write-back attribute fails
when the corresponding cache capabilities are missing. This patch
adds the necessary cache capabilities before setting the corresponding
attributes.
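A minimal sketch (assumed variable names) of the idea, using the DXE services:

  //
  // Make sure the WB capability is present before asking GCD to apply the
  // write-back attribute to SMRAM.
  //
  Status = gDS->SetMemorySpaceCapabilities (
                  SmramBase,
                  SmramSize,
                  MemDesc.Capabilities | EFI_MEMORY_WB
                  );
  if (!EFI_ERROR (Status)) {
    gDS->SetMemorySpaceAttributes (SmramBase, SmramSize, EFI_MEMORY_WB);
  }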
Cc: Star Zeng <star.zeng@intel.com>
Cc: Eric Dong <eric.dong@intel.com>
Cc: Jiewen Yao <jiewen.yao@intel.com>
Cc: Ruiyu Ni <ruiyu.ni@intel.com>
Cc: Michael D Kinney <michael.d.kinney@intel.com>
Contributed-under: TianoCore Contribution Agreement 1.1
Signed-off-by: Jian J Wang <jian.j.wang@intel.com>
Reviewed-by: Star Zeng <star.zeng@intel.com>
The Heap Guard feature needs enough memory and paging to work. Otherwise
calling SetMemoryAttributes to change page attributes will fail. This
patch adds a necessary check on the result of calling SetMemoryAttributes.
This can help users debug problems in enabling this feature.
Cc: Star Zeng <star.zeng@intel.com>
Cc: Eric Dong <eric.dong@intel.com>
Cc: Jiewen Yao <jiewen.yao@intel.com>
Cc: Ruiyu Ni <ruiyu.ni@intel.com>
Contributed-under: TianoCore Contribution Agreement 1.1
Signed-off-by: Jian J Wang <jian.j.wang@intel.com>
Reviewed-by: Star Zeng <star.zeng@intel.com>
The Heap Guard feature needs enough memory and paging to work. Otherwise
calling SetMemoryAttributes to change page attributes will fail. This
patch adds a necessary check on the result of calling SetMemoryAttributes.
This can help users debug problems in enabling this feature.
Cc: Star Zeng <star.zeng@intel.com>
Cc: Eric Dong <eric.dong@intel.com>
Cc: Jiewen Yao <jiewen.yao@intel.com>
Cc: Ruiyu Ni <ruiyu.ni@intel.com>
Contributed-under: TianoCore Contribution Agreement 1.1
Signed-off-by: Jian J Wang <jian.j.wang@intel.com>
Reviewed-by: Star Zeng <star.zeng@intel.com>
It is caused by 0c9f2cb10b
and is a false positive.
Initialize CpuArchAttributes to suppress the incorrect
compiler/analyzer warnings.
Cc: Dandan Bi <dandan.bi@intel.com>
Cc: Jiewen Yao <jiewen.yao@intel.com>
Cc: Michael D Kinney <michael.d.kinney@intel.com>
Contributed-under: TianoCore Contribution Agreement 1.1
Signed-off-by: Star Zeng <star.zeng@intel.com>
Reviewed-by: Dandan Bi <dandan.bi@intel.com>
This patch fixes an issue with VlvTbltDevicePkg introduced
by commit 5b91bf82c6.
The history is as below.
To support the heap guard feature, 14dde9e903
added support for SetMemorySpaceAttributes() to handle page attributes,
but after that, a combination of CPU arch attributes and other attributes
was not allowed anymore, for example, UC + RUNTIME. That was a regression.
Then 5b91bf82c6 fixed the regression,
and we thought 0 CPU arch attributes might be used to clear CPU arch
attributes, so 0 CPU arch attributes were allowed to be sent to
gCpu->SetMemoryAttributes().
But some implementations of the CPU driver may return an error for 0 CPU arch
attributes. That breaks the case where the caller just calls
SetMemorySpaceAttributes() with no CPU arch attributes (for example,
RUNTIME) and does not intend to clear the CPU arch attributes.
This patch filters out the call to gCpu->SetMemoryAttributes()
if the requested CPU arch attributes are 0. It also removes the #define
INVALID_CPU_ARCH_ATTRIBUTES that is no longer used.
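A minimal sketch of the filtering:

  if (CpuArchAttributes != 0) {
    //
    // Only call the CPU driver when there is an actual CPU arch attribute
    // (paging/cache) to apply; a 0 value is not a request to clear them.
    //
    Status = gCpu->SetMemoryAttributes (
                     gCpu,
                     BaseAddress,
                     Length,
                     CpuArchAttributes
                     );
  }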
Cc: Heyi Guo <heyi.guo@linaro.org>
Cc: Yi Li <phoenix.liyi@huawei.com>
Cc: Renhao Liang <liangrenhao@huawei.com>
Cc: Star Zeng <star.zeng@intel.com>
Cc: Eric Dong <eric.dong@intel.com>
Cc: Liming Gao <liming.gao@intel.com>
Cc: Jian J Wang <jian.j.wang@intel.com>
Cc: Ruiyu Ni <ruiyu.ni@intel.com>
Signed-off-by: Michael D Kinney <michael.d.kinney@intel.com>
Signed-off-by: Star Zeng <star.zeng@intel.com>
Contributed-under: TianoCore Contribution Agreement 1.1
Reviewed-by: Jiewen Yao <jiewen.yao@intel.com>
Reviewed-by: Michael D Kinney <michael.d.kinney@intel.com>
For gDS->SetMemorySpaceAttributes(), when user passes a combined
memory attribute including CPU arch attribute and other attributes,
like EFI_MEMORY_RUNTIME, ConverToCpuArchAttributes() will return
INVALID_CPU_ARCH_ATTRIBUTES and skip setting page/cache attribute for
the specified memory space.
We don't see any reason to forbid combining CPU arch attributes and
non-CPU-arch attributes when calling gDS->SetMemorySpaceAttributes(),
so we remove the check code in ConverToCpuArchAttributes(); the
remaining code is enough to grab the bits of interest for
Cpu->SetMemoryAttributes().
Contributed-under: TianoCore Contribution Agreement 1.1
Signed-off-by: Heyi Guo <heyi.guo@linaro.org>
Signed-off-by: Yi Li <phoenix.liyi@huawei.com>
Signed-off-by: Renhao Liang <liangrenhao@huawei.com>
Reviewed-by: Star Zeng <star.zeng@intel.com>
Cc: Star Zeng <star.zeng@intel.com>
Cc: Eric Dong <eric.dong@intel.com>
Cc: Michael D Kinney <michael.d.kinney@intel.com>
Cc: Liming Gao <liming.gao@intel.com>
Cc: Jian J Wang <jian.j.wang@intel.com>
Cc: Ruiyu Ni <ruiyu.ni@intel.com>
Within function CoreExitBootServices(), this commit will move the call
of:
MemoryProtectionExitBootServicesCallback();
before:
SaveAndSetDebugTimerInterrupt (FALSE);
and
gCpu->DisableInterrupt (gCpu);
The reason is that, within MemoryProtectionExitBootServicesCallback(),
APIs like RaiseTpl and RestoreTpl may be called. An example is:
DebugLib (using PeiDxeDebugLibReportStatusCode instance)
|
v
ReportStatusCodeLib (using DxeReportStatusCodeLib instance)
|
v
Raise/RestoreTpl
The call of the Raise/RestoreTpl APIs will re-enable BSP interrupts. Hence,
this commit refines the calling sequence to ensure BSP interrupts are disabled
before leaving CoreExitBootServices().
Cc: Jian J Wang <jian.j.wang@intel.com>
Cc: Star Zeng <star.zeng@intel.com>
Cc: Eric Dong <eric.dong@intel.com>
Contributed-under: TianoCore Contribution Agreement 1.1
Signed-off-by: Hao Wu <hao.a.wu@intel.com>
Reviewed-by: Jiewen Yao <jiewen.yao@intel.com>
Reviewed-by: Ruiyu Ni <ruiyu.ni@intel.com>
The SMM core adds a HEADER before each allocated pool memory block and cleans
up this header once it's freed. If a block of allocated pool is marked
as read-only after allocation (EfiRuntimeServicesCode type of pool in
SMM will always be marked as read-only), a #PF exception will be triggered
when the pool memory is freed.
Normally EfiRuntimeServicesCode type of pool should not be freed in the
real world. But some test suites will actually free memory of all
types for the purpose of functionality and conformance testing.
So this issue should be fixed anyway.
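A minimal sketch (helper and variable names assumed) of the idea: drop the
read-only protection before the pool header is touched on free.

  if (PoolType == EfiRuntimeServicesCode) {
    //
    // The pool pages were marked EFI_MEMORY_RO after allocation; clear it
    // before the free path modifies the pool HEADER.
    //
    SmmClearMemoryAttributes (
      (EFI_PHYSICAL_ADDRESS)(UINTN)PoolHeader,
      EFI_PAGES_TO_SIZE (NoPages),
      EFI_MEMORY_RO
      );
  }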
Cc: Star Zeng <star.zeng@intel.com>
Cc: Eric Dong <eric.dong@intel.com>
Cc: Jiewen Yao <jiewen.yao@intel.com>
Cc: Ruiyu Ni <ruiyu.ni@intel.com>
Contributed-under: TianoCore Contribution Agreement 1.1
Signed-off-by: Jian J Wang <jian.j.wang@intel.com>
Reviewed-by: Ruiyu Ni <ruiyu.ni@intel.com>
If the given address is on a 64K boundary and the requested bit number is 64,
SetBits(), ClearBits() and GetBits() will all hit an ASSERT
when trying to do a 64-bit shift, which is not allowed by LShift() and
RShift(). This patch fixes the issue by turning the bit operation
into a whole-integer operation in that situation.
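A minimal sketch (simplified, hypothetical names) of the special case for the
set operation; the clear and get operations are handled the same way:

  if ((StartBit == 0) && (BitNumber == 64)) {
    //
    // Operate on the whole 64-bit integer; a shift count of 64 is not
    // allowed by the shift helpers.
    //
    *BitMap |= Bits;
  } else {
    *BitMap |= LShiftU64 (Bits, StartBit);
  }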
Cc: Star Zeng <star.zeng@intel.com>
Cc: Eric Dong <eric.dong@intel.com>
Cc: Jiewen Yao <jiewen.yao@intel.com>
Cc: Ruiyu Ni <ruiyu.ni@intel.com>
Contributed-under: TianoCore Contribution Agreement 1.1
Signed-off-by: Jian J Wang <jian.j.wang@intel.com>
Reviewed-by: Ruiyu Ni <ruiyu.ni@intel.com>
If the given address is on a 64K boundary and the requested bit number is 64,
SetBits(), ClearBits() and GetBits() will all hit an ASSERT
when trying to do a 64-bit shift, which is not allowed by LShift() and
RShift(). This patch fixes the issue by turning the bit operation
into a whole-integer operation in that situation.
Cc: Star Zeng <star.zeng@intel.com>
Cc: Eric Dong <eric.dong@intel.com>
Cc: Jiewen Yao <jiewen.yao@intel.com>
Cc: Ruiyu Ni <ruiyu.ni@intel.com>
Contributed-under: TianoCore Contribution Agreement 1.1
Signed-off-by: Jian J Wang <jian.j.wang@intel.com>
Reviewed-by: Ruiyu Ni <ruiyu.ni@intel.com>
Due to the fact that HeapGuard needs CpuArchProtocol to update page
attributes, the feature is normally enabled after CpuArchProtocol is
installed. Since some drivers are loaded before CpuArchProtocol,
they cannot make use of the HeapGuard feature to detect potential issues.
This patch fixes the above situation by updating the DXE core to skip the
NULL check against the global gCpu in IsMemoryTypeToGuard(), and adding a
NULL check against gCpu in SetGuardPage() and UnsetGuardPage() to make
sure that they can be called but do nothing. This allows HeapGuard to
record all guarded memory without setting the related Guard pages to not-
present.
Once the CpuArchProtocol is installed, a protocol notify is called
to complete the work of setting Guard pages to not-present.
Please note that the above changes will cause a #PF in GCD code during cleanup
of map entries, which is initiated by the CpuDxe driver to update real MTRR
and paging attributes back to GCD. During that time, CpuDxe doesn't allow
GCD to update memory attributes, so Guard pages cannot be unset.
As a result, this would prevent Guarded memory from being freed during memory
map cleanup.
The solution is to avoid allocating guarded memory as memory map entries
in GCD code. It's done by setting the global mOnGuarding to TRUE before memory
allocation and setting it back to FALSE afterwards in the GCD function
CoreAllocateGcdMapEntry().
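A minimal sketch (simplified to a single entry) of the mOnGuarding usage in
CoreAllocateGcdMapEntry():

  //
  // Suppress guarding while allocating GCD map entries so they never land
  // in guarded memory.
  //
  mOnGuarding = TRUE;
  Entry = AllocatePool (sizeof (EFI_GCD_MAP_ENTRY));
  mOnGuarding = FALSE;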
Cc: Star Zeng <star.zeng@intel.com>
Cc: Eric Dong <eric.dong@intel.com>
Cc: Jiewen Yao <jiewen.yao@intel.com>
Contributed-under: TianoCore Contribution Agreement 1.1
Signed-off-by: Jian J Wang <jian.j.wang@intel.com>
Reviewed-by: Ruiyu Ni <ruiyu.ni@intel.com>
This patch fixes the same issues in Heap Guard that were fixed
for the DXE core in another patch.
Cc: Star Zeng <star.zeng@intel.com>
Cc: Eric Dong <eric.dong@intel.com>
Cc: Jiewen Yao <jiewen.yao@intel.com>
Contributed-under: TianoCore Contribution Agreement 1.1
Signed-off-by: Jian J Wang <jian.j.wang@intel.com>
Reviewed-by: Jiewen Yao <jiewen.yao@intel.com>
There are two ASSERT issues which will be triggered by the boot loader of
Windows 10.
The first is caused by allocating memory in heap guard during another
memory allocation, which is not allowed in the DXE core. Avoiding re-entry
of memory allocation has been considered in the heap guard feature. But
there's a hole in the code of function FindGuardedMemoryMap(). The fix
is adding the AllocMapUnit parameter to the condition of the while(), which
prevents memory allocation from happening during the Guard page
check operation.
The second is caused by the core trying to allocate page 0 with a Guard
page, which causes the start address to roll back to the end of the
supported system address space. According to the requirement of heap guard,
the fix is simply to skip the free memory at page 0 and let
the core continue searching for free memory after it.
Cc: Star Zeng <star.zeng@intel.com>
Cc: Eric Dong <eric.dong@intel.com>
Cc: Jiewen Yao <jiewen.yao@intel.com>
Contributed-under: TianoCore Contribution Agreement 1.1
Signed-off-by: Jian J Wang <jian.j.wang@intel.com>
Reviewed-by: Jiewen Yao <jiewen.yao@intel.com>
The root cause is an unnecessary check of the Size parameter in function
AdjustMemoryS(). It causes a standalone free page (which happens to have
Guard pages around it) in the free memory list to be un-allocatable, even if
the requested memory size is less than a page.
//
// At least one more page needed for Guard page.
//
if (Size < (SizeRequested + EFI_PAGES_TO_SIZE (1))) {
return 0;
}
The following code in the same function actually covers the above check
implicitly. So the fix is simply to remove the above check.
Cc: Star Zeng <star.zeng@intel.com>
Cc: Eric Dong <eric.dong@intel.com>
Cc: Jiewen Yao <jiewen.yao@intel.com>
Contributed-under: TianoCore Contribution Agreement 1.1
Signed-off-by: Jian J Wang <jian.j.wang@intel.com>
Reviewed-by: Jiewen Yao <jiewen.yao@intel.com>
If enabled, the NX memory protection feature will mark some types of active
memory as NX (non-executable), which includes the first page of the stack.
This overwrites the attributes of the first page of the stack if the
stack guard feature is also enabled.
The solution is to override the attribute setting for the first page of
the stack by adding back the 'EFI_MEMORY_RP' attribute when the stack
guard feature is enabled.
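A minimal sketch (assumed variable names, and an internal attribute-setting
helper whose name is assumed) of the override:

  if (PcdGetBool (PcdCpuStackGuard)) {
    //
    // Re-apply the RP attribute to the stack guard page on top of the NX
    // policy applied to the stack range.
    //
    SetUefiImageMemoryAttributes (
      StackBase,
      EFI_PAGES_TO_SIZE (1),
      EFI_MEMORY_RP | Attributes
      );
  }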
Cc: Star Zeng <star.zeng@intel.com>
Cc: Eric Dong <eric.dong@intel.com>
Contributed-under: TianoCore Contribution Agreement 1.1
Signed-off-by: Hao Wu <hao.a.wu@intel.com>
Reviewed-by: Jiewen Yao <jiewen.yao@intel.com>
Reviewed-by: Jian J Wang <jian.j.wang@intel.com>
Reviewed-by: Ruiyu Ni <ruiyu.ni@intel.com>
This commit rewrites the logic in function
InitializeDxeNxMemoryProtectionPolicy() for handling the first page
(page 0) when the NULL pointer detection feature is enabled.
Instead of skipping the attribute setting for page 0, the code will now
override it by adding the 'EFI_MEMORY_RP' attribute.
The purpose is to make it easy for other special handling of pages
(e.g. the first page of the stack when the stack guard feature is enabled).
Cc: Star Zeng <star.zeng@intel.com>
Cc: Eric Dong <eric.dong@intel.com>
Contributed-under: TianoCore Contribution Agreement 1.1
Signed-off-by: Hao Wu <hao.a.wu@intel.com>
Reviewed-by: Jiewen Yao <jiewen.yao@intel.com>
Reviewed-by: Jian J Wang <jian.j.wang@intel.com>
Reviewed-by: Ruiyu Ni <ruiyu.ni@intel.com>
If a PEIM image address doesn't meet its section alignment, the image will
fail to load. PeiCore adds more debug messages to report it.
Contributed-under: TianoCore Contribution Agreement 1.1
Signed-off-by: Liming Gao <liming.gao@intel.com>
Cc: Star Zeng <star.zeng@intel.com>
Reviewed-by: Star Zeng <star.zeng@intel.com>
Update PEI Service ResetSystem() to always attempt to use
the Reset2 PPI before looking for the Reset PPI.
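A minimal sketch (assuming the PI Reset2 PPI GUID name) of the new ordering:

  Status = PeiServicesLocatePpi (&gEfiPeiReset2PpiGuid, 0, NULL, (VOID **)&Reset2Ppi);
  if (!EFI_ERROR (Status)) {
    //
    // Prefer the Reset2 PPI when it is available.
    //
    Reset2Ppi->ResetSystem (ResetType, ResetStatus, DataSize, ResetData);
    return;
  }
  //
  // Otherwise fall back to the legacy EFI_PEI_RESET_PPI.
  //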
Cc: Liming Gao <liming.gao@intel.com>
Reviewed-by: Ruiyu Ni <ruiyu.ni@intel.com>
Cc: Star Zeng <star.zeng@intel.com>
Contributed-under: TianoCore Contribution Agreement 1.1
Signed-off-by: Michael D Kinney <michael.d.kinney@intel.com>
Reviewed-by: Star Zeng <star.zeng@intel.com>
The Heap Guard feature wrapped SmmInternalFreePagesEx with
SmmInternalFreePagesExWithGuard but didn't add the necessary
parameter check. This patch fixes that.
Cc: Star Zeng <star.zeng@intel.com>
Cc: Eric Dong <eric.dong@intel.com>
Contributed-under: TianoCore Contribution Agreement 1.1
Signed-off-by: Jian J Wang <jian.j.wang@intel.com>
Reviewed-by: Star Zeng <star.zeng@intel.com>
SmiHandlerUnRegister() validates the DispatchHandle by checking
whether its first 32 bits match a certain signature
(SMI_HANDLER_SIGNATURE).
But if a caller calls *UnRegister() twice and the memory freed by the
first call still contains the signature, the second call may hang.
This patch fixes the issue by locating the DispatchHandle
among all registered SMI handlers, instead of checking the signature.
Contributed-under: TianoCore Contribution Agreement 1.1
Signed-off-by: Ruiyu Ni <ruiyu.ni@intel.com>
Cc: Jiewen Yao <jiewen.yao@intel.com>
Reviewed-by: Star Zeng <star.zeng@intel.com>
Consider the following scenario (both NX memory protection and heap guard
are enabled):
1. Allocate 3 pages. The attributes of adjacent memory pages will be
|NOT-PRESENT| present | present | present |NOT-PRESENT|
2. Free the middle page. The attributes of adjacent memory pages should be
|NOT-PRESENT| present |NOT-PRESENT| present |NOT-PRESENT|
But the NX feature will overwrite the attributes of the middle page. So it
still looks like below, which is wrong.
|NOT-PRESENT| present | PRESENT | present |NOT-PRESENT|
The solution is to check whether the first and/or last page of a memory block
to be marked as NX is a Guard page, and skip it if so.
Cc: Star Zeng <star.zeng@intel.com>
Cc: Eric Dong <eric.dong@intel.com>
Cc: Jiewen Yao <jiewen.yao@intel.com>
Cc: Ruiyu Ni <ruiyu.ni@intel.com>
Contributed-under: TianoCore Contribution Agreement 1.1
Signed-off-by: Jian J Wang <jian.j.wang@intel.com>
Reviewed-by: Ruiyu Ni <ruiyu.ni@intel.com>
If enabled, the NX memory protection feature will mark all free memory as
NX (non-executable), including page 0. This overwrites the attributes
of page 0 if the NULL pointer detection feature is also enabled, and thereby
compromises its functionality. The solution is to skip setting the NX
attribute on page 0 if the NULL pointer detection feature is enabled.
Cc: Star Zeng <star.zeng@intel.com>
Cc: Eric Dong <eric.dong@intel.com>
Cc: Jiewen Yao <jiewen.yao@intel.com>
Cc: Ruiyu Ni <ruiyu.ni@intel.com>
Contributed-under: TianoCore Contribution Agreement 1.1
Signed-off-by: Jian J Wang <jian.j.wang@intel.com>
Reviewed-by: Ruiyu Ni <ruiyu.ni@intel.com>
This issue is a regression caused by the patch at
425d25699b
That fix didn't take the case of freeing 0 pages into account, which still
needs to call UnsetGuardPage() even though no memory needs to be freed.
The fix simply moves the call to UnsetGuardPage() to the place
right after the call to AdjustMemoryF().
Cc: Star Zeng <star.zeng@intel.com>
Cc: Eric Dong <eric.dong@intel.com>
Contributed-under: TianoCore Contribution Agreement 1.1
Signed-off-by: Jian J Wang <jian.j.wang@intel.com>
Reviewed-by: Ruiyu Ni <ruiyu.ni@intel.com>
"Entry->Link.ForwardLink = NULL;" is present in RemoveMemoryMapEntry()
for DxeCore, that is correct.
"Entry->Link.ForwardLink = NULL;" is absent in RemoveOldEntry()
for PiSmmCore, that is incorrect.
Without this fix, when FromStack in Entry is TRUE,
the "InsertTailList (&mMapStack[mMapDepth].Link, &Entry->Link);" in
following calling to CoreFreeMemoryMapStack() will fail as the entry
at mMapStack[mMapDepth] actually has been removed from the list.
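A minimal sketch of the fix in RemoveOldEntry(), mirroring DxeCore:

  RemoveEntryList (&Entry->Link);
  //
  // Mark the entry as unlinked so a later CoreFreeMemoryMapStack() can tell
  // it is no longer on any list.
  //
  Entry->Link.ForwardLink = NULL;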
Cc: Jiewen Yao <jiewen.yao@intel.com>
Contributed-under: TianoCore Contribution Agreement 1.1
Signed-off-by: Star Zeng <star.zeng@intel.com>
Reviewed-by: Jiewen Yao <jiewen.yao@intel.com>
This hole causes random page faults. The root cause is that a Guard
page, which has just been freed back to the page pool but whose not-present
attribute has not yet been cleared, can be allocated right away by the internal
function CoreFreeMemoryMapStack(). The solution to this issue is to clear the
not-present attribute of a freed Guard page before doing any free
operation, instead of after those operations.
The reason we didn't do this before is the fact that manipulating
page attributes might cause a memory allocation, which would cause a
dead lock inside a memory allocation/free operation. So we always set or
unset Guard pages outside the memory lock. After a thorough analysis, we
believe clearing a Guard page will not cause memory allocation, because
the memory we are about to manipulate was certainly already manipulated before.
Therefore there should be no memory allocation occurring in this
situation.
Since we now clear a Guard page's not-present attribute before freeing instead
of after freeing, the debug code that clears freed memory can be restored
to its original form (i.e. no checking for and bypassing of Guard pages).
Cc: Ruiyu Ni <ruiyu.ni@intel.com>
Cc: Eric Dong <eric.dong@intel.com>
Cc: Star Zeng <star.zeng@intel.com>
Contributed-under: TianoCore Contribution Agreement 1.1
Signed-off-by: Jian J Wang <jian.j.wang@intel.com>
Reviewed-by: Star Zeng <star.zeng@intel.com>
Reviewed-by: Ruiyu Ni <ruiyu.ni@intel.com>
Section data alignment should be handled during build generation.
Contributed-under: TianoCore Contribution Agreement 1.1
Signed-off-by: Liming Gao <liming.gao@intel.com>
Cc: Star Zeng <star.zeng@intel.com>
Reviewed-by: Star Zeng <star.zeng@intel.com>
Section data alignment should be handled during build generation.
Contributed-under: TianoCore Contribution Agreement 1.1
Signed-off-by: Liming Gao <liming.gao@intel.com>
Cc: Star Zeng <star.zeng@intel.com>
Reviewed-by: Star Zeng <star.zeng@intel.com>
If PcdDxeNxMemoryProtectionPolicy is set to enable protection for memory
of EfiReservedMemoryType, the BIOS will hang at a page fault exception
while starting an SMM driver.
The root cause is that SMM RAM is of type EfiReservedMemoryType and
marked as non-executable. The fix is simply removing the NX attribute for
that memory.
Cc: Jiewen Yao <jiewen.yao@intel.com>
Cc: Ruiyu Ni <ruiyu.ni@intel.com>
Cc: Eric Dong <eric.dong@intel.com>
Cc: Star Zeng <star.zeng@intel.com>
Contributed-under: TianoCore Contribution Agreement 1.1
Signed-off-by: Jian J Wang <jian.j.wang@intel.com>
Reviewed-by: Star Zeng <star.zeng@intel.com>
BfvHeader->FvLength is UINT64, but it is currently printed with %x. This
causes the subsequent FvHandle to be printed as zero. So, its value is
converted to UINT32 for printing.
Contributed-under: TianoCore Contribution Agreement 1.1
Signed-off-by: Liming Gao <liming.gao@intel.com>
Cc: Star Zeng <star.zeng@intel.com>
Three issues addressed here:
a. Make NX memory protection and heap guard compatible
The solution is to check PcdDxeNxMemoryProtectionPolicy in Heap Guard to
see if the free memory should be set to NX, and set the Guard page to NX
before it's freed back to the memory pool. This solves the issue where the NX
setting would be overwritten by the Heap Guard feature in certain
configurations.
b. Returned pool address was not 8-byte aligned sometimes
This happened only when BIT7 is not set in PcdHeapGuardPropertyMask. Since
8-byte alignment is required by the UEFI spec, placing the allocated pool
adjacent to the tail guard page cannot be guaranteed.
c. NULL address handling due to allocation failure
When allocation fails, NULL is normally returned. But the Heap Guard
code would still try to adjust the starting address, which caused
a non-NULL pointer to be returned.
Cc: Star Zeng <star.zeng@intel.com>
Cc: Eric Dong <eric.dong@intel.com>
Cc: Jiewen Yao <jiewen.yao@intel.com>
Contributed-under: TianoCore Contribution Agreement 1.1
Signed-off-by: Jian J Wang <jian.j.wang@intel.com>
Reviewed-by: Jiewen Yao <jiewen.yao@intel.com>
The root cause is that mImagePropertiesPrivateData.CodeSegmentCountMax was
not updated with the correct value, due to the fact that SortImageRecord(),
called earlier, might change the content of the current ImageRecord. This in
turn causes incorrect memory map entries to be generated in SplitTable().
Cc: Star Zeng <star.zeng@intel.com>
Cc: Eric Dong <eric.dong@intel.com>
Cc: Jiewen Yao <jiewen.yao@intel.com>
Contributed-under: TianoCore Contribution Agreement 1.1
Signed-off-by: Jian J Wang <jian.j.wang@intel.com>
Reviewed-by: Jiewen Yao <jiewen.yao@intel.com>
The root cause of this issue is that, when splitting a page table entry, the
page size should be the value of the next (smaller) level instead of the
current level. The wrong page size then introduces a wrong page table, which
breaks normal boot.
Validation included booting Windows 10 and Fedora 26 on a real Intel
platform and on the OVMF emulated platform, in addition to manual checks of
the page table with a JTAG tool.
Cc: Ruiyu Ni <ruiyu.ni@intel.com>
Cc: Star Zeng <star.zeng@intel.com>
Cc: Eric Dong <eric.dong@intel.com>
Contributed-under: TianoCore Contribution Agreement 1.1
Signed-off-by: Jian J Wang <jian.j.wang@intel.com>
Reviewed-by: Star Zeng <star.zeng@intel.com>
As some implementations of the SMM Child Dispatcher (including SxDispatch)
may deny handler registration after SmmReadyToLock, using
SxDispatch in SmmReadyToBootHandler() is too late.
This patch updates code to use SxDispatch in SmmEndOfDxeHandler()
instead of SmmReadyToBootHandler().
Cc: Jiewen Yao <jiewen.yao@intel.com>
Contributed-under: TianoCore Contribution Agreement 1.1
Signed-off-by: Star Zeng <star.zeng@intel.com>
Reviewed-by: Jiewen Yao <jiewen.yao@intel.com>
One issue is that the macros defined in HeapGuard.h,
GUARD_HEAP_TYPE_PAGE
GUARD_HEAP_TYPE_POOL
don't match the definition of the PCD PcdHeapGuardPropertyMask in
MdeModulePkg.dec. This patch fixes it by exchanging BIT0 and BIT1
between them.
Another issue is that the method AdjustMemoryF() can return a bigger
NumberOfPages than the value passed in. This is caused by counting a shared
Guard page twice, since it can serve as both the tail Guard of the memory
before it and the head Guard of the memory after it. This happens only when
partially freeing just one page in the middle of a bunch of allocated pages;
the freed page should be turned into a new Guard page.
Cc: Jie Lin <jie.lin@intel.com>
Cc: Star Zeng <star.zeng@intel.com>
Cc: Eric Dong <eric.dong@intel.com>
Contributed-under: TianoCore Contribution Agreement 1.1
Signed-off-by: Jian J Wang <jian.j.wang@intel.com>
Reviewed-by: Star Zeng <star.zeng@intel.com>
One issue is that the macros defined in HeapGuard.h,
GUARD_HEAP_TYPE_PAGE
GUARD_HEAP_TYPE_POOL
don't match the definition of the PCD PcdHeapGuardPropertyMask in
MdeModulePkg.dec. This patch fixes it by exchanging BIT0 and BIT1
between them.
Another issue is that the method AdjustMemoryF() can return a bigger
NumberOfPages than the value passed in. This is caused by counting a shared
Guard page twice, since it can serve as both the tail Guard of the memory
before it and the head Guard of the memory after it. This happens only when
partially freeing just one page in the middle of a bunch of allocated pages;
the freed page should be turned into a new Guard page.
Cc: Jie Lin <jie.lin@intel.com>
Cc: Star Zeng <star.zeng@intel.com>
Cc: Eric Dong <eric.dong@intel.com>
Contributed-under: TianoCore Contribution Agreement 1.1
Signed-off-by: Jian J Wang <jian.j.wang@intel.com>
Reviewed-by: Star Zeng <star.zeng@intel.com>
> MdeModulePkg/Core/PiSmmCore/PiSmmCore.c: In function
> 'SmmReadyToBootHandler':
> MdeModulePkg/Core/PiSmmCore/PiSmmCore.c:323:14: error: passing argument
> 3 of 'SmmLocateProtocol' from incompatible pointer type [-Werror]
> );
> ^
> In file included from MdeModulePkg/Core/PiSmmCore/PiSmmCore.c:15:0:
> MdeModulePkg/Core/PiSmmCore/PiSmmCore.h:586:1: note: expected 'void **'
> but argument is of type 'struct EFI_SMM_SX_DISPATCH2_PROTOCOL **'
> SmmLocateProtocol (
> ^
> cc1: all warnings being treated as errors
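A minimal sketch of the fix implied by the error message: pass the interface
pointer as VOID **.

  Status = SmmLocateProtocol (
             &gEfiSmmSxDispatch2ProtocolGuid,
             NULL,
             (VOID **)&SxDispatch
             );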
Cc: Eric Dong <eric.dong@intel.com>
Cc: Star Zeng <star.zeng@intel.com>
Fixes: 7b9b55b2ef
Contributed-under: TianoCore Contribution Agreement 1.1
Signed-off-by: Laszlo Ersek <lersek@redhat.com>
Reviewed-by: Yao Jiewen <jiewen.yao@intel.com>
Otherwise, LegacyBoot may be triggered wrongly by other code in UEFI OS,
or vice versa.
Cc: Jiewen Yao <jiewen.yao@intel.com>
Contributed-under: TianoCore Contribution Agreement 1.1
Signed-off-by: Star Zeng <star.zeng@intel.com>
Reviewed-by: Jiewen Yao <jiewen.yao@intel.com>
Otherwise, it may be triggered wrongly by other code in the OS.
This patch uses an S3 entry callback to determine whether the system is
in S3 resume, and checks that in SmmReadyToBootHandler().
Cc: Jiewen Yao <jiewen.yao@intel.com>
Cc: Eric Dong <eric.dong@intel.com>
Contributed-under: TianoCore Contribution Agreement 1.1
Signed-off-by: Star Zeng <star.zeng@intel.com>
Reviewed-by: Jiewen Yao <jiewen.yao@intel.com>
Rename SmmEndOfS3ResumeProtocolGuid to EndOfS3ResumeGuid as the GUID
may be used to install PPI in future to notify PEI phase code.
The references in UefiCpuPkg are also being updated.
Cc: Jiewen Yao <jiewen.yao@intel.com>
Cc: Eric Dong <eric.dong@intel.com>
Cc: Laszlo Ersek <lersek@redhat.com>
Contributed-under: TianoCore Contribution Agreement 1.1
Signed-off-by: Star Zeng <star.zeng@intel.com>
Reviewed-by: Jiewen Yao <jiewen.yao@intel.com>
Reviewed-by: Laszlo Ersek <lersek@redhat.com>
Updating the prototype of SmmCommunicationCommunicate()
was missed in d1632f694b.
This patch adds it.
Cc: Dandan Bi <dandan.bi@intel.com>
Cc: Jiewen Yao <jiewen.yao@intel.com>
Contributed-under: TianoCore Contribution Agreement 1.1
Signed-off-by: Star Zeng <star.zeng@intel.com>
Reviewed-by: Dandan Bi <dandan.bi@intel.com>
This patch sets the memory pages used for the page table as read-only
memory after paging is set up. CR0.WP must be set for this to take
effect.
A simple page table memory management mechanism, the page table pool concept,
is introduced to simplify page table memory allocation and protection.
It also helps reduce the potential recursive "split" actions while
updating memory paging attributes.
The basic idea is to allocate a bunch of contiguous pages of memory in
advance as one or more page table pools, and all future page table
consumption will happen in those pools instead of in general system memory.
If the page table pool is reserved at a 2MB page boundary and has the same
size as a 2MB page, no page granularity "split" operation will be needed,
because the memory for new page tables (if needed) will usually be in the
same page as the target page table you're working on.
And since we have centralized page tables (a few 2MB pages), it's easier
to protect them by changing their attributes to read-only once and for
all. There's no need to apply the protection for new page tables any more
as long as the pool has free pages available.
Once the current page table pool has been used up, one can allocate another
2MB memory pool and just set this new 2MB memory block to read-only, instead
of setting the new page tables read-only one page at a time.
Two new PCDs, PcdPageTablePoolUnitSize and PcdPageTablePoolAlignment, are used
to specify the size and alignment of the page table pool. For IA32 processors,
0x200000 (2MB) is the only choice for both of them to meet the requirements of
the page table pool.
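A minimal sketch (simplified; the real code tracks free pages and offsets in
the pool, and the protection helper name is hypothetical) of reserving one
pool and protecting it as a whole:

  //
  // Reserve one 2MB-aligned, 2MB-sized pool; all page tables are carved out
  // of it, so a single read-only operation protects them all.
  //
  Pool = AllocateAlignedPages (EFI_SIZE_TO_PAGES (SIZE_2MB), SIZE_2MB);
  ASSERT (Pool != NULL);
  SetMemoryPageAttributesReadOnly (Pool, SIZE_2MB);   // hypothetical helper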
Laszlo (lersek@redhat.com) did a regression test on the QEMU virtual platform
with an intermediate version of this patch series. The details can be found at
https://lists.01.org/pipermail/edk2-devel/2017-December/018625.html
There have been a few changes since his work.
Cc: Jiewen Yao <jiewen.yao@intel.com>
Cc: Star Zeng <star.zeng@intel.com>
Cc: Eric Dong <eric.dong@intel.com>
Cc: Ruiyu Ni <ruiyu.ni@intel.com>
Contributed-under: TianoCore Contribution Agreement 1.1
Signed-off-by: Jian J Wang <jian.j.wang@intel.com>
Reviewed-by: Jiewen Yao <jiewen.yao@intel.com>
Move the ClearFirst4KPage/IsNullDetectionEnabled definitions from DxeIpl.h to
VirtualMemory.h, as they are implemented in VirtualMemory.c and only used
for the IA32/X64 arch.
Cc: Jian J Wang <jian.j.wang@intel.com>
Contributed-under: TianoCore Contribution Agreement 1.1
Signed-off-by: Star Zeng <star.zeng@intel.com>
Reviewed-by: Jian J Wang <jian.j.wang@intel.com>
The stack guard feature makes use of the paging mechanism to detect whether a
stack overflow occurs during boot.
This patch checks the setting of the PCD PcdCpuStackGuard. If it's TRUE, DxeIpl
will set up the page table and mark the page at which the stack base is located
as NOT PRESENT. If the stack is used up and a memory access crosses into its
last page, a #PF exception will be triggered.
Cc: Star Zeng <star.zeng@intel.com>
Cc: Eric Dong <eric.dong@intel.com>
Cc: Jiewen Yao <jiewen.yao@intel.com>
Suggested-by: Ayellet Wolman <ayellet.wolman@intel.com>
Contributed-under: TianoCore Contribution Agreement 1.1
Signed-off-by: Jian J Wang <jian.j.wang@intel.com>
Reviewed-by: Jeff Fan <vanjeff_919@hotmail.com>
Reviewed-by: Jiewen.yao@intel.com
The original API InitializeCpuExceptionHandlers is used in DxeMain to
initialize exception handlers, but it does not support setting up the stack
switch required by the Stack Guard feature. Use the new API instead to make
sure the Stack Guard feature is applicable to most parts of the code.
Since this API is called before memory service initialization, there's no
way to call AllocateXxx APIs to reserve memory. Global variables are used
for this special case. At least 2KB is reserved for the GDT table, which
should be big enough for all current use cases.
Cc: Star Zeng <star.zeng@intel.com>
Cc: Eric Dong <eric.dong@intel.com>
Cc: Jiewen Yao <jiewen.yao@intel.com>
Suggested-by: Ayellet Wolman <ayellet.wolman@intel.com>
Contributed-under: TianoCore Contribution Agreement 1.1
Signed-off-by: Jian J Wang <jian.j.wang@intel.com>
Reviewed-by: Jeff Fan <vanjeff_919@hotmail.com>
Reviewed-by: Jiewen.yao@intel.com
Handle the case where CommSize is OPTIONAL for SmmCommunicate,
and return EFI_ACCESS_DENIED when the CommunicationBuffer
is not valid for SMM to access.
Cc: Jiewen Yao <jiewen.yao@intel.com>
Cc: Liming Gao <liming.gao@intel.com>
Contributed-under: TianoCore Contribution Agreement 1.1
Signed-off-by: Star Zeng <star.zeng@intel.com>
Reviewed-by: Jiewen Yao <jiewen.yao@intel.com>
In commit 7eb927db3e ("MdeModulePkg/DxeCore: implement memory protection
policy", 2017-02-24), we added two informative messages with the
InitializeDxeNxMemoryProtectionPolicy() function:
> InitializeDxeNxMemoryProtectionPolicy: applying strict permissions to
> active memory regions
and
> InitializeDxeNxMemoryProtectionPolicy: applying strict permissions to
> inactive memory regions
The messages don't report errors or warnings, thus downgrade their log
masks from DEBUG_ERROR to DEBUG_INFO.
Cc: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Cc: Eric Dong <eric.dong@intel.com>
Cc: Jiewen Yao <jiewen.yao@intel.com>
Cc: Liming Gao <liming.gao@intel.com>
Cc: Star Zeng <star.zeng@intel.com>
Ref: https://bugzilla.redhat.com/show_bug.cgi?id=1520485
Contributed-under: TianoCore Contribution Agreement 1.1
Signed-off-by: Laszlo Ersek <lersek@redhat.com>
Acked-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Reviewed-by: Star Zeng <star.zeng@intel.com>
It fixes the 'for loop has empty body' warning [-Werror,-Wempty-body].
Contributed-under: TianoCore Contribution Agreement 1.1
Signed-off-by: Liang Vincent <vincent.liang@intel.com>
Reviewed-by: Star Zeng <star.zeng@intel.com>
The USED_SIZE FV_EXT_TYPE is introduced by PI 1.6 spec.
The EFI_FIRMWARE_VOLUME_EXT_ENTRY_USED_SIZE_TYPE can be used to find
out how many EFI_FVB2_ERASE_POLARITY bytes are at the end of the FV.
When the FV gets shadowed into memory you only need to copy the used
bytes into memory and fill the rest of the memory buffer with the
erase value.
Cc: Liming Gao <liming.gao@intel.com>
Contributed-under: TianoCore Contribution Agreement 1.1
Signed-off-by: Star Zeng <star.zeng@intel.com>
Reviewed-by: Liming Gao <liming.gao@intel.com>
There is no need to allocate an aligned buffer if the FvImage is already
at the required alignment.
The code logic is then aligned with ProcessFvFile() in
MdeModulePkg/Core/Pei/FwVol/FwVol.c.
Cc: Liming Gao <liming.gao@intel.com>
Contributed-under: TianoCore Contribution Agreement 1.1
Signed-off-by: Star Zeng <star.zeng@intel.com>
Reviewed-by: Liming Gao <liming.gao@intel.com>
The USED_SIZE FV_EXT_TYPE is introduced by PI 1.6 spec.
The EFI_FIRMWARE_VOLUME_EXT_ENTRY_USED_SIZE_TYPE can be used to find
out how many EFI_FVB2_ERASE_POLARITY bytes are at the end of the FV.
When the FV gets shadowed into memory you only need to copy the used
bytes into memory and fill the rest of the memory buffer with the
erase value.
Cc: Liming Gao <liming.gao@intel.com>
Contributed-under: TianoCore Contribution Agreement 1.1
Signed-off-by: Star Zeng <star.zeng@intel.com>
Reviewed-by: Liming Gao <liming.gao@intel.com>
Once the paging capabilities have been filtered out, there might be some
adjacent entries sharing the same capabilities. It's recommended to merge
those entries for OS compatibility purposes.
This patch makes use of the existing method MergeMemoryMap() to do it. This is
done by simply turning this method from static to extern, and calling it after
the filter code.
This patch is related to an issue described at
https://bugzilla.tianocore.org/show_bug.cgi?id=753
This patch also passed tests booting the following OSs:
Windows 10
Windows Server 2016
Fedora 26
Fedora 25
Cc: Jiewen Yao <jiewen.yao@intel.com>
Cc: Star Zeng <star.zeng@intel.com>
Cc: Laszlo Ersek <lersek@redhat.com>
Contributed-under: TianoCore Contribution Agreement 1.1
Signed-off-by: Jian J Wang <jian.j.wang@intel.com>
Reviewed-by: Star Zeng <star.zeng@intel.com>
Acked-by: Laszlo Ersek <lersek@redhat.com>
Some OSs treat EFI_MEMORY_DESCRIPTOR.Attribute as the attributes actually
set, and change memory paging attributes accordingly.
But currently EFI_MEMORY_DESCRIPTOR.Attribute is assigned the
value from Capabilities in the GCD memory map. This might cause
boot problems. Clearing all paging related capabilities can
work around it. The code added in this patch is supposed to
be removed once the usage of EFI_MEMORY_DESCRIPTOR.Attribute
is clarified in the UEFI spec and adopted by both the EDK-II Core and
all supported OSs.
Laszlo did a thorough test on the OVMF emulated platform. The details
can be found at
https://bugzilla.tianocore.org/show_bug.cgi?id=753#c10
Cc: Jiewen Yao <jiewen.yao@intel.com>
Cc: Star Zeng <star.zeng@intel.com>
Cc: Laszlo Ersek <lersek@redhat.com>
Cc: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Contributed-under: TianoCore Contribution Agreement 1.1
Signed-off-by: Jian J Wang <jian.j.wang@intel.com>
Tested-by: Laszlo Ersek <lersek@redhat.com>
Reviewed-by: Star Zeng <star.zeng@intel.com>
Reviewed-by: Laszlo Ersek <lersek@redhat.com>
In the methods DumpGuardedMemoryBitmap() and SetAllGuardPages(), the code
didn't check whether the global mMapLevel holds a legal value, which leaves
a logic hole causing a potential array overflow in the code that follows.
This patch adds a sanity check before any array reference in those methods.
Cc: Wu Hao <hao.a.wu@intel.com>
Cc: Star Zeng <star.zeng@intel.com>
Cc: Eric Dong <eric.dong@intel.com>
Contributed-under: TianoCore Contribution Agreement 1.1
Signed-off-by: Jian J Wang <jian.j.wang@intel.com>
Reviewed-by: Wu Hao <hao.a.wu@intel.com>
The coding style requires that header files also be added to the module's INF
file, as long as they're included by C files. This patch fixes that issue.
Cc: Dandan Bi <dandan.bi@intel.com>
Cc: Star Zeng <star.zeng@intel.com>
Cc: Eric Dong <eric.dong@intel.com>
Contributed-under: TianoCore Contribution Agreement 1.1
Signed-off-by: Jian J Wang <jian.j.wang@intel.com>
Reviewed-by: Dandan Bi <dandan.bi@intel.com>