BZ: https://bugzilla.tianocore.org/show_bug.cgi?id=3223
In the current design, memory protection is not available until CpuDxe
is loaded. To resolve this, introduce CpuArchLib to move the
CPU architectural initialization to DxeCore.
Cc: Eric Dong <eric.dong@intel.com>
Cc: Ray Ni <ray.ni@intel.com>
Cc: Laszlo Ersek <lersek@redhat.com>
Cc: Rahul Kumar <rahul1.kumar@intel.com>
Cc: Vitaly Cheptsov <vit9696@protonmail.com>
Signed-off-by: Marvin Häuser <mhaeuser@posteo.de>
EFI_RESOURCE_MEMORY_UNACCEPTED has been officially defined in the PI
1.8 specification, so all temporary solutions have been replaced with
the actual definition.
Cc: Felix Polyudov <felixp@ami.com>
Cc: Dhanaraj V <vdhanaraj@ami.com>
Cc: Liming Gao <gaoliming@byosoft.com.cn>
Signed-off-by: Sachin Ganesh <sachinganesh@ami.com>
Reviewed-by: Liming Gao <gaoliming@byosoft.com.cn>
This patch fixes a use-after-free issue where unregistering an
SMI handler could lead to the deletion of the SMI_HANDLER while it is
still in use by SmiManage(). The fix involves modifying
SmiHandlerUnRegister() to detect whether it is being called from
within the SmiManage() stack. If so, the removal of the SMI_HANDLER
is deferred until SmiManage() has finished executing.
Additionally, due to the possibility of recursive SmiManage() calls,
the unregistration and subsequent removal of the SMI_HANDLER are
ensured to occur only after the outermost SmiManage() invocation has
completed.
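A minimal sketch of the deferred-removal approach (mSmiManageCallCount
and the ToRemove flag are illustrative names, not necessarily the exact
implementation):

  EFI_STATUS
  EFIAPI
  SmiHandlerUnRegister (
    IN EFI_HANDLE  DispatchHandle
    )
  {
    SMI_HANDLER  *SmiHandler;

    SmiHandler = (SMI_HANDLER *)DispatchHandle;
    // (validation of DispatchHandle elided)

    if (mSmiManageCallCount != 0) {
      //
      // SmiManage() is on the call stack: mark the handler and let the
      // outermost SmiManage() invocation perform the actual removal.
      //
      SmiHandler->ToRemove = TRUE;
      return EFI_SUCCESS;
    }

    RemoveEntryList (&SmiHandler->Link);
    FreePool (SmiHandler);
    return EFI_SUCCESS;
  }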
Cc: Liming Gao <gaoliming@byosoft.com.cn>
Cc: Jiaxin Wu <jiaxin.wu@intel.com>
Reviewed-by: Ray Ni <ray.ni@intel.com>
Cc: Laszlo Ersek <lersek@redhat.com>
Signed-off-by: Zhiguang Liu <zhiguang.liu@intel.com>
The functionality to create and delete Image Records has been
consolidated in a library, and MemoryProtection.c's usage is now
encapsulated there.
This patch updates MemoryProtection.c to reuse the code in the lib,
preventing future issues where code is updated in one place but not
the other.
Cc: Liming Gao <gaoliming@byosoft.com.cn>
Cc: Taylor Beebe <taylor.d.beebe@gmail.com>
Acked-by: Michael D Kinney <michael.d.kinney@intel.com>
Reviewed-by: Ard Biesheuvel <ardb@kernel.org>
Signed-off-by: Oliver Smith-Denny <osde@linux.microsoft.com>
Currently, there are multiple issues when page or pool guards are
allocated for runtime memory regions that require non-EFI_PAGE_SIZE
alignment. Multiple other issues have recently been fixed for these
same systems (notably ARM64, which has a 64k runtime page allocation
granularity). The heap guard system is only built to support 4k guard
pages and 4k alignment.
Today, the address returned to a caller of AllocatePages will not be
aligned correctly to the runtime page allocation granularity, because
the heap guard system does not take non-4k alignment requirements into
consideration.
However, even with this bug fixed, the Memory Allocation Table cannot be
produced and an OS with a larger than 4k page granularity will not have
aligned memory regions because the guard pages are reported as part of
the same memory allocation. So what would have been, on an ARM64 system,
a 64k runtime memory allocation is actually a 72k memory allocation as
tracked by the Page.c code because the guard pages are tracked as part
of the same allocation. This is a core function of the current heap
guard architecture.
This could also be fixed by rearchitecting the heap guard system to
respect alignment requirements and shift the guard pages inside of the
outer rounded allocation, or by having guard pages be the runtime
granularity. Both of these approaches have issues. In the former case,
we break UEFI spec 2.10 section 2.3.6 for AARCH64, which states that
each 64k page for runtime memory regions may not have mixed memory
attributes, which pushing the guard pages inside would create. In the
latter case, an immense amount of memory is wasted to support such large
guard pages, and with pool guard many systems could not support an
additional 128k allocation for all runtime memory.
The simpler and safer solution is to disallow page and pool guards for
runtime memory allocations for systems that have a runtime granularity
greater than the EFI_PAGE_SIZE (4k). The usefulness of such guards is
limited, as OSes do not map guard pages today, so there is only boot
time protection of these ranges. This also prevents other bugs from
being exposed by using guards for regions that have a non-4k alignment
requirement, as again, multiple have cropped up because the heap guard
system was not built to support it.
This patch adds a static assert to ensure that either the runtime
granularity is the EFI_PAGE_SIZE or that the PCD bits enabling heap
guard for runtime memory regions are not set. It also adds a check in
the page and pool allocation code to ensure that we do not attempt to
guard a runtime-type allocation (the PCDs are close to being removed
in favor of dynamic heap guard configurations).
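A minimal sketch of such a static assert, assuming the heap guard type
PCDs and a bitmask of the runtime memory types (the mask shown is
illustrative, derived from EFI_MEMORY_TYPE bit positions):

  //
  // Bits in PcdHeapGuardPageType/PcdHeapGuardPoolType index EFI_MEMORY_TYPE:
  // EfiReservedMemoryType (BIT0), EfiRuntimeServicesCode (BIT5),
  // EfiRuntimeServicesData (BIT6), EfiACPIMemoryNVS (BIT10).
  //
  #define RUNTIME_TYPE_MASK  (BIT0 | BIT5 | BIT6 | BIT10)

  STATIC_ASSERT (
    (RUNTIME_PAGE_ALLOCATION_GRANULARITY == EFI_PAGE_SIZE) ||
    ((FixedPcdGet64 (PcdHeapGuardPageType) & RUNTIME_TYPE_MASK) == 0),
    "Heap guard is not supported for runtime memory on systems with a runtime granularity larger than 4k"
    );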
BZ: https://bugzilla.tianocore.org/show_bug.cgi?id=4674
Github PR: https://github.com/tianocore/edk2/pull/5382
Cc: Leif Lindholm <quic_llindhol@quicinc.com>
Cc: Ard Biesheuvel <ardb+tianocore@kernel.org>
Cc: Sami Mujawar <sami.mujawar@arm.com>
Cc: Liming Gao <gaoliming@byosoft.com.cn>
Signed-off-by: Oliver Smith-Denny <osde@linux.microsoft.com>
Reviewed-by: Liming Gao <gaoliming@byosoft.com.cn>
Per the UEFI spec 2.10, section 2.3.6 (for the AARCH64 arch, other
architectures in section two confirm the same) the memory types that
need runtime page allocation granularity are EfiReservedMemoryType,
EfiACPIMemoryNVS, EfiRuntimeServicesCode, and EfiRuntimeServicesData.
However, legacy code was setting runtime page allocation granularity for
EfiACPIReclaimMemory and not EfiReservedMemoryType. This patch fixes
that error.
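A sketch of the corrected mapping (the helper function is hypothetical;
the memory type list follows the commit text):

  UINTN
  MemoryTypeGranularity (          // hypothetical helper
    IN EFI_MEMORY_TYPE  MemoryType
    )
  {
    if ((MemoryType == EfiReservedMemoryType)  ||
        (MemoryType == EfiACPIMemoryNVS)       ||
        (MemoryType == EfiRuntimeServicesCode) ||
        (MemoryType == EfiRuntimeServicesData)) {
      return RUNTIME_PAGE_ALLOCATION_GRANULARITY;
    }

    return DEFAULT_PAGE_ALLOCATION_GRANULARITY;
  }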
Cc: Leif Lindholm <quic_llindhol@quicinc.com>
Cc: Ard Biesheuvel <ardb+tianocore@kernel.org>
Cc: Sami Mujawar <sami.mujawar@arm.com>
Cc: Liming Gao <gaoliming@byosoft.com.cn>
Signed-off-by: Oliver Smith-Denny <osde@linux.microsoft.com>
Suggested-by: Ard Biesheuvel <ardb+tianocore@kernel.org>
Reviewed-by: Liming Gao <gaoliming@byosoft.com.cn>
CodeQL flags the Free Pages logic for not ensuring that
Entry is non-null before using it. Add a check for this
and appropriately bail out if we hit this case.
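A sketch of the added guard (the lookup call and labels are
illustrative, not the exact code):

  Entry = FindPageEntry (Memory);   // hypothetical lookup
  if (Entry == NULL) {
    //
    // Bail out rather than dereferencing a missing entry.
    //
    Status = EFI_NOT_FOUND;
    goto Done;
  }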
Cc: Liming Gao <gaoliming@byosoft.com.cn>
Signed-off-by: Oliver Smith-Denny <osde@linux.microsoft.com>
Reviewed-by: Liming Gao <gaoliming@byosoft.com.cn>
REF: https://bugzilla.tianocore.org/show_bug.cgi?id=4697
The EvacuateTempRam function copies the temporary memory context to the
rebased pages and the raw pages. Migration of rebased PEIMs is from cache
to memory, while migration of raw PEIMs is from memory to memory, so
migrating raw PEIMs is slower than migrating rebased PEIMs. Experimental
data indicates that changing the source address of the raw PEIM migration
improves performance by 35%.
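A hedged sketch of the idea (all variable names are assumptions): source
the raw copy from the cache-backed temporary RAM image instead of the
copy already placed in permanent memory, so both migrations stream from
cache:

  //
  // Rebased copy: temporary RAM (cache) -> permanent memory.
  //
  CopyMem (RebasedFvBase, (VOID *)(UINTN)TempRamFvBase, FvSize);

  //
  // Raw copy: previously sourced from the copy already in permanent
  // memory (memory -> memory); now also sourced from temporary RAM,
  // so it is cache -> memory as well.
  //
  CopyMem (RawFvBase, (VOID *)(UINTN)TempRamFvBase, FvSize);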
Cc: Liming Gao <gaoliming@byosoft.com.cn>
Signed-off-by: Zhihao Li <zhihao.li@intel.com>
Message-Id: <20240301071147.519-1-zhihao.li@intel.com>
Reviewed-by: Liming Gao <gaoliming@byosoft.com.cn>
Reviewed-by: Michael Kubacki <michael.kubacki@microsoft.com>
The last patch added code to support unregistering an SMI handler from
inside the handler itself. However, the code does not support
unregistering an SMI handler from inside another SMI handler. Since this
is not a must-have usage, add a check to disallow it.
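A sketch of the added check, with hypothetical bookkeeping
(mCurrentSmiHandler tracking the handler being executed is an
assumption):

  if ((mSmiManageCallCount != 0) && (SmiHandler != mCurrentSmiHandler)) {
    //
    // Unregistering a *different* handler from inside an SMI handler is
    // not supported; only self-unregistration is allowed.
    //
    return EFI_INVALID_PARAMETER;
  }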
Cc: Liming Gao <gaoliming@byosoft.com.cn>
Cc: Jiaxin Wu <jiaxin.wu@intel.com>
Cc: Ray Ni <ray.ni@intel.com>
Cc: Laszlo Ersek <lersek@redhat.com>
Signed-off-by: Zhiguang Liu <zhiguang.liu@intel.com>
Message-Id: <20240301030133.628-3-zhiguang.liu@intel.com>
Reviewed-by: Ray Ni <ray.ni@intel.com>
Reviewed-by: Laszlo Ersek <lersek@redhat.com>
To support unregistering an SMI handler inside the SMI handler itself,
get the next node before the SMI handler is executed, since the
LIST_ENTRY that Link points to may be freed if the SMI handler
unregisters itself.
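This is the classic safe-iteration pattern; a minimal sketch (list-head
and field names follow common edk2 conventions, not necessarily the
exact code):

  Link = Head->ForwardLink;
  while (Link != Head) {
    SmiHandler = CR (Link, SMI_HANDLER, Link, SMI_HANDLER_SIGNATURE);
    //
    // Capture the next node first: the handler may unregister itself
    // and free the node that Link points to.
    //
    Link = Link->ForwardLink;

    Status = SmiHandler->Handler (
               (EFI_HANDLE)SmiHandler,
               Context,
               CommBuffer,
               CommBufferSize
               );
  }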
Cc: Liming Gao <gaoliming@byosoft.com.cn>
Cc: Jiaxin Wu <jiaxin.wu@intel.com>
Reviewed-by: Ray Ni <ray.ni@intel.com>
Reviewed-by: Laszlo Ersek <lersek@redhat.com>
Signed-off-by: Zhiguang Liu <zhiguang.liu@intel.com>
Message-Id: <20240301030133.628-2-zhiguang.liu@intel.com>
Rename Page5LevelSupported to Page5LevelEnabled.
The variable is set to TRUE in case 5-level paging is enabled (64-bit
PEI) or will be enabled (32-bit PEI); it does *not* tell whether
5-level paging is supported by the CPU.
Signed-off-by: Gerd Hoffmann <kraxel@redhat.com>
Reviewed-by: Laszlo Ersek <lersek@redhat.com>
Acked-by: Ard Biesheuvel <ardb@kernel.org>
Message-Id: <20240222105407.75735-3-kraxel@redhat.com>
Cc: Michael Roth <michael.roth@amd.com>
Cc: Jiewen Yao <jiewen.yao@intel.com>
Cc: Liming Gao <gaoliming@byosoft.com.cn>
Cc: Laszlo Ersek <lersek@redhat.com>
Cc: Tom Lendacky <thomas.lendacky@amd.com>
Cc: Paolo Bonzini <pbonzini@redhat.com>
Cc: Ard Biesheuvel <ardb+tianocore@kernel.org>
Cc: Gerd Hoffmann <kraxel@redhat.com>
Cc: Min Xu <min.m.xu@intel.com>
Cc: Erdem Aktas <erdemaktas@google.com>
Cc: Oliver Steffen <osteffen@redhat.com>
Cc: Ard Biesheuvel <ardb@kernel.org>
[lersek@redhat.com: turn the "Cc:" message headers from Gerd's on-list
posting into "Cc:" tags in the commit message, in order to pacify
"PatchCheck.py"]
PcdUse5LevelPageTable documentation says:
Indicates if 5-Level Paging will be enabled in long mode. 5-Level
Paging will not be enabled when the PCD is TRUE but CPU doesn't support
5-Level Paging.
So running in 4-level paging mode with PcdUse5LevelPageTable=TRUE is
possible. The only invalid combination is 5-level paging being active
with PcdUse5LevelPageTable=FALSE.
Fix the ASSERT accordingly.
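A sketch of the corrected assertion (Page5LevelActive stands for
whatever the code derives from CR4.LA57; the name is illustrative):

  //
  // 4-level paging with PcdUse5LevelPageTable==TRUE is valid (the CPU
  // may lack LA57 support); only active 5-level paging with the PCD
  // FALSE is a bug.
  //
  if (Page5LevelActive) {
    ASSERT (PcdGetBool (PcdUse5LevelPageTable));
  }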
Signed-off-by: Gerd Hoffmann <kraxel@redhat.com>
Reviewed-by: Laszlo Ersek <lersek@redhat.com>
Acked-by: Ard Biesheuvel <ardb@kernel.org>
Message-Id: <20240222105407.75735-2-kraxel@redhat.com>
Cc: Michael Roth <michael.roth@amd.com>
Cc: Jiewen Yao <jiewen.yao@intel.com>
Cc: Liming Gao <gaoliming@byosoft.com.cn>
Cc: Laszlo Ersek <lersek@redhat.com>
Cc: Tom Lendacky <thomas.lendacky@amd.com>
Cc: Paolo Bonzini <pbonzini@redhat.com>
Cc: Ard Biesheuvel <ardb+tianocore@kernel.org>
Cc: Gerd Hoffmann <kraxel@redhat.com>
Cc: Min Xu <min.m.xu@intel.com>
Cc: Erdem Aktas <erdemaktas@google.com>
Cc: Oliver Steffen <osteffen@redhat.com>
Cc: Ard Biesheuvel <ardb@kernel.org>
[lersek@redhat.com: turn the "Cc:" message headers from Gerd's on-list
posting into "Cc:" tags in the commit message, in order to pacify
"PatchCheck.py"]
RuntimeDxe is used to back the runtime services time functions,
so align the description of the function return values with the
defined values for these services as described in UEFI Spec 2.10.
REF: UEFI spec 2.10 section 8 Services - Runtime Services
Cc: Liming Gao <gaoliming@byosoft.com.cn>
Cc: Michael D Kinney <michael.d.kinney@intel.com>
Signed-off-by: Suqiang Ren <suqiangx.ren@intel.com>
Reviewed-by: Liming Gao <gaoliming@byosoft.com.cn>
CoreConnectSingleController() searches for Driver Family Override
Protocol drivers by looping over and checking each Driver Binding
Handle. This loop can be skipped by first checking whether any Driver
Family Override Protocol is installed in the platform, improving
performance.
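A sketch of the early-out, using the DxeCore-internal locate (exact
placement is assumed):

  VOID  *Instance;

  Status = CoreLocateProtocol (
             &gEfiDriverFamilyOverrideProtocolGuid,
             NULL,
             &Instance
             );
  //
  // Only walk the Driver Binding Handles looking for the Driver Family
  // Override Protocol when at least one instance exists.
  //
  DriverFamilyOverridePresent = !EFI_ERROR (Status);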
Cc: Ray Ni <ray.ni@intel.com>
Reviewed-by: Liming Gao <gaoliming@byosoft.com.cn>
Reviewed-by: Michael D Kinney <michael.d.kinney@intel.com>
Signed-off-by: Zhi Jin <zhi.jin@intel.com>
CoreGetProtocolInterface() is called by CoreOpenProtocol(),
CoreCloseProtocol() and CoreOpenProtocolInformation().
Before CoreOpenProtocol() calls CoreGetProtocolInterface(), the input
parameter UserHandle has already been validated; the same is true for
CoreCloseProtocol().
Removing the handle validation check in CoreGetProtocolInterface()
improves performance, as CoreOpenProtocol() is called very frequently.
To preserve the assumption that every caller of
CoreGetProtocolInterface() passes in a UserHandle already validated by
CoreValidateHandle(), add the parameter check to
CoreOpenProtocolInformation(), and declare CoreGetProtocolInterface()
as static.
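A sketch of the added validation, placed at the top of
CoreOpenProtocolInformation() per the commit text:

  Status = CoreValidateHandle (UserHandle);
  if (EFI_ERROR (Status)) {
    return Status;
  }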
Reviewed-by: Ray Ni <ray.ni@intel.com>
Reviewed-by: Liming Gao <gaoliming@byosoft.com.cn>
Reviewed-by: Michael D Kinney <michael.d.kinney@intel.com>
Signed-off-by: Zhi Jin <zhi.jin@intel.com>
Provide an optional method for PEI to declare a specific address
range to use for the Memory Type Information bins. The current
algorithm uses heuristics that tend to place the Memory Type
Information bins in the same location, but memory configuration
changes across boots or algorithm changes across a firmware
update could potentially change the Memory Type Information bin
location. If the bin locations move across an S4 save/resume
cycle, then the S4 resume may fail. Enabling this feature
increases the number of scenarios in which an S4 resume operation
may succeed.
If the HOB List contains a Resource Descriptor HOB that
describes tested system memory and has an Owner GUID of
gEfiMemoryTypeInformationGuid, then use the address range
described by the Resource Descriptor HOB as the preferred
location of the Memory Type Information bins. If this HOB is
not detected, then the current behavior is preserved.
The HOB with an Owner GUID of gEfiMemoryTypeInformationGuid
is ignored for the following conditions:
* The HOB with an Owner GUID of gEfiMemoryTypeInformationGuid
is smaller than the Memory Type Information bins.
* The HOB list contains more than one Resource Descriptor HOB
with an owner GUID of gEfiMemoryTypeInformationGuid.
* The Resource Descriptor HOB with an Owner GUID of
gEfiMemoryTypeInformationGuid is the same Resource Descriptor
HOB that describes the PHIT memory range.
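A minimal sketch of detecting such a Resource Descriptor HOB using
standard HobLib calls (the size, uniqueness, and PHIT checks listed
above are elided):

  EFI_PEI_HOB_POINTERS  Hob;
  EFI_PHYSICAL_ADDRESS  Base;
  UINT64                Length;

  Hob.Raw = GetHobList ();
  while ((Hob.Raw = GetNextHob (EFI_HOB_TYPE_RESOURCE_DESCRIPTOR, Hob.Raw)) != NULL) {
    if ((Hob.ResourceDescriptor->ResourceType == EFI_RESOURCE_SYSTEM_MEMORY) &&
        CompareGuid (&Hob.ResourceDescriptor->Owner, &gEfiMemoryTypeInformationGuid)) {
      //
      // Candidate range for the Memory Type Information bins.
      //
      Base   = Hob.ResourceDescriptor->PhysicalStart;
      Length = Hob.ResourceDescriptor->ResourceLength;
    }

    Hob.Raw = GET_NEXT_HOB (Hob);
  }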
Cc: Liming Gao <gaoliming@byosoft.com.cn>
Cc: Aaron Li <aaron.li@intel.com>
Cc: Liu Yun <yun.y.liu@intel.com>
Cc: Andrew Fish <afish@apple.com>
Cc: Laszlo Ersek <lersek@redhat.com>
Signed-off-by: Michael D Kinney <michael.d.kinney@intel.com>
Reviewed-by: Liming Gao <gaoliming@byosoft.com.cn>
Reviewed-by: Laszlo Ersek <lersek@redhat.com>
Update the DxeMain initialization order to initialize GCD
services before any runtime allocations are performed. This
is required to prevent runtime data fragmentation when the
UEFI System Table and UEFI Runtime Service Table are allocated
before both the memory and GCD services are initialized.
Cc: Liming Gao <gaoliming@byosoft.com.cn>
Cc: Aaron Li <aaron.li@intel.com>
Cc: Liu Yun <yun.y.liu@intel.com>
Cc: Andrew Fish <afish@apple.com>
Cc: Laszlo Ersek <lersek@redhat.com>
Signed-off-by: Michael D Kinney <michael.d.kinney@intel.com>
Reviewed-by: Liming Gao <gaoliming@byosoft.com.cn>
Reviewed-by: Laszlo Ersek <lersek@redhat.com>
REF: https://bugzilla.tianocore.org/show_bug.cgi?id=4533
There are use cases in which not all FVs need to be migrated from
TempRam to permanent memory before TempRam is torn down. This new GUID
is introduced to avoid unnecessary FV migration and improve boot
performance. A platform can publish a MigrationInfo HOB with this GUID
to customize FV migration info, and PeiCore will only migrate the FVs
indicated by this HOB.
This is a backwards compatible change: PeiCore checks for MigrationInfo
HOBs before migration. If MigrationInfo HOBs exist, only the FVs
recorded by those HOBs are migrated; if not, all FVs are migrated to
permanent memory.
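A hedged sketch of publishing the HOB from a platform PEIM (the
structure layout and GUID name are illustrative; see the actual
MigrationInfo definition for the real fields):

  typedef struct {
    UINT32  FvOrgBaseOnTempRam;   // illustrative fields
    UINT32  FvLength;
  } TO_MIGRATE_FV_INFO;

  typedef struct {
    BOOLEAN             MigrateAll;
    UINT32              ToMigrateFvCount;
    TO_MIGRATE_FV_INFO  ToMigrateFvs[1];
  } MIGRATION_INFO;

  MIGRATION_INFO  Info;

  Info.MigrateAll                         = FALSE;
  Info.ToMigrateFvCount                   = 1;
  Info.ToMigrateFvs[0].FvOrgBaseOnTempRam = (UINT32)(UINTN)FvBase;
  Info.ToMigrateFvs[0].FvLength           = (UINT32)FvLength;
  BuildGuidDataHob (&gEdkiiMigrationInfoGuid, &Info, sizeof (Info));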
Cc: Michael D Kinney <michael.d.kinney@intel.com>
Cc: Liming Gao <gaoliming@byosoft.com.cn>
Cc: Ray Ni <ray.ni@intel.com>
Cc: Guomin Jiang <guomin.jiang@intel.com>
Reviewed-by: Liming Gao <gaoliming@byosoft.com.cn>
Reviewed-by: Ray Ni <ray.ni@intel.com>
Signed-off-by: Cheng Sun <chengx.sun@intel.com>
CoreLocateDevicePath is used in CoreInstallMultipleProtocolInterfaces to
check if a Device Path Protocol instance with the same device path is
already installed.
CoreLocateDevicePath is a generic API and introduces some unnecessary
overhead for such usage.
The optimization is:
1. Implement IsDevicePathInstalled to loop over all the Device Path
Protocols installed and check if any of them matches the given device
path.
2. Replace CoreLocateDevicePath with IsDevicePathInstalled in
CoreInstallMultipleProtocolInterfaces.
This optimization can save several seconds in PCI enumeration on a
system with many PCI devices.
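A simplified sketch of the dedicated check (the real implementation
iterates DxeCore's protocol database directly rather than going through
boot services):

  BOOLEAN
  IsDevicePathInstalled (
    IN EFI_DEVICE_PATH_PROTOCOL  *DevicePath
    )
  {
    EFI_STATUS                Status;
    UINTN                     Index;
    UINTN                     HandleCount;
    EFI_HANDLE                *HandleBuffer;
    EFI_DEVICE_PATH_PROTOCOL  *Existing;
    UINTN                     Size;
    BOOLEAN                   Found;

    Size   = GetDevicePathSize (DevicePath);
    Found  = FALSE;
    Status = gBS->LocateHandleBuffer (
                    ByProtocol,
                    &gEfiDevicePathProtocolGuid,
                    NULL,
                    &HandleCount,
                    &HandleBuffer
                    );
    if (EFI_ERROR (Status)) {
      return FALSE;
    }

    for (Index = 0; (Index < HandleCount) && !Found; Index++) {
      Status = gBS->HandleProtocol (
                      HandleBuffer[Index],
                      &gEfiDevicePathProtocolGuid,
                      (VOID **)&Existing
                      );
      if (!EFI_ERROR (Status) && (GetDevicePathSize (Existing) == Size)) {
        Found = (CompareMem (Existing, DevicePath, Size) == 0);
      }
    }

    FreePool (HandleBuffer);
    return Found;
  }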
Cc: Jian J Wang <jian.j.wang@intel.com>
Cc: Liming Gao <gaoliming@byosoft.com.cn>
Cc: Dandan Bi <dandan.bi@intel.com>
Cc: Ray Ni <ray.ni@intel.com>
Signed-off-by: Zhi Jin <zhi.jin@intel.com>
Reviewed-by: Ray Ni <ray.ni@intel.com>
Reviewed-by: Liming Gao <gaoliming@byosoft.com.cn>
Update DumpImageRecord() to be DumpImageRecords(), and improve
the debug output. The function now outputs at DEBUG_INFO, and it
runs in the DXE and SMM MAT logic when the MAT is installed at
EndOfDxe on DEBUG builds.
Cc: Jian J Wang <jian.j.wang@intel.com>
Cc: Liming Gao <gaoliming@byosoft.com.cn>
Cc: Dandan Bi <dandan.bi@intel.com>
Cc: Jiaxin Wu <jiaxin.wu@intel.com>
Cc: Ray Ni <ray.ni@intel.com>
Signed-off-by: Taylor Beebe <taylor.d.beebe@gmail.com>
Reviewed-by: Liming Gao <gaoliming@byosoft.com.cn>
Add logic to create and delete image properties records. Where
applicable, redirect existing code to use the new library.
Cc: Jian J Wang <jian.j.wang@intel.com>
Cc: Liming Gao <gaoliming@byosoft.com.cn>
Cc: Dandan Bi <dandan.bi@intel.com>
Cc: Jiaxin Wu <jiaxin.wu@intel.com>
Cc: Ray Ni <ray.ni@intel.com>
Signed-off-by: Taylor Beebe <taylor.d.beebe@gmail.com>
Reviewed-by: Liming Gao <gaoliming@byosoft.com.cn>
Now that the bugs are fixed in the MAT logic, we can remove the
duplicate logic from PiSmmCore/MemoryAttributesTable.c and use
ImagePropertiesRecordLib instead.
Cc: Jian J Wang <jian.j.wang@intel.com>
Cc: Liming Gao <gaoliming@byosoft.com.cn>
Cc: Dandan Bi <dandan.bi@intel.com>
Cc: Jiaxin Wu <jiaxin.wu@intel.com>
Cc: Ray Ni <ray.ni@intel.com>
Signed-off-by: Taylor Beebe <taylor.d.beebe@gmail.com>
Reviewed-by: Liming Gao <gaoliming@byosoft.com.cn>
|4K PAGE|DATA|CODE|DATA|CODE|DATA|4K PAGE|
Say the above memory region is currently one memory map descriptor.
The above image memory layout example contains two code sections
oriented in a way that maximizes the number of descriptors which
would be required to describe each section.
NOTE: It's unlikely that a data section would ever be between
two code sections, but it's still handled by the below formula
for correctness.
There are two code sections (let's say CodeSegmentMax == 2),
three data sections, and two unrelated memory regions flanking the
image. The number of required descriptors to describe this layout
will be 2 * 2 + 3 == 7. This patch updates the calculations to account
for the worst-case scenario.
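A sketch of the worst-case arithmetic (variable names are illustrative):

  //
  // Worst case per image: CodeSegmentCountMax code descriptors,
  // CodeSegmentCountMax + 1 data descriptors between/around them, and
  // 2 flanking descriptors = 2 * CodeSegmentCountMax + 3.
  //
  AdditionalRecordCount = (2 * CodeSegmentCountMax + 3) * ImageRecordCount;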
Cc: Jian J Wang <jian.j.wang@intel.com>
Cc: Liming Gao <gaoliming@byosoft.com.cn>
Cc: Dandan Bi <dandan.bi@intel.com>
Signed-off-by: Taylor Beebe <taylor.d.beebe@gmail.com>
Reviewed-by: Liming Gao <gaoliming@byosoft.com.cn>
Move some DXE MAT logic to ImagePropertiesRecordLib to consolidate
code and enable unit testability.
Cc: Jian J Wang <jian.j.wang@intel.com>
Cc: Liming Gao <gaoliming@byosoft.com.cn>
Cc: Dandan Bi <dandan.bi@intel.com>
Signed-off-by: Taylor Beebe <taylor.d.beebe@gmail.com>
Reviewed-by: Liming Gao <gaoliming@byosoft.com.cn>
This patch updates MemoryAttributesTable.c to reduce reliance on global
variables and allow some logic to move to a library.
Cc: Jian J Wang <jian.j.wang@intel.com>
Cc: Liming Gao <gaoliming@byosoft.com.cn>
Cc: Dandan Bi <dandan.bi@intel.com>
Signed-off-by: Taylor Beebe <taylor.d.beebe@gmail.com>
Reviewed-by: Liming Gao <gaoliming@byosoft.com.cn>
Add a PCD to control whether modules with PE/COFF start addresses
above 0x100000 attempt to load at their specified address.
If a module has an address in this range and there is untested memory,
DxeCore will attempt to promote all memory to tested, which bypasses
any memory testing that would occur later in boot.
There are several existing AARCH64 option ROMs that have base addresses
of 0x180000000.
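A hedged sketch of the gate (the PCD name follows this patch series;
the surrounding loader logic is assumed):

  //
  // Only honor a requested load address above 1 MB when the PCD allows
  // it; otherwise fall back to allocating the image anywhere.
  //
  if ((ImageContext.ImageAddress >= SIZE_1MB) &&
      !PcdGetBool (PcdImageLargeAddressLoad)) {
    ImageContext.ImageAddress = 0;
  }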
Signed-off-by: Jeff Brasen <jbrasen@nvidia.com>
Reviewed-by: Ashish Singhal <ashishsingha@nvidia.com>
Message-Id: <bd36c9c24158590db2226ede05cb8c2f50c93a37.1684194452.git.jbrasen@nvidia.com>
Reviewed-by: Liming Gao <gaoliming@byosoft.com.cn>
confroms should be conforms.
Reviewed-by: Liming Gao <gaoliming@byosoft.com.cn>
Signed-off-by: Nate DeSimone <nathaniel.l.desimone@intel.com>
Cc: Jian J Wang <jian.j.wang@intel.com>
Cc: Michael D Kinney <michael.d.kinney@intel.com>
Cc: Dandan Bi <dandan.bi@intel.com>
When finding a free page range for allocation, if the found range
starts below the tracked memory bin address range, the lowest
memory bin address is updated, but the update does not include the
guard page if one is present. When CoreConvertPagesWithGuard() is
called on the range being allocated, the memory range is adjusted to
include guard pages, which can push it out of the memory bin address
range and leave the memory type statistics unaltered.
This patch updates the lowest memory bin address to account for
the guard page if NeedGuard is TRUE so that the memory type statistics
are updated correctly.
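A sketch of the adjustment, following the commit description (names are
illustrative):

  if (NeedGuard) {
    //
    // The head guard page sits immediately below the returned range,
    // so include it when widening the tracked bin range.
    //
    Start -= EFI_PAGE_SIZE;
  }

  if (Start < mMemoryTypeStatistics[Type].BaseAddress) {
    mMemoryTypeStatistics[Type].BaseAddress = Start;
  }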
Cc: Jian J Wang <jian.j.wang@intel.com>
Cc: Liming Gao <gaoliming@byosoft.com.cn>
Cc: Dandan Bi <dandan.bi@intel.com>
Signed-off-by: Taylor Beebe <t@taylorbeebe.com>
Reviewed-by: Liming Gao <gaoliming@byosoft.com.cn>
FvbDev->LbaCache must be freed on error path before freeing FvbDev.
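A sketch of the corrected error path (assuming the usual FwVolBlock
cleanup shape):

  if (EFI_ERROR (Status)) {
    if (FvbDev->LbaCache != NULL) {
      CoreFreePool (FvbDev->LbaCache);
    }
    CoreFreePool (FvbDev);
    return Status;
  }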
Cc: Jian J Wang <jian.j.wang@intel.com>
Cc: Dandan Bi <dandan.bi@intel.com>
Cc: Liming Gao <gaoliming@byosoft.com.cn>
Signed-off-by: Mike Maslenkin <mike.maslenkin@gmail.com>
Reviewed-by: Michael D Kinney <michael.d.kinney@intel.com>
Reviewed-by: Liming Gao <gaoliming@byosoft.com.cn>
FwVolHeader must be freed on error path.
Cc: Jian J Wang <jian.j.wang@intel.com>
Cc: Dandan Bi <dandan.bi@intel.com>
Cc: Liming Gao <gaoliming@byosoft.com.cn>
Signed-off-by: Mike Maslenkin <mike.maslenkin@gmail.com>
Reviewed-by: Michael D Kinney <michael.d.kinney@intel.com>
Reviewed-by: Liming Gao <gaoliming@byosoft.com.cn>
REF: https://bugzilla.tianocore.org/show_bug.cgi?id=4543
REF: https://uefi.org/specs/UEFI/2.10/07_Services_Boot_Services.html#efi-boot-services-locatehandlebuffer
CoreLocateHandleBuffer() can, in certain cases, return an
error and not free an allocated buffer. This scenario
occurs if the first call to InternalCoreLocateHandle()
returns success and the second call returns an error.
On a successful return, LocateHandleBuffer() passes
ownership of the buffer to the caller. However, the UEFI
specification is not explicit about what the expected
ownership of this buffer is in the case of an error.
However, it is heavily implied by the code example given
in section 7.3.15 of v2.10 of the UEFI specification that
if LocateHandleBuffer() returns a non-successful status
code then the ownership of the buffer does NOT transfer
to the caller. This code example explicitly refrains from
calling FreePool() if LocateHandleBuffer() returns an
error.
From a practical standpoint, it is logical to assume that
a non-successful status code indicates that no buffer of
handles was ever allocated. Indeed, in most error cases,
LocateHandleBuffer() does not go far enough to get to the
point where a buffer is allocated. Therefore, all existing
users of this API must already be coded to support the case
of a non-successful status code resulting in an invalid
handle buffer being returned. Therefore, this change will
not cause any backwards compatibility issues with existing
code.
In conclusion, this boils down to a fix for a memory leak
that also brings the behavior of our LocateHandleBuffer()
implementation into alignment with the original intentions
of the UEFI specification authors.
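A sketch of the fix (the second InternalCoreLocateHandle() call fills
the allocated buffer; free it when that call fails; the exact signature
is assumed):

  Status = InternalCoreLocateHandle (
             SearchType,
             Protocol,
             SearchKey,
             &BufferSize,
             *Buffer
             );
  if (EFI_ERROR (Status)) {
    CoreFreePool (*Buffer);
    *Buffer        = NULL;
    *NumberHandles = 0;
  }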
Reviewed-by: Michael D Kinney <michael.d.kinney@intel.com>
Cc: Liming Gao <gaoliming@byosoft.com.cn>
Cc: Jian J Wang <jian.j.wang@intel.com>
Cc: Dandan Bi <dandan.bi@intel.com>
Signed-off-by: Nate DeSimone <nathaniel.l.desimone@intel.com>
Currently, HeapGuard, when in the GuardAlignedToTail mode, assumes that
the pool head has been allocated in the first page of memory that was
allocated. This is not the case for ARM64 platforms when allocating
runtime pools, as RUNTIME_PAGE_ALLOCATION_GRANULARITY is 64k, unlike
X64, which has RUNTIME_PAGE_ALLOCATION_GRANULARITY as 4k.
When a runtime pool is allocated on ARM64, the minimum number of pages
allocated is 16, to match the runtime granularity. When a small pool is
allocated and GuardAlignedToTail is true, HeapGuard instructs the pool
head to be placed as (MemoryAllocated + EFI_PAGES_TO_SIZE(Number of Pages)
- SizeRequiredForPool).
This results in the following layout:
|Head Guard|Large Free Number of Pages|PoolHead|TailGuard|
When this pool goes to be freed, HeapGuard instructs the pool code to
free from (PoolHead & ~EFI_PAGE_MASK). However, this assumes that the
PoolHead is in the first page allocated, which as shown above is not true
in this case. For the 4k granularity case (i.e. where the correct number of
pages are allocated for this pool), this logic does work.
In this failing case, HeapGuard then instructs the pool code to free 16
(or more, depending on the allocation) pages from the page on which the
pool head was allocated, which as seen above means we overrun the pool
and attempt to free memory far past it. We end up running into the tail
guard and getting an access flag fault.
This causes ArmVirtQemu to fail to boot with an access flag fault when
GuardAlignedToTail is set to true (and pool guard enabled for runtime
memory). It should also cause all ARM64 platforms to fail in this
configuration, for exactly the same reason, as this is core code making
the assumption.
This patch removes HeapGuard's assumption that the pool head is allocated
on the first page and instead undoes the same logic that HeapGuard did
when allocating the pool head in the first place.
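In sketch form, mirroring the allocation-time placement described above
(names are illustrative):

  //
  // Allocation time (GuardAlignedToTail):
  //   PoolHead = AllocationBase + EFI_PAGES_TO_SIZE (NoPages) - Size;
  //
  // Free time: undo the same math instead of assuming the pool head
  // sits in the first allocated page:
  //
  AllocationBase = PoolHead - (EFI_PAGES_TO_SIZE (NoPages) - Size);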
With this patch in place, ArmVirtQemu boots with GuardAlignedToTail
set to TRUE (and also when it is FALSE).
BZ: https://bugzilla.tianocore.org/show_bug.cgi?id=4521
Github PR: https://github.com/tianocore/edk2/pull/4731
Cc: Leif Lindholm <quic_llindhol@quicinc.com>
Cc: Ard Biesheuvel <ardb+tianocore@kernel.org>
Cc: Jian J Wang <jian.j.wang@intel.com>
Cc: Liming Gao <gaoliming@byosoft.com.cn>
Cc: Dandan Bi <dandan.bi@intel.com>
Signed-off-by: Oliver Smith-Denny <osde@linux.microsoft.com>
Reviewed-by: Ard Biesheuvel <ardb@kernel.org>
Acked-by: Leif Lindholm <quic_llindhol@quicinc.com>
Reviewed-by: Liming Gao <gaoliming@byosoft.com.cn>
In UnsetGuardPage(), before SmmReadyToLock, remove the NX and RO
memory attribute protection from the guarded page, since
EfiConventionalMemory in SMRAM is RW and executable before
SmmReadyToLock. If UnsetGuardPage() happens after SmmReadyToLock,
apply EFI_MEMORY_XP to the guarded page instead, since
EfiConventionalMemory in SMRAM is marked NX by the PiSmmCpuDxe driver
at SmmReadyToLock.
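A sketch of the two cases (the helper names are assumptions; the real
code goes through the SMM memory attribute interface):

  if (!mSmmReadyToLock) {   // hypothetical lock-state flag
    //
    // Before SmmReadyToLock: free SMRAM is RW + executable, so drop
    // the guard page's NX/RO protection entirely.
    //
    ClearMemoryAttributes (GuardPage, EFI_PAGE_SIZE, EFI_MEMORY_XP | EFI_MEMORY_RO);
  } else {
    //
    // After SmmReadyToLock: free SMRAM is NX, so the unguarded page
    // must keep EFI_MEMORY_XP.
    //
    SetMemoryAttributes (GuardPage, EFI_PAGE_SIZE, EFI_MEMORY_XP);
  }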
Signed-off-by: Dun Tan <dun.tan@intel.com>
Cc: Liming Gao <gaoliming@byosoft.com.cn>
Cc: Ray Ni <ray.ni@intel.com>
Reviewed-by: Jian J Wang <jian.j.wang@intel.com>
Cc: Ard Biesheuvel <ardb+tianocore@kernel.org>
Now that we have a generic method to manage memory permissions using a
PPI, we can switch to the generic version of the DXE handoff code in
DxeIpl, and drop the ARM specific version.
Signed-off-by: Ard Biesheuvel <ardb@kernel.org>
Reviewed-by: Oliver Smith-Denny <osde@linux.microsoft.com>
Reviewed-by: Michael Kubacki <michael.kubacki@microsoft.com>
Reviewed-by: Liming Gao <gaoliming@byosoft.com.cn>
If the associated PCD is set to TRUE, use the memory attribute PPI to
remap the stack non-executable. This provides a generic method for doing
so, which will be used by ARM and AArch64 as well once they move to the
generic DxeIpl handoff implementation.
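A sketch of the remap (EDKII_MEMORY_ATTRIBUTE_PPI shape as used
elsewhere in edk2; the SetPermissions argument order is an assumption):

  EDKII_MEMORY_ATTRIBUTE_PPI  *MemoryAttribute;

  if (PcdGetBool (PcdSetNxForStack)) {
    Status = PeiServicesLocatePpi (
               &gEdkiiMemoryAttributePpiGuid,
               0,
               NULL,
               (VOID **)&MemoryAttribute
               );
    if (!EFI_ERROR (Status)) {
      Status = MemoryAttribute->SetPermissions (
                 MemoryAttribute,
                 StackBase,
                 StackSize,
                 EFI_MEMORY_XP,   // set XP on the stack ...
                 0                // ... clear nothing (order assumed)
                 );
    }
  }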
Signed-off-by: Ard Biesheuvel <ardb@kernel.org>
Reviewed-by: Oliver Smith-Denny <osde@linux.microsoft.com>
Reviewed-by: Michael Kubacki <michael.kubacki@microsoft.com>
Reviewed-by: Liming Gao <gaoliming@byosoft.com.cn>