Provide an optional method for PEI to declare a specific address
range to use for the Memory Type Information bins. The current
algorithm uses heuristics that tend to place the Memory Type
Information bins in the same location, but memory configuration
changes across boots or algorithm changes across firmware
updates could potentially change the Memory Type Information bin
location. If the bin locations move across an S4 save/resume
cycle, then the S4 resume may fail. Enabling this feature
increases the number of scenarios in which an S4 resume
operation succeeds.
If the HOB List contains a Resource Descriptor HOB that
describes tested system memory and has an Owner GUID of
gEfiMemoryTypeInformationGuid, then use the address range
described by the Resource Descriptor HOB as the preferred
location of the Memory Type Information bins. If this HOB is
not detected, then the current behavior is preserved.
The HOB with an Owner GUID of gEfiMemoryTypeInformationGuid
is ignored for the following conditions:
* The memory range described by the Resource Descriptor HOB with
an Owner GUID of gEfiMemoryTypeInformationGuid is smaller than
the Memory Type Information bins.
* The HOB list contains more than one Resource Descriptor HOB
with an Owner GUID of gEfiMemoryTypeInformationGuid.
* The Resource Descriptor HOB with an Owner GUID of
gEfiMemoryTypeInformationGuid is the same Resource Descriptor
HOB that describes the PHIT memory range.
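For illustration, a platform PEIM could publish such a HOB roughly as
shown below. This is a minimal sketch built on the standard HobLib API;
the attribute set and the BinBase/BinSize parameters are placeholders
chosen for the example.

  #include <PiPei.h>
  #include <Library/HobLib.h>
  #include <Guid/MemoryTypeInformation.h>

  VOID
  DeclareMemoryTypeInformationRange (
    IN EFI_PHYSICAL_ADDRESS  BinBase,   // platform-chosen tested system memory
    IN UINT64                BinSize    // must cover all Memory Type Information bins
    )
  {
    BuildResourceDescriptorWithOwnerHob (
      EFI_RESOURCE_SYSTEM_MEMORY,
      EFI_RESOURCE_ATTRIBUTE_PRESENT |
      EFI_RESOURCE_ATTRIBUTE_INITIALIZED |
      EFI_RESOURCE_ATTRIBUTE_TESTED,
      BinBase,
      BinSize,
      &gEfiMemoryTypeInformationGuid
      );
  }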
Cc: Liming Gao <gaoliming@byosoft.com.cn>
Cc: Aaron Li <aaron.li@intel.com>
Cc: Liu Yun <yun.y.liu@intel.com>
Cc: Andrew Fish <afish@apple.com>
Cc: Laszlo Ersek <lersek@redhat.com>
Signed-off-by: Michael D Kinney <michael.d.kinney@intel.com>
Reviewed-by: Liming Gao <gaoliming@byosoft.com.cn>
Reviewed-by: Laszlo Ersek <lersek@redhat.com>
Update the DxeMain initialization order to initialize GCD
services before any runtime allocations are performed. This
is required to prevent runtime data fragmentation when the
UEFI System Table and UEFI Runtime Services Table are allocated
before both the memory and GCD services are initialized.
Cc: Liming Gao <gaoliming@byosoft.com.cn>
Cc: Aaron Li <aaron.li@intel.com>
Cc: Liu Yun <yun.y.liu@intel.com>
Cc: Andrew Fish <afish@apple.com>
Cc: Laszlo Ersek <lersek@redhat.com>
Signed-off-by: Michael D Kinney <michael.d.kinney@intel.com>
Reviewed-by: Liming Gao <gaoliming@byosoft.com.cn>
Reviewed-by: Laszlo Ersek <lersek@redhat.com>
REF: https://bugzilla.tianocore.org/show_bug.cgi?id=4533
There are use cases in which not all FVs need to be migrated from
TempRam to permanent memory before TempRam is torn down. This new GUID
is introduced to avoid unnecessary FV migration and improve boot
performance. A platform can publish a MigrationInfo HOB with this GUID
to customize the FV migration info, and PeiCore will only migrate the
FVs indicated by that HOB.
This is a backwards-compatible change: PeiCore checks for MigrationInfo
HOBs before migration. If MigrationInfo HOBs exist, only the FVs
recorded by those HOBs are migrated. If no MigrationInfo HOB exists,
all FVs are migrated to permanent memory.
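As a rough sketch of how a platform might publish such a HOB: the
structure layout and the GUID symbol name below are illustrative
assumptions, not the exact definitions added by this patch, and only
the generic BuildGuidDataHob() HobLib API is taken as given.

  #include <PiPei.h>
  #include <Library/HobLib.h>

  //
  // Hypothetical layout of the migration info published by the platform.
  //
  typedef struct {
    UINT32    ToMigrateFvCount;
    UINT32    FvTempRamBase[1];   // TempRAM base of each FV that must be migrated
  } PLATFORM_FV_MIGRATION_INFO;

  VOID
  PublishFvMigrationInfo (
    IN UINT32  FvTempRamBase
    )
  {
    PLATFORM_FV_MIGRATION_INFO  Info;

    Info.ToMigrateFvCount = 1;
    Info.FvTempRamBase[0] = FvTempRamBase;

    //
    // gEdkiiMigrationInfoGuid stands in for the new GUID described above.
    //
    BuildGuidDataHob (&gEdkiiMigrationInfoGuid, &Info, sizeof (Info));
  }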
Cc: Michael D Kinney <michael.d.kinney@intel.com>
Cc: Liming Gao <gaoliming@byosoft.com.cn>
Cc: Ray Ni <ray.ni@intel.com>
Cc: Guomin Jiang <guomin.jiang@intel.com>
Reviewed-by: Liming Gao <gaoliming@byosoft.com.cn>
Reviewed-by: Ray Ni <ray.ni@intel.com>
Signed-off-by: Cheng Sun <chengx.sun@intel.com>
CoreLocateDevicePath is used in CoreInstallMultipleProtocolInterfaces to
check if a Device Path Protocol instance with the same device path is
already installed.
CoreLocateDevicePath is a generic API, and would introduce some
unnecessary overhead for such usage.
The optimization is:
1. Implement IsDevicePathInstalled to loop over all the installed Device
Path Protocol instances and check if any of them matches the given device
path.
2. Replace CoreLocateDevicePath with IsDevicePathInstalled in
CoreInstallMultipleProtocolInterfaces.
This optimization could save several seconds in PCI enumeration on a
system with many PCI devices.
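A simplified sketch of the idea, using the public boot services rather
than the DxeCore-internal equivalents the actual patch uses:

  #include <Uefi.h>
  #include <Protocol/DevicePath.h>
  #include <Library/BaseMemoryLib.h>
  #include <Library/DevicePathLib.h>
  #include <Library/MemoryAllocationLib.h>
  #include <Library/UefiBootServicesTableLib.h>

  BOOLEAN
  IsDevicePathInstalledSketch (
    IN EFI_DEVICE_PATH_PROTOCOL  *DevicePath
    )
  {
    EFI_STATUS  Status;
    UINTN       Count;
    UINTN       Index;
    UINTN       Size;
    EFI_HANDLE  *Handles;
    VOID        *Existing;
    BOOLEAN     Found;

    Size   = GetDevicePathSize (DevicePath);
    Found  = FALSE;
    Status = gBS->LocateHandleBuffer (
                    ByProtocol,
                    &gEfiDevicePathProtocolGuid,
                    NULL,
                    &Count,
                    &Handles
                    );
    if (EFI_ERROR (Status)) {
      return FALSE;
    }

    //
    // Compare every installed device path instance byte-for-byte.
    //
    for (Index = 0; Index < Count && !Found; Index++) {
      Status = gBS->HandleProtocol (Handles[Index], &gEfiDevicePathProtocolGuid, &Existing);
      if (!EFI_ERROR (Status) &&
          (GetDevicePathSize (Existing) == Size) &&
          (CompareMem (Existing, DevicePath, Size) == 0))
      {
        Found = TRUE;
      }
    }

    FreePool (Handles);
    return Found;
  }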
Cc: Jian J Wang <jian.j.wang@intel.com>
Cc: Liming Gao <gaoliming@byosoft.com.cn>
Cc: Dandan Bi <dandan.bi@intel.com>
Cc: Ray Ni <ray.ni@intel.com>
Signed-off-by: Zhi Jin <zhi.jin@intel.com>
Reviewed-by: Ray Ni <ray.ni@intel.com>
Reviewed-by: Liming Gao <gaoliming@byosoft.com.cn>
Rename DumpImageRecord() to DumpImageRecords() and improve the debug
output. The function now outputs at the DEBUG_INFO level, and it is run
by the DXE and SMM MAT logic when the MAT is installed at EndOfDxe on
DEBUG builds.
Cc: Jian J Wang <jian.j.wang@intel.com>
Cc: Liming Gao <gaoliming@byosoft.com.cn>
Cc: Dandan Bi <dandan.bi@intel.com>
Cc: Jiaxin Wu <jiaxin.wu@intel.com>
Cc: Ray Ni <ray.ni@intel.com>
Signed-off-by: Taylor Beebe <taylor.d.beebe@gmail.com>
Reviewed-by: Liming Gao <gaoliming@byosoft.com.cn>
Add logic to create and delete image properties records. Where
applicable, redirect existing code to use the new library.
Cc: Jian J Wang <jian.j.wang@intel.com>
Cc: Liming Gao <gaoliming@byosoft.com.cn>
Cc: Dandan Bi <dandan.bi@intel.com>
Cc: Jiaxin Wu <jiaxin.wu@intel.com>
Cc: Ray Ni <ray.ni@intel.com>
Signed-off-by: Taylor Beebe <taylor.d.beebe@gmail.com>
Reviewed-by: Liming Gao <gaoliming@byosoft.com.cn>
Now that the bugs are fixed in the MAT logic, we can remove the
duplicate logic from PiSmmCore/MemoryAttributesTable.c and use
ImagePropertiesRecordLib instead.
Cc: Jian J Wang <jian.j.wang@intel.com>
Cc: Liming Gao <gaoliming@byosoft.com.cn>
Cc: Dandan Bi <dandan.bi@intel.com>
Cc: Jiaxin Wu <jiaxin.wu@intel.com>
Cc: Ray Ni <ray.ni@intel.com>
Signed-off-by: Taylor Beebe <taylor.d.beebe@gmail.com>
Reviewed-by: Liming Gao <gaoliming@byosoft.com.cn>
|4K PAGE|DATA|CODE|DATA|CODE|DATA|4K PAGE|
Say the above memory region is currently one memory map descriptor.
The above image memory layout example contains two code sections
oriented in a way that maximizes the number of descriptors which
would be required to describe each section.
NOTE: It's unlikely that a data section would ever be between
two code sections, but it's still handled by the below formula
for correctness.
There are two code sections (let's say CodeSegmentMax == 2),
three data sections, and two unrelated memory regions flanking the
image. The number of required descriptors to describe this layout
will be 2 * 2 + 3 == 7. This patch updates the calculations to account
for the worst-case scenario.
Cc: Jian J Wang <jian.j.wang@intel.com>
Cc: Liming Gao <gaoliming@byosoft.com.cn>
Cc: Dandan Bi <dandan.bi@intel.com>
Signed-off-by: Taylor Beebe <taylor.d.beebe@gmail.com>
Reviewed-by: Liming Gao <gaoliming@byosoft.com.cn>
Move some DXE MAT logic to ImagePropertiesRecordLib to consolidate
code and enable unit testability.
Cc: Jian J Wang <jian.j.wang@intel.com>
Cc: Liming Gao <gaoliming@byosoft.com.cn>
Cc: Dandan Bi <dandan.bi@intel.com>
Signed-off-by: Taylor Beebe <taylor.d.beebe@gmail.com>
Reviewed-by: Liming Gao <gaoliming@byosoft.com.cn>
This patch updates MemoryAttributesTable.c to reduce reliance on global
variables and allow some logic to move to a library.
Cc: Jian J Wang <jian.j.wang@intel.com>
Cc: Liming Gao <gaoliming@byosoft.com.cn>
Cc: Dandan Bi <dandan.bi@intel.com>
Signed-off-by: Taylor Beebe <taylor.d.beebe@gmail.com>
Reviewed-by: Liming Gao <gaoliming@byosoft.com.cn>
Add a PCD to control whether modules with start addresses in PE/COFF >
0x100000 attempt to load at the specified address.
If a module has an address in this range and there is untested memory,
DxeCore will attempt to promote all memory to tested, which bypasses any
memory testing that would occur later in boot.
There are several existing AARCH64 option roms that have base addresses
of 0x180000000.
Signed-off-by: Jeff Brasen <jbrasen@nvidia.com>
Reviewed-by: Ashish Singhal <ashishsingha@nvidia.com>
Message-Id: <bd36c9c24158590db2226ede05cb8c2f50c93a37.1684194452.git.jbrasen@nvidia.com>
Reviewed-by: Liming Gao <gaoliming@byosoft.com.cn>
confroms should be conforms.
Reviewed-by: Liming Gao <gaoliming@byosoft.com.cn>
Signed-off-by: Nate DeSimone <nathaniel.l.desimone@intel.com>
Cc: Jian J Wang <jian.j.wang@intel.com>
Cc: Michael D Kinney <michael.d.kinney@intel.com>
Cc: Dandan Bi <dandan.bi@intel.com>
When finding a free page range for allocation, if the found range
starts below the tracked memory bin address range, the lowest
memory bin address is updated, but that update does not include the
guard page if one is present. When CoreConvertPagesWithGuard() is
called on the range being allocated, the memory range is adjusted to
include guard pages, which can push it out of the memory bin address
range and leave the memory type statistics unaltered.
This patch updates the lowest memory bin address range to account for
the guard page if NeedGuard is TRUE so the memory type statistics
are updated correctly.
Cc: Jian J Wang <jian.j.wang@intel.com>
Cc: Liming Gao <gaoliming@byosoft.com.cn>
Cc: Dandan Bi <dandan.bi@intel.com>
Signed-off-by: Taylor Beebe <t@taylorbeebe.com>
Reviewed-by: Liming Gao <gaoliming@byosoft.com.cn>
FvbDev->LbaCache must be freed on error path before freeing FvbDev.
Cc: Jian J Wang <jian.j.wang@intel.com>
Cc: Dandan Bi <dandan.bi@intel.com>
Cc: Liming Gao <gaoliming@byosoft.com.cn>
Signed-off-by: Mike Maslenkin <mike.maslenkin@gmail.com>
Reviewed-by: Michael D Kinney <michael.d.kinney@intel.com>
Reviewed-by: Liming Gao <gaoliming@byosoft.com.cn>
FwVolHeader must be freed on error path.
Cc: Jian J Wang <jian.j.wang@intel.com>
Cc: Dandan Bi <dandan.bi@intel.com>
Cc: Liming Gao <gaoliming@byosoft.com.cn>
Signed-off-by: Mike Maslenkin <mike.maslenkin@gmail.com>
Reviewed-by: Michael D Kinney <michael.d.kinney@intel.com>
Reviewed-by: Liming Gao <gaoliming@byosoft.com.cn>
REF: https://bugzilla.tianocore.org/show_bug.cgi?id=4543
REF: https://uefi.org/specs/UEFI/2.10/07_Services_Boot_Services.html#efi-boot-services-locatehandlebuffer
CoreLocateHandleBuffer() can, in certain cases, return an
error and not free an allocated buffer. This scenario
occurs if the first call to InternalCoreLocateHandle()
returns success and the second call returns an error.
On a successful return, LocateHandleBuffer() passes
ownership of the buffer to the caller. However, the UEFI
specification is not explicit about what the expected
ownership of this buffer is in the case of an error.
However, it is heavily implied by the code example given
in section 7.3.15 of v2.10 of the UEFI specification that
if LocateHandleBuffer() returns a non-successful status
code then the ownership of the buffer does NOT transfer
to the caller. This code example explicitly refrains from
calling FreePool() if LocateHandleBuffer() returns an
error.
From a practical standpoint, it is logical to assume that
a non-successful status code indicates that no buffer of
handles was ever allocated. Indeed, in most error cases,
LocateHandleBuffer() does not go far enough to get to the
point where a buffer is allocated. Therefore, all existing
users of this API must already be coded to support the case
of a non-successful status code resulting in an invalid
handle buffer being returned. Therefore, this change will
not cause any backwards compatibility issues with existing
code.
In conclusion, this boils down to a fix for a memory leak
that also brings the behavior of our LocateHandleBuffer()
implementation into alignment with the original intentions
of the UEFI specification authors.
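For reference, the caller pattern implied by the spec example looks
roughly like the sketch below; the protocol GUID is just a placeholder.

  EFI_STATUS  Status;
  UINTN       HandleCount;
  EFI_HANDLE  *HandleBuffer;

  Status = gBS->LocateHandleBuffer (
                  ByProtocol,
                  &gEfiSimpleFileSystemProtocolGuid,   // placeholder protocol
                  NULL,
                  &HandleCount,
                  &HandleBuffer
                  );
  if (EFI_ERROR (Status)) {
    //
    // No FreePool() here: on error, ownership of the buffer never
    // transferred to the caller.
    //
    return Status;
  }

  //
  // ... use HandleBuffer[0 .. HandleCount - 1] ...
  //

  gBS->FreePool (HandleBuffer);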
Reviewed-by: Michael D Kinney <michael.d.kinney@intel.com>
Cc: Liming Gao <gaoliming@byosoft.com.cn>
Cc: Jian J Wang <jian.j.wang@intel.com>
Cc: Dandan Bi <dandan.bi@intel.com>
Signed-off-by: Nate DeSimone <nathaniel.l.desimone@intel.com>
Currently, HeapGuard, when in the GuardAlignedToTail mode, assumes that
the pool head has been allocated in the first page of memory that was
allocated. This is not the case for ARM64 platforms when allocating
runtime pools, as RUNTIME_PAGE_ALLOCATION_GRANULARITY is 64k, unlike
X64, which has RUNTIME_PAGE_ALLOCATION_GRANULARITY as 4k.
When a runtime pool is allocated on ARM64, the minimum number of pages
allocated is 16, to match the runtime granularity. When a small pool is
allocated and GuardAlignedToTail is true, HeapGuard instructs the pool
head to be placed as (MemoryAllocated + EFI_PAGES_TO_SIZE(Number of Pages)
- SizeRequiredForPool).
This gives this scenario:
|Head Guard|Large Free Number of Pages|PoolHead|TailGuard|
When this pool goes to be freed, HeapGuard instructs the pool code to
free from (PoolHead & ~EFI_PAGE_MASK). However, this assumes that the
PoolHead is in the first page allocated, which as shown above is not true
in this case. For the 4k granularity case (i.e. where the correct number of
pages are allocated for this pool), this logic does work.
In this failing case, HeapGuard then instructs the pool code to free 16
(or more depending) pages from the page the pool head was allocated on,
which as seen above means we overrun the pool and attempt to free memory
far past the pool. We end up running into the tail guard and getting an
access flag fault.
This causes ArmVirtQemu to fail to boot with an access flag fault when
GuardAlignedToTail is set to true (and pool guard enabled for runtime
memory). It should also cause all ARM64 platforms to fail in this
configuration, for exactly the same reason, as this is core code making
the assumption.
This patch removes HeapGuard's assumption that the pool head is allocated
on the first page and instead undoes the same logic that HeapGuard did
when allocating the pool head in the first place.
With this patch in place, ArmVirtQemu boots with GuardAlignedToTail
set to TRUE (and also when it is FALSE).
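A fragment-style sketch of the arithmetic involved; all names are
illustrative and do not match the exact HeapGuard code.

  VOID
  HeapGuardFreeSketch (
    IN UINTN  MemoryAllocated,      // base returned by the page allocator
    IN UINTN  NoPages,              // 16 on AARCH64 for runtime pools (64 KB granule)
    IN UINTN  SizeRequiredForPool   // pool header plus requested size
    )
  {
    UINTN  PoolHead;
    UINTN  MemoryToFree;

    //
    // Allocation side with GuardAlignedToTail == TRUE: the pool head is
    // pushed toward the end of the last allocated page.
    //
    PoolHead = MemoryAllocated + EFI_PAGES_TO_SIZE (NoPages) - SizeRequiredForPool;

    //
    // Old free path: assumes the pool head sits in the first allocated
    // page, which is false when the 64 KB runtime granule exceeds the
    // pool size.
    //
    MemoryToFree = PoolHead & ~((UINTN) EFI_PAGE_MASK);

    //
    // Fixed free path: undo the allocation-time placement to recover the
    // real allocation base, independent of the allocation granularity.
    //
    MemoryToFree = PoolHead + SizeRequiredForPool - EFI_PAGES_TO_SIZE (NoPages);
  }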
BZ: https://bugzilla.tianocore.org/show_bug.cgi?id=4521
Github PR: https://github.com/tianocore/edk2/pull/4731
Cc: Leif Lindholm <quic_llindhol@quicinc.com>
Cc: Ard Biesheuvel <ardb+tianocore@kernel.org>
Cc: Jian J Wang <jian.j.wang@intel.com>
Cc: Liming Gao <gaoliming@byosoft.com.cn>
Cc: Dandan Bi <dandan.bi@intel.com>
Signed-off-by: Oliver Smith-Denny <osde@linux.microsoft.com>
Reviewed-by: Ard Biesheuvel <ardb@kernel.org>
Acked-by: Leif Lindholm <quic_llindhol@quicinc.com>
Reviewed-by: Liming Gao <gaoliming@byosoft.com.cn>
In UnsetGuardPage(), before SmmReadyToLock, remove the NX and RO
memory attribute protection from the guarded page, since
EfiConventionalMemory in SMRAM is RW and executable before
SmmReadyToLock. If UnsetGuardPage() happens after SmmReadyToLock,
apply EFI_MEMORY_XP to the guarded page instead, to keep
EfiConventionalMemory in SMRAM NX, since EfiConventionalMemory
in SMRAM is marked NX by the PiSmmCpuDxe driver at SmmReadyToLock.
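A minimal sketch of the branch described above, assuming an SMM memory
attribute protocol pointer named mSmmMemoryAttribute and a page-sized
guard at BaseAddress; the names may differ from the actual PiSmmCore
code.

  if (!mSmmReadyToLock) {
    //
    // Before SmmReadyToLock: make the former guard page match the rest
    // of EfiConventionalMemory in SMRAM (RW and executable).
    //
    mSmmMemoryAttribute->ClearMemoryAttributes (
                           mSmmMemoryAttribute,
                           BaseAddress,
                           EFI_PAGES_TO_SIZE (1),
                           EFI_MEMORY_XP | EFI_MEMORY_RO
                           );
  } else {
    //
    // After SmmReadyToLock: EfiConventionalMemory in SMRAM is NX, so
    // keep the former guard page non-executable as well.
    //
    mSmmMemoryAttribute->SetMemoryAttributes (
                           mSmmMemoryAttribute,
                           BaseAddress,
                           EFI_PAGES_TO_SIZE (1),
                           EFI_MEMORY_XP
                           );
  }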
Signed-off-by: Dun Tan <dun.tan@intel.com>
Cc: Liming Gao <gaoliming@byosoft.com.cn>
Cc: Ray Ni <ray.ni@intel.com>
Reviewed-by: Jian J Wang <jian.j.wang@intel.com>
Cc: Ard Biesheuvel <ardb+tianocore@kernel.org>
Now that we have a generic method to manage memory permissions using a
PPI, we can switch to the generic version of the DXE handoff code in
DxeIpl, and drop the ARM specific version.
Signed-off-by: Ard Biesheuvel <ardb@kernel.org>
Reviewed-by: Oliver Smith-Denny <osde@linux.microsoft.com>
Reviewed-by: Michael Kubacki <michael.kubacki@microsoft.com>
Reviewed-by: Liming Gao <gaoliming@byosoft.com.cn>
If the associated PCD is set to TRUE, use the memory attribute PPI to
remap the stack non-executable. This provides a generic method for doing
so, which will be used by ARM and AArch64 as well once they move to the
generic DxeIpl handoff implementation.
Signed-off-by: Ard Biesheuvel <ardb@kernel.org>
Reviewed-by: Oliver Smith-Denny <osde@linux.microsoft.com>
Reviewed-by: Michael Kubacki <michael.kubacki@microsoft.com>
Reviewed-by: Liming Gao <gaoliming@byosoft.com.cn>
The Risc-V and LoongArch specific versions of the DXE core handoff code
in DxeIpl are essentially copies of the EBC version (modulo the
copyright in the header and some debug prints in the code).
In preparation for introducing a generic PPI based method to implement
the non-executable stack, let's merge these versions, so we only need to
add this logic once.
Signed-off-by: Ard Biesheuvel <ardb@kernel.org>
Reviewed-by: Oliver Smith-Denny <osde@linux.microsoft.com>
Reviewed-by: Michael Kubacki <michael.kubacki@microsoft.com>
Reviewed-by: Liming Gao <gaoliming@byosoft.com.cn>
SmmDriverDispatchHandler is the routine that dispatches SMM drivers
from FV. It's a time-consuming routine.
Add perf-logging for this routine.
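The instrumentation amounts to the standard PerformanceLib function
macros wrapped around the handler body, roughly as sketched below (the
prototype is the generic SMI handler signature, not the literal diff).

  #include <Library/PerformanceLib.h>

  EFI_STATUS
  EFIAPI
  SmmDriverDispatchHandler (
    IN     EFI_HANDLE  DispatchHandle,
    IN     CONST VOID  *Context         OPTIONAL,
    IN OUT VOID        *CommBuffer      OPTIONAL,
    IN OUT UINTN       *CommBufferSize  OPTIONAL
    )
  {
    PERF_FUNCTION_BEGIN ();

    //
    // ... existing FV scanning and SMM driver dispatch ...
    //

    PERF_FUNCTION_END ();
    return EFI_SUCCESS;
  }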
Signed-off-by: Ray Ni <ray.ni@intel.com>
Cc: Jian J Wang <jian.j.wang@intel.com>
Cc: Liming Gao <gaoliming@byosoft.com.cn>
Cc: Jiaxin Wu <jiaxin.wu@intel.com>
Reviewed-by: Jiaxin Wu <jiaxin.wu@intel.com>
Reviewed-by: Eric Dong <eric.dong@intel.com>
Reviewed-by: Jian J Wang <jian.j.wang@intel.com>
Following procedures are perf-logged:
* SmmReadyToBootHandler
* SmmReadyToLockHandler
* SmmEndOfDxeHandler
* SmmEntryPoint
(It's the main routine run on the BSP when an SMI happens.)
* SmiManage
Signed-off-by: Ray Ni <ray.ni@intel.com>
Cc: Jian J Wang <jian.j.wang@intel.com>
Cc: Liming Gao <gaoliming@byosoft.com.cn>
Cc: Jiaxin Wu <jiaxin.wu@intel.com>
Reviewed-by: Jiaxin Wu <jiaxin.wu@intel.com>
Reviewed-by: Eric Dong <eric.dong@intel.com>
Reviewed-by: Jian J Wang <jian.j.wang@intel.com>
Whether 5-level paging is enabled can be checked via CR4.LA57, and in
64-bit long mode the preferred page table level (PcdUse5LevelPageTable)
must align with the level already in use.
This patch adds that check:
If the CPU already runs in 64-bit long mode in PEI, the page table
level in DXE must align with the current level.
If the CPU runs in 32-bit protected mode in PEI, the page table level
in DXE is decided by the PCD and the CPU's capability.
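A minimal sketch of the decision described above; CR4.LA57 is bit 12,
and the helper name and capability input are illustrative.

  #include <Library/BaseLib.h>
  #include <Library/PcdLib.h>

  BOOLEAN
  DecideUse5LevelPaging (
    IN BOOLEAN  Cpu5LevelCapable   // e.g. from CPUID.(EAX=7,ECX=0):ECX[16]
    )
  {
    if (sizeof (UINTN) == sizeof (UINT64)) {
      //
      // Already in 64-bit long mode in PEI: DXE must keep the paging
      // level that is currently active, read from CR4.LA57 (bit 12).
      //
      return (BOOLEAN)((AsmReadCr4 () & BIT12) != 0);
    }

    //
    // 32-bit protected mode PEI: the DXE paging level is chosen by the
    // PCD, limited by what the CPU actually supports.
    //
    return (BOOLEAN)(PcdGetBool (PcdUse5LevelPageTable) && Cpu5LevelCapable);
  }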
Cc: Dandan Bi <dandan.bi@intel.com>
Cc: Liming Gao <gaoliming@byosoft.com.cn>
Cc: Eric Dong <eric.dong@intel.com>
Cc: Ray Ni <ray.ni@intel.com>
Cc: Zeng Star <star.zeng@intel.com>
Cc: Gerd Hoffmann <kraxel@redhat.com>
Cc: Rahul Kumar <rahul1.kumar@intel.com>
Signed-off-by: Jiaxin Wu <jiaxin.wu@intel.com>
Reviewed-by: Ray Ni <ray.ni@intel.com>
REF: https://bugzilla.tianocore.org/show_bug.cgi?id=4438
The main dispatch loop in PeiDispatcher() goes through each FV and
calls DiscoverPeimsAndOrderWithApriori() to search the Apriori file,
reorder all PEIMs, and then dispatch them.
DiscoverPeimsAndOrderWithApriori() calculates the Apriori file count
once per FV and sets Private->AprioriCount, but Private->AprioriCount
is not reset to 0 before the dispatch loop walks through the next FV.
As a result, when the dispatch loop goes through an already scanned FV,
a PEIM whose sort position is less than Private->AprioriCount can be
dispatched even though its depex is not satisfied and it is not listed
in that FV's APRIORI file.
Cc: Leon Chen <leon.chen@insyde.com>
Cc: Tim Lewis <tim.lewis@insyde.com>
Reported-by: Esther Lee <esther.lee@insyde.com>
Signed-off-by: Wendy Liao <wendy.liao@insyde.com>
Reviewed-by: Liming Gao <gaoliming@byosoft.com.cn>
__FUNCTION__ is a pre-standard extension that gcc and Visual C++ among
others support, while __func__ was standardized in C99.
Since it's more standard, replace __FUNCTION__ with __func__ throughout
MdeModulePkg.
Signed-off-by: Rebecca Cran <rebecca@bsdio.com>
Reviewed-by: Michael D Kinney <michael.d.kinney@intel.com>
Reviewed-by: Ard Biesheuvel <ardb@kernel.org>
REF: https://bugzilla.tianocore.org/show_bug.cgi?id=4405
The memory attributes table has been extended with a flag that indicates
whether or not the OS is permitted to map the EFI runtime code regions
with strict enforcement for IBT/BTI landing pad instructions.
Given that the PE/COFF spec now defines a DllCharacteristicsEx flag that
indicates whether or not a loaded image is compatible with this, we can
wire this up to the flag in the memory attributes table, and set it if
all loaded runtime images are compatible with it.
Signed-off-by: Ard Biesheuvel <ardb@kernel.org>
Reviewed-by: Leif Lindholm <quic_llindhol@quicinc.com>
Reviewed-by: Oliver Smith-Denny <osde@linux.microsoft.com>
Reviewed-by: Michael Kubacki <michael.kubacki@microsoft.com>
Reviewed-by: Liming Gao <gaoliming@byosoft.com.cn>
UEFI v2.10 introduces a new flag to the memory attributes table to
inform the OS whether or not runtime services code regions were emitted
by the compiler with guard instructions for forward edge control flow
integrity enforcement.
So update our definition accordingly.
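For reference, the updated table layout looks like the sketch below,
derived from the spec link that follows; check MdePkg for the
authoritative definition.

  typedef struct {
    UINT32    Version;
    UINT32    NumberOfEntries;
    UINT32    DescriptorSize;
    UINT32    Flags;     // was 'Reserved' prior to UEFI v2.10
    // EFI_MEMORY_DESCRIPTOR Entry[NumberOfEntries] follows
  } EFI_MEMORY_ATTRIBUTES_TABLE;

  //
  // Bit 0: runtime code regions were built with forward-edge CFI
  // (IBT/BTI) compatible landing pads.
  //
  #define EFI_MEMORY_ATTRIBUTES_FLAGS_RT_FORWARD_CONTROL_FLOW_GUARD  0x1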
Link: https://uefi.org/specs/UEFI/2.10/04_EFI_System_Table.html#efi-memory-attributes-table
Signed-off-by: Ard Biesheuvel <ardb@kernel.org>
Reviewed-by: Michael D Kinney <michael.d.kinney@intel.com>
Acked-by: Michael Kubacki <michael.kubacki@microsoft.com>
Reviewed-by: Leif Lindholm <quic_llindhol@quicinc.com>
Reviewed-by: Oliver Smith-Denny <osd@smith-denny.com>
This fixes messages like:
"Image type AARCH64 can't be loaded on <Unknown> UEFI system"
Cc: Daniel Schaefer <git@danielschaefer.me>
Cc: Liming Gao <gaoliming@byosoft.com.cn>
Cc: Jian J Wang <jian.j.wang@intel.com>
Reviewed-by: Sunil V L <sunilvl@ventanamicro.com>
Signed-off-by: Andrei Warkentin <andrei.warkentin@intel.com>
BZ: https://bugzilla.tianocore.org/show_bug.cgi?id=4315
A MemoryType of EfiUnacceptedMemoryType should not be allocated by
AllocatePool. Instead, AllocatePool should return EFI_INVALID_PARAMETER.
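The check amounts to a guard like the following fragment near the top
of the pool allocation path (a sketch, not the literal diff):

  if (PoolType == EfiUnacceptedMemoryType) {
    //
    // Unaccepted memory can never back a pool allocation.
    //
    return EFI_INVALID_PARAMETER;
  }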
Cc: Liming Gao <gaoliming@byosoft.com.cn>
Cc: Dandan Bi <dandan.bi@intel.com>
Cc: Erdem Aktas <erdemaktas@google.com>
Cc: James Bottomley <jejb@linux.ibm.com>
Cc: Jiewen Yao <jiewen.yao@intel.com>
Cc: Gerd Hoffmann <kraxel@redhat.com>
Cc: Tom Lendacky <thomas.lendacky@amd.com>
Cc: Michael Roth <michael.roth@amd.com>
Reported-by: Liming Gao <gaoliming@byosoft.com.cn>
Signed-off-by: Min Xu <min.m.xu@intel.com>
Acked-by: Gerd Hoffmann <kraxel@redhat.com>
Reviewed-by: Jiewen Yao <Jiewen.yao@intel.com>
Reviewed-by: Liming Gao <gaoliming@byosoft.com.cn>
The location of this notification has been specified in UEFI v2.9.
Cc: Gerd Hoffmann <kraxel@redhat.com>
Cc: James Bottomley <jejb@linux.ibm.com>
Cc: Jiewen Yao <jiewen.yao@intel.com>
Cc: Tom Lendacky <thomas.lendacky@amd.com>
Cc: Ard Biesheuvel <ardb@kernel.org>
Cc: "Min M. Xu" <min.m.xu@intel.com>
Cc: Andrew Fish <afish@apple.com>
Cc: "Michael D. Kinney" <michael.d.kinney@intel.com>
Cc: Ray Ni <ray.ni@intel.com>
Acked-by: Jiewen Yao <Jiewen.yao@intel.com>
Reviewed-by: Michael D Kinney <michael.d.kinney@intel.com>
Signed-off-by: Dionna Glaze <dionnaglaze@google.com>
Message-Id: <20221108164616.3251967-4-dionnaglaze@google.com>
The page allocator code in CoreFindFreePagesI() uses a mask derived from
its UINTN Alignment argument to align the descriptor end address of a
MEMORY_MAP entry to the requested alignment, in order to check whether
the descriptor covers enough sufficiently aligned area to satisfy the
request.
However, on 32-bit architectures, 'Alignment' is a 32-bit type, whereas
DescEnd is a 64-bit type, and so the resulting operation performed on
the end address comes down to masking with 0xfffff000 instead of the
intended 0xffffffff_fffff000. Given the -1 at the end of the expression,
the resulting address is 0xffffffff_ffffffff for any descriptor that
ends on a 4G aligned boundary, and this is certainly not what was
intended.
So cast Alignment to UINT64 to ensure that the mask has the right size.
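In code terms, the pattern and the fix look roughly like this sketch
(not the literal DxeCore diff):

  UINT64
  AlignDescriptorEnd (
    IN UINT64  DescEnd,
    IN UINTN   Alignment
    )
  {
    //
    // Broken on 32-bit builds: Alignment is 32 bits wide, so the mask
    // truncates the upper half of the 64-bit DescEnd:
    //
    //   DescEnd = ((DescEnd + 1) & (~(Alignment - 1))) - 1;
    //
    // Fixed: widen the mask to 64 bits before applying it.
    //
    DescEnd = ((DescEnd + 1) & (~((UINT64) Alignment - 1))) - 1;

    return DescEnd;
  }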
Signed-off-by: Ard Biesheuvel <ardb@kernel.org>
Reported-by: Nathan Chancellor <nathan@kernel.org>
Reviewed-by: Michael D Kinney <michael.d.kinney@intel.com>
REF:https://bugzilla.tianocore.org/show_bug.cgi?id=3387
Added use of SafeIntLib to validate that user-controlled values do not
cause overflows or underflows when calculating buffer sizes.
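The general pattern is along the lines of the sketch below; the
function and variable names are illustrative and do not match the
actual patch.

  #include <Uefi.h>
  #include <Library/SafeIntLib.h>

  EFI_STATUS
  ComputeBufferSize (
    IN  UINTN  HeaderSize,     // untrusted, user-controlled
    IN  UINTN  UserDataSize,   // untrusted, user-controlled
    OUT UINTN  *BufferSize
    )
  {
    RETURN_STATUS  Status;

    Status = SafeUintnAdd (HeaderSize, UserDataSize, BufferSize);
    if (RETURN_ERROR (Status)) {
      //
      // The addition would overflow: reject rather than size a buffer
      // with a wrapped-around value.
      //
      return EFI_INVALID_PARAMETER;
    }

    return EFI_SUCCESS;
  }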
Signed-off-by: Miki Demeter <miki.demeter@intel.com>
Reviewed-by: Michael D Kinney <michael.d.kinney@intel.com>
Cc: Jian J Wang <jian.j.wang@intel.com>
Cc: Liming Gao <gaoliming@byosoft.com.cn>
Reviewed-by: Liming Gao <gaoliming@byosoft.com.cn>
RFC: https://bugzilla.tianocore.org/show_bug.cgi?id=3937
Unaccepted memory is a new memory type; CoreInitializeGcdServices()
and CoreGetMemoryMap() are updated to handle the unaccepted memory
type.
Ref: microsoft/mu_basecore@97e9c31
Cc: Jian J Wang <jian.j.wang@intel.com>
Cc: Liming Gao <gaoliming@byosoft.com.cn>
Cc: Ray Ni <ray.ni@intel.com>
Cc: Erdem Aktas <erdemaktas@google.com>
Cc: Gerd Hoffmann <kraxel@redhat.com>
Cc: James Bottomley <jejb@linux.ibm.com>
Cc: Jiewen Yao <jiewen.yao@intel.com>
Cc: Tom Lendacky <thomas.lendacky@amd.com>
Acked-by: Gerd Hoffmann <kraxel@redhat.com>
Reviewed-by: Liming Gao <gaoliming@byosoft.com.cn>
Signed-off-by: Min Xu <min.m.xu@intel.com>
Update debug macros in the package that have an imbalanced number of
print specifiers and arguments. These changes try to preserve what was
likely intended by the author. In cases where information was missing
due to the bug, the specifier may be removed, since it was not
accurately printing the expected value before.
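A representative, made-up example of the class of issue being fixed:

  //
  // Before: two print specifiers but only one argument, so %r prints
  // an undefined value.
  //
  DEBUG ((DEBUG_ERROR, "Install failed for %g: %r\n", &gSomeProtocolGuid));

  //
  // After: the argument list matches the specifiers.
  //
  DEBUG ((DEBUG_ERROR, "Install failed for %g: %r\n", &gSomeProtocolGuid, Status));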
Cc: Dandan Bi <dandan.bi@intel.com>
Cc: Guomin Jiang <guomin.jiang@intel.com>
Cc: Hao A Wu <hao.a.wu@intel.com>
Cc: Jian J Wang <jian.j.wang@intel.com>
Cc: Liming Gao <gaoliming@byosoft.com.cn>
Cc: Ray Ni <ray.ni@intel.com>
Signed-off-by: Michael Kubacki <michael.kubacki@microsoft.com>
Reviewed-by: Hao A Wu <hao.a.wu@intel.com>
Reviewed-by: Liming Gao <gaoliming@byosoft.com.cn>
Remove clearing CR0.WP when marking the memory used for the page table
as read-only in the page table itself created by DxeIpl. The page table
address is written to CR3 only after these protection steps, so until
then the memory used for the page table is still RW.
Signed-off-by: Dun Tan <dun.tan@intel.com>
Cc: Dandan Bi <dandan.bi@intel.com>
Cc: Liming Gao <gaoliming@byosoft.com.cn>
Reviewed-by: Ray Ni <ray.ni@intel.com>
Hide the exception implementation details in CpuExceptionHandlerLib;
the caller only needs to provide a buffer.
Cc: Eric Dong <eric.dong@intel.com>
Reviewed-by: Ray Ni <ray.ni@intel.com>
Cc: Rahul Kumar <rahul1.kumar@intel.com>
Cc: Leif Lindholm <quic_llindhol@quicinc.com>
Cc: Dandan Bi <dandan.bi@intel.com>
Cc: Liming Gao <gaoliming@byosoft.com.cn>
Cc: Jian J Wang <jian.j.wang@intel.com>
Cc: Ard Biesheuvel <ardb+tianocore@kernel.org>
Reviewed-by: Sami Mujawar <sami.mujawar@arm.com>
Signed-off-by: Zhiguang Liu <zhiguang.liu@intel.com>
REF: https://bugzilla.tianocore.org/show_bug.cgi?id=3795
Cc: Dandan Bi <dandan.bi@intel.com>
Cc: Liming Gao <gaoliming@byosoft.com.cn>
Updated CoreInternalAllocatePages() to call PromoteMemoryResource() and
re-attempt the allocation if it is unable to convert the specified
memory range.
Signed-off-by: Stacy Howell <stacy.howell@intel.com>
Reviewed-by: Liming Gao <gaoliming@byosoft.com.cn>
Commit e7abb94d1 removed InitializeCpuExceptionHandlersEx
and updated DxeMain to call InitializeCpuExceptionHandlers
for exception setup. But the old behavior via *Ex() also set up
the stack guard. To match the old behavior, the patch calls
InitializeSeparateExceptionStacks.
Signed-off-by: Ray Ni <ray.ni@intel.com>
Reviewed-by: Jian J Wang <jian.j.wang@intel.com>
Cc: Liming Gao <gaoliming@byosoft.com.cn>
Today InitializeCpuExceptionHandlersEx is called from three modules:
1. DxeCore (links to DxeCpuExceptionHandlerLib)
DxeCore expects it initializes the IDT entries as well as
assigning separate stacks for #DF and #PF.
2. CpuMpPei (links to PeiCpuExceptionHandlerLib)
and CpuDxe (links to DxeCpuExceptionHandlerLib)
It's called for each thread only to assign separate stacks for
#DF and #PF. The IDT entry initialization is skipped because the
caller sets InitData->X64.InitDefaultHandlers to FALSE.
Additionally, SecPeiCpuExceptionHandlerLib and SmmCpuExceptionHandlerLib
also implement this API, and there its behavior is simply to initialize
the IDT entries only.
Because the API mixes IDT entry initialization and separate stack
assignment for certain exception handlers, knowing whether a given call
only initializes IDT entries or also assigns stacks requires checking:
1. the value of InitData->X64.InitDefaultHandlers
2. the library instance
This patch cleans up the code by separating the stack assignment into a
new API: InitializeSeparateExceptionStacks().
Separate stacks are assigned only when the caller invokes the new API.
With this change, the SecPei and Smm instances can return unsupported,
which gives the caller a very clear status.
The old API InitializeCpuExceptionHandlersEx() is removed in this patch.
Because no platform module consumes the old API, the impact is none.
Signed-off-by: Ray Ni <ray.ni@intel.com>
Cc: Eric Dong <eric.dong@intel.com>
Cc: Jian J Wang <jian.j.wang@intel.com>
REF: https://bugzilla.tianocore.org/show_bug.cgi?id=3488
The current free pool routine in PiSmmCore inspects the memory guard
status of the target buffer without considering the pool header. This
can lead the `IsMemoryGuarded` function to return incorrect results.
For example, allocating a 0-sized pool can legally produce a buffer
that points directly at a guard page. However, trying to free this pool
would cause the routine changed in this commit to read XP pages, which
leads to a page fault.
This change inspects the guard status using the pool header as well,
which avoids errors when the pool content happens to sit on a page
boundary.
Cc: Jiewen Yao <jiewen.yao@intel.com>
Cc: Eric Dong <eric.dong@intel.com>
Cc: Ray Ni <ray.ni@intel.com>
Cc: Jian J Wang <jian.j.wang@intel.com>
Cc: Liming Gao <gaoliming@byosoft.com.cn>
Signed-off-by: Kun Qin <kuqin12@gmail.com>
Reviewed-by: Jian J Wang <jian.j.wang@intel.com>
Reviewed-by: Liming Gao <gaoliming@byosoft.com.cn>
REF: https://bugzilla.tianocore.org/show_bug.cgi?id=2008
Correct the logic about whether 5-level paging is supported.
Signed-off-by: Jason Lou <yun.lou@intel.com>
Reviewed-by: Ray Ni <ray.ni@intel.com>
Cc: Dandan Bi <dandan.bi@intel.com>
Cc: Liming Gao <gaoliming@byosoft.com.cn>
When setting mVirtualMap to NULL, also set mVirtualMapMaxIndex to 0.
Without that, RuntimeDriverConvertPointer() will go search the ZeroPage
for EFI_MEMORY_DESCRIPTOR entries.
In case mVirtualMapMaxIndex happens to be small enough, that will go
unnoticed: the search will not find anything and EFI_NOT_FOUND will be
returned.
In case mVirtualMapMaxIndex is big enough, the search will reach the end
of the ZeroPage and trigger a page fault.
Signed-off-by: Gerd Hoffmann <kraxel@redhat.com>
Reviewed-by: Liming Gao <gaoliming@byosoft.com.cn>
REF: https://bugzilla.tianocore.org/show_bug.cgi?id=3737
Apply uncrustify changes to .c/.h files in the MdeModulePkg package
Cc: Andrew Fish <afish@apple.com>
Cc: Leif Lindholm <leif@nuviainc.com>
Cc: Michael D Kinney <michael.d.kinney@intel.com>
Signed-off-by: Michael Kubacki <michael.kubacki@microsoft.com>
Reviewed-by: Liming Gao <gaoliming@byosoft.com.cn>
REF: https://bugzilla.tianocore.org/show_bug.cgi?id=3767
Update use of DEBUG_CODE(Expression) if Expression is a complex code
block with if/while/for/case statements that use {}.
Cc: Andrew Fish <afish@apple.com>
Cc: Leif Lindholm <leif@nuviainc.com>
Cc: Michael Kubacki <michael.kubacki@microsoft.com>
Signed-off-by: Michael D Kinney <michael.d.kinney@intel.com>
Reviewed-by: Liming Gao <gaoliming@byosoft.com.cn>