Provide an optional method for PEI to declare a specific address
range to use for the Memory Type Information bins. The current
algorithm uses heuristics that tend to place the Memory Type
Information bins in the same location, but memory configuration
changes across boots, or algorithm changes across a firmware
update, could potentially change the Memory Type Information bin
location. If the bin locations move across an S4 save/resume
cycle, then the S4 resume may fail. Enabling this feature
increases the number of scenarios in which an S4 resume operation
can succeed.
If the HOB List contains a Resource Descriptor HOB that
describes tested system memory and has an Owner GUID of
gEfiMemoryTypeInformationGuid, then use the address range
described by the Resource Descriptor HOB as the preferred
location of the Memory Type Information bins. If this HOB is
not detected, then the current behavior is preserved.
The HOB with an Owner GUID of gEfiMemoryTypeInformationGuid
is ignored under any of the following conditions:
* The HOB with an Owner GUID of gEfiMemoryTypeInformationGuid
describes a range smaller than the Memory Type Information bins.
* The HOB list contains more than one Resource Descriptor HOB
with an Owner GUID of gEfiMemoryTypeInformationGuid.
* The Resource Descriptor HOB with an Owner GUID of
gEfiMemoryTypeInformationGuid is the same Resource Descriptor
HOB that describes the PHIT memory range.
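For reference, a hedged PEIM-side sketch of declaring such a range;
the base address and size are illustrative placeholders, and
BuildResourceDescriptorWithOwnerHob() is the HobLib API assumed here:
#include <PiPei.h>
#include <Guid/MemoryTypeInformation.h>
#include <Library/HobLib.h>
//
// Declare a tested system memory range as the preferred location of
// the Memory Type Information bins. Values below are illustrative.
//
BuildResourceDescriptorWithOwnerHob (
  EFI_RESOURCE_SYSTEM_MEMORY,
  EFI_RESOURCE_ATTRIBUTE_PRESENT |
  EFI_RESOURCE_ATTRIBUTE_INITIALIZED |
  EFI_RESOURCE_ATTRIBUTE_TESTED,
  0x7F000000,        // illustrative base address
  SIZE_4MB,          // must be at least as large as the bins
  &gEfiMemoryTypeInformationGuid
  );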
Cc: Liming Gao <gaoliming@byosoft.com.cn>
Cc: Aaron Li <aaron.li@intel.com>
Cc: Liu Yun <yun.y.liu@intel.com>
Cc: Andrew Fish <afish@apple.com>
Cc: Laszlo Ersek <lersek@redhat.com>
Signed-off-by: Michael D Kinney <michael.d.kinney@intel.com>
Reviewed-by: Liming Gao <gaoliming@byosoft.com.cn>
Reviewed-by: Laszlo Ersek <lersek@redhat.com>
When finding a free page range for allocation, if the found range
starts below the tracked memory bin address range, the lowest
memory bin address is updated, but that update does not include
the guard page, if present. When CoreConvertPagesWithGuard() is
later called on the range being allocated, the memory range is
adjusted to include guard pages, which can push it out of the
memory bin address range and leave the memory type statistics
unaltered.
This patch updates the lowest memory bin address to account for
the guard page when NeedGuard is TRUE, so the memory type
statistics are updated correctly.
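A hedged sketch of the idea (variable names are illustrative, not
the exact patch):
//
// When recording the lowest address tracked for the memory type
// bins, back the start down by one page when a head guard page will
// be placed, so the guarded allocation stays inside the bin range.
//
if (NeedGuard) {
  Start -= EFI_PAGE_SIZE;                  // include the head guard page
}
if (Start < mLowestTrackedBinAddress) {    // illustrative variable name
  mLowestTrackedBinAddress = Start;
}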
Cc: Jian J Wang <jian.j.wang@intel.com>
Cc: Liming Gao <gaoliming@byosoft.com.cn>
Cc: Dandan Bi <dandan.bi@intel.com>
Signed-off-by: Taylor Beebe <t@taylorbeebe.com>
Reviewed-by: Liming Gao <gaoliming@byosoft.com.cn>
Currently HeapGuard, when in GuardAlignedToTail mode, assumes that
the pool head has been allocated in the first page of the memory
that was allocated. This is not the case on ARM64 platforms when
allocating runtime pools, because
RUNTIME_PAGE_ALLOCATION_GRANULARITY is 64k there, unlike X64, where
RUNTIME_PAGE_ALLOCATION_GRANULARITY is 4k.
When a runtime pool is allocated on ARM64, a minimum of 16 pages is
allocated to match the runtime granularity. When a small pool is
allocated and GuardAlignedToTail is TRUE, HeapGuard places the pool
head at (MemoryAllocated + EFI_PAGES_TO_SIZE (NumberOfPages)
- SizeRequiredForPool).
This gives the following layout:
|Head Guard|Large Free Number of Pages|PoolHead|TailGuard|
When this pool is freed, HeapGuard instructs the pool code to free
from (PoolHead & ~EFI_PAGE_MASK). However, this assumes that the
PoolHead is in the first page allocated, which, as shown above, is
not true in this case. For the 4k granularity case (i.e. where
exactly the required number of pages is allocated for the pool),
this logic does work.
In the failing case, HeapGuard then instructs the pool code to free
16 (or more, depending on the pool size) pages starting from the
page the pool head was allocated on, which, as seen above, means we
overrun the pool and attempt to free memory far past it. We run
into the tail guard and take an access flag fault.
This causes ArmVirtQemu to fail to boot with an access flag fault when
GuardAlignedToTail is set to true (and pool guard enabled for runtime
memory). It should also cause all ARM64 platforms to fail in this
configuration, for exactly the same reason, as this is core code making
the assumption.
This patch removes HeapGuard's assumption that the pool head is allocated
on the first page and instead undoes the same logic that HeapGuard did
when allocating the pool head in the first place.
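A hedged sketch of the corrected free-path math (names approximate
the HeapGuard helpers):
//
// At allocation time the pool head was placed at:
//   PoolHead = Memory + EFI_PAGES_TO_SIZE (NoPages) - Size
// so recover the start of the original allocation by undoing that
// adjustment, instead of assuming it is in the first page:
//
Memory = PoolHead + Size - EFI_PAGES_TO_SIZE (NoPages);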
With this patch in place, ArmVirtQemu boots with GuardAlignedToTail
set to TRUE (and also when it is FALSE).
BZ: https://bugzilla.tianocore.org/show_bug.cgi?id=4521
Github PR: https://github.com/tianocore/edk2/pull/4731
Cc: Leif Lindholm <quic_llindhol@quicinc.com>
Cc: Ard Biesheuvel <ardb+tianocore@kernel.org>
Cc: Jian J Wang <jian.j.wang@intel.com>
Cc: Liming Gao <gaoliming@byosoft.com.cn>
Cc: Dandan Bi <dandan.bi@intel.com>
Signed-off-by: Oliver Smith-Denny <osde@linux.microsoft.com>
Reviewed-by: Ard Biesheuvel <ardb@kernel.org>
Acked-by: Leif Lindholm <quic_llindhol@quicinc.com>
Reviewed-by: Liming Gao <gaoliming@byosoft.com.cn>
BZ: https://bugzilla.tianocore.org/show_bug.cgi?id=4315
AllocatePool() should not allocate memory of type
EfiUnacceptedMemoryType; instead it should return
EFI_INVALID_PARAMETER.
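A hedged sketch of the corresponding check in
CoreInternalAllocatePool(), mirroring the existing rejections
(MEMORY_TYPE_OEM_RESERVED_MIN is the DXE core's existing boundary
for vendor-defined types):
//
// Reject EfiUnacceptedMemoryType alongside the other disallowed
// pool types; vendor-defined types remain valid.
//
if (((PoolType >= EfiMaxMemoryType) && (PoolType < MEMORY_TYPE_OEM_RESERVED_MIN)) ||
    (PoolType == EfiConventionalMemory) ||
    (PoolType == EfiPersistentMemory) ||
    (PoolType == EfiUnacceptedMemoryType)) {
  return EFI_INVALID_PARAMETER;
}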
Cc: Liming Gao <gaoliming@byosoft.com.cn>
Cc: Dandan Bi <dandan.bi@intel.com>
Cc: Erdem Aktas <erdemaktas@google.com>
Cc: James Bottomley <jejb@linux.ibm.com>
Cc: Jiewen Yao <jiewen.yao@intel.com>
Cc: Gerd Hoffmann <kraxel@redhat.com>
Cc: Tom Lendacky <thomas.lendacky@amd.com>
Cc: Michael Roth <michael.roth@amd.com>
Reported-by: Liming Gao <gaoliming@byosoft.com.cn>
Signed-off-by: Min Xu <min.m.xu@intel.com>
Acked-by: Gerd Hoffmann <kraxel@redhat.com>
Reviewed-by: Jiewen Yao <Jiewen.yao@intel.com>
Reviewed-by: Liming Gao <gaoliming@byosoft.com.cn>
The page allocator code in CoreFindFreePagesI() uses a mask derived from
its UINTN Alignment argument to align the descriptor end address of a
MEMORY_MAP entry to the requested alignment, in order to check whether
the descriptor covers enough sufficiently aligned area to satisfy the
request.
However, on 32-bit architectures, 'Alignment' is a 32-bit type,
whereas DescEnd is a 64-bit type, so the operation performed on the
end address comes down to masking with 0xfffff000 instead of the
intended 0xffffffff_fffff000. Given the -1 at the end of the
expression, the resulting address is 0xffffffff_ffffffff for any
descriptor that ends on a 4G aligned boundary, which is certainly
not what was intended.
So cast Alignment to UINT64 to ensure that the mask has the right size.
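A sketch of the fix, reconstructing the expression from the
description above:
// Before: on 32-bit builds, ~(Alignment - 1) is computed at 32-bit
// width and zero-extended, wiping the upper half of DescEnd.
DescEnd = ((DescEnd + 1) & (~(Alignment - 1))) - 1;
// After: the mask is computed at 64-bit width.
DescEnd = ((DescEnd + 1) & (~((UINT64)Alignment - 1))) - 1;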
Signed-off-by: Ard Biesheuvel <ardb@kernel.org>
Reported-by: Nathan Chancellor <nathan@kernel.org>
Reviewed-by: Michael D Kinney <michael.d.kinney@intel.com>
RFC: https://bugzilla.tianocore.org/show_bug.cgi?id=3937
Unaccepted memory is a new memory type; CoreInitializeGcdServices()
and CoreGetMemoryMap() are updated to handle it.
Ref: microsoft/mu_basecore@97e9c31
Cc: Jian J Wang <jian.j.wang@intel.com>
Cc: Liming Gao <gaoliming@byosoft.com.cn>
Cc: Ray Ni <ray.ni@intel.com>
Cc: Erdem Aktas <erdemaktas@google.com>
Cc: Gerd Hoffmann <kraxel@redhat.com>
Cc: James Bottomley <jejb@linux.ibm.com>
Cc: Jiewen Yao <jiewen.yao@intel.com>
Cc: Tom Lendacky <thomas.lendacky@amd.com>
Acked-by: Gerd Hoffmann <kraxel@redhat.com>
Reviewed-by: Liming Gao <gaoliming@byosoft.com.cn>
Signed-off-by: Min Xu <min.m.xu@intel.com>
REF: https://bugzilla.tianocore.org/show_bug.cgi?id=3795
Cc: Dandan Bi <dandan.bi@intel.com>
Cc: Liming Gao <gaoliming@byosoft.com.cn>
Updated CoreInternalAllocatePages() to call PromoteMemoryResource()
and re-attempt the allocation if it is unable to convert the
specified memory range.
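A hedged sketch of the retry (function names are the DXE core
internals assumed by the description above):
Status = CoreConvertPages (Start, NumberOfPages, MemoryType);
if (EFI_ERROR (Status) && PromoteMemoryResource ()) {
  //
  // Promotion made more memory usable; try the conversion again.
  //
  Status = CoreConvertPages (Start, NumberOfPages, MemoryType);
}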
Signed-off-by: Stacy Howell <stacy.howell@intel.com>
Reviewed-by: Liming Gao <gaoliming@byosoft.com.cn>
REF: https://bugzilla.tianocore.org/show_bug.cgi?id=3737
Apply uncrustify changes to .c/.h files in the MdeModulePkg package
Cc: Andrew Fish <afish@apple.com>
Cc: Leif Lindholm <leif@nuviainc.com>
Cc: Michael D Kinney <michael.d.kinney@intel.com>
Signed-off-by: Michael Kubacki <michael.kubacki@microsoft.com>
Reviewed-by: Liming Gao <gaoliming@byosoft.com.cn>
REF: https://bugzilla.tianocore.org/show_bug.cgi?id=3739
Update all use of EFI_D_* defines in DEBUG() macros to DEBUG_* defines.
Cc: Andrew Fish <afish@apple.com>
Cc: Leif Lindholm <leif@nuviainc.com>
Cc: Michael Kubacki <michael.kubacki@microsoft.com>
Signed-off-by: Michael D Kinney <michael.d.kinney@intel.com>
Reviewed-by: Liming Gao <gaoliming@byosoft.com.cn>
OSs are now capable of treating the SP and CRYPTO memory attributes
as true capabilities, so these should be exposed. This requires a
separate ACCESS_MASK to hide all page-access permission
capabilities. The change in masking and hiding of SP and CRYPTO was
introduced in
3bd5c994c8
Signed-off-by: Malgorzata Kukiello <jacek.kukiello@intel.com>
Cc: Jian J Wang <jian.j.wang@intel.com>
Cc: Hao A Wu <hao.a.wu@intel.com>
Cc: Dandan Bi <dandan.bi@intel.com>
Cc: Liming Gao <gaoliming@byosoft.com.cn>
Cc: Oleksiy Yakovlev <oleksiyy@ami.com>
Cc: Ard Biesheuvel (ARM address) <ard.biesheuvel@arm.com>
Reviewed-by: Laszlo Ersek <lersek@redhat.com>
Reviewed-by: Liming Gao <gaoliming@byosoft.com.cn>
MapMemory is not initialized in FindGuardedMemoryMap() or in
CoreInternalAllocatePages(), which calls it, so initialize it to 0.
Cc: Dandan Bi <dandan.bi@intel.com>
Cc: Liming Gao <liming.gao@intel.com>
Signed-off-by: Shenglei Zhang <shenglei.zhang@intel.com>
Reviewed-by: Dandan Bi <dandan.bi@intel.com>
The FPDT driver (FirmwarePerformanceDxe) saves a memory address
across reboots, and then calls AllocatePages for that address. If,
on this boot, that memory comes from a runtime memory bucket, the
MAT table is not updated. This causes Windows to boot into Recovery.
This patch blocks the memory manager from changing a page from a
special bucket to a different memory type. Once the buckets are
allocated, we freeze the memory ranges for the OS, and fragmenting
the special buckets will cause errors when resuming from hibernate
(S4).
The references to S4 here are the use case that fails. The failure
is root-caused to inconsistent behavior of the core memory services
themselves when type AllocateAddress is used.
The main issue is apparently with the UEFI memory map -- the UEFI
memory map reflects the pre-allocated bins, but the actual
allocations at fixed addresses may go out of sync with it.
Everything else, such as:
- EFI_MEMORY_ATTRIBUTES_TABLE (page protections) being out of sync,
- S4 failing,
are just symptoms / consequences.
This patch is cherry-picked from Project Mu:
a9be767d9b
with the following minor changes:
1. Reformat the commit message so each line stays within 80
characters.
2. Remove // MU_CHANGE comments from the source code.
3. Update the comment style to follow the edk2 style.
Cc: Jian J Wang <jian.j.wang@intel.com>
Cc: Hao A Wu <hao.a.wu@intel.com>
Cc: Dandan Bi <dandan.bi@intel.com>
Cc: Laszlo Ersek <lersek@redhat.com>
Signed-off-by: Liming Gao <liming.gao@intel.com>
Acked-by: Laszlo Ersek <lersek@redhat.com>
Reviewed-by: Dandan Bi <dandan.bi@intel.com>
Acked-by: Hao A Wu <hao.a.wu@intel.com>
Take MAX_ALLOC_ADDRESS into account in the implementation of the
page allocation routines, so that they will only return memory
that is addressable by the CPU at boot time, even if more memory
is available in the GCD memory map.
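A hedged sketch of the clamp (illustrative, not the literal diff):
//
// Never search above what the CPU can address at boot time, even if
// the GCD memory map describes more memory.
//
if (MaxAddress > MAX_ALLOC_ADDRESS) {
  MaxAddress = MAX_ALLOC_ADDRESS;
}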
Contributed-under: TianoCore Contribution Agreement 1.1
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Reviewed-by: Jian J Wang <jian.j.wang@intel.com>
REF: https://bugzilla.tianocore.org/show_bug.cgi?id=1277
The failure is caused by a data type conversion between UINTN and
UINT64, which was checked in at 63ebde8ef6.
Cc: Star Zeng <star.zeng@intel.com>
Cc: Liming Gao <liming.gao@intel.com>
Contributed-under: TianoCore Contribution Agreement 1.1
Signed-off-by: Jian J Wang <jian.j.wang@intel.com>
Reviewed-by: Star Zeng <star.zeng@intel.com>
Freed-memory guard is used to detect UAF (Use-After-Free) memory
issues, i.e. illegal accesses to memory that has already been freed.
The principle behind it is similar to the pool guard feature: we
turn all pool memory allocations into page allocations and mark the
pages not-present once they are freed.
This also implies that, once a page is allocated and freed, it
cannot be re-allocated, which raises another issue: the risk that
memory space will be exhausted. To address it, the memory service
adds logic to put part (at most 64 pages at a time) of the freed
pages back into the page pool, so that the memory service still has
memory to allocate once all memory space has been allocated once.
This is called memory promotion. The promoted pages are always taken
from the oldest freed pages.
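A hedged sketch of the promotion path (names approximate the
HeapGuard and page-pool internals):
//
// Hand back a batch (at most 64 pages at a time) of the oldest
// freed, still-guarded pages so allocation can proceed after the
// address space has been fully allocated once.
//
if (PromoteGuardedFreePages (&StartAddress, &EndAddress)) {
  CoreAddRange (EfiConventionalMemory, StartAddress, EndAddress, 0);
}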
Another problem this feature brings is that the number of memory map
descriptors grows enormously (200+ -> 2000+). One of the changes in
this patch updates MergeMemoryMap() in file PropertiesTable.c to
allow merging freed pages back into the memory map. With that, the
number stays at around 510.
Cc: Star Zeng <star.zeng@intel.com>
Cc: Michael D Kinney <michael.d.kinney@intel.com>
Cc: Jiewen Yao <jiewen.yao@intel.com>
Cc: Ruiyu Ni <ruiyu.ni@intel.com>
Cc: Laszlo Ersek <lersek@redhat.com>
Contributed-under: TianoCore Contribution Agreement 1.1
Signed-off-by: Jian J Wang <jian.j.wang@intel.com>
Reviewed-by: Star Zeng <star.zeng@intel.com>
Now that Itanium support has been dropped, we can remove the various
occurrences of the ELILO on Itanium PE/COFF header workaround.
Link: https://bugzilla.tianocore.org/show_bug.cgi?id=816
Contributed-under: TianoCore Contribution Agreement 1.1
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Reviewed-by: Jiewen Yao <jiewen.yao@intel.com>
Reviewed-by: Laszlo Ersek <lersek@redhat.com>
Reviewed-by: Star Zeng <star.zeng@intel.com>
Functions that are never called have been removed:
ClearGuardMapBit, SetGuardMapBit, IsHeadGuard, IsTailGuard and
CoreEfiNotAvailableYetArg0.
https://bugzilla.tianocore.org/show_bug.cgi?id=1062
Cc: Star Zeng <star.zeng@intel.com>
Cc: Eric Dong <eric.dong@intel.com>
Contributed-under: TianoCore Contribution Agreement 1.1
Signed-off-by: shenglei <shenglei.zhang@intel.com>
Reviewed-by: Ruiyu Ni <ruiyu.ni@intel.com>
Reviewed-by: Jian J Wang <jian.j.wang@intel.com>
Reviewed-by: Laszlo Ersek <lersek@redhat.com>
Reviewed-by: Star Zeng <star.zeng@intel.com>
1. Do not use tab characters
2. No trailing white space in one line
3. All files must end with CRLF
Contributed-under: TianoCore Contribution Agreement 1.1
Signed-off-by: Liming Gao <liming.gao@intel.com>
Reviewed-by: Star Zeng <star.zeng@intel.com>
The CpuDxe driver has been updated to be able to access the DXE page
table in SMM mode, which means Heap Guard can get correct memory
paging attributes in either environment. It is therefore no longer
necessary to exclude SMM from detecting Heap Guard feature support.
Cc: Star Zeng <star.zeng@intel.com>
Cc: Eric Dong <eric.dong@intel.com>
Cc: Jiewen Yao <jiewen.yao@intel.com>
Cc: Ruiyu Ni <ruiyu.ni@intel.com>
Contributed-under: TianoCore Contribution Agreement 1.1
Signed-off-by: Jian J Wang <jian.j.wang@intel.com>
Reviewed-by: Star Zeng <star.zeng@intel.com>
The Heap Guard feature needs enough memory and paging to work;
otherwise calls to SetMemoryAttributes to change page attributes
will fail. This patch adds the necessary checks on the result of
calling SetMemoryAttributes, which helps users debug problems when
enabling this feature.
Cc: Star Zeng <star.zeng@intel.com>
Cc: Eric Dong <eric.dong@intel.com>
Cc: Jiewen Yao <jiewen.yao@intel.com>
Cc: Ruiyu Ni <ruiyu.ni@intel.com>
Contributed-under: TianoCore Contribution Agreement 1.1
Signed-off-by: Jian J Wang <jian.j.wang@intel.com>
Reviewed-by: Star Zeng <star.zeng@intel.com>
If the given address is on a 64K boundary and the requested bit
number is 64, SetBits(), ClearBits() and GetBits() all hit an ASSERT
by attempting a 64-bit shift, which is not allowed by LShiftU64()
and RShiftU64(). This patch fixes the issue by turning the bit
operations into whole-integer operations in that situation, as
sketched below.
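A hedged sketch of the fix for SetBits() (GUARDED_HEAP_MAP_ENTRY_BITS
is 64; ClearBits() and GetBits() are handled analogously):
if ((StartBit == 0) && (BitNumber == GUARDED_HEAP_MAP_ENTRY_BITS)) {
  //
  // Whole-integer case: no shift needed, so no 64-bit-shift ASSERT.
  //
  *BitMap = (UINT64)-1;
} else {
  *BitMap |= LShiftU64 (
               RShiftU64 ((UINT64)-1, GUARDED_HEAP_MAP_ENTRY_BITS - BitNumber),
               StartBit
               );
}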
Cc: Star Zeng <star.zeng@intel.com>
Cc: Eric Dong <eric.dong@intel.com>
Cc: Jiewen Yao <jiewen.yao@intel.com>
Cc: Ruiyu Ni <ruiyu.ni@intel.com>
Contributed-under: TianoCore Contribution Agreement 1.1
Signed-off-by: Jian J Wang <jian.j.wang@intel.com>
Reviewed-by: Ruiyu Ni <ruiyu.ni@intel.com>
Because HeapGuard needs CpuArchProtocol to update page attributes,
the feature is normally enabled after CpuArchProtocol is installed.
Since some drivers are loaded before CpuArchProtocol, they cannot
make use of the HeapGuard feature to detect potential issues.
This patch fixes that situation by updating the DXE core to skip the
NULL check against the global gCpu in IsMemoryTypeToGuard(), and
adding NULL checks against gCpu in SetGuardPage() and
UnsetGuardPage() to make sure they can be called but do nothing.
This allows HeapGuard to record all guarded memory without setting
the related Guard pages to not-present.
Once CpuArchProtocol is installed, a protocol notify is called to
complete the work of setting Guard pages to not-present.
Please note that the above changes would cause a #PF in GCD code
during cleanup of map entries, which is initiated by the CpuDxe
driver to write real MTRR and paging attributes back to GCD. During
that time, CpuDxe doesn't allow GCD to update memory attributes, so
no Guard page can be unset. As a result, Guarded memory could not be
freed during memory map cleanup.
The solution is to avoid allocating guarded memory for memory map
entries in GCD code. This is done by setting the global mOnGuarding
to TRUE before the memory allocation, and back to FALSE afterwards,
in the GCD function CoreAllocateGcdMapEntry(), as sketched below.
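A hedged sketch of CoreAllocateGcdMapEntry() after the change:
//
// Suppress guarding while allocating GCD map entries so the entries
// themselves never land in guarded (not-present) pages.
//
mOnGuarding = TRUE;
*TopEntry = AllocatePool (sizeof (EFI_GCD_MAP_ENTRY));
mOnGuarding = FALSE;
if (*TopEntry == NULL) {
  return EFI_OUT_OF_RESOURCES;
}
mOnGuarding = TRUE;
*BottomEntry = AllocatePool (sizeof (EFI_GCD_MAP_ENTRY));
mOnGuarding = FALSE;
if (*BottomEntry == NULL) {
  CoreFreePool (*TopEntry);
  return EFI_OUT_OF_RESOURCES;
}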
Cc: Star Zeng <star.zeng@intel.com>
Cc: Eric Dong <eric.dong@intel.com>
Cc: Jiewen Yao <jiewen.yao@intel.com>
Contributed-under: TianoCore Contribution Agreement 1.1
Signed-off-by: Jian J Wang <jian.j.wang@intel.com>
Reviewed-by: Ruiyu Ni <ruiyu.ni@intel.com>
There are two ASSERT issues triggered by the Windows 10 boot loader.
The first is caused by allocating memory in heap guard code during
another memory allocation, which is not allowed in the DXE core.
Re-entry of memory allocation was considered in the heap guard
feature, but there was a hole in the code of function
FindGuardedMemoryMap(). The fix is adding the AllocMapUnit parameter
to the while() condition, which prevents memory allocation from
happening during the Guard page check operation.
The second is caused by the core trying to allocate page 0 with a
Guard page, which causes the start address to roll back to the end
of the supported system address space. Per the requirements of heap
guard, the fix simply skips the free memory at page 0 and lets the
core continue searching for free memory after it.
Cc: Star Zeng <star.zeng@intel.com>
Cc: Eric Dong <eric.dong@intel.com>
Cc: Jiewen Yao <jiewen.yao@intel.com>
Contributed-under: TianoCore Contribution Agreement 1.1
Signed-off-by: Jian J Wang <jian.j.wang@intel.com>
Reviewed-by: Jiewen Yao <jiewen.yao@intel.com>
The root cause is an unnecessary check on the Size parameter in
function AdjustMemoryS(). It causes a standalone free page (which
happens to have Guard pages around it) in the free memory list to be
unallocatable, even if the requested memory size is less than a
page.
//
// At least one more page needed for Guard page.
//
if (Size < (SizeRequested + EFI_PAGES_TO_SIZE (1))) {
return 0;
}
Code later in the same function covers the above check implicitly,
so the fix is simply to remove it.
Cc: Star Zeng <star.zeng@intel.com>
Cc: Eric Dong <eric.dong@intel.com>
Cc: Jiewen Yao <jiewen.yao@intel.com>
Contributed-under: TianoCore Contribution Agreement 1.1
Signed-off-by: Jian J Wang <jian.j.wang@intel.com>
Reviewed-by: Jiewen Yao <jiewen.yao@intel.com>
Consider the following scenario (with both NX memory protection and
heap guard enabled):
1. Allocate 3 pages. The attributes of the adjacent memory pages
will be
|NOT-PRESENT| present | present | present |NOT-PRESENT|
2. Free the middle page. The attributes of the adjacent memory pages
should be
|NOT-PRESENT| present |NOT-PRESENT| present |NOT-PRESENT|
But the NX feature overwrites the attributes of the middle page, so
it still looks like the following, which is wrong:
|NOT-PRESENT| present | PRESENT | present |NOT-PRESENT|
The solution is to check the first and/or last page of a memory
block about to be marked NX, and skip them if they are Guard pages,
as sketched below.
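A hedged sketch of the check (IsGuardPage() is the HeapGuard query
assumed here):
//
// Trim a Guard page at either end of the range before applying the
// NX policy, so its not-present attribute is preserved.
//
if (IsGuardPage (Memory)) {
  Memory += EFI_PAGE_SIZE;
  Length -= EFI_PAGE_SIZE;
}
if ((Length > 0) && IsGuardPage (Memory + Length - EFI_PAGE_SIZE)) {
  Length -= EFI_PAGE_SIZE;
}
//
// Only then set EFI_MEMORY_XP on [Memory, Memory + Length).
//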
Cc: Star Zeng <star.zeng@intel.com>
Cc: Eric Dong <eric.dong@intel.com>
Cc: Jiewen Yao <jiewen.yao@intel.com>
Cc: Ruiyu Ni <ruiyu.ni@intel.com>
Contributed-under: TianoCore Contribution Agreement 1.1
Signed-off-by: Jian J Wang <jian.j.wang@intel.com>
Reviewed-by: Ruiyu Ni <ruiyu.ni@intel.com>
This is a regression introduced by the patch at
425d25699b
That fix didn't take freeing 0 pages into account: UnsetGuardPage()
must still be called even if no memory needs to be freed. The fix is
to move the call to UnsetGuardPage() to the place right after the
call to AdjustMemoryF().
Cc: Star Zeng <star.zeng@intel.com>
Cc: Eric Dong <eric.dong@intel.com>
Contributed-under: TianoCore Contribution Agreement 1.1
Signed-off-by: Jian J Wang <jian.j.wang@intel.com>
Reviewed-by: Ruiyu Ni <ruiyu.ni@intel.com>
This hole causes random page faults. The root cause is that a Guard
page, which has just been freed back to the page pool but whose
not-present attribute has not yet been cleared, can be allocated
right away by the internal function CoreFreeMemoryMapStack(). The
solution is to clear the not-present attribute of a freed Guard page
before any free operation, instead of after.
The reason we didn't do this before is that manipulating page
attributes might itself trigger a memory allocation, which would
deadlock inside a memory allocation/free operation. So we always set
or unset Guard pages outside the memory lock. After a thorough
analysis, we believe clearing a Guard page will not cause memory
allocation, because the memory about to be manipulated has certainly
been manipulated before. Therefore no memory allocation should occur
in this situation.
Since we now clear the Guard page's not-present attribute before
freeing instead of after, the debug code that clears freed memory
can be restored to its original form (i.e. no checking for and
bypassing of Guard pages).
Cc: Ruiyu Ni <ruiyu.ni@intel.com>
Cc: Eric Dong <eric.dong@intel.com>
Cc: Star Zeng <star.zeng@intel.com>
Contributed-under: TianoCore Contribution Agreement 1.1
Signed-off-by: Jian J Wang <jian.j.wang@intel.com>
Reviewed-by: Star Zeng <star.zeng@intel.com>
Reviewed-by: Ruiyu Ni <ruiyu.ni@intel.com>
Three issues are addressed here:
a. Make NX memory protection and heap guard compatible.
The solution is to check PcdDxeNxMemoryProtectionPolicy in Heap
Guard to see whether the freed memory should be set to NX, and to
set the Guard page to NX before it's freed back to the memory pool.
This solves the issue of the NX setting being overwritten by the
Heap Guard feature in certain configurations.
b. The returned pool address was sometimes not 8-byte aligned.
This happened only when BIT7 is not set in
PcdHeapGuardPropertyMask. Since 8-byte alignment is required by the
UEFI spec, placing the allocated pool adjacent to the tail guard
page cannot be guaranteed.
c. NULL address handling on allocation failure.
On allocation failure, a NULL is normally returned, but the Heap
Guard code would still try to adjust the starting address, causing a
non-NULL pointer to be returned.
Cc: Star Zeng <star.zeng@intel.com>
Cc: Eric Dong <eric.dong@intel.com>
Cc: Jiewen Yao <jiewen.yao@intel.com>
Contributed-under: TianoCore Contribution Agreement 1.1
Signed-off-by: Jian J Wang <jian.j.wang@intel.com>
Reviewed-by: Jiewen Yao <jiewen.yao@intel.com>
One issue is that the macros defined in HeapGuard.h
GUARD_HEAP_TYPE_PAGE
GUARD_HEAP_TYPE_POOL
don't match the definition of PCD PcdHeapGuardPropertyMask in
MdeModulePkg.dec. This patch fixes it by exchanging their BIT0 and
BIT1 values.
Another is that AdjustMemoryF() can return a larger NumberOfPages
than the value passed in. This is caused by double-counting a shared
Guard page that serves as both the tail Guard of the memory before
it and the head Guard of the memory after it. This happens only when
partially freeing a single page in the middle of a run of allocated
pages; the freed page should be turned into a new Guard page.
Cc: Jie Lin <jie.lin@intel.com>
Cc: Star Zeng <star.zeng@intel.com>
Cc: Eric Dong <eric.dong@intel.com>
Contributed-under: TianoCore Contribution Agreement 1.1
Signed-off-by: Jian J Wang <jian.j.wang@intel.com>
Reviewed-by: Star Zeng <star.zeng@intel.com>
Once the paging capabilities have been filtered out, there might be
adjacent entries sharing the same capabilities. It's recommended to
merge those entries for OS compatibility.
This patch uses the existing method MergeMemoryMap() to do so, by
simply turning the method from static to extern and calling it after
the filter code.
This patch is related to an issue described at
https://bugzilla.tianocore.org/show_bug.cgi?id=753
This patch has also passed boot tests with the following OSs:
Windows 10
Windows Server 2016
Fedora 26
Fedora 25
Cc: Jiewen Yao <jiewen.yao@intel.com>
Cc: Star Zeng <star.zeng@intel.com>
Cc: Laszlo Ersek <lersek@redhat.com>
Contributed-under: TianoCore Contribution Agreement 1.1
Signed-off-by: Jian J Wang <jian.j.wang@intel.com>
Reviewed-by: Star Zeng <star.zeng@intel.com>
Acked-by: Laszlo Ersek <lersek@redhat.com>
Some OSs treat EFI_MEMORY_DESCRIPTOR.Attribute as the attributes
actually set, and change memory paging attributes accordingly. But
currently EFI_MEMORY_DESCRIPTOR.Attribute is assigned the value of
Capabilities in the GCD memory map, which can cause boot problems.
Clearing all paging-related capabilities works around the issue. The
code added in this patch is expected to be removed once the usage of
EFI_MEMORY_DESCRIPTOR.Attribute is clarified in the UEFI spec and
adopted by both the EDK II core and all supported OSs.
Laszlo did a thorough test on the OVMF emulated platform. The
details can be found at
https://bugzilla.tianocore.org/show_bug.cgi?id=753#c10
Cc: Jiewen Yao <jiewen.yao@intel.com>
Cc: Star Zeng <star.zeng@intel.com>
Cc: Laszlo Ersek <lersek@redhat.com>
Cc: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Contributed-under: TianoCore Contribution Agreement 1.1
Signed-off-by: Jian J Wang <jian.j.wang@intel.com>
Tested-by: Laszlo Ersek <lersek@redhat.com>
Reviewed-by: Star Zeng <star.zeng@intel.com>
Reviewed-by: Laszlo Ersek <lersek@redhat.com>
In DumpGuardedMemoryBitmap() and SetAllGuardPages(), the code didn't
check whether the global mMapLevel holds a legal value, leaving a
logic hole that could cause array overflows in the code that
follows. This patch adds sanity checks before any array references
in those methods.
Cc: Wu Hao <hao.a.wu@intel.com>
Cc: Star Zeng <star.zeng@intel.com>
Cc: Eric Dong <eric.dong@intel.com>
Contributed-under: TianoCore Contribution Agreement 1.1
Signed-off-by: Jian J Wang <jian.j.wang@intel.com>
Reviewed-by: Wu Hao <hao.a.wu@intel.com>
The build error was introduced by the following check-in:
2930ef9809235a4490c8
Visual Studio versions older than 2015 don't support integer
constants in binary format (0bxxx). This patch changes them to BIT
macros to fix the error. It also cleans up a coding style issue: an
unmatched comment for a return value.
Cc: Star Zeng <star.zeng@intel.com>
Cc: Eric Dong <eric.dong@intel.com>
Cc: Bi Dandan <dandan.bi@intel.com>
Contributed-under: TianoCore Contribution Agreement 1.1
Signed-off-by: Jian J Wang <jian.j.wang@intel.com>
Reviewed-by: Star Zeng <star.zeng@intel.com>
This feature uses the paging mechanism to add a hidden (not-present)
page just before and after each allocated memory block. If code
tries to access memory outside of the allocated part, a page fault
exception is triggered.
The feature is controlled by three PCDs:
gEfiMdeModulePkgTokenSpaceGuid.PcdHeapGuardPropertyMask
gEfiMdeModulePkgTokenSpaceGuid.PcdHeapGuardPoolType
gEfiMdeModulePkgTokenSpaceGuid.PcdHeapGuardPageType
BIT0 and BIT1 of PcdHeapGuardPropertyMask enable or disable the
memory guard for pages and pools respectively. PcdHeapGuardPoolType
and/or PcdHeapGuardPageType enable or disable the guard for specific
memory types. For example, we can turn on the guard only for
EfiBootServicesData and EfiRuntimeServicesData by setting the PCDs
to 0x50, as in the example below.
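An illustrative platform DSC fragment (values as in the example
above; not a recommendation for production):
[PcdsFixedAtBuild]
  # BIT0 = page guard, BIT1 = pool guard
  gEfiMdeModulePkgTokenSpaceGuid.PcdHeapGuardPropertyMask|0x03
  # 0x50 = BIT4|BIT6: EfiBootServicesData and EfiRuntimeServicesData
  gEfiMdeModulePkgTokenSpaceGuid.PcdHeapGuardPageType|0x50
  gEfiMdeModulePkgTokenSpaceGuid.PcdHeapGuardPoolType|0x50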
Pool memory is usually not an integer multiple of one page, and is
more often less than a page; there is no way to monitor overflow at
both the top and the bottom of pool memory. BIT7 of
PcdHeapGuardPropertyMask controls how the head of pool memory is
positioned, making it easier to catch overflows in either the
growing or the decreasing direction.
Note 1: Turning on heap guard, especially pool guard, introduces
many memory fragments. Windows 10 has a limitation in its boot
loader, which accepts at most 512 memory descriptors passed from the
BIOS; this will prevent Windows 10 from booting if heap guard is
enabled. The latest Linux distributions with the grub boot loader
have no such issue. In general, enabling this feature in a
production BIOS build is not recommended.
Note 2: Don't enable this feature for the NT32 emulation platform,
which doesn't support paging.
Cc: Star Zeng <star.zeng@intel.com>
Cc: Eric Dong <eric.dong@intel.com>
Cc: Jiewen Yao <jiewen.yao@intel.com>
Cc: Michael Kinney <michael.d.kinney@intel.com>
Cc: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Suggested-by: Ayellet Wolman <ayellet.wolman@intel.com>
Contributed-under: TianoCore Contribution Agreement 1.1
Signed-off-by: Jian J Wang <jian.j.wang@intel.com>
Reviewed-by: Jiewen Yao <jiewen.yao@intel.com>
Regression-tested-by: Laszlo Ersek <lersek@redhat.com>
One issue caused by enabling NULL pointer detection is that some PCI
device option ROMs, binary drivers and binary OS boot loaders may
have NULL pointer access bugs, which will prevent the BIOS from
booting and are almost impossible to fix. BIT7 of PCD
PcdNullPointerDetectionPropertyMask is used as a workaround that
tells the BIOS to disable NULL pointer detection right after the
gEfiEndOfDxeEventGroupGuid event, and then let boot continue, as
sketched below.
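A hedged sketch of the workaround; the callback body approximates
what the DXE core does, and gCpu is the CPU arch protocol:
STATIC
VOID
EFIAPI
DisableNullDetectionAtTheEndOfDxe (
  IN EFI_EVENT  Event,
  IN VOID       *Context
  )
{
  //
  // Make page 0 accessible again so buggy option ROMs / boot loaders
  // can run (illustrative; the real code also updates GCD attributes).
  //
  gCpu->SetMemoryAttributes (gCpu, 0, EFI_PAGES_TO_SIZE (1), 0);
  CoreCloseEvent (Event);
}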
Cc: Star Zeng <star.zeng@intel.com>
Cc: Eric Dong <eric.dong@intel.com>
Cc: Jiewen Yao <jiewen.yao@intel.com>
Cc: Michael Kinney <michael.d.kinney@intel.com>
Cc: Ayellet Wolman <ayellet.wolman@intel.com>
Suggested-by: Ayellet Wolman <ayellet.wolman@intel.com>
Contributed-under: TianoCore Contribution Agreement 1.1
Signed-off-by: Jian J Wang <jian.j.wang@intel.com>
Reviewed-by: Star Zeng <star.zeng@intel.com>
Reviewed-by: Jiewen Yao <jiewen.yao@intel.com>
When pages are double-freed via FreePages(), or already-allocated
pages are allocated via AllocatePages() with type AllocateAddress,
the code prints the debug message "ConvertPages: Incompatible memory
types", but that message is not very informative for the error paths
in FreePages() or AllocatePages().
Refer to
https://lists.01.org/pipermail/edk2-devel/2017-August/013075.html
for the discussion.
This patch enhances the debug messages for the error paths in
FreePages() and AllocatePages().
Cc: Liming Gao <liming.gao@intel.com>
Cc: Andrew Fish <afish@apple.com>
Contributed-under: TianoCore Contribution Agreement 1.0
Signed-off-by: Star Zeng <star.zeng@intel.com>
Reviewed-by: Liming Gao <liming.gao@intel.com>
When attempting to perform page allocations of type AllocateAddress,
we fail to check whether the entire region is free before splitting
the region. This may leak memory further into the routine, when it
turns out that one of the memory map entries intersected by the
region is already occupied; in that case, prior conversions are not
rolled back.
For instance, starting from this situation
0x000040000000-0x00004007ffff [ConventionalMemory ]
0x000040080000-0x00004009ffff [Boot Data ]
0x0000400a0000-0x000047ffffff [ConventionalMemory ]
an EfiLoaderData allocation @ 0x40000000 that covers the Boot Data
region will fail, but leave the first part of the allocation
converted, so we end up with
0x000040000000-0x00004007ffff [Loader Data ]
0x000040080000-0x00004009ffff [Boot Data ]
0x0000400a0000-0x000047ffffff [ConventionalMemory ]
even though the AllocatePages() call returned an error.
So let's check beforehand that AllocateAddress allocations are
covered by a single memory map entry, so that the call either
succeeds or fails completely, rather than leaking allocations.
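A hedged sketch of the pre-check, reconstructed from the description
above (MEMORY_MAP and gMemoryMap are the DXE core's internal types):
//
// Verify that [Start, End] lies entirely within a single free memory
// map entry; otherwise fail before converting anything.
//
for (Link = gMemoryMap.ForwardLink; Link != &gMemoryMap; Link = Link->ForwardLink) {
  Entry = CR (Link, MEMORY_MAP, Link, MEMORY_MAP_SIGNATURE);
  if ((Entry->Type == EfiConventionalMemory) &&
      (Start >= Entry->Start) && (End <= Entry->End)) {
    break;                       // fully covered by one free entry
  }
}
if (Link == &gMemoryMap) {
  Status = EFI_NOT_FOUND;
  goto Done;
}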
Contributed-under: TianoCore Contribution Agreement 1.0
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Reviewed-by: Star Zeng <star.zeng@intel.com>
REF: https://bugzilla.tianocore.org/show_bug.cgi?id=370
Use GLOBAL_REMOVE_IF_UNREFERENCED for some memory profile global
variables, so that their symbols can be removed when the memory
profile is disabled.
Cc: Jiewen Yao <jiewen.yao@intel.com>
Cc: Feng Tian <feng.tian@intel.com>
Contributed-under: TianoCore Contribution Agreement 1.0
Signed-off-by: Star Zeng <star.zeng@intel.com>
Reviewed-by: Jiewen Yao <jiewen.yao@intel.com>
Remove the local definitions for the default and runtime page allocation
granularity macros, and switch to the new MdePkg versions.
Contributed-under: TianoCore Contribution Agreement 1.0
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Reviewed-by: Jiewen Yao <jiewen.yao@intel.com>
Reviewed-by: Liming Gao <liming.gao@intel.com>
This implements a DXE memory protection policy that ensures that regions
that don't require executable permissions are mapped with the non-exec
attribute set.
First, it iterates over all entries in the UEFI memory map and
removes executable permissions according to the configured DXE
memory protection policy, as recorded in
PcdDxeNxMemoryProtectionPolicy.
Second, it sets or clears the non-executable attribute when
allocating or freeing pages, for both page-based and pool-based
allocations.
Note that this complements the image protection facility, which applies
strict permissions to BootServicesCode/RuntimeServicesCode regions when
the section alignment allows it. The memory protection configured by this
patch operates on non-code regions only.
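A hedged sketch of the policy lookup at the heart of the patch
(close to, but not necessarily identical with, the actual helper):
UINT64
GetPermissionAttributeForMemoryType (
  IN EFI_MEMORY_TYPE  MemoryType
  )
{
  //
  // Each bit of PcdDxeNxMemoryProtectionPolicy selects one memory
  // type; a set bit means the type must be mapped non-executable.
  //
  if ((PcdGet64 (PcdDxeNxMemoryProtectionPolicy) & LShiftU64 (1, MemoryType)) != 0) {
    return EFI_MEMORY_XP;
  }
  return 0;
}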
Contributed-under: TianoCore Contribution Agreement 1.0
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Reviewed-by: Jiewen Yao <jiewen.yao@intel.com>
Reviewed-by: Liming Gao <liming.gao@intel.com>
In preparation for adding memory permission attribute management to
the pool allocator, split off the locking of the pool metadata into
a separate lock. This is an improvement in itself, given that pool
allocations can only interfere with the page allocation bookkeeping
when pool pages are allocated or released. But it is also required
to ensure that the permission attribute management does not
deadlock, given that it may trigger page table splits leading to
additional page tables being allocated.
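A hedged sketch of the split (the lock and the acquire/release
pattern approximate the patch; CoreAllocatePoolI's exact signature
may differ):
STATIC EFI_LOCK  mPoolMemoryLock = EFI_INITIALIZE_LOCK_VARIABLE (TPL_NOTIFY);
//
// Pool metadata is now protected by its own lock, so page-attribute
// updates triggered from pool code cannot deadlock on gMemoryLock.
//
CoreAcquireLock (&mPoolMemoryLock);
Buffer = CoreAllocatePoolI (PoolType, Size);
CoreReleaseLock (&mPoolMemoryLock);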
Contributed-under: TianoCore Contribution Agreement 1.0
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Reviewed-by: Jiewen Yao <jiewen.yao@intel.com>
Reviewed-by: Liming Gao <liming.gao@intel.com>