1. Do not use tab characters
2. No trailing white space at the end of a line
3. All files must end with CRLF
Contributed-under: TianoCore Contribution Agreement 1.1
Signed-off-by: Liming Gao <liming.gao@intel.com>
Reviewed-by: Star Zeng <star.zeng@intel.com>
The CpuDxe driver is updated to be able to access the DXE page table in
SMM mode, which means Heap Guard can get correct memory paging
attributes in that environment. It's no longer necessary to exclude SMM
from detecting Heap Guard feature support.
Cc: Star Zeng <star.zeng@intel.com>
Cc: Eric Dong <eric.dong@intel.com>
Cc: Jiewen Yao <jiewen.yao@intel.com>
Cc: Ruiyu Ni <ruiyu.ni@intel.com>
Contributed-under: TianoCore Contribution Agreement 1.1
Signed-off-by: Jian J Wang <jian.j.wang@intel.com>
Reviewed-by: Star Zeng <star.zeng@intel.com>
The Heap Guard feature needs enough memory and paging to work; otherwise
calling SetMemoryAttributes() to change page attributes will fail. This
patch adds the necessary checks on the result of calling
SetMemoryAttributes(), which helps users debug problems when enabling
this feature.
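A minimal sketch of the kind of check added (call site assumed):

  Status = gCpu->SetMemoryAttributes (gCpu, BaseAddress, EFI_PAGE_SIZE, EFI_MEMORY_RP);
  ASSERT_EFI_ERROR (Status);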
Cc: Star Zeng <star.zeng@intel.com>
Cc: Eric Dong <eric.dong@intel.com>
Cc: Jiewen Yao <jiewen.yao@intel.com>
Cc: Ruiyu Ni <ruiyu.ni@intel.com>
Contributed-under: TianoCore Contribution Agreement 1.1
Signed-off-by: Jian J Wang <jian.j.wang@intel.com>
Reviewed-by: Star Zeng <star.zeng@intel.com>
If the given address is on a 64K boundary and the requested bit number
is 64, SetBits(), ClearBits() and GetBits() will all hit an ASSERT when
trying to do a 64-bit shift, which is not allowed by LShift() and
RShift(). This patch fixes the issue by turning the bit operations into
whole-integer operations in that situation.
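A hedged sketch of the approach, simplified from the bitmap code
(GUARDED_HEAP_MAP_ENTRY_BITS is the 64-bit word width):

  if (BitNumber == GUARDED_HEAP_MAP_ENTRY_BITS) {
    //
    // The whole 64-bit word is affected: operate on the integer directly
    // instead of shifting by 64, which LShift()/RShift() ASSERT on.
    //
    *BitMap = (UINT64)-1;
  } else {
    *BitMap |= LShiftU64 (RShiftU64 ((UINT64)-1, 64 - BitNumber), StartBit);
  }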
Cc: Star Zeng <star.zeng@intel.com>
Cc: Eric Dong <eric.dong@intel.com>
Cc: Jiewen Yao <jiewen.yao@intel.com>
Cc: Ruiyu Ni <ruiyu.ni@intel.com>
Contributed-under: TianoCore Contribution Agreement 1.1
Signed-off-by: Jian J Wang <jian.j.wang@intel.com>
Reviewed-by: Ruiyu Ni <ruiyu.ni@intel.com>
Due to the fact that HeapGuard needs CpuArchProtocol to update page
attributes, the feature is normally enabled after CpuArchProtocol is
installed. Since some drivers are loaded before CpuArchProtocol, they
cannot make use of the HeapGuard feature to detect potential issues.
This patch fixes the above situation by updating the DXE core to skip
the NULL check against the global gCpu in IsMemoryTypeToGuard(), and
adding a NULL check against gCpu in SetGuardPage() and UnsetGuardPage()
so that they can be called but do nothing. This allows HeapGuard to
record all guarded memory without setting the related Guard pages to
not-present.
Once the CpuArchProtocol is installed, a protocol notify will be called
to complete the work of setting Guard pages to not-present.
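A minimal sketch of the gCpu handling (simplified):

VOID
SetGuardPage (
  IN EFI_PHYSICAL_ADDRESS  BaseAddress
  )
{
  if (gCpu == NULL) {
    //
    // Called before CpuArchProtocol is available: the guarded range has
    // been recorded; the page is set to not-present later, from the
    // protocol notification.
    //
    return;
  }

  gCpu->SetMemoryAttributes (gCpu, BaseAddress, EFI_PAGE_SIZE, EFI_MEMORY_RP);
}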
Please note that the above changes will cause a #PF in GCD code during
cleanup of map entries, which is initiated by the CpuDxe driver to
update real MTRR and paging attributes back to GCD. During that time,
CpuDxe doesn't allow GCD to update memory attributes, so no Guard page
can be unset. As a result, Guarded memory cannot be freed during memory
map cleanup.
The solution is to avoid allocating guarded memory for memory map
entries in GCD code. It's done by setting the global mOnGuarding to TRUE
before memory allocation and setting it back to FALSE afterwards in the
GCD function CoreAllocateGcdMapEntry(), as sketched below.
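A hedged sketch of the change in CoreAllocateGcdMapEntry() (shown for
one of the two entries it allocates):

  mOnGuarding = TRUE;
  *TopEntry = AllocateZeroPool (sizeof (EFI_GCD_MAP_ENTRY));
  mOnGuarding = FALSE;
  if (*TopEntry == NULL) {
    return EFI_OUT_OF_RESOURCES;
  }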
Cc: Star Zeng <star.zeng@intel.com>
Cc: Eric Dong <eric.dong@intel.com>
Cc: Jiewen Yao <jiewen.yao@intel.com>
Contributed-under: TianoCore Contribution Agreement 1.1
Signed-off-by: Jian J Wang <jian.j.wang@intel.com>
Reviewed-by: Ruiyu Ni <ruiyu.ni@intel.com>
There are two ASSERT issues which will be triggered by the boot loader
of Windows 10.
The first is caused by allocating memory in heap guard during another
memory allocation, which is not allowed in the DXE core. Avoiding
reentry of memory allocation had been considered in the heap guard
feature, but there's a hole in the code of function
FindGuardedMemoryMap(). The fix is adding the AllocMapUnit parameter to
the condition of the while(), which prevents memory allocation from
happening during the Guard page check operation.
The second is caused by the core trying to allocate page 0 with a Guard
page, which makes the start address roll back to the end of the
supported system address space. According to the requirements of heap
guard, the fix is simply to skip the free memory at page 0 and let the
core continue searching for free memory after it, as sketched below.
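A hedged sketch of the page-0 skip (names assumed; the real logic lives
in the free-page search path):

  if (Start == 0) {
    //
    // Skip the free page at address 0: its head Guard would roll back to
    // the end of the supported address space. Keep searching above it.
    //
    Start = EFI_PAGE_SIZE;
  }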
Cc: Star Zeng <star.zeng@intel.com>
Cc: Eric Dong <eric.dong@intel.com>
Cc: Jiewen Yao <jiewen.yao@intel.com>
Contributed-under: TianoCore Contribution Agreement 1.1
Signed-off-by: Jian J Wang <jian.j.wang@intel.com>
Reviewed-by: Jiewen Yao <jiewen.yao@intel.com>
The root cause is an unnecessary check of the Size parameter in function
AdjustMemoryS(). It causes one standalone free page (which happens to
have Guard pages around it) in the free memory list to be unallocatable,
even if the requested memory size is less than a page.
//
// At least one more page needed for Guard page.
//
if (Size < (SizeRequested + EFI_PAGES_TO_SIZE (1))) {
return 0;
}
The code that follows in the same function actually covers the above
check implicitly, so the fix is simply removing the check.
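A hedged sketch (not verbatim) of that later logic: the tail-aligned
target is computed and rejected when it falls below the start of the
free block, which subsumes the removed check.

  Target = Start + Size - SizeRequested;
  Target &= ~((UINT64)EFI_PAGE_SIZE - 1);
  if (Target < Start) {
    return 0;
  }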
Cc: Star Zeng <star.zeng@intel.com>
Cc: Eric Dong <eric.dong@intel.com>
Cc: Jiewen Yao <jiewen.yao@intel.com>
Contributed-under: TianoCore Contribution Agreement 1.1
Signed-off-by: Jian J Wang <jian.j.wang@intel.com>
Reviewed-by: Jiewen Yao <jiewen.yao@intel.com>
Consider the following scenario (both NX memory protection and heap
guard enabled):
1. Allocate 3 pages. The attributes of adjacent memory pages will be
|NOT-PRESENT| present | present | present |NOT-PRESENT|
2. Free the middle page. The attributes of adjacent memory pages should be
|NOT-PRESENT| present |NOT-PRESENT| present |NOT-PRESENT|
But the NX feature will overwrite the attributes of the middle page, so
it still looks like below, which is wrong.
|NOT-PRESENT| present | PRESENT | present |NOT-PRESENT|
The solution is to check whether the first and/or last page of a memory
block about to be marked NX is a Guard page, and to skip it if so, as
sketched below.
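A minimal sketch of the check, reusing the existing IsGuardPage()
helper (exact call site assumed):

  if (IsGuardPage (Memory)) {
    //
    // The head Guard must stay not-present; don't overwrite it with XP.
    //
    Memory += EFI_PAGE_SIZE;
    Length -= EFI_PAGE_SIZE;
  }

  if ((Length > 0) && IsGuardPage (Memory + Length - EFI_PAGE_SIZE)) {
    //
    // Likewise for the tail Guard.
    //
    Length -= EFI_PAGE_SIZE;
  }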
Cc: Star Zeng <star.zeng@intel.com>
Cc: Eric Dong <eric.dong@intel.com>
Cc: Jiewen Yao <jiewen.yao@intel.com>
Cc: Ruiyu Ni <ruiyu.ni@intel.com>
Contributed-under: TianoCore Contribution Agreement 1.1
Signed-off-by: Jian J Wang <jian.j.wang@intel.com>
Reviewed-by: Ruiyu Ni <ruiyu.ni@intel.com>
This issue is a regression caused by the patch at
425d25699b
That fix didn't take freeing 0 pages into account; UnsetGuardPage()
still needs to be called even when no memory needs to be freed. The fix
is simply moving the call to UnsetGuardPage() to the place right after
the call to AdjustMemoryF().
Cc: Star Zeng <star.zeng@intel.com>
Cc: Eric Dong <eric.dong@intel.com>
Contributed-under: TianoCore Contribution Agreement 1.1
Signed-off-by: Jian J Wang <jian.j.wang@intel.com>
Reviewed-by: Ruiyu Ni <ruiyu.ni@intel.com>
This hole causes random page faults. The root cause is that a Guard
page, which has just been freed back to the page pool but whose
not-present attribute has not yet been cleared, can be allocated right
away by the internal function CoreFreeMemoryMapStack(). The solution is
to clear the not-present attribute of a freed Guard page before doing
any free operation, instead of after those operations.
The reason we didn't do this before is that manipulating page attributes
might itself cause a memory allocation, which would deadlock inside a
memory allocation/free operation; so we always set or unset Guard pages
outside the memory lock. After a thorough analysis, we believe clearing
a Guard page will not cause memory allocation, because the memory to be
manipulated has certainly been manipulated before. Therefore no memory
allocation should occur in this situation.
Since we now clear a Guard page's not-present attribute before freeing
instead of after, the debug code that clears freed memory can be
restored to its original form (i.e. without checking for and bypassing
Guard pages). A sketch of the new ordering follows.
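A hedged sketch of the reordering in the page-free path (function names
from HeapGuard.h; exact location assumed):

  //
  // Clear the not-present attribute first, so CoreFreeMemoryMapStack()
  // can safely reuse any page freed below.
  //
  UnsetGuardForMemory (Memory, NumberOfPages);

  Status = CoreConvertPages (Memory, NumberOfPages, EfiConventionalMemory);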
Cc: Ruiyu Ni <ruiyu.ni@intel.com>
Cc: Eric Dong <eric.dong@intel.com>
Cc: Star Zeng <star.zeng@intel.com>
Contributed-under: TianoCore Contribution Agreement 1.1
Signed-off-by: Jian J Wang <jian.j.wang@intel.com>
Reviewed-by: Star Zeng <star.zeng@intel.com>
Reviewed-by: Ruiyu Ni <ruiyu.ni@intel.com>
Three issues are addressed here:
a. Make NX memory protection and heap guard compatible
The solution is to check PcdDxeNxMemoryProtectionPolicy in Heap Guard to
see whether the freed memory should be set to NX, and to set the Guard
page to NX before it's freed back to the memory pool. This solves the
issue of the NX setting being overwritten by the Heap Guard feature in
certain configurations.
b. The returned pool address was sometimes not 8-byte aligned
This happened only when BIT7 is not set in PcdHeapGuardPropertyMask.
Since 8-byte alignment is required by the UEFI spec, placing the
allocated pool immediately adjacent to the tail Guard page cannot be
guaranteed.
c. NULL address handling on allocation failure
When allocation fails, a NULL is normally returned. But the Heap Guard
code would still try to adjust the starting address, causing a non-NULL
pointer to be returned. A sketch covering b and c follows.
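A hedged sketch of the adjusted pool-head logic covering issues b and c
(close to, but not necessarily verbatim, the patch):

VOID *
AdjustPoolHeadA (
  IN EFI_PHYSICAL_ADDRESS  Memory,
  IN UINTN                 NoPages,
  IN UINTN                 Size
  )
{
  if (Memory == 0 || (PcdGet8 (PcdHeapGuardPropertyMask) & BIT7) != 0) {
    //
    // Issue c: pass a failed (NULL) allocation through unchanged. With
    // BIT7 set, the pool head stays near the head Guard page.
    //
    return (VOID *)(UINTN)Memory;
  }

  //
  // Issue b: round the size up so the address returned near the tail
  // Guard keeps the UEFI-required 8-byte alignment.
  //
  Size = ALIGN_VALUE (Size, 8);
  return (VOID *)(UINTN)(Memory + EFI_PAGES_TO_SIZE (NoPages) - Size);
}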
Cc: Star Zeng <star.zeng@intel.com>
Cc: Eric Dong <eric.dong@intel.com>
Cc: Jiewen Yao <jiewen.yao@intel.com>
Contributed-under: TianoCore Contribution Agreement 1.1
Signed-off-by: Jian J Wang <jian.j.wang@intel.com>
Reviewed-by: Jiewen Yao <jiewen.yao@intel.com>
One issue is that the macros defined in HeapGuard.h
GUARD_HEAP_TYPE_PAGE
GUARD_HEAP_TYPE_POOL
don't match the definition of PCD PcdHeapGuardPropertyMask in
MdeModulePkg.dec. This patch fixes it by exchanging their BIT0 and BIT1
values.
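After the fix the macros line up with the PCD's bit assignment (BIT0 =
page guard, BIT1 = pool guard), roughly:

#define GUARD_HEAP_TYPE_PAGE  BIT0
#define GUARD_HEAP_TYPE_POOL  BIT1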
Another is that the method AdjustMemoryF() will return a bigger
NumberOfPages than the value passed in. This is caused by counting a
shared Guard page twice: such a page can serve as both the tail Guard of
the memory before it and the head Guard of the memory after it. This
happens only when partially freeing a single page in the middle of a
bunch of allocated pages; the freed page should be turned into a new
Guard page.
Cc: Jie Lin <jie.lin@intel.com>
Cc: Star Zeng <star.zeng@intel.com>
Cc: Eric Dong <eric.dong@intel.com>
Contributed-under: TianoCore Contribution Agreement 1.1
Signed-off-by: Jian J Wang <jian.j.wang@intel.com>
Reviewed-by: Star Zeng <star.zeng@intel.com>
Once the paging capabilities are filtered out, there might be adjacent
entries sharing the same capabilities. It's recommended to merge those
entries for OS compatibility purposes.
This patch makes use of the existing method MergeMemoryMap() to do it,
simply by turning the method from static to extern and calling it after
the filter code.
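The now-extern declaration is roughly (parameter names assumed):

VOID
MergeMemoryMap (
  IN OUT EFI_MEMORY_DESCRIPTOR  *MemoryMap,
  IN OUT UINTN                  *MemoryMapSize,
  IN UINTN                      DescriptorSize
  );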
This patch is related to an issue described at
https://bugzilla.tianocore.org/show_bug.cgi?id=753
This patch has also passed boot tests with the following OSes:
Windows 10
Windows Server 2016
Fedora 26
Fedora 25
Cc: Jiewen Yao <jiewen.yao@intel.com>
Cc: Star Zeng <star.zeng@intel.com>
Cc: Laszlo Ersek <lersek@redhat.com>
Contributed-under: TianoCore Contribution Agreement 1.1
Signed-off-by: Jian J Wang <jian.j.wang@intel.com>
Reviewed-by: Star Zeng <star.zeng@intel.com>
Acked-by: Laszlo Ersek <lersek@redhat.com>
Some OSes treat EFI_MEMORY_DESCRIPTOR.Attribute as the attributes
actually set, and change memory paging attributes accordingly. But
currently EFI_MEMORY_DESCRIPTOR.Attribute is assigned the value from
Capabilities in the GCD memory map. This might cause boot problems, and
clearing all paging-related capabilities can work around it. The code
added in this patch is supposed to be removed once the usage of
EFI_MEMORY_DESCRIPTOR.Attribute is clarified in the UEFI spec and
adopted by both the EDK II Core and all supported OSes.
Laszlo did a thorough test on the OVMF emulated platform; the details
can be found at
https://bugzilla.tianocore.org/show_bug.cgi?id=753#c10
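A minimal sketch of the workaround (descriptor variable name assumed):

  //
  // Strip paging-related capabilities from the Attribute field reported
  // to the OS; to be removed once the UEFI spec clarifies the field.
  //
  MemoryMapEntry->Attribute &=
    ~(UINT64)(EFI_MEMORY_RP | EFI_MEMORY_RO | EFI_MEMORY_XP);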
Cc: Jiewen Yao <jiewen.yao@intel.com>
Cc: Star Zeng <star.zeng@intel.com>
Cc: Laszlo Ersek <lersek@redhat.com>
Cc: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Contributed-under: TianoCore Contribution Agreement 1.1
Signed-off-by: Jian J Wang <jian.j.wang@intel.com>
Tested-by: Laszlo Ersek <lersek@redhat.com>
Reviewed-by: Star Zeng <star.zeng@intel.com>
Reviewed-by: Laszlo Ersek <lersek@redhat.com>
In the methods DumpGuardedMemoryBitmap() and SetAllGuardPages(), the
code didn't check whether the global mMapLevel holds a legal value,
leaving a logic hole that could cause an array overflow in the code
that follows.
This patch adds a sanity check before any array reference in those
methods.
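A minimal sketch of the added check (bounds taken from the bitmap table
depth):

  if (mMapLevel == 0 || mMapLevel > GUARDED_HEAP_MAP_TABLE_DEPTH) {
    //
    // Bitmap not initialized yet, or depth out of range: nothing to do.
    //
    return;
  }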
Cc: Wu Hao <hao.a.wu@intel.com>
Cc: Star Zeng <star.zeng@intel.com>
Cc: Eric Dong <eric.dong@intel.com>
Contributed-under: TianoCore Contribution Agreement 1.1
Signed-off-by: Jian J Wang <jian.j.wang@intel.com>
Reviewed-by: Wu Hao <hao.a.wu@intel.com>
The build error was introduced by the following check-in:
2930ef9809235a4490c8
Visual Studio versions older than 2015 don't support integer constants
in binary format (0bxxx). This patch changes them to BIT macros to fix
that. It also cleans up a coding style issue about an unmatched comment
for a return value.
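The conversion is of this shape (hypothetical line; the real constants
live in the heap guard code):

- if ((GuardBitmap & 0b11) == 0b11) {
+ if ((GuardBitmap & (BIT1 | BIT0)) == (BIT1 | BIT0)) {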
Cc: Star Zeng <star.zeng@intel.com>
Cc: Eric Dong <eric.dong@intel.com>
Cc: Bi Dandan <dandan.bi@intel.com>
Contributed-under: TianoCore Contribution Agreement 1.1
Signed-off-by: Jian J Wang <jian.j.wang@intel.com>
Reviewed-by: Star Zeng <star.zeng@intel.com>
This feature makes use of the paging mechanism to add a hidden (not
present) page just before and after each allocated memory block. If code
tries to access memory outside of the allocated part, a page fault
exception will be triggered.
This feature is controlled by three PCDs:
gEfiMdeModulePkgTokenSpaceGuid.PcdHeapGuardPropertyMask
gEfiMdeModulePkgTokenSpaceGuid.PcdHeapGuardPoolType
gEfiMdeModulePkgTokenSpaceGuid.PcdHeapGuardPageType
BIT0 and BIT1 of PcdHeapGuardPropertyMask can be used to enable or
disable the memory guard for pages and pools respectively.
PcdHeapGuardPoolType and/or PcdHeapGuardPageType are used to enable or
disable guarding for specific types of memory. For example, we can turn
on guarding only for EfiBootServicesData and EfiRuntimeServicesData by
setting the PCD to 0x50.
Pool memory is usually not an integer multiple of one page, and is more
often less than a page. There's no way to monitor overflow at both the
top and the bottom of pool memory, so BIT7 of PcdHeapGuardPropertyMask
is used to control how the head of pool memory is positioned, making it
easier to catch an overflow either in the growing direction or in the
decreasing direction.
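For example, a platform DSC enabling both page and pool guard for
boot-services and runtime-services data could carry (illustrative
values):

  gEfiMdeModulePkgTokenSpaceGuid.PcdHeapGuardPropertyMask|0x03
  gEfiMdeModulePkgTokenSpaceGuid.PcdHeapGuardPageType|0x50
  gEfiMdeModulePkgTokenSpaceGuid.PcdHeapGuardPoolType|0x50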
Note1: Turning on heap guard, especially pool guard, will introduce many
memory fragments. Windows 10 has a limitation in its boot loader, which
accepts at most 512 memory descriptors passed from the BIOS; this will
prevent Windows 10 from booting if heap guard is enabled. The latest
Linux distributions with the GRUB boot loader have no such issue.
Normally it's not recommended to enable this feature in a production
build of BIOS.
Note2: Don't enable this feature for the NT32 emulation platform, which
doesn't support paging.
Cc: Star Zeng <star.zeng@intel.com>
Cc: Eric Dong <eric.dong@intel.com>
Cc: Jiewen Yao <jiewen.yao@intel.com>
Cc: Michael Kinney <michael.d.kinney@intel.com>
Cc: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Suggested-by: Ayellet Wolman <ayellet.wolman@intel.com>
Contributed-under: TianoCore Contribution Agreement 1.1
Signed-off-by: Jian J Wang <jian.j.wang@intel.com>
Reviewed-by: Jiewen Yao <jiewen.yao@intel.com>
Regression-tested-by: Laszlo Ersek <lersek@redhat.com>
One issue caused by enabling NULL pointer detection is that some PCI
device option ROMs, binary drivers and binary OS boot loaders may have
NULL pointer access bugs, which will prevent the BIOS from booting and
are almost impossible to fix. BIT7 of PCD
PcdNullPointerDetectionPropertyMask is used as a workaround, telling the
BIOS to disable NULL pointer detection right after the
gEfiEndOfDxeEventGroupGuid event and then let boot continue.
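A hedged sketch of the end-of-DXE handling (callback name assumed):

VOID
EFIAPI
DisableNullDetectionAtEndOfDxe (
  IN EFI_EVENT  Event,
  IN VOID       *Context
  )
{
  //
  // BIT7 workaround: make page 0 accessible again so buggy option ROMs
  // and boot loaders can continue to run.
  //
  gCpu->SetMemoryAttributes (gCpu, 0, EFI_PAGE_SIZE, 0);
  gBS->CloseEvent (Event);
}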
Cc: Star Zeng <star.zeng@intel.com>
Cc: Eric Dong <eric.dong@intel.com>
Cc: Jiewen Yao <jiewen.yao@intel.com>
Cc: Michael Kinney <michael.d.kinney@intel.com>
Cc: Ayellet Wolman <ayellet.wolman@intel.com>
Suggested-by: Ayellet Wolman <ayellet.wolman@intel.com>
Contributed-under: TianoCore Contribution Agreement 1.1
Signed-off-by: Jian J Wang <jian.j.wang@intel.com>
Reviewed-by: Star Zeng <star.zeng@intel.com>
Reviewed-by: Jiewen Yao <jiewen.yao@intel.com>
When pages are double freed via FreePages(), or already-allocated pages
are allocated via AllocatePages() with the AllocateAddress type, the
code prints the debug message "ConvertPages: Incompatible memory types",
but that message is not very clear about the error paths through
FreePages() or AllocatePages().
Refer to https://lists.01.org/pipermail/edk2-devel/2017-August/013075.html
for the discussion.
This patch enhances the debug messages for the error paths through
FreePages() and AllocatePages().
Cc: Liming Gao <liming.gao@intel.com>
Cc: Andrew Fish <afish@apple.com>
Contributed-under: TianoCore Contribution Agreement 1.0
Signed-off-by: Star Zeng <star.zeng@intel.com>
Reviewed-by: Liming Gao <liming.gao@intel.com>
When attempting to perform page allocations using AllocateAddress, we
fail to check whether the entire region is free before splitting the
region. This may lead to memory being leaked further into the routine,
when it turns out that one of the memory map entries intersected by the
region is already occupied. In this case, prior conversions are not rolled
back.
For instance, starting from this situation
0x000040000000-0x00004007ffff [ConventionalMemory ]
0x000040080000-0x00004009ffff [Boot Data ]
0x0000400a0000-0x000047ffffff [ConventionalMemory ]
an EfiLoaderData allocation @ 0x40000000 that covers the BootData
region will fail, but leave the first part of the allocation converted,
so we end up with
0x000040000000-0x00004007ffff [Loader Data ]
0x000040080000-0x00004009ffff [Boot Data ]
0x0000400a0000-0x000047ffffff [ConventionalMemory ]
even though the AllocatePages() call returned an error.
So let's check beforehand that AllocateAddress allocations are covered
by a single memory map entry, so that it either succeeds or fails
completely, rather than leaking allocations.
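A hedged sketch of the up-front check (simplified; the real code walks
the DXE core's gMemoryMap list):

  LIST_ENTRY  *Link;
  MEMORY_MAP  *Entry;
  BOOLEAN     Found;

  //
  // For AllocateAddress, succeed only if one free descriptor covers the
  // whole requested range; otherwise fail before converting anything.
  //
  Found = FALSE;
  for (Link = gMemoryMap.ForwardLink; Link != &gMemoryMap; Link = Link->ForwardLink) {
    Entry = CR (Link, MEMORY_MAP, Link, MEMORY_MAP_SIGNATURE);
    if ((Entry->Type == EfiConventionalMemory) &&
        (Entry->Start <= Start) && (Entry->End >= End)) {
      Found = TRUE;
      break;
    }
  }

  if (!Found) {
    return EFI_NOT_FOUND;
  }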
Contributed-under: TianoCore Contribution Agreement 1.0
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Reviewed-by: Star Zeng <star.zeng@intel.com>
REF: https://bugzilla.tianocore.org/show_bug.cgi?id=370
Use GLOBAL_REMOVE_IF_UNREFERENCED for some memory profile global
variables, so that their symbols can be removed when the memory profile
is disabled.
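The change is of this shape (the variable shown is a hypothetical
example):

- BOOLEAN  mMemoryProfileRecordingEnable = FALSE;
+ GLOBAL_REMOVE_IF_UNREFERENCED BOOLEAN  mMemoryProfileRecordingEnable = FALSE;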
Cc: Jiewen Yao <jiewen.yao@intel.com>
Cc: Feng Tian <feng.tian@intel.com>
Contributed-under: TianoCore Contribution Agreement 1.0
Signed-off-by: Star Zeng <star.zeng@intel.com>
Reviewed-by: Jiewen Yao <jiewen.yao@intel.com>
Remove the local definitions for the default and runtime page allocation
granularity macros, and switch to the new MdePkg versions.
Contributed-under: TianoCore Contribution Agreement 1.0
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Reviewed-by: Jiewen Yao <jiewen.yao@intel.com>
Reviewed-by: Liming Gao <liming.gao@intel.com>
This implements a DXE memory protection policy that ensures that regions
that don't require executable permissions are mapped with the non-exec
attribute set.
First of all, it iterates over all entries in the UEFI memory map, and
removes executable permissions according to the configured DXE memory
protection policy, as recorded in PcdDxeNxMemoryProtectionPolicy.
Secondly, it sets or clears the non-executable attribute when allocating
or freeing pages, both for page based or pool based allocations.
Note that this complements the image protection facility, which applies
strict permissions to BootServicesCode/RuntimeServicesCode regions when
the section alignment allows it. The memory protection configured by this
patch operates on non-code regions only.
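A hedged sketch of how the policy PCD maps a memory type to a
permission attribute (helper name close to, but not guaranteed to be,
the patch's):

UINT64
GetPermissionAttributeForMemoryType (
  IN EFI_MEMORY_TYPE  MemoryType
  )
{
  //
  // Each bit of the policy PCD corresponds to one EFI_MEMORY_TYPE; a
  // set bit means that type must be mapped non-executable.
  //
  if ((PcdGet64 (PcdDxeNxMemoryProtectionPolicy) & LShiftU64 (1, MemoryType)) != 0) {
    return EFI_MEMORY_XP;
  }

  return 0;
}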
Contributed-under: TianoCore Contribution Agreement 1.0
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Reviewed-by: Jiewen Yao <jiewen.yao@intel.com>
Reviewed-by: Liming Gao <liming.gao@intel.com>
In preparation of adding memory permission attribute management to
the pool allocator, split off the locking of the pool metadata into
a separate lock. This is an improvement in itself, given that pool
allocations can only interfere with the page allocation bookkeeping
if pool pages are allocated or released. But it is also required to
ensure that the permission attribute management does not deadlock,
given that it may trigger page table splits leading to additional
page tables being allocated.
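The split lock is of this shape (initializer style as used elsewhere in
the DXE core):

STATIC EFI_LOCK  mPoolMemoryLock = EFI_INITIALIZE_LOCK_VARIABLE (TPL_NOTIFY);

Pool routines would then acquire mPoolMemoryLock instead of the page
allocator's lock, so a page-table split triggered while changing
permission attributes can still take the page lock without deadlocking.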
Contributed-under: TianoCore Contribution Agreement 1.0
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Reviewed-by: Jiewen Yao <jiewen.yao@intel.com>
Reviewed-by: Liming Gao <liming.gao@intel.com>
Code logic ensures that the pointer 'DriverInfoData' will not be NULL
when it is used.
Add an ASSERT as a warning for this case that should never happen.
Cc: Star Zeng <star.zeng@intel.com>
Contributed-under: TianoCore Contribution Agreement 1.0
Signed-off-by: Hao Wu <hao.a.wu@intel.com>
Reviewed-by: Star Zeng <star.zeng@intel.com>
Code logic ensures that neither pointer 'DriverInfoData' nor
'AllocInfoData' will be NULL when they are used.
Add ASSERTs as warnings for these cases that should never happen.
Cc: Star Zeng <star.zeng@intel.com>
Contributed-under: TianoCore Contribution Agreement 1.0
Signed-off-by: Hao Wu <hao.a.wu@intel.com>
Reviewed-by: Star Zeng <star.zeng@intel.com>
1. Implement GetRecordingState/SetRecordingState/Record for the memory
profile protocol (see the usage sketch after this list).
2. Consume PcdMemoryProfilePropertyMask to support disabling recording
from the start.
3. Consume PcdMemoryProfileDriverPath to control which drivers need
memory profile data.
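A hedged sketch of a consumer toggling recording through the protocol
(type and GUID names from Guid/MemoryProfile.h):

  EDKII_MEMORY_PROFILE_PROTOCOL  *Profile;
  BOOLEAN                        RecordingState;
  EFI_STATUS                     Status;

  Status = gBS->LocateProtocol (&gEdkiiMemoryProfileGuid, NULL, (VOID **)&Profile);
  if (!EFI_ERROR (Status)) {
    Profile->GetRecordingState (Profile, &RecordingState);
    Profile->SetRecordingState (Profile, MEMORY_PROFILE_RECORDING_DISABLE);
  }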
Cc: Jiewen Yao <jiewen.yao@intel.com>
Contributed-under: TianoCore Contribution Agreement 1.0
Signed-off-by: Star Zeng <star.zeng@intel.com>
Reviewed-by: Jiewen Yao <jiewen.yao@intel.com>
Use 128 bytes as the starting bin size, the same as before. 64 bytes is
too small for the first range: on the X64 arch, POOL_OVERHEAD takes 40
bytes, so only pool data smaller than 24 bytes fits into that bin. But
such allocations are rare, so the first range's free pool linked list
can't shrink; and the second range (64~128) has more allocations, which
also grow the free pool linked list of the first range. The list thus
becomes longer and longer. When the linked-list check is enabled in a
DEBUG build, the long list brings additional overhead and bad
performance. Here is the performance data collected on our X64 platform
with DEBUG enabled:
64 bytes: 22 seconds in BDS phase
128 bytes: 19.6 seconds in BDS phase
Contributed-under: TianoCore Contribution Agreement 1.0
Signed-off-by: Liming Gao <liming.gao@intel.com>
Reviewed-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Reviewed-by: Michael Kinney <michael.d.kinney@intel.com>
It can improve profile performance, especially when
PcdMemoryProfileMemoryType is configured without EfiBootServicesData.
CoreUpdateProfile() can then return quickly, instead of depending on the
code further on to find that the buffer is not recorded and only then
return.
Cc: Jiewen Yao <jiewen.yao@intel.com>
Cc: Feng Tian <feng.tian@intel.com>
Contributed-under: TianoCore Contribution Agreement 1.0
Signed-off-by: Star Zeng <star.zeng@intel.com>
Reviewed-by: Jiewen Yao <jiewen.yao@intel.com>
Currently the MemoryAttributesTable is installed on the ReadyToBoot
event at TPL_NOTIFY level. This may be incorrect when
PcdHiiOsRuntimeSupport = TRUE, as HiiDatabaseDxe makes runtime memory
allocations for HII OS runtime support on and after ReadyToBoot. The
issue was exposed at
http://article.gmane.org/gmane.comp.bios.edk2.devel/10125.
To make sure the MemoryAttributesTable is correct, this patch enhances
its installation to happen on the ReadyToBoot event at TPL_CALLBACK - 1
level, so that it runs last among the ReadyToBoot handlers, and also
hooks runtime memory allocation after ReadyToBoot.
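A hedged sketch of the event registration (notify function name
assumed):

  Status = gBS->CreateEventEx (
                  EVT_NOTIFY_SIGNAL,
                  TPL_CALLBACK - 1,
                  InstallMemoryAttributesTableOnReadyToBoot,
                  NULL,
                  &gEfiEventReadyToBootGuid,
                  &ReadyToBootEvent
                  );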
Cc: Jiewen Yao <jiewen.yao@intel.com>
Cc: Liming Gao <liming.gao@intel.com>
Cc: Feng Tian <feng.tian@intel.com>
Cc: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Contributed-under: TianoCore Contribution Agreement 1.0
Signed-off-by: Star Zeng <star.zeng@intel.com>
Reviewed-by: Jiewen Yao <jiewen.yao@intel.com>
The following patch for the MemoryAttributesTable will need the memory
type, and CoreUpdateProfile() can also use the memory type for its
check.
Cc: Jiewen Yao <jiewen.yao@intel.com>
Cc: Liming Gao <liming.gao@intel.com>
Cc: Feng Tian <feng.tian@intel.com>
Contributed-under: TianoCore Contribution Agreement 1.0
Signed-off-by: Star Zeng <star.zeng@intel.com>
Reviewed-by: Jiewen Yao <jiewen.yao@intel.com>
Check for Type AllocateAddress,
if NumberOfPages is 0 or
if (NumberOfPages << EFI_PAGE_SHIFT) is above MAX_ADDRESS or
if (Start + NumberOfBytes) rolls over 0 or
if Start is above MAX_ADDRESS or
if End is above MAX_ADDRESS,
return EFI_NOT_FOUND.
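A hedged sketch of the added checks (close to the list above; variable
names assumed):

  if (Type == AllocateAddress) {
    if ((NumberOfPages == 0) ||
        (NumberOfPages > RShiftU64 (MAX_ADDRESS, EFI_PAGE_SHIFT))) {
      return EFI_NOT_FOUND;
    }

    NumberOfBytes = LShiftU64 (NumberOfPages, EFI_PAGE_SHIFT);
    End           = Start + NumberOfBytes - 1;
    if ((Start >= End) || (Start > MAX_ADDRESS) || (End > MAX_ADDRESS)) {
      return EFI_NOT_FOUND;
    }
  }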
Cc: Jiewen Yao <jiewen.yao@intel.com>
Cc: Michael Kinney <michael.d.kinney@intel.com>
Cc: Liming Gao <liming.gao@intel.com>
Cc: Feng Tian <feng.tian@intel.com>
Contributed-under: TianoCore Contribution Agreement 1.0
Signed-off-by: Star Zeng <star.zeng@intel.com>
Reviewed-by: Liming Gao <liming.gao@intel.com>
At the end of CoreFreePoolI(), the check for a specific memory type
should also cover the OEM reserved memory type.
This was missed when OEM reserved memory type support was added at
R17460.
Cc: Liming Gao <liming.gao@intel.com>
Cc: Feng Tian <feng.tian@intel.com>
Contributed-under: TianoCore Contribution Agreement 1.0
Signed-off-by: Star Zeng <star.zeng@intel.com>
Reviewed-by: Liming Gao <liming.gao@intel.com>
GCD ranges use byte addresses, while EFI memory ranges use page
addresses. To make sure GCD ranges are converted correctly to EFI memory
ranges, the following things are added:
1. Merge adjacent GCD ranges first.
2. Add an ASSERT check on GCD range alignment.
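A minimal sketch of the alignment ASSERTs (merged-entry variable name
assumed):

  ASSERT ((MergedGcdEntry.BaseAddress & EFI_PAGE_MASK) == 0);
  ASSERT (((MergedGcdEntry.EndAddress + 1) & EFI_PAGE_MASK) == 0);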
Contributed-under: TianoCore Contribution Agreement 1.0
Signed-off-by: Liming Gao <liming.gao@intel.com>
Reviewed-by: Star Zeng <star.zeng@intel.com>
git-svn-id: https://svn.code.sf.net/p/edk2/code/trunk/edk2@17813 6f19259b-4bc3-4df7-8a09-765794883524
Move the definitions of EFI_ACPI_RUNTIME_PAGE_ALLOCATION_ALIGNMENT and
DEFAULT_PAGE_ALLOCATION to DxeMain.h to make them available explicitly
to all parts of DxeCore.
Contributed-under: TianoCore Contribution Agreement 1.0
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Reviewed-by: "Yao, Jiewen" <Jiewen.Yao@intel.com>
git-svn-id: https://svn.code.sf.net/p/edk2/code/trunk/edk2@17811 6f19259b-4bc3-4df7-8a09-765794883524
The UEFI 2.5 spec, GetMemoryMap(), says:
Attribute: Attributes of the memory region that describe the bit mask
of capabilities for that memory region, and not necessarily the current
settings for that memory region.
But the GetMemoryMap() implementation doesn't append memory capabilities
for MMIO and Reserved memory ranges. This breaks the UEFI 2.5 Properties
Table feature, because the Properties Table needs the EFI_MEMORY_RO or
EFI_MEMORY_XP capabilities returned for the OS.
This patch appends memory capabilities for those memory ranges.
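A hedged sketch of the append (capability mask simplified; variable
names assumed):

  if ((GcdEntry.GcdMemoryType == EfiGcdMemoryTypeReserved) ||
      (GcdEntry.GcdMemoryType == EfiGcdMemoryTypeMemoryMappedIo)) {
    MemoryMap->Attribute |= GcdEntry.Capabilities &
                            (EFI_MEMORY_RO | EFI_MEMORY_XP | EFI_MEMORY_RP);
  }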
Contributed-under: TianoCore Contribution Agreement 1.0
Signed-off-by: Liming Gao <liming.gao@intel.com>
Reviewed-by: Jiewen Yao <jiewen.yao@intel.com>
git-svn-id: https://svn.code.sf.net/p/edk2/code/trunk/edk2@17703 6f19259b-4bc3-4df7-8a09-765794883524
DescEnd will be clipped for alignment in CoreFindFreePagesI, and it may
fall below DescStart when the alignment is 16KB or more and both
DescStart and the original DescEnd fall into a single range of that
alignment. This results in a huge size (a negative number in an unsigned
type) for the descriptor, fulfilling the allocation requirement but
failing to run ConvertPages; ultimately it causes occasional
AllocatePages failures.
A simple comparison is added to ensure we never get a negative number,
as sketched below.
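The added comparison is of this shape (context simplified from
CoreFindFreePagesI):

  DescEnd = ((DescEnd + 1) & (~((UINT64)Alignment - 1))) - 1;
  if (DescEnd < DescStart) {
    //
    // Alignment clipping emptied this descriptor; skip it instead of
    // computing a huge unsigned "negative" size.
    //
    continue;
  }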
Contributed-under: TianoCore Contribution Agreement 1.0
Signed-off-by: Heyi Guo <heyi.guo@linaro.org>
Reviewed-by: Liming Gao <liming.gao@intel.com>
Acked-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
git-svn-id: https://svn.code.sf.net/p/edk2/code/trunk/edk2@17575 6f19259b-4bc3-4df7-8a09-765794883524
The ARM toolchain raises the build error: "enumerated type mixed with
another type".
To fix the issue, a typecast could be used like below:
- return EfiMaxMemoryType + 1;
+ return (EFI_MEMORY_TYPE)(EfiMaxMemoryType + 1);
But to eliminate the confusion, update the return type of
GetProfileMemoryIndex() from EFI_MEMORY_TYPE to UINTN.
Contributed-under: TianoCore Contribution Agreement 1.0
Signed-off-by: Star Zeng <star.zeng@intel.com>
Reviewed-by: Feng Tian <feng.tian@intel.com>
Reviewed-by: Jordan Justen <jordan.l.justen@intel.com>
git-svn-id: https://svn.code.sf.net/p/edk2/code/trunk/edk2@17535 6f19259b-4bc3-4df7-8a09-765794883524