Currently, there are multiple issues when page or pool guards are
allocated for runtime memory regions on systems whose runtime page
allocation granularity is larger than EFI_PAGE_SIZE. Multiple other
issues have recently been fixed for these same systems (notably
ARM64, which has a 64k runtime page allocation granularity). The heap
guard system is only built to support 4k guard pages and 4k
alignment.
Today, the address returned to a caller of AllocatePages will not be
aligned correctly to the runtime page allocation granularity, because
the heap guard system does not take non-4k alignment requirements into
consideration.
However, even with this bug fixed, the Memory Allocation Table cannot
be produced correctly, and an OS with a larger-than-4k page
granularity will not see aligned memory regions, because the guard
pages are reported as part of the same memory allocation. So what
would have been, on an ARM64 system, a 64k runtime memory allocation
is actually a 72k allocation (64k plus a 4k guard on each side) as
tracked by the Page.c code, because the guard pages are part of the
same allocation record. This is a core function of the current heap
guard architecture.
This could also be fixed by rearchitecting the heap guard system to
respect alignment requirements, either by shifting the guard pages
inside the outer rounded allocation or by making guard pages the
runtime granularity in size. Both of these approaches have issues. In
the former case, we break UEFI spec 2.10 section 2.3.6 for AARCH64,
which states that each 64k page of a runtime memory region may not
have mixed memory attributes, which pushing the guard pages inside
would create. In the latter case, an immense amount of memory is
wasted on such large guard pages, and with pool guard enabled many
systems could not support an additional 128k (two 64k guard pages)
per runtime allocation.
The simpler and safer solution is to disallow page and pool guards
for runtime memory allocations on systems whose runtime granularity
is greater than EFI_PAGE_SIZE (4k). The usefulness of such guards is
limited: OSes do not map guard pages today, so these ranges are only
protected during boot. This also prevents other bugs from being
exposed by guarding regions that have a non-4k alignment requirement;
as noted, multiple such bugs have already cropped up because the heap
guard system was not built to support them.
This patch adds a static assert to ensure that either the runtime
granularity is EFI_PAGE_SIZE or the PCD bits are not set to enable
heap guard for runtime memory regions. It also adds a check in the
page and pool allocation paths to ensure that we never attempt to
guard a runtime allocation at runtime (the PCDs are close to being
removed in favor of dynamic heap guard configuration).
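As a sketch, the compile-time check can take roughly this shape
(STATIC_ASSERT and FixedPcdGet64 are existing MdePkg/PCD facilities;
the runtime-type mask is spelled out here for illustration, with the
bit position equal to the EFI_MEMORY_TYPE value; an analogous assert
would cover PcdHeapGuardPoolType):

  STATIC_ASSERT (
    (RUNTIME_PAGE_ALLOCATION_GRANULARITY == EFI_PAGE_SIZE) ||
    ((FixedPcdGet64 (PcdHeapGuardPageType) &
      (BIT0 | BIT5 | BIT6 | BIT10)) == 0),
    "Page guards are unsupported for runtime memory when the runtime granularity is not EFI_PAGE_SIZE"
    );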
BZ: https://bugzilla.tianocore.org/show_bug.cgi?id=4674
Github PR: https://github.com/tianocore/edk2/pull/5382
Cc: Leif Lindholm <quic_llindhol@quicinc.com>
Cc: Ard Biesheuvel <ardb+tianocore@kernel.org>
Cc: Sami Mujawar <sami.mujawar@arm.com>
Cc: Liming Gao <gaoliming@byosoft.com.cn>
Signed-off-by: Oliver Smith-Denny <osde@linux.microsoft.com>
Reviewed-by: Liming Gao <gaoliming@byosoft.com.cn>
Per UEFI spec 2.10, section 2.3.6 (for the AARCH64 arch; the other
architectures in section 2 specify the same), the memory types that
need runtime page allocation granularity are EfiReservedMemoryType,
EfiACPIMemoryNVS, EfiRuntimeServicesCode, and EfiRuntimeServicesData.
However, legacy code was setting the runtime page allocation
granularity for EfiACPIReclaimMemory instead of EfiReservedMemoryType.
This patch fixes that error.
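A minimal sketch of the corrected mapping (the helper name is
hypothetical; in the tree the test appears inline where the
granularity is chosen):

  UINTN
  AllocationGranularity (
    IN EFI_MEMORY_TYPE  MemoryType
    )
  {
    if ((MemoryType == EfiReservedMemoryType)  ||
        (MemoryType == EfiACPIMemoryNVS)       ||
        (MemoryType == EfiRuntimeServicesCode) ||
        (MemoryType == EfiRuntimeServicesData)) {
      return RUNTIME_PAGE_ALLOCATION_GRANULARITY;
    }
    return DEFAULT_PAGE_ALLOCATION_GRANULARITY;
  }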
Cc: Leif Lindholm <quic_llindhol@quicinc.com>
Cc: Ard Biesheuvel <ardb+tianocore@kernel.org>
Cc: Sami Mujawar <sami.mujawar@arm.com>
Cc: Liming Gao <gaoliming@byosoft.com.cn>
Signed-off-by: Oliver Smith-Denny <osde@linux.microsoft.com>
Suggested-by: Ard Biesheuvel <ardb+tianocore@kernel.org>
Reviewed-by: Liming Gao <gaoliming@byosoft.com.cn>
Currently, HeapGuard, when in GuardAlignedToTail mode, assumes that
the pool head has been allocated in the first page of the memory that
was allocated. This is not the case on ARM64 platforms when
allocating runtime pools, where RUNTIME_PAGE_ALLOCATION_GRANULARITY
is 64k, unlike X64, where it is 4k.
When a runtime pool is allocated on ARM64, a minimum of 16 pages is
allocated, to match the runtime granularity. When a small pool is
allocated and GuardAlignedToTail is true, HeapGuard places the pool
head at (MemoryAllocated + EFI_PAGES_TO_SIZE (NumberOfPages) -
SizeRequiredForPool).
This produces the following layout:
|HeadGuard|...many free pages...|PoolHead|TailGuard|
When this pool is freed, HeapGuard instructs the pool code to free
from (PoolHead & ~EFI_PAGE_MASK). However, this assumes that the
PoolHead is in the first page allocated, which, as shown above, is
not true in this case. For the 4k granularity case (i.e., where
exactly the number of pages needed for the pool is allocated), this
logic does work.
In the failing case, HeapGuard then instructs the pool code to free
16 pages (or more, depending on the allocation size) starting from
the page the pool head was allocated on, which, as seen above, means
we overrun the pool and attempt to free memory far past it. We end up
running into the tail guard and taking an access flag fault.
This causes ArmVirtQemu to fail to boot with an access flag fault when
GuardAlignedToTail is set to true (and pool guard enabled for runtime
memory). It should also cause all ARM64 platforms to fail in this
configuration, for exactly the same reason, as this is core code making
the assumption.
This patch removes HeapGuard's assumption that the pool head is
allocated in the first page, and instead inverts the placement logic
HeapGuard applied when the pool head was allocated.
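As a sketch, the inverse computation (names follow the commit text):

  //
  // Old, wrong for 64k granularity: assumes the pool head sits in
  // the first allocated page.
  //
  //   Memory = PoolHead & ~EFI_PAGE_MASK;
  //
  // New: undo the allocation-time placement
  //   PoolHead = Memory + EFI_PAGES_TO_SIZE (NumberOfPages) - Size
  // to recover the start of the allocation.
  //
  Memory = PoolHead + Size - EFI_PAGES_TO_SIZE (NumberOfPages);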
With this patch in place, ArmVirtQemu boots with GuardAlignedToTail
set to true (and also when it is false).
BZ: https://bugzilla.tianocore.org/show_bug.cgi?id=4521
Github PR: https://github.com/tianocore/edk2/pull/4731
Cc: Leif Lindholm <quic_llindhol@quicinc.com>
Cc: Ard Biesheuvel <ardb+tianocore@kernel.org>
Cc: Jian J Wang <jian.j.wang@intel.com>
Cc: Liming Gao <gaoliming@byosoft.com.cn>
Cc: Dandan Bi <dandan.bi@intel.com>
Signed-off-by: Oliver Smith-Denny <osde@linux.microsoft.com>
Reviewed-by: Ard Biesheuvel <ardb@kernel.org>
Acked-by: Leif Lindholm <quic_llindhol@quicinc.com>
Reviewed-by: Liming Gao <gaoliming@byosoft.com.cn>
BZ: https://bugzilla.tianocore.org/show_bug.cgi?id=4315
AllocatePool should not allocate memory of type
EfiUnacceptedMemoryType; it should return EFI_INVALID_PARAMETER
instead.
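The shape of the added validation, as a sketch:

  if (PoolType == EfiUnacceptedMemoryType) {
    return EFI_INVALID_PARAMETER;
  }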
Cc: Liming Gao <gaoliming@byosoft.com.cn>
Cc: Dandan Bi <dandan.bi@intel.com>
Cc: Erdem Aktas <erdemaktas@google.com>
Cc: James Bottomley <jejb@linux.ibm.com>
Cc: Jiewen Yao <jiewen.yao@intel.com>
Cc: Gerd Hoffmann <kraxel@redhat.com>
Cc: Tom Lendacky <thomas.lendacky@amd.com>
Cc: Michael Roth <michael.roth@amd.com>
Reported-by: Liming Gao <gaoliming@byosoft.com.cn>
Signed-off-by: Min Xu <min.m.xu@intel.com>
Acked-by: Gerd Hoffmann <kraxel@redhat.com>
Reviewed-by: Jiewen Yao <Jiewen.yao@intel.com>
Reviewed-by: Liming Gao <gaoliming@byosoft.com.cn>
REF: https://bugzilla.tianocore.org/show_bug.cgi?id=3737
Apply uncrustify changes to .c/.h files in the MdeModulePkg package
Cc: Andrew Fish <afish@apple.com>
Cc: Leif Lindholm <leif@nuviainc.com>
Cc: Michael D Kinney <michael.d.kinney@intel.com>
Signed-off-by: Michael Kubacki <michael.kubacki@microsoft.com>
Reviewed-by: Liming Gao <gaoliming@byosoft.com.cn>
Freed-memory guard is used to detect UAF (Use-After-Free) memory
issues, i.e. illegal accesses to memory which has been freed. The
principle behind it is similar to the pool guard feature: we turn all
pool memory allocations into page allocations and mark the pages
not-present once they are freed.
This also implies that, once a page is allocated and freed, it cannot
be re-allocated, which brings the risk of running out of memory
space. To address this, the memory service adds logic to put part (at
most 64 pages at a time) of the freed pages back into the page pool,
so that the memory service still has memory to allocate after the
entire memory space has been allocated once. This is called memory
promotion. The promoted pages are always the eldest freed pages.
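Conceptually, a freed range is recorded and unmapped instead of being
returned to the allocator; a sketch with hypothetical helper names
(gCpu is the CPU architectural protocol):

  VOID
  GuardFreedPages (
    IN EFI_PHYSICAL_ADDRESS  BaseAddress,
    IN UINTN                 Pages
    )
  {
    //
    // Record the range as freed-and-guarded so promotion can later
    // pick the eldest entries, then unmap it so any use-after-free
    // access faults immediately.
    //
    MarkFreedPages (BaseAddress, Pages);     // hypothetical
    gCpu->SetMemoryAttributes (
            gCpu,
            BaseAddress,
            EFI_PAGES_TO_SIZE (Pages),
            EFI_MEMORY_RP
            );
  }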
This feature brings another problem: the number of memory map
descriptors increases enormously (200+ -> 2000+). One change in this
patch updates MergeMemoryMap() in PropertiesTable.c to allow merging
freed pages back into the memory map. With that, the number stays at
around 510.
Cc: Star Zeng <star.zeng@intel.com>
Cc: Michael D Kinney <michael.d.kinney@intel.com>
Cc: Jiewen Yao <jiewen.yao@intel.com>
Cc: Ruiyu Ni <ruiyu.ni@intel.com>
Cc: Laszlo Ersek <lersek@redhat.com>
Contributed-under: TianoCore Contribution Agreement 1.1
Signed-off-by: Jian J Wang <jian.j.wang@intel.com>
Reviewed-by: Star Zeng <star.zeng@intel.com>
This issue is a regression caused by the patch at
425d25699b
That fix didn't take the zero-pages-to-free case into account, which
still needs to call UnsetGuardPage() even when no memory needs to be
freed.
The fix is simply to move the call to UnsetGuardPage() to the place
right after the call to AdjustMemoryF().
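In other words (simplified sketch of the free path; signatures and
surrounding detail elided):

  AdjustMemoryF (&Memory, &NoPages);
  //
  // Moved here: the guard page must be unset even when zero pages
  // remain to be freed.
  //
  UnsetGuardPage (Memory);
  if (NoPages == 0) {
    return;   // nothing left to free
  }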
Cc: Star Zeng <star.zeng@intel.com>
Cc: Eric Dong <eric.dong@intel.com>
Contributed-under: TianoCore Contribution Agreement 1.1
Signed-off-by: Jian J Wang <jian.j.wang@intel.com>
Reviewed-by: Ruiyu Ni <ruiyu.ni@intel.com>
This hole causes random page faults. The root cause is that a Guard
page which has just been freed back to the page pool, but whose
not-present attribute has not yet been cleared, can be allocated
right away by the internal function CoreFreeMemoryMapStack(). The
solution is to clear the not-present attribute of a freed Guard page
before doing any free operations, instead of after them.
The reason we didn't do this before is that manipulating page
attributes might itself cause a memory allocation, which would
deadlock inside a memory allocation/free operation. That is why we
always set or unset Guard pages outside the memory lock. After a
thorough analysis, we believe that clearing a Guard page cannot cause
a memory allocation, because the memory we are about to manipulate
has certainly been manipulated before. Therefore no memory allocation
can occur in this situation.
Since we now clear the Guard page's not-present attribute before
freeing instead of after, the debug code that clears freed memory can
be restored to its original form (i.e. no checking for and bypassing
of Guard pages).
Cc: Ruiyu Ni <ruiyu.ni@intel.com>
Cc: Eric Dong <eric.dong@intel.com>
Cc: Star Zeng <star.zeng@intel.com>
Contributed-under: TianoCore Contribution Agreement 1.1
Signed-off-by: Jian J Wang <jian.j.wang@intel.com>
Reviewed-by: Star Zeng <star.zeng@intel.com>
Reviewed-by: Ruiyu Ni <ruiyu.ni@intel.com>
One issue is that the macros defined in HeapGuard.h
GUARD_HEAP_TYPE_PAGE
GUARD_HEAP_TYPE_POOL
don't match the definition of the PCD PcdHeapGuardPropertyMask in
MdeModulePkg.dec. This patch fixes it by exchanging BIT0 and BIT1
between them.
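After the exchange, the macros line up with the PCD's documented bit
assignments (a sketch of the intended result):

  #define GUARD_HEAP_TYPE_PAGE  BIT0    // PcdHeapGuardPropertyMask BIT0
  #define GUARD_HEAP_TYPE_POOL  BIT1    // PcdHeapGuardPropertyMask BIT1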
Another is that AdjustMemoryF() can return a bigger NumberOfPages
than the value passed in. This is caused by counting a shared Guard
page twice: a single page can serve both as the tail Guard of the
memory before it and as the head Guard of the memory after it. This
happens only when partially freeing just one page in the middle of a
run of allocated pages. The freed page should be turned into a new
Guard page.
Cc: Jie Lin <jie.lin@intel.com>
Cc: Star Zeng <star.zeng@intel.com>
Cc: Eric Dong <eric.dong@intel.com>
Contributed-under: TianoCore Contribution Agreement 1.1
Signed-off-by: Jian J Wang <jian.j.wang@intel.com>
Reviewed-by: Star Zeng <star.zeng@intel.com>
This feature makes use of the paging mechanism to add a hidden
(not-present) page just before and after each allocated memory block.
If code tries to access memory outside of the allocated part, a page
fault exception is triggered.
This feature is controlled by three PCDs:
gEfiMdeModulePkgTokenSpaceGuid.PcdHeapGuardPropertyMask
gEfiMdeModulePkgTokenSpaceGuid.PcdHeapGuardPoolType
gEfiMdeModulePkgTokenSpaceGuid.PcdHeapGuardPageType
BIT0 and BIT1 of PcdHeapGuardPropertyMask can be used to enable or
disable the memory guard for pages and pools respectively.
PcdHeapGuardPoolType and/or PcdHeapGuardPageType are used to enable
or disable the guard for specific types of memory. For example, we
can turn on the guard only for EfiBootServicesData and
EfiRuntimeServicesData by setting the PCDs to 0x50.
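Worked out, since the bit position equals the EFI_MEMORY_TYPE value:

  EfiBootServicesData    = 4  ->  BIT4 = 0x10
  EfiRuntimeServicesData = 6  ->  BIT6 = 0x40
  0x10 | 0x40 = 0x50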
Pool memory is not usually an integer multiple of one page, and is
more often less than a page. There is no way to monitor for overflow
at both the top and the bottom of pool memory. BIT7 of
PcdHeapGuardPropertyMask is used to control how the head of pool
memory is positioned, so that it is easier to catch overflow in
either the growing or the decreasing direction.
Note1: Turning on heap guard, especially pool guard, introduces a
great many memory fragments. Windows 10 has a limitation in its boot
loader, which accepts at most 512 memory descriptors passed from the
BIOS; this prevents Windows 10 from booting if heap guard is enabled.
The latest Linux distributions with the grub boot loader have no such
issue. It is normally not recommended to enable this feature in a
production build of the BIOS.
Note2: Don't enable this feature for the NT32 emulation platform,
which doesn't support paging.
Cc: Star Zeng <star.zeng@intel.com>
Cc: Eric Dong <eric.dong@intel.com>
Cc: Jiewen Yao <jiewen.yao@intel.com>
Cc: Michael Kinney <michael.d.kinney@intel.com>
Cc: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Suggested-by: Ayellet Wolman <ayellet.wolman@intel.com>
Contributed-under: TianoCore Contribution Agreement 1.1
Signed-off-by: Jian J Wang <jian.j.wang@intel.com>
Reviewed-by: Jiewen Yao <jiewen.yao@intel.com>
Regression-tested-by: Laszlo Ersek <lersek@redhat.com>
Remove the local definitions for the default and runtime page allocation
granularity macros, and switch to the new MdePkg versions.
Contributed-under: TianoCore Contribution Agreement 1.0
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Reviewed-by: Jiewen Yao <jiewen.yao@intel.com>
Reviewed-by: Liming Gao <liming.gao@intel.com>
This implements a DXE memory protection policy that ensures that regions
that don't require executable permissions are mapped with the non-exec
attribute set.
First of all, it iterates over all entries in the UEFI memory map, and
removes executable permissions according to the configured DXE memory
protection policy, as recorded in PcdDxeNxMemoryProtectionPolicy.
Secondly, it sets or clears the non-executable attribute when allocating
or freeing pages, both for page based or pool based allocations.
Note that this complements the image protection facility, which applies
strict permissions to BootServicesCode/RuntimeServicesCode regions when
the section alignment allows it. The memory protection configured by this
patch operates on non-code regions only.
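As a sketch, the policy test reduces to a bit check indexed by memory
type (hypothetical helper name; PcdGet64 and LShiftU64 are existing
facilities):

  BOOLEAN
  IsMemoryTypeNonExec (
    IN EFI_MEMORY_TYPE  MemoryType
    )
  {
    return (PcdGet64 (PcdDxeNxMemoryProtectionPolicy) &
            LShiftU64 (1, MemoryType)) != 0;
  }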
Contributed-under: TianoCore Contribution Agreement 1.0
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Reviewed-by: Jiewen Yao <jiewen.yao@intel.com>
Reviewed-by: Liming Gao <liming.gao@intel.com>
In preparation of adding memory permission attribute management to
the pool allocator, split off the locking of the pool metadata into
a separate lock. This is an improvement in itself, given that pool
allocations can only interfere with the page allocation bookkeeping
if pool pages are allocated or released. But it is also required to
ensure that the permission attribute management does not deadlock,
given that it may trigger page table splits leading to additional
page tables being allocated.
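A sketch of the split (lock name illustrative): pool metadata
operations take a dedicated lock, and the page allocator's lock is
only taken by the paths that actually allocate or release pool pages:

  STATIC EFI_LOCK  mPoolMemoryLock =
    EFI_INITIALIZE_LOCK_VARIABLE (TPL_NOTIFY);

  CoreAcquireLock (&mPoolMemoryLock);
  //
  // Manipulate pool metadata only; page allocation, if needed,
  // takes the page lock separately.
  //
  CoreReleaseLock (&mPoolMemoryLock);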
Contributed-under: TianoCore Contribution Agreement 1.0
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Reviewed-by: Jiewen Yao <jiewen.yao@intel.com>
Reviewed-by: Liming Gao <liming.gao@intel.com>
1. Implement GetRecordingState/SetRecordingState/Record for the
memory profile protocol.
2. Consume PcdMemoryProfilePropertyMask to support disable recording
at the start.
3. Consume PcdMemoryProfileDriverPath to control which drivers need
memory profile data.
Cc: Jiewen Yao <jiewen.yao@intel.com>
Contributed-under: TianoCore Contribution Agreement 1.0
Signed-off-by: Star Zeng <star.zeng@intel.com>
Reviewed-by: Jiewen Yao <jiewen.yao@intel.com>
Use 128 bytes as the starting bin size, to stay consistent with the
previous scheme. 64 bytes is too small as the first range: on the X64
arch, POOL_OVERHEAD takes 40 bytes, so only pool data of less than 24
bytes (64 - 40) fits into that bin. Real allocations of that size are
rare, so the bin's free pool link list is rarely drained, while the
second range (64~128) sees more allocations, which further grows the
first range's free pool link list. The link list therefore becomes
longer and longer. With the LinkList check enabled in a DEBUG build,
the long link list brings additional overhead and bad performance.
Here is the performance data collected on our X64 platform with DEBUG
enabled:
64 bytes: 22 seconds in BDS phase
128 bytes: 19.6 seconds in BDS phase
Contributed-under: TianoCore Contribution Agreement 1.0
Signed-off-by: Liming Gao <liming.gao@intel.com>
Reviewed-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Reviewed-by: Michael Kinney <michael.d.kinney@intel.com>
This can improve profile performance, especially when
PcdMemoryProfileMemoryType is configured without EfiBootServicesData:
CoreUpdateProfile() can return quickly, instead of relying on the
code further along to discover that the buffer was never recorded and
only then return.
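i.e. an early-out of roughly this shape at the top of
CoreUpdateProfile() (a sketch; the PCD is a bitmask indexed by the
memory-type value, and the surrounding detail is elided):

  if ((PcdGet64 (PcdMemoryProfileMemoryType) &
       LShiftU64 (1, MemoryType)) == 0) {
    return;   // type not profiled: skip the record search entirely
  }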
Cc: Jiewen Yao <jiewen.yao@intel.com>
Cc: Feng Tian <feng.tian@intel.com>
Contributed-under: TianoCore Contribution Agreement 1.0
Signed-off-by: Star Zeng <star.zeng@intel.com>
Reviewed-by: Jiewen Yao <jiewen.yao@intel.com>
Currently the MemoryAttributesTable is installed on the ReadyToBoot
event at TPL_NOTIFY level. This may be incorrect when
PcdHiiOsRuntimeSupport = TRUE, as HiiDatabaseDxe performs runtime
memory allocation for HII OS runtime support on and after
ReadyToBoot. The issue was exposed at
http://article.gmane.org/gmane.comp.bios.edk2.devel/10125.
To ensure the correctness of the MemoryAttributesTable, this patch
enhances the installation to happen on the ReadyToBoot event at
TPL_CALLBACK - 1 level, so that it runs last among the ReadyToBoot
notifications, and also hooks runtime memory allocation after
ReadyToBoot.
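A sketch of the registration (notify function name hypothetical):
within an event group, notification order follows the notify TPL, so
TPL_CALLBACK - 1 queues this handler after all handlers registered at
the standard TPL_CALLBACK:

  Status = gBS->CreateEventEx (
                  EVT_NOTIFY_SIGNAL,
                  TPL_CALLBACK - 1,
                  InstallMemoryAttributesTable,   // hypothetical
                  NULL,
                  &gEfiEventReadyToBootGuid,
                  &Event
                  );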
Cc: Jiewen Yao <jiewen.yao@intel.com>
Cc: Liming Gao <liming.gao@intel.com>
Cc: Feng Tian <feng.tian@intel.com>
Cc: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Contributed-under: TianoCore Contribution Agreement 1.0
Signed-off-by: Star Zeng <star.zeng@intel.com>
Reviewed-by: Jiewen Yao <jiewen.yao@intel.com>
The following patch for the MemoryAttributesTable will need the
memory type, and CoreUpdateProfile() can also use the memory type for
its checks.
Cc: Jiewen Yao <jiewen.yao@intel.com>
Cc: Liming Gao <liming.gao@intel.com>
Cc: Feng Tian <feng.tian@intel.com>
Contributed-under: TianoCore Contribution Agreement 1.0
Signed-off-by: Star Zeng <star.zeng@intel.com>
Reviewed-by: Jiewen Yao <jiewen.yao@intel.com>
At the end of CoreFreePoolI(), the check for specific memory types
should also cover the OEM reserved memory type. This was missed when
OEM reserved memory type support was added at R17460.
Cc: Liming Gao <liming.gao@intel.com>
Cc: Feng Tian <feng.tian@intel.com>
Contributed-under: TianoCore Contribution Agreement 1.0
Signed-off-by: Star Zeng <star.zeng@intel.com>
Reviewed-by: Liming Gao <liming.gao@intel.com>
This patch changes the allocation logic for the pool allocator to only
allocate additional pages if the requested allocation cannot be fulfilled
from the current bin or any of the larger ones. If there are larger blocks
available, they will be used to serve the allocation, and the remainder will
be carved up into smaller blocks using the existing carving up logic.
Note that all pool sizes are a multiple of the smallest pool size, so it is
guaranteed that the remainder will be carved up without spilling. Due to the
exponential nature of the pool sizes, the amount of work is logarithmic in
the size of the available block.
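The lookup, as a decision sketch (Pool->FreeList and MAX_POOL_LIST as
in Pool.c; the index helper is illustrative):

  Index = ListIndexFromSize (AllocationSize);   // illustrative
  while ((Index < MAX_POOL_LIST) &&
         IsListEmpty (&Pool->FreeList[Index])) {
    Index++;    // current bin empty: try the next larger one
  }
  if (Index == MAX_POOL_LIST) {
    //
    // No free block anywhere is large enough: only now fall back to
    // allocating fresh pages, then carve them up as before.
    //
  }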
Contributed-under: TianoCore Contribution Agreement 1.0
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Reviewed-by: Liming Gao <liming.gao@intel.com>
git-svn-id: https://svn.code.sf.net/p/edk2/code/trunk/edk2@17015 6f19259b-4bc3-4df7-8a09-765794883524
In preparation of the next patch, that serves allocations from higher-up
bins if the current bin is depleted, this patch updates the carving up
strategy to populate the largest bins first. To ensure that there will
always be an allocation of the appropriate size made, the current allocation
request is served first from the newly allocated memory region.
Contributed-under: TianoCore Contribution Agreement 1.0
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Reviewed-by: Liming Gao <liming.gao@intel.com>
git-svn-id: https://svn.code.sf.net/p/edk2/code/trunk/edk2@17014 6f19259b-4bc3-4df7-8a09-765794883524
In preparation of making the pool code capable of serving allocations
from higher-up bins, update the free path to traverse a candidate page
by following the index of POOL_FREE header instead of duplicating the
carving logic that was used at page allocation time. This allows chunks
to be split into smaller ones, where one can be returned to serve the
allocation, and the other stored in a smaller bin for later use.
Contributed-under: TianoCore Contribution Agreement 1.0
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Reviewed-by: Liming Gao <liming.gao@intel.com>
git-svn-id: https://svn.code.sf.net/p/edk2/code/trunk/edk2@17013 6f19259b-4bc3-4df7-8a09-765794883524
The existing linear mapping between allocation size and pool index
does not scale when moving to a 64 KB granularity or beyond. With a
granularity of 64 KB, 2048 (!) bins would be created for each memory
type, each differing by 32 bytes in size from the next. Instead,
introduce an exponential scheme where each bin size is the sum of the
two previous ones.
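A minimal sketch of the progression, seeded with the smallest pool
size (64 bytes at the time of this patch); the number of bins grows
only logarithmically with the granularity:

  UINTN
  PoolBinSize (          // illustrative helper
    IN UINTN  Index
    )
  {
    UINTN  Prev;
    UINTN  Cur;
    UINTN  Next;

    Prev = 64;
    Cur  = 64;
    while (Index-- > 0) {
      Next = Prev + Cur;
      Prev = Cur;
      Cur  = Next;
    }
    return Cur;   // 64, 128, 192, 320, 512, 832, ...
  }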
Contributed-under: TianoCore Contribution Agreement 1.0
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Reviewed-by: Liming Gao <liming.gao@intel.com>
git-svn-id: https://svn.code.sf.net/p/edk2/code/trunk/edk2@17012 6f19259b-4bc3-4df7-8a09-765794883524
After fixing the sanity check on the alignment of the runtime regions
in SVN revision #16630 ("MdeModulePkg/DxeMain: Fix wrong sanity check
in CoreTerminateMemoryMap()"), it is no longer possible to define a
runtime allocation alignment that is different from the boot time
allocation alignment.
For instance, #defining the following in MdeModulePkg/Core/Dxe/Mem/Imem.h
will hit the ASSERT () in MdeModulePkg/Core/Dxe/Mem/Page.c:1798
#define EFI_ACPI_RUNTIME_PAGE_ALLOCATION_ALIGNMENT (SIZE_64KB)
#define DEFAULT_PAGE_ALLOCATION (EFI_PAGE_SIZE)
(which is needed for 64-bit ARM to adhere to the Server Base Boot
Requirements [SBBR], which stipulates that all runtime memory regions
should be naturally aligned multiples of 64 KB)
This patch fixes this use case by ensuring that the backing for the memory
pools is allocated in appropriate chunks for the memory type.
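That is, the page count backing a pool is rounded up to the
allocation granularity of the requested memory type; a sketch
(ALIGN_VALUE and EFI_SIZE_TO_PAGES are MdePkg macros, the granularity
helper is illustrative):

  Granularity = AllocationGranularity (PoolType);   // per-type
  NoPages     = EFI_SIZE_TO_PAGES (AllocationSize);
  NoPages     = ALIGN_VALUE (NoPages, EFI_SIZE_TO_PAGES (Granularity));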
Contributed-under: TianoCore Contribution Agreement 1.0
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Reviewed-by: Liming Gao <liming.gao@intel.com>
git-svn-id: https://svn.code.sf.net/p/edk2/code/trunk/edk2@17011 6f19259b-4bc3-4df7-8a09-765794883524
The ARM toolchain raises a warning/error when an integer is used
instead of an enum value.
Contributed-under: TianoCore Contribution Agreement 1.0
Signed-off-by: Olivier Martin <Olivier.Martin@arm.com>
Reviewed-by: Feng Tian <feng.tian@intel.com>
git-svn-id: https://svn.code.sf.net/p/edk2/code/trunk/edk2@16501 6f19259b-4bc3-4df7-8a09-765794883524
EFI_SIGNATURE_16 -> SIGNATURE_16
EFI_SIGNATURE_32 -> SIGNATURE_32
EFI_SIGNATURE_64 -> SIGNATURE_64
EFI_FIELD_OFFSET -> OFFSET_OF
EFI_MAX_BIT -> MAX_BIT
EFI_MAX_ADDRESS -> MAX_ADDRESS
These macros are not defined in the UEFI spec. It makes more sense to
use the equivalent macros in Base.h to avoid aliases.
git-svn-id: https://edk2.svn.sourceforge.net/svnroot/edk2/trunk/edk2@7056 6f19259b-4bc3-4df7-8a09-765794883524
1. Line 79: use the pre-initialized global variable mPoolHeadList =
INITIALIZE_LIST_HEAD_VARIABLE (mPoolHeadList) to remove the statement
at line 102.
2. Line 337: the debug print "Addr = %x" should change to "Addr = %p"
since the expected Buffer is VOID *; as for "(len %x) %,d", Size and
Pool->Used are of type UINTN, so cast them to UINT64 and use %lx.
3. Lines 413, 418, 425, 477: use "Buffer != NULL" instead of
"NULL != Buffer".
4. Line 451: the debug print "FreePool = %x" should change to
"FreePool = %p" since Head->Data is a pointer; as for "(len %x) %,d",
Head->Size and Pool->Used are of type UINTN, so cast them to UINT64
and use %lx.
git-svn-id: https://edk2.svn.sourceforge.net/svnroot/edk2/trunk/edk2@5916 6f19259b-4bc3-4df7-8a09-765794883524
Minor coding style cleanup: #include <DxeMain.h> should be changed to
#include "DxeMain.h" since "DxeMain.h" is not a public header file.
git-svn-id: https://edk2.svn.sourceforge.net/svnroot/edk2/trunk/edk2@5742 6f19259b-4bc3-4df7-8a09-765794883524