The freed-memory guard is used to detect UAF (Use-After-Free) memory
issues, i.e. illegal accesses to memory which has been freed. The
principle behind it is similar to the pool guard feature: all pool
memory allocations are turned into page allocations, and the pages are
marked as not-present once they are freed.
This also implies that, once a page is allocated and freed, it cannot
be re-allocated, which introduces the risk of running out of memory
space. To address this, the memory service adds logic to put part (at
most 64 pages at a time) of the freed pages back into the page pool, so
that the memory service still has memory to allocate once all memory
space has been allocated at least once. This is called memory
promotion. The promoted pages are always the eldest pages that have
been freed.
This feature brings another problem: the number of memory map
descriptors increases enormously (200+ -> 2000+). One of the changes in
this patch updates MergeMemoryMap() in PropertiesTable.c to allow
merging freed pages back into the memory map. With that change the
number stays at around 510.
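As a rough sketch of the promotion logic described above (plain C;
FreedPageList, UnsetGuardedAndPromote and the FIFO layout are
hypothetical names, not the actual DxeCore code):

#include <stddef.h>
#include <stdint.h>

#define MAX_PROMOTE_PAGES  64        /* at most 64 pages per promotion */
#define PAGE_SIZE          4096

typedef struct FREED_RANGE {
  uint64_t             Start;        /* first page of the freed range  */
  uint64_t             Pages;        /* pages in the range             */
  struct FREED_RANGE   *Next;
} FREED_RANGE;

/* FIFO of freed ranges: the eldest freed pages sit at the head. */
static FREED_RANGE  *FreedPageList;

/* Provided elsewhere: mark pages present again and hand them back
   to the page pool. */
extern void UnsetGuardedAndPromote (uint64_t Start, uint64_t Pages);

/* Promote up to MAX_PROMOTE_PAGES of the eldest freed pages. */
uint64_t
PromoteFreedPages (void)
{
  uint64_t  Promoted = 0;

  while (FreedPageList != NULL && Promoted < MAX_PROMOTE_PAGES) {
    FREED_RANGE  *Range = FreedPageList;
    uint64_t     Take   = Range->Pages;

    if (Promoted + Take > MAX_PROMOTE_PAGES) {
      Take = MAX_PROMOTE_PAGES - Promoted;
    }

    UnsetGuardedAndPromote (Range->Start, Take);
    Range->Start += Take * PAGE_SIZE;
    Range->Pages -= Take;
    Promoted     += Take;

    if (Range->Pages == 0) {
      FreedPageList = Range->Next;   /* node itself freed in real code */
    }
  }

  return Promoted;
}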
Cc: Star Zeng <star.zeng@intel.com>
Cc: Michael D Kinney <michael.d.kinney@intel.com>
Cc: Jiewen Yao <jiewen.yao@intel.com>
Cc: Ruiyu Ni <ruiyu.ni@intel.com>
Cc: Laszlo Ersek <lersek@redhat.com>
Contributed-under: TianoCore Contribution Agreement 1.1
Signed-off-by: Jian J Wang <jian.j.wang@intel.com>
Reviewed-by: Star Zeng <star.zeng@intel.com>
This issue is a regression caused by the patch at
425d25699b
That fix didn't take the zero-pages-to-free case into account:
UnsetGuardPage() still needs to be called even when no memory needs to
be freed. The fix is simply moving the call to UnsetGuardPage() to the
place right after the call to AdjustMemoryF().
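The ordering change, sketched with simplified names and signatures
(FreeGuardedPages and FreePagesI are illustrative stand-ins, not the
literal patch):

#include <stdint.h>

typedef int  STATUS;
#define SUCCESS  0

extern STATUS  AdjustMemoryF  (uint64_t *Memory, uint64_t *NumberOfPages);
extern void    UnsetGuardPage (uint64_t Memory);
extern void    FreePagesI     (uint64_t Memory, uint64_t NumberOfPages);

STATUS
FreeGuardedPages (uint64_t Memory, uint64_t NumberOfPages)
{
  STATUS  Status;

  Status = AdjustMemoryF (&Memory, &NumberOfPages);
  if (Status != SUCCESS) {
    return Status;
  }

  /* The fix: unset the Guard right after AdjustMemoryF(), BEFORE the
     zero-page early exit below. The regression skipped this call
     whenever NumberOfPages came back as 0. */
  UnsetGuardPage (Memory);

  if (NumberOfPages == 0) {
    return SUCCESS;       /* nothing left to free; Guard already unset */
  }

  FreePagesI (Memory, NumberOfPages);
  return SUCCESS;
}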
Cc: Star Zeng <star.zeng@intel.com>
Cc: Eric Dong <eric.dong@intel.com>
Contributed-under: TianoCore Contribution Agreement 1.1
Signed-off-by: Jian J Wang <jian.j.wang@intel.com>
Reviewed-by: Ruiyu Ni <ruiyu.ni@intel.com>
This hole causes random page faults. The root cause is that a Guard
page which has just been freed back to the page pool, but whose
not-present attribute has not yet been cleared, will be allocated right
away by the internal function CoreFreeMemoryMapStack(). The solution is
to clear the not-present attribute of a freed Guard page before doing
any free operations, instead of after them.
The reason we didn't do this before is that manipulating page
attributes might trigger a memory allocation, which would cause a
deadlock inside a memory allocation/free operation. So we always set or
unset Guard pages outside the memory lock. After a thorough analysis,
we believe clearing a Guard page will not cause memory allocation,
because the memory we are about to manipulate has certainly been
manipulated before. Therefore no memory allocation should occur in this
situation.
Since we now clear the Guard page's not-present attribute before
freeing instead of after freeing, the debug code that clears freed
memory can be restored to its original form (i.e. no checking for and
bypassing of Guard pages).
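A minimal order-of-operations sketch, assuming illustrative helper
names (MarkGuardPagePresent, AddRangeToFreeList) rather than the exact
DxeCore functions:

#include <stdint.h>

extern void MarkGuardPagePresent   (uint64_t GuardPage); /* clears not-present */
extern void AddRangeToFreeList     (uint64_t Start, uint64_t Pages);
extern void CoreFreeMemoryMapStack (void);               /* may allocate pages */

void
FreeGuardedRange (uint64_t GuardPage, uint64_t Start, uint64_t Pages)
{
  /* Before this patch the guard was cleared only AFTER the free
     operations, so CoreFreeMemoryMapStack() could pick up the still
     not-present page and fault. Clear it first instead: */
  MarkGuardPagePresent (GuardPage);

  AddRangeToFreeList (Start, Pages);
  CoreFreeMemoryMapStack ();  /* safe: any page it grabs is present now */
}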
Cc: Ruiyu Ni <ruiyu.ni@intel.com>
Cc: Eric Dong <eric.dong@intel.com>
Cc: Star Zeng <star.zeng@intel.com>
Contributed-under: TianoCore Contribution Agreement 1.1
Signed-off-by: Jian J Wang <jian.j.wang@intel.com>
Reviewed-by: Star Zeng <star.zeng@intel.com>
Reviewed-by: Ruiyu Ni <ruiyu.ni@intel.com>
One issue is that the macros defined in HeapGuard.h
GUARD_HEAP_TYPE_PAGE
GUARD_HEAP_TYPE_POOL
don't match the definition of the PCD PcdHeapGuardPropertyMask in
MdeModulePkg.dec. This patch fixes it by exchanging BIT0 and BIT1
between them.
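After the exchange, the definitions are expected to line up as follows
(sketch, with BIT0/BIT1 spelled out):

#define BIT0  0x01
#define BIT1  0x02

/* Now consistent with PcdHeapGuardPropertyMask in MdeModulePkg.dec: */
#define GUARD_HEAP_TYPE_PAGE  BIT0   /* BIT0 of the PCD = page guard */
#define GUARD_HEAP_TYPE_POOL  BIT1   /* BIT1 of the PCD = pool guard */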
Another issue is that the method AdjustMemoryF() may return a bigger
NumberOfPages than the value passed in. This is caused by counting a
shared Guard page twice: such a page serves as both the tail Guard of
the memory before it and the head Guard of the memory after it. This
happens only when partially freeing a single page in the middle of a
bunch of allocated pages. The freed page should be turned into a new
Guard page.
Cc: Jie Lin <jie.lin@intel.com>
Cc: Star Zeng <star.zeng@intel.com>
Cc: Eric Dong <eric.dong@intel.com>
Contributed-under: TianoCore Contribution Agreement 1.1
Signed-off-by: Jian J Wang <jian.j.wang@intel.com>
Reviewed-by: Star Zeng <star.zeng@intel.com>
This feature makes use of the paging mechanism to add hidden
(not-present) pages just before and after the allocated memory block.
If code tries to access memory outside of the allocated part, a page
fault exception will be triggered.
This feature is controlled by three PCDs:
gEfiMdeModulePkgTokenSpaceGuid.PcdHeapGuardPropertyMask
gEfiMdeModulePkgTokenSpaceGuid.PcdHeapGuardPoolType
gEfiMdeModulePkgTokenSpaceGuid.PcdHeapGuardPageType
BIT0 and BIT1 of PcdHeapGuardPropertyMask can be used to enable or
disable the memory guard for pages and pools respectively.
PcdHeapGuardPoolType and/or PcdHeapGuardPageType are used to enable or
disable the guard for specific types of memory. For example, we can
turn on the guard only for EfiBootServicesData and
EfiRuntimeServicesData by setting those PCDs to 0x50.
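A sketch of how such a type bitmask selects memory types; the helper
IsMemoryTypeGuarded() is hypothetical, and bit N of the PCD covers EFI
memory type N:

#include <stdint.h>

typedef enum {
  EfiReservedMemoryType,     /* 0 */
  EfiLoaderCode,             /* 1 */
  EfiLoaderData,             /* 2 */
  EfiBootServicesCode,       /* 3 */
  EfiBootServicesData,       /* 4 */
  EfiRuntimeServicesCode,    /* 5 */
  EfiRuntimeServicesData     /* 6 */
  /* ... remaining EFI_MEMORY_TYPE values ... */
} MEMORY_TYPE;

int
IsMemoryTypeGuarded (uint64_t TypeMask, MEMORY_TYPE Type)
{
  return (TypeMask & (1ULL << Type)) != 0;
}

/* 0x50 == (1 << EfiBootServicesData) | (1 << EfiRuntimeServicesData)
        == BIT4 | BIT6, i.e. exactly the two types mentioned above.   */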
Pool memory is usually not an integer multiple of one page, and is
more often than not less than a page. There's no way to monitor
overflow at both the top and the bottom of pool memory. BIT7 of
PcdHeapGuardPropertyMask is used to control how to position the head of
pool memory, so that it's easier to catch an overflow in either the
growing or the decreasing direction of memory.
Note1: Turning on heap guard, especially pool guard, introduces a
great number of memory fragments. Windows 10 has a limitation in its
boot loader, which accepts at most 512 memory descriptors passed from
the BIOS; this will prevent Windows 10 from booting if heap guard is
enabled. Recent Linux distributions with the grub boot loader have no
such issue. Normally it's not recommended to enable this feature in a
production BIOS build.
Note2: Don't enable this feature on the NT32 emulation platform, which
doesn't support paging.
Cc: Star Zeng <star.zeng@intel.com>
Cc: Eric Dong <eric.dong@intel.com>
Cc: Jiewen Yao <jiewen.yao@intel.com>
Cc: Michael Kinney <michael.d.kinney@intel.com>
Cc: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Suggested-by: Ayellet Wolman <ayellet.wolman@intel.com>
Contributed-under: TianoCore Contribution Agreement 1.1
Signed-off-by: Jian J Wang <jian.j.wang@intel.com>
Reviewed-by: Jiewen Yao <jiewen.yao@intel.com>
Regression-tested-by: Laszlo Ersek <lersek@redhat.com>
Remove the local definitions for the default and runtime page allocation
granularity macros, and switch to the new MdePkg versions.
Contributed-under: TianoCore Contribution Agreement 1.0
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Reviewed-by: Jiewen Yao <jiewen.yao@intel.com>
Reviewed-by: Liming Gao <liming.gao@intel.com>
This implements a DXE memory protection policy that ensures that regions
that don't require executable permissions are mapped with the non-exec
attribute set.
First of all, it iterates over all entries in the UEFI memory map, and
removes executable permissions according to the configured DXE memory
protection policy, as recorded in PcdDxeNxMemoryProtectionPolicy.
Secondly, it sets or clears the non-executable attribute when
allocating or freeing pages, both for page based and pool based
allocations.
Note that this complements the image protection facility, which applies
strict permissions to BootServicesCode/RuntimeServicesCode regions when
the section alignment allows it. The memory protection configured by this
patch operates on non-code regions only.
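The heart of the policy can be sketched like this (simplified
signature; the one-bit-per-memory-type encoding and the EFI_MEMORY_XP
attribute follow the PCD documentation and the UEFI spec):

#include <stdint.h>

#define EFI_MEMORY_XP  0x4000ULL  /* execute-protect attribute (UEFI spec) */

/* Bit N set in the policy => memory type N must be mapped non-exec. */
uint64_t
GetPermissionAttributeForMemoryType (unsigned MemoryType, uint64_t Policy)
{
  if ((Policy & (1ULL << MemoryType)) != 0) {
    return EFI_MEMORY_XP;
  }
  return 0;
}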
Contributed-under: TianoCore Contribution Agreement 1.0
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Reviewed-by: Jiewen Yao <jiewen.yao@intel.com>
Reviewed-by: Liming Gao <liming.gao@intel.com>
In preparation of adding memory permission attribute management to
the pool allocator, split off the locking of the pool metadata into
a separate lock. This is an improvement in itself, given that pool
allocations can only interfere with the page allocation bookkeeping
if pool pages are allocated or released. But it is also required to
ensure that the permission attribute management does not deadlock,
given that it may trigger page table splits leading to additional
page tables being allocated.
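A sketch of the split, using placeholder lock primitives instead of
DxeCore's real lock functions (CarveFromBins, AllocatePages and
ApplyMemoryProtectionPolicy are illustrative here):

typedef int  LOCK;

static LOCK  gMemoryLock;   /* protects page allocation bookkeeping */
static LOCK  gPoolLock;     /* protects pool metadata only          */

extern void  AcquireLock   (LOCK *Lock);
extern void  ReleaseLock   (LOCK *Lock);
extern void *CarveFromBins (unsigned Size);
extern void *AllocatePages (unsigned Pages);
extern void  ApplyMemoryProtectionPolicy (void *Buffer, unsigned Pages);

static void *
AllocatePoolPagesI (unsigned Pages)
{
  void  *Buffer;

  AcquireLock (&gMemoryLock);              /* page bookkeeping only */
  Buffer = AllocatePages (Pages);
  ReleaseLock (&gMemoryLock);

  /* Attribute updates may split page tables and allocate more pages;
     the memory lock is dropped by now, so that cannot deadlock. */
  ApplyMemoryProtectionPolicy (Buffer, Pages);
  return Buffer;
}

void *
AllocatePoolI (unsigned Size)
{
  void  *Buffer;

  AcquireLock (&gPoolLock);                /* pool metadata lock */
  Buffer = CarveFromBins (Size);
  if (Buffer == NULL) {
    Buffer = AllocatePoolPagesI ((Size + 4095) / 4096);
  }
  ReleaseLock (&gPoolLock);
  return Buffer;
}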
Contributed-under: TianoCore Contribution Agreement 1.0
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Reviewed-by: Jiewen Yao <jiewen.yao@intel.com>
Reviewed-by: Liming Gao <liming.gao@intel.com>
1. Implement GetRecordingState/SetRecordingState/Record for the
memory profile protocol.
2. Consume PcdMemoryProfilePropertyMask to support disabling recording
from the start.
3. Consume PcdMemoryProfileDriverPath to control which drivers need
memory profile data.
Cc: Jiewen Yao <jiewen.yao@intel.com>
Contributed-under: TianoCore Contribution Agreement 1.0
Signed-off-by: Star Zeng <star.zeng@intel.com>
Reviewed-by: Jiewen Yao <jiewen.yao@intel.com>
Use 128 bytes as the size of the first range, matching the previous
behavior. 64 bytes is too small as the first range: on the X64 arch,
POOL_OVERHEAD takes 40 bytes, so only pool data smaller than 24 bytes
fits into a 64-byte block. But real allocations that small are few, so
the free pool linked list of this range hardly ever shrinks. Meanwhile
the second range (64~128) sees more allocations, which also grow the
free pool linked list of the first range. The linked list therefore
becomes longer and longer. When the LinkList check is enabled in a
DEBUG build, the long linked list brings additional overhead and bad
performance. Here is performance data collected on our X64 platform
with DEBUG enabled:
64 bytes: 22 seconds in BDS phase
128 bytes: 19.6 seconds in BDS phase
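The arithmetic behind these numbers, as a tiny sketch (the
POOL_OVERHEAD value is taken from this message):

#include <stdio.h>

#define POOL_OVERHEAD  40u   /* per-allocation header on X64 */

int
main (void)
{
  /* A 64-byte bin leaves only 64 - 40 = 24 usable bytes; a 128-byte
     first bin leaves 88, so far fewer allocations land in a rarely
     drained free list. */
  printf ("64-byte bin : %u bytes payload\n", 64u - POOL_OVERHEAD);
  printf ("128-byte bin: %u bytes payload\n", 128u - POOL_OVERHEAD);
  return 0;
}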
Contributed-under: TianoCore Contribution Agreement 1.0
Signed-off-by: Liming Gao <liming.gao@intel.com>
Reviewed-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Reviewed-by: Michael Kinney <michael.d.kinney@intel.com>
It can improve profile performance, especially when
PcdMemoryProfileMemoryType is configured without EfiBootServicesData:
CoreUpdateProfile() can return quickly, instead of depending on the
code further down to find that the buffer was not recorded and only
then return.
Cc: Jiewen Yao <jiewen.yao@intel.com>
Cc: Feng Tian <feng.tian@intel.com>
Contributed-under: TianoCore Contribution Agreement 1.0
Signed-off-by: Star Zeng <star.zeng@intel.com>
Reviewed-by: Jiewen Yao <jiewen.yao@intel.com>
Currently the MemoryAttributesTable is installed on the ReadyToBoot
event at TPL_NOTIFY level. That may be incorrect when
PcdHiiOsRuntimeSupport = TRUE, as HiiDatabaseDxe will perform runtime
memory allocations for HII OS runtime support on and after ReadyToBoot.
The issue was exposed at
http://article.gmane.org/gmane.comp.bios.edk2.devel/10125.
To ensure the correctness of the MemoryAttributesTable, this patch
enhances the installation to run on the ReadyToBoot event at
TPL_CALLBACK - 1 level, making sure it executes last among the
ReadyToBoot notifications, and also hooks runtime memory allocation
after ReadyToBoot.
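Sketched as a driver-side equivalent (DxeCore uses its internal event
services; OnReadyToBoot is a hypothetical notify function):

#include <Uefi.h>
#include <Guid/EventGroup.h>
#include <Library/UefiBootServicesTableLib.h>

STATIC
VOID
EFIAPI
OnReadyToBoot (
  IN EFI_EVENT  Event,
  IN VOID       *Context
  )
{
  /* Install/refresh the MemoryAttributesTable here. */
}

EFI_STATUS
RegisterLateReadyToBootNotify (
  VOID
  )
{
  EFI_EVENT  Event;

  /* TPL_CALLBACK - 1 queues this handler below every TPL_CALLBACK
     handler, so it runs last among the ReadyToBoot notifications. */
  return gBS->CreateEventEx (
                EVT_NOTIFY_SIGNAL,
                TPL_CALLBACK - 1,
                OnReadyToBoot,
                NULL,
                &gEfiEventReadyToBootGuid,
                &Event
                );
}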
Cc: Jiewen Yao <jiewen.yao@intel.com>
Cc: Liming Gao <liming.gao@intel.com>
Cc: Feng Tian <feng.tian@intel.com>
Cc: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Contributed-under: TianoCore Contribution Agreement 1.0
Signed-off-by: Star Zeng <star.zeng@intel.com>
Reviewed-by: Jiewen Yao <jiewen.yao@intel.com>
The following patch for the MemoryAttributesTable will need the memory
type, and CoreUpdateProfile() can also use the memory type for its
check.
Cc: Jiewen Yao <jiewen.yao@intel.com>
Cc: Liming Gao <liming.gao@intel.com>
Cc: Feng Tian <feng.tian@intel.com>
Contributed-under: TianoCore Contribution Agreement 1.0
Signed-off-by: Star Zeng <star.zeng@intel.com>
Reviewed-by: Jiewen Yao <jiewen.yao@intel.com>
At the end of CoreFreePoolI(), the check for specific memory types
should also cover the OEM reserved memory type. This was missed when
OEM reserved memory type support was added in SVN revision 17460.
Cc: Liming Gao <liming.gao@intel.com>
Cc: Feng Tian <feng.tian@intel.com>
Contributed-under: TianoCore Contribution Agreement 1.0
Signed-off-by: Star Zeng <star.zeng@intel.com>
Reviewed-by: Liming Gao <liming.gao@intel.com>
This patch changes the allocation logic for the pool allocator to only
allocate additional pages if the requested allocation cannot be fulfilled
from the current bin or any of the larger ones. If there are larger blocks
available, they will be used to serve the allocation, and the remainder will
be carved up into smaller blocks using the existing carving up logic.
Note that all pool sizes are a multiple of the smallest pool size, so it is
guaranteed that the remainder will be carved up without spilling. Due to the
exponential nature of the pool sizes, the amount of work is logarithmic in
the size of the available block.
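Conceptually, the allocation path becomes something like this sketch
(BinHasFreeBlock, PopFreeBlock, CarveIntoBins and MAX_POOL_INDEX are
hypothetical helpers):

#include <stddef.h>

#define MAX_POOL_INDEX  28   /* illustrative bin count */

extern int    BinHasFreeBlock (unsigned Index);
extern void  *PopFreeBlock    (unsigned Index);
extern size_t BinSize         (unsigned Index);
extern void   CarveIntoBins   (char *Start, size_t Bytes);

void *
AllocateFromBins (unsigned Index)   /* Index of the wanted bin */
{
  for (unsigned I = Index; I < MAX_POOL_INDEX; I++) {
    if (BinHasFreeBlock (I)) {
      char  *Block = PopFreeBlock (I);
      /* Serve the request from the front; carve the tail of the
         larger block back into smaller bins. All sizes are multiples
         of the smallest bin, so the remainder never spills. */
      CarveIntoBins (Block + BinSize (Index), BinSize (I) - BinSize (Index));
      return Block;
    }
  }
  return NULL;   /* caller falls back to allocating fresh pages */
}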
Contributed-under: TianoCore Contribution Agreement 1.0
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Reviewed-by: Liming Gao <liming.gao@intel.com>
git-svn-id: https://svn.code.sf.net/p/edk2/code/trunk/edk2@17015 6f19259b-4bc3-4df7-8a09-765794883524
In preparation of the next patch, that serves allocations from higher-up
bins if the current bin is depleted, this patch updates the carving up
strategy to populate the largest bins first. To ensure that there will
always be an allocation of the appropriate size made, the current allocation
request is served first from the newly allocated memory region.
Contributed-under: TianoCore Contribution Agreement 1.0
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Reviewed-by: Liming Gao <liming.gao@intel.com>
git-svn-id: https://svn.code.sf.net/p/edk2/code/trunk/edk2@17014 6f19259b-4bc3-4df7-8a09-765794883524
In preparation of making the pool code capable of serving allocations
from higher-up bins, update the free path to traverse a candidate page
by following the index in the POOL_FREE header instead of duplicating the
carving logic that was used at page allocation time. This allows chunks
to be split into smaller ones, where one can be returned to serve the
allocation, and the other stored in a smaller bin for later use.
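The idea behind the POOL_FREE header, sketched with simplified types
(the real header also carries a list link; BinSize is an illustrative
helper):

#include <stddef.h>
#include <stdint.h>

typedef struct {
  uint32_t  Signature;   /* identifies a free chunk            */
  uint32_t  Index;       /* bin index: size == BinSize (Index) */
} POOL_FREE;

extern size_t BinSize (uint32_t Index);

/* Walk every free chunk in a page by hopping index-derived sizes,
   instead of re-deriving how the page was carved up originally. */
void
TraversePage (uint8_t *Page, size_t PageSize)
{
  size_t  Offset = 0;

  while (Offset < PageSize) {
    POOL_FREE  *Chunk = (POOL_FREE *) (Page + Offset);
    /* ...split/coalesce decisions based on Chunk->Index here... */
    Offset += BinSize (Chunk->Index);
  }
}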
Contributed-under: TianoCore Contribution Agreement 1.0
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Reviewed-by: Liming Gao <liming.gao@intel.com>
git-svn-id: https://svn.code.sf.net/p/edk2/code/trunk/edk2@17013 6f19259b-4bc3-4df7-8a09-765794883524
The existing linear mapping between allocation size and pool index
does not scale when moving to a 64 KB granularity or beyond. With
a granularity of 64 KB, 2048 (!) bins will be created for each
memory type, each differing by 32 bytes from the next one.
Instead, introduce an exponential scheme where each bin size is
the sum of the two previous ones.
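A worked example of the scheme, with illustrative seed sizes: each bin
size is the sum of the two previous ones, so sizes grow
Fibonacci-style and every bin stays a multiple of the smallest one.

#include <stdio.h>

int
main (void)
{
  unsigned  Sizes[12] = { 64, 128 };   /* two seed bins */

  for (int I = 2; I < 12; I++) {
    Sizes[I] = Sizes[I - 1] + Sizes[I - 2];
  }
  for (int I = 0; I < 12; I++) {
    printf ("bin %2d: %5u bytes\n", I, Sizes[I]);
  }
  /* 64 128 192 320 512 832 1344 2176 3520 5696 9216 14912 */
  return 0;
}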
Contributed-under: TianoCore Contribution Agreement 1.0
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Reviewed-by: Liming Gao <liming.gao@intel.com>
git-svn-id: https://svn.code.sf.net/p/edk2/code/trunk/edk2@17012 6f19259b-4bc3-4df7-8a09-765794883524
After fixing the sanity check on the alignment of the runtime regions
in SVN revision #16630 ("MdeModulePkg/DxeMain: Fix wrong sanity check
in CoreTerminateMemoryMap()"), it is no longer possible to define a
runtime allocation alignment that is different from the boot time
allocation alignment.
For instance, #defining the following in MdeModulePkg/Core/Dxe/Mem/Imem.h
will hit the ASSERT () in MdeModulePkg/Core/Dxe/Mem/Page.c:1798
#define EFI_ACPI_RUNTIME_PAGE_ALLOCATION_ALIGNMENT (SIZE_64KB)
#define DEFAULT_PAGE_ALLOCATION (EFI_PAGE_SIZE)
(which is needed for 64-bit ARM to adhere to the Server Base Boot
Requirements [SBBR], which stipulates that all runtime memory regions
should be naturally aligned multiples of 64 KB)
This patch fixes this use case by ensuring that the backing for the memory
pools is allocated in appropriate chunks for the memory type.
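A sketch of the idea (PoolBackingGranularity is hypothetical; the
64 KB runtime granularity applies to AArch64, where MdePkg defines
RUNTIME_PAGE_ALLOCATION_GRANULARITY as 64 KB):

/* EFI runtime memory types per the UEFI spec enumeration. */
#define EfiRuntimeServicesCode  5
#define EfiRuntimeServicesData  6

#define DEFAULT_PAGE_ALLOCATION_GRANULARITY  (4u  * 1024u)
#define RUNTIME_PAGE_ALLOCATION_GRANULARITY  (64u * 1024u)  /* AArch64/SBBR */

/* Back runtime pools with naturally aligned 64 KB chunks so the
   resulting runtime regions meet the SBBR alignment rule. */
unsigned
PoolBackingGranularity (int PoolType)
{
  if (PoolType == EfiRuntimeServicesCode || PoolType == EfiRuntimeServicesData) {
    return RUNTIME_PAGE_ALLOCATION_GRANULARITY;
  }
  return DEFAULT_PAGE_ALLOCATION_GRANULARITY;
}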
Contributed-under: TianoCore Contribution Agreement 1.0
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Reviewed-by: Liming Gao <liming.gao@intel.com>
git-svn-id: https://svn.code.sf.net/p/edk2/code/trunk/edk2@17011 6f19259b-4bc3-4df7-8a09-765794883524
The ARM toolchain raises a warning/error when an integer is used
instead of an enum value.
Contributed-under: TianoCore Contribution Agreement 1.0
Signed-off-by: Olivier Martin <Olivier.Martin@arm.com>
Reviewed-by: Feng Tian <feng.tian@intel.com>
git-svn-id: https://svn.code.sf.net/p/edk2/code/trunk/edk2@16501 6f19259b-4bc3-4df7-8a09-765794883524
EFI_SIGNATURE_16 -> SIGNATURE_16
EFI_SIGNATURE_32 -> SIGNATURE_32
EFI_SIGNATURE_64 -> SIGNATURE_64
EFI_FIELD_OFFSET -> OFFSET_OF
EFI_MAX_BIT -> MAX_BIT
EFI_MAX_ADDRESS -> MAX_ADDRESS
These macros are not defined in the UEFI spec. It makes more sense to use the equivalent macros in Base.h to avoid aliases.
git-svn-id: https://edk2.svn.sourceforge.net/svnroot/edk2/trunk/edk2@7056 6f19259b-4bc3-4df7-8a09-765794883524
1. Line 79: Use the pre-initialized global variable mPoolHeadList = INITIALIZE_LIST_HEAD_VARIABLE (mPoolHeadList) to remove the statement in line 102.
2. Line 337: The debug print statement "Addr = %x" should change to "Addr = %p" since the expected Buffer is VOID *. And how about "(len %x) %,d"? Size and Pool->Used belong to type UINTN, so cast them to UINT64 and use %lx.
3. Line 413, 418, 425, 477: Use "Buffer != NULL" instead of "NULL != Buffer".
4. Line 451: The debug print statement "FreePool = %x" should change to "FreePool = %p" since Head->Data is a pointer. And how about "(len %x) %,d"? Head->Size and Pool->Used belong to type UINTN, so cast them to UINT64 and use %lx.
git-svn-id: https://edk2.svn.sourceforge.net/svnroot/edk2/trunk/edk2@5916 6f19259b-4bc3-4df7-8a09-765794883524
Minor coding style cleanup: #include <DxeMain.h> should be changed to #include "DxeMain.h" since "DxeMain.h" is not a public header file.
git-svn-id: https://edk2.svn.sourceforge.net/svnroot/edk2/trunk/edk2@5742 6f19259b-4bc3-4df7-8a09-765794883524