v2 changelog:
Check CurrentPeimHandle to find the matched PeimHandle.
Add a check point in ShadowPeiCore() based on the PCD.
v1 changelog:
PeiCore LoadImage() always shadows PeiCore itself and the PEIMs on a normal
boot after the physical memory is installed. On emulator platforms, the
shadow may not be necessary. To support such usage, a new PCD,
PcdShadowPeimOnBoot, is introduced to specify whether to load PEIMs into
memory by default.
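A minimal sketch of how the PCD might gate the shadow decision (not the
actual PeiCore code; the helper name is illustrative), assuming the standard
PcdLib accessor:

#include <Base.h>
#include <Library/PcdLib.h>

BOOLEAN
ShouldShadowOnBoot (
  VOID
  )
{
  //
  // On emulator platforms PcdShadowPeimOnBoot can be set to FALSE so that
  // PEIMs keep executing in place after permanent memory is installed.
  //
  return PcdGetBool (PcdShadowPeimOnBoot);
}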
Contributed-under: TianoCore Contribution Agreement 1.0
Signed-off-by: Liming Gao <liming.gao@intel.com>
git-svn-id: https://svn.code.sf.net/p/edk2/code/trunk/edk2@18125 6f19259b-4bc3-4df7-8a09-765794883524
instead of AllocatePool(), to ensure the data is clean for
the following consumption.
Contributed-under: TianoCore Contribution Agreement 1.0
Signed-off-by: Star Zeng <star.zeng@intel.com>
Reviewed-by: Jiewen Yao <jiewen.yao@intel.com>
git-svn-id: https://svn.code.sf.net/p/edk2/code/trunk/edk2@18087 6f19259b-4bc3-4df7-8a09-765794883524
The existing logic assumes the SMRAM reserved range is only at the end of the SMRAM descriptor:
//
// This range has reserved area, calculate the left free size
//
gSmmCorePrivate->SmramRanges[Index].PhysicalSize = SmramResRegion->SmramReservedStart - gSmmCorePrivate->SmramRanges[Index].CpuStart;
Imagine the following scenario where we just reserve the first page of the SMRAM range:
SMRAM Descriptor:
Start: 0x80000000
Size: 0x02000000
Reserved Range:
Start: 0x80000000
Size: 0x00001000
In this case the adjustment to the SMRAM range size yields zero: ReservedStart - SMRAM Start is 0x80000000 - 0x80000000 = 0.
So even though most of the range is still free, the IPL code decides it is unusable.
The problem comes from the email thread: [edk2] PiSmmIpl SMRAM Reservation Logic.
http://thread.gmane.org/gmane.comp.bios.tianocore.devel/15268
Following the idea in the email thread, the patch:
1. Keeps only one copy of the full SMRAM ranges in gSmmCorePrivate->SmramRanges,
splitting records for SmmConfiguration->SmramReservedRegions and the SMM Core,
which will be marked EFI_ALLOCATED in gSmmCorePrivate->SmramRanges.
2. Handles SmmConfiguration->SmramReservedRegions at the beginning of, at the
end of, in the middle of, or across multiple SmramRanges (see the sketch below).
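As an illustration only (not the actual PiSmmIpl code), splitting one
descriptor around a reserved sub-range in its middle could look like the
sketch below; field names follow EFI_SMRAM_DESCRIPTOR, and the reserved
record is marked EFI_ALLOCATED:

#include <PiDxe.h>

UINTN
SplitSmramDescriptor (
  IN  EFI_SMRAM_DESCRIPTOR  *Range,     // descriptor being split
  IN  UINT64                ResStart,   // reserved region start (CPU address)
  IN  UINT64                ResSize,    // reserved region size in bytes
  OUT EFI_SMRAM_DESCRIPTOR  *Out        // room for up to three records
  )
{
  UINTN  Count;

  Count = 0;
  if (ResStart > Range->CpuStart) {
    //
    // Free SMRAM before the reserved region.
    //
    Out[Count] = *Range;
    Out[Count].PhysicalSize = ResStart - Range->CpuStart;
    Count++;
  }

  //
  // The reserved region itself, marked as allocated.
  //
  Out[Count] = *Range;
  Out[Count].CpuStart      = ResStart;
  Out[Count].PhysicalStart = Range->PhysicalStart + (ResStart - Range->CpuStart);
  Out[Count].PhysicalSize  = ResSize;
  Out[Count].RegionState  |= EFI_ALLOCATED;
  Count++;

  if (ResStart + ResSize < Range->CpuStart + Range->PhysicalSize) {
    //
    // Free SMRAM after the reserved region.
    //
    Out[Count] = *Range;
    Out[Count].CpuStart      = ResStart + ResSize;
    Out[Count].PhysicalStart = Out[Count - 1].PhysicalStart + ResSize;
    Out[Count].PhysicalSize  = (Range->CpuStart + Range->PhysicalSize) - (ResStart + ResSize);
    Count++;
  }

  return Count;
}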
Contributed-under: TianoCore Contribution Agreement 1.0
Signed-off-by: Star Zeng <star.zeng@intel.com>
Reviewed-by: Jiewen Yao <jiewen.yao@intel.com>
git-svn-id: https://svn.code.sf.net/p/edk2/code/trunk/edk2@18031 6f19259b-4bc3-4df7-8a09-765794883524
Add a check in DxeLoadCore() in MdeModulePkg\Core\DxeIplPeim\DxeLoad.c
to skip installing the gEfiMemoryTypeInformationGuid HOB if it is already
installed.
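A minimal sketch of the check, assuming the standard HobLib APIs (the
helper name and parameters are illustrative):

#include <PiPei.h>
#include <Guid/MemoryTypeInformation.h>
#include <Library/HobLib.h>

VOID
InstallMemoryTypeInfoHobIfMissing (
  IN VOID   *Data,        // default memory type information table
  IN UINTN  DataLength
  )
{
  if (GetFirstGuidHob (&gEfiMemoryTypeInformationGuid) != NULL) {
    //
    // A platform PEIM already produced the HOB; do not install a second one.
    //
    return;
  }
  BuildGuidDataHob (&gEfiMemoryTypeInformationGuid, Data, DataLength);
}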
Contributed-under: TianoCore Contribution Agreement 1.0
Signed-off-by: Liming Gao <liming.gao@intel.com>
Reviewed-by: Star Zeng <star.zeng@intel.com>
git-svn-id: https://svn.code.sf.net/p/edk2/code/trunk/edk2@18018 6f19259b-4bc3-4df7-8a09-765794883524
A GCD range is byte-granular; an EFI memory range is page-granular. To make
sure a GCD range converts cleanly to an EFI memory range, the following changes are added:
1. Merge adjacent GCD ranges first.
2. Add an ASSERT check on GCD range alignment (sketched below).
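A minimal sketch of the alignment check, assuming the range has already been
merged with its neighbours (the helper name is illustrative):

#include <Uefi.h>
#include <Library/BaseLib.h>
#include <Library/DebugLib.h>

VOID
GcdRangeToEfiPages (
  IN  EFI_PHYSICAL_ADDRESS  BaseAddress,    // start of a merged GCD range
  IN  UINT64                Length,         // length of the range in bytes
  OUT UINT64                *NumberOfPages
  )
{
  //
  // A GCD range is byte-granular while an EFI memory map entry is
  // page-granular, so the merged range must be page aligned.
  //
  ASSERT ((BaseAddress & EFI_PAGE_MASK) == 0);
  ASSERT ((Length & EFI_PAGE_MASK) == 0);

  *NumberOfPages = RShiftU64 (Length, EFI_PAGE_SHIFT);
}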
Contributed-under: TianoCore Contribution Agreement 1.0
Signed-off-by: Liming Gao <liming.gao@intel.com>
Reviewed-by: Star Zeng <star.zeng@intel.com>
git-svn-id: https://svn.code.sf.net/p/edk2/code/trunk/edk2@17813 6f19259b-4bc3-4df7-8a09-765794883524
The splitting of memory regions into code and data regions violates
architecture-specific alignment rules by using a fixed alignment
of 4 KB. Replace it with EFI_ACPI_RUNTIME_PAGE_ALLOCATION_ALIGNMENT,
which is defined appropriately on each architecture.
Contributed-under: TianoCore Contribution Agreement 1.0
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Reviewed-by: "Jaben Carsey" <jaben.carsey@intel.com>
Reviewed-by: "Yao, Jiewen" <Jiewen.Yao@intel.com>
git-svn-id: https://svn.code.sf.net/p/edk2/code/trunk/edk2@17812 6f19259b-4bc3-4df7-8a09-765794883524
Move the definitions of EFI_ACPI_RUNTIME_PAGE_ALLOCATION_ALIGNMENT and
DEFAULT_PAGE_ALLOCATION to DxeMain.h to make them available explicitly
to all parts of DxeCore.
Contributed-under: TianoCore Contribution Agreement 1.0
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Reviewed-by: "Yao, Jiewen" <Jiewen.Yao@intel.com>
git-svn-id: https://svn.code.sf.net/p/edk2/code/trunk/edk2@17811 6f19259b-4bc3-4df7-8a09-765794883524
This removes the functions RevertRuntimeMemoryMap () and
DumpMemoryMap () which are not referenced anywhere in the code.
Contributed-under: TianoCore Contribution Agreement 1.0
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Reviewed-by: "Yao, Jiewen" <Jiewen.Yao@intel.com>
git-svn-id: https://svn.code.sf.net/p/edk2/code/trunk/edk2@17808 6f19259b-4bc3-4df7-8a09-765794883524
Per the UEFI spec HTTP Boot Device Path, after the boot resource
information is retrieved, the BootURI device path node will be updated to
include the BootURI information. This means the device path on the child
handle will be updated after the LoadFile() service is called.
To handle this case, the DxeCore LoadImage() service is updated as below (see the sketch after the list):
1) Get Device handle based on Device Path
2) Call LoadFile() service (GetFileBufferByFilePath() API) to get Load File Buffer.
3) Retrieve DevicePath from Device handle
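A rough sketch of the revised ordering, assuming DxeServicesLib's
GetFileBufferByFilePath(); the real LoadImage() code carries much more
bookkeeping, and the helper below is purely illustrative:

#include <Uefi.h>
#include <Library/UefiBootServicesTableLib.h>
#include <Library/DxeServicesLib.h>
#include <Protocol/DevicePath.h>
#include <Protocol/LoadFile.h>

EFI_STATUS
LoadBufferThenGetDevicePath (
  IN  BOOLEAN                   BootPolicy,
  IN  EFI_DEVICE_PATH_PROTOCOL  *FilePath,
  OUT VOID                      **Buffer,
  OUT UINTN                     *Size,
  OUT EFI_DEVICE_PATH_PROTOCOL  **HandleFilePath
  )
{
  EFI_STATUS                Status;
  EFI_HANDLE                DeviceHandle;
  EFI_DEVICE_PATH_PROTOCOL  *Node;
  UINT32                    AuthenticationStatus;

  //
  // 1) Get the device handle from the device path.
  //
  Node   = FilePath;
  Status = gBS->LocateDevicePath (&gEfiLoadFileProtocolGuid, &Node, &DeviceHandle);
  if (EFI_ERROR (Status)) {
    return Status;
  }

  //
  // 2) Call the LoadFile() service (via GetFileBufferByFilePath()); for
  //    HTTP boot this may update the BootURI node on the child handle.
  //
  *Buffer = GetFileBufferByFilePath (BootPolicy, FilePath, Size, &AuthenticationStatus);
  if (*Buffer == NULL) {
    return EFI_NOT_FOUND;
  }

  //
  // 3) Retrieve the (possibly updated) device path from the device handle.
  //
  return gBS->HandleProtocol (DeviceHandle, &gEfiDevicePathProtocolGuid, (VOID **) HandleFilePath);
}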
Contributed-under: TianoCore Contribution Agreement 1.0
Signed-off-by: Liming Gao <liming.gao@intel.com>
Reviewed-by: Ruiyu Ni <ruiyu.ni@intel.com>
git-svn-id: https://svn.code.sf.net/p/edk2/code/trunk/edk2@17799 6f19259b-4bc3-4df7-8a09-765794883524
A child FV may be placed in the parent FV image without any processing
required. In that case, the child FV and the parent FV occupy the same
contiguous space. For a given FileHandle, the FileHandleToVolume() function
needs to find the best-matched FV handle.
Contributed-under: TianoCore Contribution Agreement 1.0
Signed-off-by: Liming Gao <liming.gao@intel.com>
Reviewed-by: Star Zeng <star.zeng@intel.com>
git-svn-id: https://svn.code.sf.net/p/edk2/code/trunk/edk2@17704 6f19259b-4bc3-4df7-8a09-765794883524
The UEFI 2.5 spec, GetMemoryMap(), says:
Attribute: Attributes of the memory region that describe the bit mask
of capabilities for that memory region, and not necessarily the current
settings for that memory region.
But the GetMemoryMap() implementation doesn't append memory capabilities
for MMIO and Reserved memory ranges. This breaks the UEFI 2.5 Properties
Table feature, because the Properties Table needs to return EFI_MEMORY_RO or
EFI_MEMORY_XP capabilities to the OS.
This patch appends memory capabilities for those memory ranges.
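A minimal sketch of the idea (not the actual CoreGetMemoryMap() code; the
helper and parameter names are illustrative):

#include <PiDxe.h>

VOID
AppendGcdCapabilities (
  IN CONST EFI_GCD_MEMORY_SPACE_DESCRIPTOR  *Gcd,    // merged GCD descriptor
  IN OUT   EFI_MEMORY_DESCRIPTOR            *Entry   // memory map entry being built
  )
{
  if (Gcd->GcdMemoryType == EfiGcdMemoryTypeMemoryMappedIo ||
      Gcd->GcdMemoryType == EfiGcdMemoryTypeReserved) {
    //
    // Report capability bits (e.g. EFI_MEMORY_RO / EFI_MEMORY_XP) in
    // addition to the currently set attributes; EFI_MEMORY_RUNTIME is
    // left out here to keep the sketch simple.
    //
    Entry->Attribute |= Gcd->Capabilities & ~(UINT64) EFI_MEMORY_RUNTIME;
  }
}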
Contributed-under: TianoCore Contribution Agreement 1.0
Signed-off-by: Liming Gao <liming.gao@intel.com>
Reviewed-by: Jiewen Yao <jiewen.yao@intel.com>
git-svn-id: https://svn.code.sf.net/p/edk2/code/trunk/edk2@17703 6f19259b-4bc3-4df7-8a09-765794883524
And also SMM Ready To Boot.
The SMM Exit Boot Services protocol is to be published by the SMM
Foundation code, associated with EFI_EVENT_GROUP_EXIT_BOOT_SERVICES,
to notify SMM drivers that the system is entering exit boot services.
The SMM Legacy Boot protocol is to be published by the SMM
Foundation code, associated with EFI_EVENT_LEGACY_BOOT_GUID,
to notify SMM drivers that the system is entering legacy boot.
The SMM Ready To Boot protocol is to be published by the SMM
Foundation code, associated with EFI_EVENT_GROUP_READY_TO_BOOT,
to notify SMM drivers that the system has reached ready to boot.
With these protocols, any SMM driver can get a protocol notification for
these DXE-phase events, so there is no need for each individual SMM driver
to register its own SMM Communication handler for them (see the sketch below).
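A minimal sketch of how an SMM driver could consume one of these
notifications, assuming gEdkiiSmmExitBootServicesProtocolGuid is the GUID
name of the new SMM Exit Boot Services protocol:

#include <PiSmm.h>
#include <Library/SmmServicesTableLib.h>

EFI_STATUS
EFIAPI
OnSmmExitBootServices (
  IN CONST EFI_GUID  *Protocol,
  IN VOID            *Interface,
  IN EFI_HANDLE      Handle
  )
{
  //
  // React inside SMM to the DXE exit-boot-services event.
  //
  return EFI_SUCCESS;
}

EFI_STATUS
RegisterExitBootServicesNotify (
  VOID
  )
{
  VOID  *Registration;

  //
  // The GUID name below is assumed; it identifies the protocol the SMM
  // Foundation installs when exit boot services is signaled in DXE.
  //
  return gSmst->SmmRegisterProtocolNotify (
                  &gEdkiiSmmExitBootServicesProtocolGuid,
                  OnSmmExitBootServices,
                  &Registration
                  );
}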
Contributed-under: TianoCore Contribution Agreement 1.0
Signed-off-by: Star Zeng <star.zeng@intel.com>
Reviewed-by: Jiewen Yao <jiewen.yao@intel.com>
git-svn-id: https://svn.code.sf.net/p/edk2/code/trunk/edk2@17657 6f19259b-4bc3-4df7-8a09-765794883524
The correct logic should be:
- The SectionAlignment is obtained from the Magic number.
- The Magic number is obtained from the PE header and the machine type.
The original code mixed them up (see the sketch below).
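A simplified sketch of the corrected ordering (the machine-type quirk for
Itanium images is omitted here, and the helper name is illustrative); Hdr
points at the PE optional header union:

#include <Uefi.h>
#include <Library/PeCoffLib.h>

UINT32
GetSectionAlignment (
  IN EFI_IMAGE_OPTIONAL_HEADER_PTR_UNION  Hdr
  )
{
  UINT16  Magic;

  //
  // First determine the Magic value from the PE header (and, in the full
  // implementation, the machine type), then use it to select the correct
  // optional header layout for SectionAlignment.
  //
  Magic = Hdr.Pe32->OptionalHeader.Magic;
  if (Magic == EFI_IMAGE_NT_OPTIONAL_HDR32_MAGIC) {
    return Hdr.Pe32->OptionalHeader.SectionAlignment;
  }
  return Hdr.Pe32Plus->OptionalHeader.SectionAlignment;
}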
Contributed-under: TianoCore Contribution Agreement 1.0
Signed-off-by: "Yao, Jiewen" <Jiewen.yao@intel.com>
Reviewed-by: "Ard Biesheuvel" <ard.biesheuvel@linaro.org>
git-svn-id: https://svn.code.sf.net/p/edk2/code/trunk/edk2@17603 6f19259b-4bc3-4df7-8a09-765794883524
1. In PiSmmIpl.c, free FullSmramRanges on the error path.
2. Move pool and page management definitions and structures
from PiSmmCorePrivateData.h to PiSmmCore.h.
PiSmmCorePrivateData.h should only be used to share SMM_CORE_PRIVATE_DATA
between PiSmmCore and PiSmmIpl. The pool and page management definitions
and structures were moved from Pool.c and Page.c into PiSmmCorePrivateData.h
incorrectly for the memory profile feature in EDK2 commit R16335.
3. DumpSmramInfo() is only used for the memory profile, so move its
declaration into SmramProfileRecord.c.
Contributed-under: TianoCore Contribution Agreement 1.0
Signed-off-by: Star Zeng <star.zeng@intel.com>
Reviewed-by: Jiewen Yao <jiewen.yao@intel.com>
git-svn-id: https://svn.code.sf.net/p/edk2/code/trunk/edk2@17598 6f19259b-4bc3-4df7-8a09-765794883524
Add an if (Image->Started) check before the call to
UnregisterMemoryProfileImage() in CoreUnloadAndCloseImage(), as sketched below.
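The guarded call, simplified (Image stands for the private image data being
closed):

if (Image->Started) {
  //
  // Skip the unregister call for images that were never started.
  //
  UnregisterMemoryProfileImage (Image);
}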
Contributed-under: TianoCore Contribution Agreement 1.0
Signed-off-by: Star Zeng <star.zeng@intel.com>
Reviewed-by: Jiewen Yao <jiewen.yao@intel.com>
git-svn-id: https://svn.code.sf.net/p/edk2/code/trunk/edk2@17597 6f19259b-4bc3-4df7-8a09-765794883524
DescEnd will be clipped for alignment in CoreFindFreePagesI(), and it
may fall below DescStart when the alignment is 16 KB or more and both
DescStart and the original DescEnd fall into a single range of that
alignment. This results in a huge size (a negative number stored in an
unsigned type) for the descriptor, which appears to fulfill the allocation
requirement but then fails in ConvertPages; ultimately it causes
occasional AllocatePages failures.
A simple comparison is added to ensure we never get such a negative
number (see the sketch below).
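A sketch of the added guard (simplified; variable names follow the
description above):

//
// DescEnd was clipped down for alignment; if it fell below DescStart,
// skip this descriptor so the unsigned size calculation cannot wrap
// around to a huge value.
//
if (DescEnd < DescStart) {
  continue;
}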
Contributed-under: TianoCore Contribution Agreement 1.0
Signed-off-by: Heyi Guo <heyi.guo@linaro.org>
Reviewed-by: Liming Gao <liming.gao@intel.com>
Acked-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
git-svn-id: https://svn.code.sf.net/p/edk2/code/trunk/edk2@17575 6f19259b-4bc3-4df7-8a09-765794883524
The debug message prints the current TPL and the requested TPL value.
Contributed-under: TianoCore Contribution Agreement 1.0
Signed-off-by: Star Zeng <star.zeng@intel.com>
Reviewed-by: Liming Gao <liming.gao@intel.com>
git-svn-id: https://svn.code.sf.net/p/edk2/code/trunk/edk2@17543 6f19259b-4bc3-4df7-8a09-765794883524
ARM toolchain raises the build error: "enumerated type mixed with
another type".
To fix the issue, a typecast could be used, as below.
- return EfiMaxMemoryType + 1;
+ return (EFI_MEMORY_TYPE)(EfiMaxMemoryType + 1);
But to eliminate the confusion, update the return type of
GetProfileMemoryIndex() from EFI_MEMORY_TYPE to UINTN.
Contributed-under: TianoCore Contribution Agreement 1.0
Signed-off-by: Star Zeng <star.zeng@intel.com>
Reviewed-by: Feng Tian <feng.tian@intel.com>
Reviewed-by: Jordan Justen <jordan.l.justen@intel.com>
git-svn-id: https://svn.code.sf.net/p/edk2/code/trunk/edk2@17535 6f19259b-4bc3-4df7-8a09-765794883524
This change makes the DEBUG output aligned when DEBUG_GCD is enabled.
Contributed-under: TianoCore Contribution Agreement 1.0
Signed-off-by: Liming Gao <liming.gao@intel.com>
Reviewed-by: Star Zeng <star.zeng@intel.com>
git-svn-id: https://svn.code.sf.net/p/edk2/code/trunk/edk2@17470 6f19259b-4bc3-4df7-8a09-765794883524
When the ExecuteSmmCoreFromSmram() function fails, SmmIplEntry()
restores the SMRAM range to EFI_MEMORY_UC. However, it saves the
return value of gDS->SetMemorySpaceAttributes() in the same Status
variable that contains the return value of ExecuteSmmCoreFromSmram().
Therefore, if gDS->SetMemorySpaceAttributes() succeeds, the failure
of ExecuteSmmCoreFromSmram() is masked, and Bad Things Happen (TM).
Introduce a temporary variable just for the return value of
gDS->SetMemorySpaceAttributes().
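A sketch of the fix (simplified from SmmIplEntry(); local names are
illustrative): Status still holds the ExecuteSmmCoreFromSmram() failure,
and the cleanup call gets its own variable so that failure is not masked.

if (EFI_ERROR (Status)) {
  //
  // Restore the SMRAM range to EFI_MEMORY_UC without clobbering Status.
  //
  SetAttrStatus = gDS->SetMemorySpaceAttributes (
                         SmramRangeBase,    // illustrative locals; the real
                         SmramRangeSize,    // code uses its own variables
                         EFI_MEMORY_UC
                         );
  ASSERT_EFI_ERROR (SetAttrStatus);
}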
Contributed-under: TianoCore Contribution Agreement 1.0
Signed-off-by: Laszlo Ersek <lersek@redhat.com>
Reviewed-by: Feng Tian <feng.tian@intel.com>
git-svn-id: https://svn.code.sf.net/p/edk2/code/trunk/edk2@17417 6f19259b-4bc3-4df7-8a09-765794883524
PI 1.4 clarified SMM register protocol notify function return status as below:
EFI_SUCCESS Successfully returned the registration record that has
been added or unhooked
EFI_INVALID_PARAMETER Protocol is NULL or Registration is NULL
The implementation of SmmRegisterProtocolNotify() already follows this new
rule and does not need to be updated.
Contributed-under: TianoCore Contribution Agreement 1.0
Signed-off-by: Jeff Fan <jeff.fan@intel.com>
Reviewed-by: Liming Gao <liming.gao@intel.com>
git-svn-id: https://svn.code.sf.net/p/edk2/code/trunk/edk2@17349 6f19259b-4bc3-4df7-8a09-765794883524
Roll back the report status code call to its original place in RuntimeDriverSetVirtualAddressMap().
Per the UEFI spec, the call to SetVirtualAddressMap() must be made with the physical mappings.
We cannot assume virtual addresses work inside SetVirtualAddressMap(), so we cannot call
the report status code interface with a virtual address.
Contributed-under: TianoCore Contribution Agreement 1.0
Signed-off-by: Elvin Li <elvin.li@intel.com>
Reviewed-by: Liming Gao <liming.gao@intel.com>
git-svn-id: https://svn.code.sf.net/p/edk2/code/trunk/edk2@17139 6f19259b-4bc3-4df7-8a09-765794883524
Move the report status code call to the end, after all pointer conversion and image relocation are finished.
Contributed-under: TianoCore Contribution Agreement 1.0
Signed-off-by: Elvin Li <elvin.li@intel.com>
Reviewed-by: Liming Gao <liming.gao@intel.com>
git-svn-id: https://svn.code.sf.net/p/edk2/code/trunk/edk2@17136 6f19259b-4bc3-4df7-8a09-765794883524
Per PI spec Vol 3,
"EFI_SW_DXE_BS_PC_VIRTUAL_ADDRESS_CHANGE_EVENT The EVT_SIGNAL_VIRTUAL_ADDRESS_CHANGE event was signaled."
However, in the current code base, it is reported before the event is signaled.
Contributed-under: TianoCore Contribution Agreement 1.0
Signed-off-by: Elvin Li <elvin.li@intel.com>
Reviewed-by: Ruiyu Ni <ruiyu.ni@intel.com>
git-svn-id: https://svn.code.sf.net/p/edk2/code/trunk/edk2@17123 6f19259b-4bc3-4df7-8a09-765794883524
Add a status code report and a CPU dead loop when the DXE IPL PPI is not found.
Contributed-under: TianoCore Contribution Agreement 1.0
Signed-off-by: Elvin Li <elvin.li@intel.com>
Reviewed-by: Star Zeng <star.zeng@intel.com>
git-svn-id: https://svn.code.sf.net/p/edk2/code/trunk/edk2@17087 6f19259b-4bc3-4df7-8a09-765794883524
PeiCore calls REPORT_STATUS_CODE_WITH_EXTENDED_DATA() with its internal structure for the image
dispatcher. No code consumes it, but it causes confusion.
DxeCore and SmmCore call REPORT_STATUS_CODE_WITH_EXTENDED_DATA() with the Handle only.
To be consistent, update PeiCore to match DxeCore and SmmCore (see the sketch below).
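A sketch of the consistent form, reporting only the file handle as extended
data (the variable name and the specific status code values are illustrative):

REPORT_STATUS_CODE_WITH_EXTENDED_DATA (
  EFI_PROGRESS_CODE,
  (EFI_SOFTWARE_PEI_CORE | EFI_SW_PC_INIT_BEGIN),
  (VOID *) &PeimFileHandle,
  sizeof (PeimFileHandle)
  );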
Contributed-under: TianoCore Contribution Agreement 1.0
Signed-off-by: Liming Gao <liming.gao@intel.com>
Reviewed-by: Star Zeng <star.zeng@intel.com>
git-svn-id: https://svn.code.sf.net/p/edk2/code/trunk/edk2@17085 6f19259b-4bc3-4df7-8a09-765794883524
On AArch64, the OS can choose to run with a page size of 64 KB,
making it cumbersome to deal with UEFI reserved memory regions
whose boundaries are not 64 KB aligned.
So increase the allocation granularity for runtime regions to
64 KB.
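A sketch of the idea (the macro name here is illustrative): use a 64 KB
granularity for runtime memory regions on AArch64 and keep the default page
size elsewhere.

#if defined (MDE_CPU_AARCH64)
#define RUNTIME_PAGE_ALLOCATION_GRANULARITY  SIZE_64KB
#else
#define RUNTIME_PAGE_ALLOCATION_GRANULARITY  EFI_PAGE_SIZE
#endif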
Contributed-under: TianoCore Contribution Agreement 1.0
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Reviewed-by: Liming Gao <liming.gao@intel.com>
git-svn-id: https://svn.code.sf.net/p/edk2/code/trunk/edk2@17016 6f19259b-4bc3-4df7-8a09-765794883524
This patch changes the allocation logic for the pool allocator to only
allocate additional pages if the requested allocation cannot be fulfilled
from the current bin or any of the larger ones. If there are larger blocks
available, they will be used to serve the allocation, and the remainder will
be carved up into smaller blocks using the existing carving up logic.
Note that all pool sizes are a multiple of the smallest pool size, so it is
guaranteed that the remainder will be carved up without spilling. Due to the
exponential nature of the pool sizes, the amount of work is logarithmic in
the size of the available block.
Contributed-under: TianoCore Contribution Agreement 1.0
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Reviewed-by: Liming Gao <liming.gao@intel.com>
git-svn-id: https://svn.code.sf.net/p/edk2/code/trunk/edk2@17015 6f19259b-4bc3-4df7-8a09-765794883524
In preparation of the next patch, that serves allocations from higher-up
bins if the current bin is depleted, this patch updates the carving up
strategy to populate the largest bins first. To ensure that there will
always be an allocation of the appropriate size made, the current allocation
request is served first from the newly allocated memory region.
Contributed-under: TianoCore Contribution Agreement 1.0
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Reviewed-by: Liming Gao <liming.gao@intel.com>
git-svn-id: https://svn.code.sf.net/p/edk2/code/trunk/edk2@17014 6f19259b-4bc3-4df7-8a09-765794883524
In preparation of making the pool code capable of serving allocations
from higher-up bins, update the free path to traverse a candidate page
by following the index of POOL_FREE header instead of duplicating the
carving logic that was used at page allocation time. This allows chunks
to be split into smaller ones, where one can be returned to serve the
allocation, and the other stored in a smaller bin for later use.
Contributed-under: TianoCore Contribution Agreement 1.0
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Reviewed-by: Liming Gao <liming.gao@intel.com>
git-svn-id: https://svn.code.sf.net/p/edk2/code/trunk/edk2@17013 6f19259b-4bc3-4df7-8a09-765794883524
The existing linear mapping between allocation size and pool index
does not scale when moving to a 64 KB granularity or beyond. With
a granularity of 64 KB, 2048 (!) bins will be created for each
memory type, each differing by 32 bytes in size from the next one.
Instead, introduce an exponential scheme where each bin size is
the sum of the two previous ones.
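An illustrative sketch of the exponential bin sizing (not the DxeCore Pool.c
code; BASE_POOL_SIZE and the helper are hypothetical): each bin size is the
sum of the two previous ones, so every size stays a multiple of the smallest
pool size and the bin count grows logarithmically.

#include <Base.h>

#define BASE_POOL_SIZE  64    // hypothetical smallest bin size

UINTN
PoolBinSize (
  IN UINTN  Index
  )
{
  UINTN  Prev;
  UINTN  Curr;
  UINTN  Next;
  UINTN  I;

  Prev = BASE_POOL_SIZE;
  Curr = BASE_POOL_SIZE;
  for (I = 0; I < Index; I++) {
    Next = Prev + Curr;   // Fibonacci-like growth: each size is the sum
    Prev = Curr;          // of the two previous ones
    Curr = Next;
  }
  return Curr;
}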
Contributed-under: TianoCore Contribution Agreement 1.0
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Reviewed-by: Liming Gao <liming.gao@intel.com>
git-svn-id: https://svn.code.sf.net/p/edk2/code/trunk/edk2@17012 6f19259b-4bc3-4df7-8a09-765794883524