The ID mapping routines on virtual platforms simply map the entire
hardware-supported physical address space as device memory, and then
punch holes for regions that need to be mapped cacheable.
On virtual platforms hosted on CPUs that support a large physical
address range, this may result in significant overhead: 4 KB of page
tables for each 512 GB of address space, which quickly adds up (2 MB
for the architectural maximum of 48 bits).
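For reference, a quick check of those figures, assuming the identity
map is built from 1 GB level-1 block descriptors with the 4 KB
translation granule (an assumption about the mapping granularity, not
something this patch states):

  #include <stdio.h>
  #include <stdint.h>

  int
  main (void)
  {
    /* One level-1 table: 512 entries of 8 bytes each, i.e. 4 KB, and */
    /* each entry maps a 1 GB block, so one table covers 512 GB.      */
    uint64_t TableSize   = 512 * 8;
    uint64_t TableCovers = 512 * (1ULL << 30);

    /* Overhead of identity mapping the full 48-bit PA space this way. */
    uint64_t PaSpace  = 1ULL << 48;
    uint64_t Tables   = PaSpace / TableCovers;
    uint64_t Overhead = Tables * TableSize;

    printf ("level-1 tables: %llu\n", (unsigned long long)Tables);
    printf ("overhead: %llu KB\n", (unsigned long long)(Overhead / 1024));
    /* Prints 512 tables and 2048 KB (2 MB), matching the figures above. */
    return 0;
  }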
Since there may be a platform-specific limit to the size of the (I)PA
space that is not reflected by the CPU ID registers, restrict the range
of the ID mapping to gEmbeddedTokenSpaceGuid.PcdPrePiCpuMemorySize bits.
This makes sense by itself, since we cannot manipulate mappings above
that limit anyway (because they are not covered by GCD), and it allows
the PCD to be set to a lower value by platforms whose (I)PA space is
smaller than the hardware-supported maximum.
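As an illustration only (this is not the code in the patch), a minimal
sketch of a clamped ID map entry; apart from the PCD and the
ARM_MEMORY_REGION_DESCRIPTOR type and device attribute from ArmLib.h,
the function name is made up:

  #include <Library/ArmLib.h>
  #include <Library/PcdLib.h>

  STATIC ARM_MEMORY_REGION_DESCRIPTOR  mVirtualMemoryTable[2];

  ARM_MEMORY_REGION_DESCRIPTOR *
  GetIdMapSketch (             // hypothetical name, for illustration
    VOID
    )
  {
    //
    // Map [0, 1 << PcdPrePiCpuMemorySize) as device memory rather than
    // the full hardware-supported PA range; cacheable regions (e.g.,
    // system DRAM) would still be punched out of this afterwards.
    //
    mVirtualMemoryTable[0].PhysicalBase = 0;
    mVirtualMemoryTable[0].VirtualBase  = 0;
    mVirtualMemoryTable[0].Length       = 1ULL << PcdGet8 (PcdPrePiCpuMemorySize);
    mVirtualMemoryTable[0].Attributes   = ARM_MEMORY_REGION_ATTRIBUTE_DEVICE;

    //
    // A zero Length terminates the table.
    //
    mVirtualMemoryTable[1].Length = 0;

    return mVirtualMemoryTable;
  }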
Contributed-under: TianoCore Contribution Agreement 1.0
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Reviewed-by: Laszlo Ersek <lersek@redhat.com>
Tested-by: Wei Huang <wei@redhat.com>
git-svn-id: https://svn.code.sf.net/p/edk2/code/trunk/edk2@18929 6f19259b-4bc3-4df7-8a09-765794883524