/**@file
  Memory Detection for Virtual Machines.

  Copyright (c) 2006 - 2016, Intel Corporation. All rights reserved.<BR>
  SPDX-License-Identifier: BSD-2-Clause-Patent

  Module Name:

    MemDetect.c

**/

//
// The package level header files this module uses
//
OvmfPkg/PlatformPei: support >=1TB high RAM, and discontiguous high RAM
In OVMF we currently get the upper (>=4GB) memory size with the
GetSystemMemorySizeAbove4gb() function.
The GetSystemMemorySizeAbove4gb() function is used in two places:
(1) It is the starting point of the calculations in GetFirstNonAddress().
GetFirstNonAddress() in turn
- determines the placement of the 64-bit PCI MMIO aperture,
- provides input for the GCD memory space map's sizing (see
AddressWidthInitialization(), and the CPU HOB in
MiscInitialization()),
- influences the permanent PEI RAM cap (the DXE core's page tables,
built in permanent PEI RAM, grow as the RAM to map grows).
(2) In QemuInitializeRam(), GetSystemMemorySizeAbove4gb() determines the
single memory descriptor HOB that we produce for the upper memory.
Respectively, there are two problems with GetSystemMemorySizeAbove4gb():
(1) It reads a 24-bit count of 64KB RAM chunks from the CMOS, and
therefore cannot return a larger value than one terabyte.
(2) It cannot express discontiguous high RAM.
Starting with version 1.7.0, QEMU has provided the fw_cfg file called
"etc/e820". Refer to the following QEMU commits:
- 0624c7f916b4 ("e820: pass high memory too.", 2013-10-10),
- 7d67110f2d9a ("pc: add etc/e820 fw_cfg file", 2013-10-18)
- 7db16f2480db ("pc: register e820 entries for ram", 2013-10-10)
Ever since these commits in v1.7.0 -- with the last QEMU release being
v2.9.0, and v2.10.0 under development --, the only two RAM entries added
to this E820 map correspond to the below-4GB RAM range, and the above-4GB
RAM range. And, the above-4GB range exactly matches the CMOS registers in
question; see the use of "pcms->above_4g_mem_size":
pc_q35_init() | pc_init1()
pc_memory_init()
e820_add_entry(0x100000000ULL, pcms->above_4g_mem_size, E820_RAM);
pc_cmos_init()
val = pcms->above_4g_mem_size / 65536;
rtc_set_memory(s, 0x5b, val);
rtc_set_memory(s, 0x5c, val >> 8);
rtc_set_memory(s, 0x5d, val >> 16);
Therefore, remedy the above OVMF limitations as follows:
(1) Start off GetFirstNonAddress() by scanning the E820 map for the
highest exclusive >=4GB RAM address. Fall back to the CMOS if the E820
map is unavailable. Base all further calculations (such as 64-bit PCI
MMIO aperture placement, GCD sizing etc) on this value.
At the moment, the only difference this change makes is that we can
have more than 1TB above 4GB -- given that the sole "high RAM" entry
in the E820 map matches the CMOS exactly, modulo the most significant
bits (see above).
However, Igor plans to add discontiguous (cold-plugged) high RAM to
the fw_cfg E820 RAM map later on, and then this scanning will adapt
automatically.
(2) In QemuInitializeRam(), describe the high RAM regions from the E820
map one by one with memory HOBs. Fall back to the CMOS only if the
E820 map is missing.
Again, right now this change only makes a difference if there is at
least 1TB high RAM. Later on it will adapt to discontiguous high RAM
(regardless of its size) automatically.
-*-
Implementation details: introduce the ScanOrAdd64BitE820Ram() function,
which reads the E820 entries from fw_cfg, and finds the highest exclusive
>=4GB RAM address, or produces memory resource descriptor HOBs for RAM
entries that start at or above 4GB. The RAM map is not read in a single
go, because its size can vary, and in PlatformPei we should stay away from
dynamic memory allocation, for the following reasons:
- "Pool" allocations are limited to ~64KB, are served from HOBs, and
cannot be released ever.
- "Page" allocations are seriously limited before PlatformPei installs the
permanent PEI RAM. Furthermore, page allocations can only be released in
DXE, with dedicated code (so the address would have to be passed on with
a HOB or PCD).
- Raw memory allocation HOBs would require the same freeing in DXE.
Therefore we process each E820 entry as soon as it is read from fw_cfg.
-*-
Considering the impact of high RAM on the DXE core:
A few years ago, installing high RAM as *tested* would cause the DXE core
to inhabit such ranges rather than carving out its home from the permanent
PEI RAM. Fortunately, this was fixed in the following edk2 commit:
3a05b13106d1, "MdeModulePkg DxeCore: Take the range in resource HOB for
PHIT as higher priority", 2015-09-18
which I regression-tested at the time:
http://mid.mail-archive.com/55FC27B0.4070807@redhat.com
Later on, OVMF was changed to install its high RAM as tested (effectively
"arming" the earlier DXE core change for OVMF), in the following edk2
commit:
035ce3b37c90, "OvmfPkg/PlatformPei: Add memory above 4GB as tested",
2016-04-21
which I also regression-tested at the time:
http://mid.mail-archive.com/571E8B90.1020102@redhat.com
Therefore adding more "tested memory" HOBs is safe.
Cc: Jordan Justen <jordan.l.justen@intel.com>
Ref: https://bugzilla.redhat.com/show_bug.cgi?id=1468526
Contributed-under: TianoCore Contribution Agreement 1.1
Signed-off-by: Laszlo Ersek <lersek@redhat.com>
Reviewed-by: Jordan Justen <jordan.l.justen@intel.com>
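The per-entry processing described above — read each "etc/e820" entry one at a time, skip non-RAM and below-4GB entries, and track the highest exclusive >=4GB RAM address — can be sketched in plain C. The struct layout and names below are illustrative stand-ins, not the actual ScanOrAdd64BitE820Ram() code:

```c
#include <stdint.h>

/* Illustrative stand-in for the fw_cfg "etc/e820" entry layout:
   64-bit base, 64-bit length, 32-bit type. */
typedef struct {
  uint64_t BaseAddr;
  uint64_t Length;
  uint32_t Type;
} E820Entry;

#define E820_TYPE_RAM  1u
#define BASE_4GB       0x100000000ULL

/* Process one entry at a time (no dynamic allocation, mirroring the
   PlatformPei constraint) and return the highest exclusive address among
   RAM entries that start at or above 4GB; returns 4GB if there are none. */
uint64_t
HighestExclusiveHighRamAddress (const E820Entry *Entries, unsigned Count)
{
  uint64_t First = BASE_4GB;
  unsigned i;

  for (i = 0; i < Count; i++) {
    uint64_t End;

    if (Entries[i].Type != E820_TYPE_RAM || Entries[i].BaseAddr < BASE_4GB) {
      continue;  /* low RAM and reservations don't move the high watermark */
    }

    End = Entries[i].BaseAddr + Entries[i].Length;  /* exclusive end */
    if (End > First) {
      First = End;
    }
  }

  return First;
}
```

Because only a running maximum is kept, a later discontiguous (cold-plugged) high-RAM entry simply raises the result, which is exactly the "adapts automatically" property claimed above.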

#include <IndustryStandard/E820.h>

OvmfPkg/PlatformPei: set 32-bit UC area at PciBase / PciExBarBase (pc/q35)
(This is a replacement for commit 39b9a5ffe661 ("OvmfPkg/PlatformPei: fix
MTRR for low-RAM sizes that have many bits clear", 2019-05-16).)
Reintroduce the same logic as seen in commit 39b9a5ffe661 for the pc
(i440fx) board type.
For q35, the same approach doesn't work any longer, given that (a) we'd
like to keep the PCIEXBAR in the platform DSC a fixed-at-build PCD, and
(b) QEMU expects the PCIEXBAR to reside at a lower address than the 32-bit
PCI MMIO aperture.
Therefore, introduce a helper function for determining the 32-bit
"uncacheable" (MMIO) area base address:
- On q35, this function behaves statically. Furthermore, the MTRR setup
exploits that the range [0xB000_0000, 0xFFFF_FFFF] can be marked UC with
just two variable MTRRs (one at 0xB000_0000 (size 256MB), another at
0xC000_0000 (size 1GB)).
- On pc (i440fx), the function behaves dynamically, implementing the same
logic as commit 39b9a5ffe661 did. The PciBase value is adjusted to the
value calculated, similarly to commit 39b9a5ffe661. A further
simplification is that we show that the UC32 area size truncation to a
whole power of two automatically guarantees a >=2GB base address.
Cc: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Cc: Gerd Hoffmann <kraxel@redhat.com>
Cc: Jordan Justen <jordan.l.justen@intel.com>
Ref: https://bugzilla.tianocore.org/show_bug.cgi?id=1859
Signed-off-by: Laszlo Ersek <lersek@redhat.com>
Reviewed-by: Philippe Mathieu-Daude <philmd@redhat.com>
Acked-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
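The i440fx-side calculation reintroduced from commit 39b9a5ffe661 can be illustrated with a small sketch (the names are illustrative, not the actual code): the UC32 area size — the gap from the top of low RAM up to 4GB — is truncated down to a whole power of two, and since low RAM is nonzero, that truncated size is at most 2GB, so the base is automatically >=2GB.

```c
#include <stdint.h>

#define SIZE_4GB  0x100000000ULL

/* Largest power of two <= x (x > 0): clear low set bits until one remains. */
uint64_t
PowerOfTwoFloor (uint64_t x)
{
  while ((x & (x - 1)) != 0) {
    x &= x - 1;
  }
  return x;
}

/* UC32 base for the pc (i440fx) board: take the [LowMemory, 4GB) gap,
   truncate its size down to a whole power of two, and mark that tail of
   the 32-bit space uncacheable. LowMemory > 0 implies the truncated size
   is at most 2GB, hence the base is at least 2GB. */
uint32_t
Uc32BaseForLowMemory (uint32_t LowMemory)
{
  uint64_t Uc32Size = PowerOfTwoFloor (SIZE_4GB - LowMemory);

  return (uint32_t)(SIZE_4GB - Uc32Size);
}
```

A power-of-two size starting at a size-aligned base is exactly what a single variable MTRR can cover, which is why the truncation matters.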

#include <IndustryStandard/I440FxPiix4.h>
#include <IndustryStandard/Q35MchIch9.h>
#include <IndustryStandard/CloudHv.h>
#include <IndustryStandard/Xen/arch-x86/hvm/start_info.h>
#include <PiPei.h>
#include <Register/Intel/SmramSaveStateMap.h>

//
// The Library classes this module consumes
//
#include <Library/BaseLib.h>
#include <Library/BaseMemoryLib.h>
#include <Library/DebugLib.h>
#include <Library/HobLib.h>
#include <Library/IoLib.h>

OvmfPkg/PlatformPei: Reserve GHCB-related areas if S3 is supported
BZ: https://bugzilla.tianocore.org/show_bug.cgi?id=2198
Protect the memory used by an SEV-ES guest when S3 is supported. This
includes the page table used to break down the 2MB page that contains
the GHCB so that it can be marked un-encrypted, as well as the GHCB
area.
Regarding the lifecycle of the GHCB-related memory areas:
PcdOvmfSecGhcbPageTableBase
PcdOvmfSecGhcbBase
(a) when and how it is initialized after first boot of the VM
If SEV-ES is enabled, the GHCB-related areas are initialized during
the SEC phase [OvmfPkg/ResetVector/Ia32/PageTables64.asm].
(b) how it is protected from memory allocations during DXE
If S3 and SEV-ES are enabled, then InitializeRamRegions()
[OvmfPkg/PlatformPei/MemDetect.c] protects the ranges with an AcpiNVS
memory allocation HOB, in PEI.
If S3 is disabled, then these ranges are not protected. DXE's own page
tables are first built while still in PEI (see HandOffToDxeCore()
[MdeModulePkg/Core/DxeIplPeim/X64/DxeLoadFunc.c]). Those tables are
located in permanent PEI memory. After CR3 is switched over to them
(which occurs before jumping to the DXE core entry point), we don't have
to preserve PcdOvmfSecGhcbPageTableBase. PEI switches to GHCB pages in
permanent PEI memory and DXE will use these PEI GHCB pages, so we don't
have to preserve PcdOvmfSecGhcbBase.
(c) how it is protected from the OS
If S3 is enabled, then (b) reserves it from the OS too.
If S3 is disabled, then the range needs no protection.
(d) how it is accessed on the S3 resume path
It is rewritten same as in (a), which is fine because (b) reserved it.
(e) how it is accessed on the warm reset path
It is rewritten same as in (a).
Cc: Jordan Justen <jordan.l.justen@intel.com>
Cc: Laszlo Ersek <lersek@redhat.com>
Cc: Ard Biesheuvel <ard.biesheuvel@arm.com>
Cc: Anthony Perard <anthony.perard@citrix.com>
Cc: Julien Grall <julien@xen.org>
Reviewed-by: Laszlo Ersek <lersek@redhat.com>
Signed-off-by: Tom Lendacky <thomas.lendacky@amd.com>
Regression-tested-by: Laszlo Ersek <lersek@redhat.com>

#include <Library/MemEncryptSevLib.h>
#include <Library/PcdLib.h>
#include <Library/PciLib.h>
#include <Library/PeimEntryPoint.h>
#include <Library/ResourcePublicationLib.h>

OvmfPkg: PlatformPei: determine the 64-bit PCI host aperture for X64 DXE
The main observation about the 64-bit PCI host aperture is that it is the
highest part of the useful address space. It impacts the top of the GCD
memory space map, and, consequently, our maximum address width calculation
for the CPU HOB too.
Thus, modify the GetFirstNonAddress() function to consider the following
areas above the high RAM, while calculating the first non-address (i.e.,
the highest inclusive address, plus one):
- the memory hotplug area (optional, the size comes from QEMU),
- the 64-bit PCI host aperture (we set a default size).
While computing the first non-address, capture the base and the size of
the 64-bit PCI host aperture at once in PCDs, since they are natural parts
of the calculation.
(Similarly to how PcdPciMmio32* are not rewritten on the S3 resume path
(see the InitializePlatform() -> MemMapInitialization() condition), nor
are PcdPciMmio64*. Only the core PciHostBridgeDxe driver consumes them,
through our PciHostBridgeLib instance.)
Set 32GB as the default size for the aperture. Issue#59 mentions the
NVIDIA Tesla K80 as an assignable device. According to nvidia.com, these
cards may have 24GB of memory (probably 16GB + 8GB BARs).
As a strictly experimental feature, the user can specify the size of the
aperture (in MB) as well, with the QEMU option
-fw_cfg name=opt/ovmf/X-PciMmio64Mb,string=65536
The "X-" prefix follows the QEMU tradition (spelled "x-" there), meaning
that the property is experimental, unstable, and might go away any time.
Gerd has proposed heuristics for sizing the aperture automatically (based
on 1GB page support and PCPU address width), but such should be delayed to
a later patch (which may very well back out "X-PciMmio64Mb" then).
For "everyday" guests, the 32GB default for the aperture size shouldn't
impact the PEI memory demand (the size of the page tables that the DXE IPL
PEIM builds). Namely, we've never reported narrower than 36-bit addresses;
the DXE IPL PEIM has always built page tables for 64GB at least.
For the aperture to bump the address width above 36 bits, either the guest
must have quite a bit of memory itself (in which case the additional PEI
memory demand shouldn't matter), or the user must specify a large aperture
manually with "X-PciMmio64Mb" (and then he or she is also responsible for
giving enough RAM to the VM, to satisfy the PEI memory demand).
Cc: Gerd Hoffmann <kraxel@redhat.com>
Cc: Jordan Justen <jordan.l.justen@intel.com>
Cc: Marcel Apfelbaum <marcel@redhat.com>
Cc: Thomas Lamprecht <t.lamprecht@proxmox.com>
Ref: https://github.com/tianocore/edk2/issues/59
Ref: http://www.nvidia.com/object/tesla-servers.html
Contributed-under: TianoCore Contribution Agreement 1.0
Signed-off-by: Laszlo Ersek <lersek@redhat.com>
Reviewed-by: Jordan Justen <jordan.l.justen@intel.com>
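The placement described above — the 64-bit aperture sits above high RAM and the optional hotplug area, and bumps the first non-address — can be sketched as follows. Aligning the base to the aperture size is a simplification of what GetFirstNonAddress() does, and all names here are illustrative:

```c
#include <stdint.h>

/* Align Value up to a power-of-two Alignment. */
uint64_t
AlignUp (uint64_t Value, uint64_t Alignment)
{
  return (Value + Alignment - 1) & ~(Alignment - 1);
}

/* Place the 64-bit PCI host aperture above everything already accounted
   for (high RAM, then the optional memory hotplug area), aligned to its
   own size; store the base and return the new first non-address past the
   aperture, which then drives GCD sizing and the CPU HOB. */
uint64_t
PlaceMmio64 (uint64_t FirstNonAddress, uint64_t HotplugSize,
             uint64_t ApertureSize, uint64_t *ApertureBase)
{
  *ApertureBase = AlignUp (FirstNonAddress + HotplugSize, ApertureSize);
  return *ApertureBase + ApertureSize;
}
```

With the default 32GB aperture, a 5GB guest ends up with the aperture at 32GB and a 64GB first non-address, i.e. the address width stays at the 36 bits that the DXE IPL PEIM has always mapped.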

#include <Library/QemuFwCfgLib.h>
#include <Library/QemuFwCfgSimpleParserLib.h>

#include "Platform.h"

/**
  Query QEMU for an extended TSEG, and store the TSEG size in megabytes in
  PlatformInfoHob->Q35TsegMbytes.
**/
VOID
Q35TsegMbytesInitialization (
  IN OUT EFI_HOB_PLATFORM_INFO  *PlatformInfoHob
  )
{
  UINT16         ExtendedTsegMbytes;
  RETURN_STATUS  PcdStatus;

  ASSERT (PlatformInfoHob->HostBridgeDevId == INTEL_Q35_MCH_DEVICE_ID);

  //
  // Check if QEMU offers an extended TSEG.
  //
  // This can be seen from writing MCH_EXT_TSEG_MB_QUERY to the MCH_EXT_TSEG_MB
  // register, and reading back the register.
  //
  // On a QEMU machine type that does not offer an extended TSEG, the initial
  // write overwrites whatever value a malicious guest OS may have placed in
  // the (unimplemented) register, before entering S3 or rebooting.
  // Subsequently, the read returns MCH_EXT_TSEG_MB_QUERY unchanged.
  //
  // On a QEMU machine type that offers an extended TSEG, the initial write
  // triggers an update to the register. Subsequently, the value read back
  // (which is guaranteed to differ from MCH_EXT_TSEG_MB_QUERY) tells us the
  // number of megabytes.
  //
  PciWrite16 (DRAMC_REGISTER_Q35 (MCH_EXT_TSEG_MB), MCH_EXT_TSEG_MB_QUERY);
  ExtendedTsegMbytes = PciRead16 (DRAMC_REGISTER_Q35 (MCH_EXT_TSEG_MB));
  if (ExtendedTsegMbytes == MCH_EXT_TSEG_MB_QUERY) {
    PlatformInfoHob->Q35TsegMbytes = PcdGet16 (PcdQ35TsegMbytes);
    return;
  }

  DEBUG ((
    DEBUG_INFO,
    "%a: QEMU offers an extended TSEG (%d MB)\n",
    __func__,
    ExtendedTsegMbytes
    ));
  PcdStatus = PcdSet16S (PcdQ35TsegMbytes, ExtendedTsegMbytes);
  ASSERT_RETURN_ERROR (PcdStatus);
  PlatformInfoHob->Q35TsegMbytes = ExtendedTsegMbytes;
}
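The write-then-read protocol that the comments in Q35TsegMbytesInitialization() describe can be modeled with a toy register. This is a sketch with made-up names; the real code talks to the DRAM controller's PCI config space via PciWrite16/PciRead16:

```c
#include <stdint.h>

#define MCH_EXT_TSEG_MB_QUERY  0xFFFFu

/* Toy model of the MCH_EXT_TSEG_MB register for both machine types
   (made-up struct; not the real config-space access). */
typedef struct {
  int      Implemented;  /* does this machine type offer an extended TSEG? */
  uint16_t Mbytes;       /* extended TSEG size advertised by QEMU          */
  uint16_t Reg;          /* current register contents                      */
} TsegReg;

void
RegWrite16 (TsegReg *R, uint16_t V)
{
  if (R->Implemented && (V == MCH_EXT_TSEG_MB_QUERY)) {
    R->Reg = R->Mbytes;  /* query write triggers an update */
  } else {
    R->Reg = V;          /* unimplemented register: plain storage, so the
                            query value reads back unchanged */
  }
}

/* Detection protocol: write the query value, then read back; an unchanged
   value means the machine type does not offer an extended TSEG. */
uint16_t
QueryExtendedTseg (TsegReg *R)
{
  RegWrite16 (R, MCH_EXT_TSEG_MB_QUERY);
  return R->Reg;
}
```

Note how the unconditional write doubles as sanitization: on old machine types it clobbers whatever a malicious guest OS left in the register before S3 or reboot.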

OvmfPkg/PlatformPei: detect SMRAM at default SMBASE (for real)
Now that the SMRAM at the default SMBASE is honored everywhere necessary,
implement the actual detection. The (simple) steps are described in
previous patch "OvmfPkg/IndustryStandard: add MCH_DEFAULT_SMBASE* register
macros".
Regarding CSM_ENABLE builds: according to the discussion with Jiewen at
https://edk2.groups.io/g/devel/message/48082
http://mid.mail-archive.com/74D8A39837DF1E4DA445A8C0B3885C503F7C9D2F@shsmsx102.ccr.corp.intel.com
if the platform has SMRAM at the default SMBASE, then we have to
(a) either punch a hole in the legacy E820 map as well, in
LegacyBiosBuildE820() [OvmfPkg/Csm/LegacyBiosDxe/LegacyBootSupport.c],
(b) or document, or programmatically catch, the incompatibility between
the "SMRAM at default SMBASE" and "CSM" features.
Because CSM is out of scope for the larger "VCPU hotplug with SMM"
feature, option (b) applies. Therefore, if the CSM is enabled in the OVMF
build, then PlatformPei will not attempt to detect SMRAM at the default
SMBASE, at all. This is approach (4) -- the most flexible one, for
end-users -- from:
http://mid.mail-archive.com/868dcff2-ecaa-e1c6-f018-abe7087d640c@redhat.com
https://edk2.groups.io/g/devel/message/48348
Cc: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Cc: Jiewen Yao <jiewen.yao@intel.com>
Cc: Jordan Justen <jordan.l.justen@intel.com>
Ref: https://bugzilla.tianocore.org/show_bug.cgi?id=1512
Signed-off-by: Laszlo Ersek <lersek@redhat.com>
Message-Id: <20200129214412.2361-12-lersek@redhat.com>
Reviewed-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>

VOID
Q35SmramAtDefaultSmbaseInitialization (
  IN OUT EFI_HOB_PLATFORM_INFO  *PlatformInfoHob
  )
{
  RETURN_STATUS  PcdStatus;

  ASSERT (PlatformInfoHob->HostBridgeDevId == INTEL_Q35_MCH_DEVICE_ID);

  PlatformInfoHob->Q35SmramAtDefaultSmbase = FALSE;
  if (FeaturePcdGet (PcdCsmEnable)) {
    DEBUG ((
      DEBUG_INFO,
      "%a: SMRAM at default SMBASE not checked due to CSM\n",
      __func__
      ));
  } else {
    UINTN  CtlReg;
    UINT8  CtlRegVal;

    CtlReg = DRAMC_REGISTER_Q35 (MCH_DEFAULT_SMBASE_CTL);
    PciWrite8 (CtlReg, MCH_DEFAULT_SMBASE_QUERY);
    CtlRegVal                                = PciRead8 (CtlReg);
    PlatformInfoHob->Q35SmramAtDefaultSmbase = (BOOLEAN)(CtlRegVal ==
                                                         MCH_DEFAULT_SMBASE_IN_RAM);
    DEBUG ((
      DEBUG_INFO,
      "%a: SMRAM at default SMBASE %a\n",
      __func__,
      PlatformInfoHob->Q35SmramAtDefaultSmbase ? "found" : "not found"
      ));
  }

  PcdStatus = PcdSetBoolS (
                PcdQ35SmramAtDefaultSmbase,
                PlatformInfoHob->Q35SmramAtDefaultSmbase
                );
  ASSERT_RETURN_ERROR (PcdStatus);
}

/**
  Initialize the PhysMemAddressWidth field in PlatformInfoHob based on the
  guest RAM size.
**/
VOID
AddressWidthInitialization (
  IN OUT EFI_HOB_PLATFORM_INFO  *PlatformInfoHob
  )
{
  RETURN_STATUS  PcdStatus;

  PlatformAddressWidthInitialization (PlatformInfoHob);

  //
  // If DXE is 32-bit, then we're done; PciBusDxe will degrade 64-bit MMIO
  // resources to 32-bit anyway. See DegradeResource() in
  // "PciResourceSupport.c".
  //
 #ifdef MDE_CPU_IA32
  if (!FeaturePcdGet (PcdDxeIplSwitchToLongMode)) {
    return;
  }

 #endif

  if (PlatformInfoHob->PcdPciMmio64Size == 0) {
    if (PlatformInfoHob->BootMode != BOOT_ON_S3_RESUME) {
      DEBUG ((
        DEBUG_INFO,
        "%a: disabling 64-bit PCI host aperture\n",
        __func__
        ));
      PcdStatus = PcdSet64S (PcdPciMmio64Size, 0);
      ASSERT_RETURN_ERROR (PcdStatus);
    }

    return;
  }

  if (PlatformInfoHob->BootMode != BOOT_ON_S3_RESUME) {
    //
    // The core PciHostBridgeDxe driver will automatically add this range to
    // the GCD memory space map through our PciHostBridgeLib instance; here we
    // only need to set the PCDs.
    //
    PcdStatus = PcdSet64S (PcdPciMmio64Base, PlatformInfoHob->PcdPciMmio64Base);
    ASSERT_RETURN_ERROR (PcdStatus);
    PcdStatus = PcdSet64S (PcdPciMmio64Size, PlatformInfoHob->PcdPciMmio64Size);
    ASSERT_RETURN_ERROR (PcdStatus);

    DEBUG ((
      DEBUG_INFO,
      "%a: Pci64Base=0x%Lx Pci64Size=0x%Lx\n",
      __func__,
      PlatformInfoHob->PcdPciMmio64Base,
      PlatformInfoHob->PcdPciMmio64Size
      ));
  }
}
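The address width that AddressWidthInitialization() ultimately records is driven by the highest address the platform must map. A minimal, illustrative sketch of the ceil-log2 derivation with OVMF's historical floor ("we've never reported narrower than 36-bit addresses", per the aperture commit message above); the function name is hypothetical:

```c
#include <stdint.h>

/* Smallest address width whose space covers [0, FirstNonAddress),
   clamped to at least 36 bits (OVMF has never reported narrower). */
unsigned
AddressWidthFor (uint64_t FirstNonAddress)
{
  unsigned Bits = 0;

  while ((Bits < 64) && ((1ULL << Bits) < FirstNonAddress)) {
    Bits++;  /* find the smallest power of two covering the top address */
  }

  return (Bits < 36) ? 36 : Bits;
}
```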

/**
  Calculate the cap for the permanent PEI memory.
**/
STATIC
UINT32
GetPeiMemoryCap (
  IN EFI_HOB_PLATFORM_INFO  *PlatformInfoHob
  )
{
  BOOLEAN  Page1GSupport;
  UINT32   RegEax;
  UINT32   RegEdx;
  UINT32   Pml4Entries;
  UINT32   PdpEntries;
  UINTN    TotalPages;

  //
  // If DXE is 32-bit, then just return the traditional 64 MB cap.
  //
 #ifdef MDE_CPU_IA32
  if (!FeaturePcdGet (PcdDxeIplSwitchToLongMode)) {
    return SIZE_64MB;
  }

 #endif

  //
  // Dependent on physical address width, PEI memory allocations can be
  // dominated by the page tables built for 64-bit DXE. So we key the cap off
  // of those. The code below is based on CreateIdentityMappingPageTables() in
  // "MdeModulePkg/Core/DxeIplPeim/X64/VirtualMemory.c".
  //
  Page1GSupport = FALSE;
  if (PcdGetBool (PcdUse1GPageTable)) {
    AsmCpuid (0x80000000, &RegEax, NULL, NULL, NULL);
    if (RegEax >= 0x80000001) {
      AsmCpuid (0x80000001, NULL, NULL, NULL, &RegEdx);
      if ((RegEdx & BIT26) != 0) {
        Page1GSupport = TRUE;
      }
    }
  }

  if (PlatformInfoHob->PhysMemAddressWidth <= 39) {
    Pml4Entries = 1;
    PdpEntries  = 1 << (PlatformInfoHob->PhysMemAddressWidth - 30);
    ASSERT (PdpEntries <= 0x200);
  } else {
    if (PlatformInfoHob->PhysMemAddressWidth > 48) {
      Pml4Entries = 0x200;
    } else {
      Pml4Entries = 1 << (PlatformInfoHob->PhysMemAddressWidth - 39);
    }

    ASSERT (Pml4Entries <= 0x200);
    PdpEntries = 512;
  }

  TotalPages = Page1GSupport ? Pml4Entries + 1 :
               (PdpEntries + 1) * Pml4Entries + 1;
  ASSERT (TotalPages <= 0x40201);

  //
  // Add 64 MB for miscellaneous allocations. Note that for
  // PhysMemAddressWidth values close to 36, the cap will actually be
  // dominated by this increment.
  //
  return (UINT32)(EFI_PAGES_TO_SIZE (TotalPages) + SIZE_64MB);
}
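The page-count arithmetic in GetPeiMemoryCap() can be checked standalone. This sketch mirrors that logic on plain integers instead of the PlatformInfoHob: one PML4 page, plus — without 1GB pages — one PDP page per PML4 entry and one page-directory page per PDP entry:

```c
#include <stdint.h>

/* Pages needed by CreateIdentityMappingPageTables() for the given physical
   address width, per the logic in GetPeiMemoryCap(): with 1GB pages only
   the PML4 page and the PDP pages are needed; without them, each PDP entry
   additionally needs a page-directory page. */
uint64_t
DxePageTablePages (unsigned PhysMemAddressWidth, int Page1GSupport)
{
  uint64_t Pml4Entries;
  uint64_t PdpEntries;

  if (PhysMemAddressWidth <= 39) {
    Pml4Entries = 1;
    PdpEntries  = 1ULL << (PhysMemAddressWidth - 30);  /* 1GB per PDP entry */
  } else {
    Pml4Entries = (PhysMemAddressWidth > 48) ?
                  0x200 : 1ULL << (PhysMemAddressWidth - 39);
    PdpEntries = 512;
  }

  return Page1GSupport ? Pml4Entries + 1
                       : (PdpEntries + 1) * Pml4Entries + 1;
}
```

The 48-bit, no-1GB-page case yields 0x40201 pages, which is exactly the upper bound the function's ASSERT enforces; at 36 bits the 64 MB "miscellaneous" increment clearly dominates the roughly 66 pages of tables.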

/**
  Publish PEI core memory

  @return EFI_SUCCESS     The PEIM initialized successfully.

**/
EFI_STATUS
PublishPeiMemory (
  IN OUT EFI_HOB_PLATFORM_INFO  *PlatformInfoHob
  )
{
  EFI_STATUS            Status;
  EFI_PHYSICAL_ADDRESS  MemoryBase;
  UINT64                MemorySize;
  UINT32                LowerMemorySize;
  UINT32                PeiMemoryCap;
  UINT32                S3AcpiReservedMemoryBase;
  UINT32                S3AcpiReservedMemorySize;

  PlatformGetSystemMemorySizeBelow4gb (PlatformInfoHob);
  LowerMemorySize = PlatformInfoHob->LowMemory;
  if (PlatformInfoHob->SmmSmramRequire) {
    //
    // TSEG is chipped from the end of low RAM
    //
    LowerMemorySize -= PlatformInfoHob->Q35TsegMbytes * SIZE_1MB;
  }

  S3AcpiReservedMemoryBase = 0;
  S3AcpiReservedMemorySize = 0;

  //
  // If S3 is supported, then the S3 permanent PEI memory is placed next,
  // downwards. Its size is primarily dictated by CpuMpPei. The formula below
  // is an approximation.
  //
  if (PlatformInfoHob->S3Supported) {
    S3AcpiReservedMemorySize = SIZE_512KB +
                               PlatformInfoHob->PcdCpuMaxLogicalProcessorNumber *
                               PcdGet32 (PcdCpuApStackSize);
    S3AcpiReservedMemoryBase = LowerMemorySize - S3AcpiReservedMemorySize;
    LowerMemorySize          = S3AcpiReservedMemoryBase;
  }

  PlatformInfoHob->S3AcpiReservedMemoryBase = S3AcpiReservedMemoryBase;
  PlatformInfoHob->S3AcpiReservedMemorySize = S3AcpiReservedMemorySize;

  if (PlatformInfoHob->BootMode == BOOT_ON_S3_RESUME) {
    MemoryBase = S3AcpiReservedMemoryBase;
    MemorySize = S3AcpiReservedMemorySize;
  } else {
    PeiMemoryCap = GetPeiMemoryCap (PlatformInfoHob);
    DEBUG ((
      DEBUG_INFO,
      "%a: PhysMemAddressWidth=%d PeiMemoryCap=%u KB\n",
      __func__,
      PlatformInfoHob->PhysMemAddressWidth,
      PeiMemoryCap >> 10
      ));

    //
    // Determine the range of memory to use during PEI
    //
OvmfPkg: decompress FVs on S3 resume if SMM_REQUIRE is set
If OVMF was built with -D SMM_REQUIRE, that implies that the runtime OS is
not trusted and we should defend against it tampering with the firmware's
data.
One such datum is the PEI firmware volume (PEIFV). Normally PEIFV is
decompressed on the first boot by SEC, then the OS preserves it across S3
suspend-resume cycles; at S3 resume SEC just reuses the originally
decompressed PEIFV.
However, if we don't trust the OS, then SEC must decompress PEIFV from the
pristine flash every time, lest we execute OS-injected code or work with
OS-injected data.
Due to how FVMAIN_COMPACT is organized, we can't decompress just PEIFV;
the decompression brings DXEFV with itself, plus it uses a temporary
output buffer and a scratch buffer too, which even reach above the end of
the finally installed DXEFV. For this reason we must keep away a
non-malicious OS from DXEFV too, plus the memory up to
PcdOvmfDecomprScratchEnd.
The delay introduced by the LZMA decompression on S3 resume is negligible.
If -D SMM_REQUIRE is not specified, then PcdSmmSmramRequire remains FALSE
(from the DEC file), and then this patch has no effect (not counting some
changed debug messages).
If QEMU doesn't support S3 (or the user disabled it on the QEMU command
line), then this patch has no effect also.
Contributed-under: TianoCore Contribution Agreement 1.0
Signed-off-by: Laszlo Ersek <lersek@redhat.com>
Reviewed-by: Jordan Justen <jordan.l.justen@intel.com>
git-svn-id: https://svn.code.sf.net/p/edk2/code/trunk/edk2@19037 6f19259b-4bc3-4df7-8a09-765794883524
2015-11-30 19:41:24 +01:00
|
|
|
// Technically we could lay the permanent PEI RAM over SEC's temporary
|
|
|
|
// decompression and scratch buffer even if "secure S3" is needed, since
|
|
|
|
// their lifetimes don't overlap. However, PeiFvInitialization() will cover
|
|
|
|
// RAM up to PcdOvmfDecompressionScratchEnd with an EfiACPIMemoryNVS memory
|
|
|
|
// allocation HOB, and other allocations served from the permanent PEI RAM
|
|
|
|
// shouldn't overlap with that HOB.
|
|
|
|
//
|
2022-12-02 14:10:01 +01:00
|
|
|
MemoryBase = PlatformInfoHob->S3Supported && PlatformInfoHob->SmmSmramRequire ?
|
OvmfPkg: decompress FVs on S3 resume if SMM_REQUIRE is set
If OVMF was built with -D SMM_REQUIRE, that implies that the runtime OS is
not trusted and we should defend against it tampering with the firmware's
data.
One such datum is the PEI firmware volume (PEIFV). Normally PEIFV is
decompressed on the first boot by SEC, then the OS preserves it across S3
suspend-resume cycles; at S3 resume SEC just reuses the originally
decompressed PEIFV.
However, if we don't trust the OS, then SEC must decompress PEIFV from the
pristine flash every time, lest we execute OS-injected code or work with
OS-injected data.
Due to how FVMAIN_COMPACT is organized, we can't decompress just PEIFV;
the decompression brings DXEFV with itself, plus it uses a temporary
output buffer and a scratch buffer too, which even reach above the end of
the finally installed DXEFV. For this reason we must keep away a
non-malicious OS from DXEFV too, plus the memory up to
PcdOvmfDecomprScratchEnd.
The delay introduced by the LZMA decompression on S3 resume is negligible.
If -D SMM_REQUIRE is not specified, then PcdSmmSmramRequire remains FALSE
(from the DEC file), and then this patch has no effect (not counting some
changed debug messages).
If QEMU doesn't support S3 (or the user disabled it on the QEMU command
line), then this patch has no effect also.
Contributed-under: TianoCore Contribution Agreement 1.0
Signed-off-by: Laszlo Ersek <lersek@redhat.com>
Reviewed-by: Jordan Justen <jordan.l.justen@intel.com>
git-svn-id: https://svn.code.sf.net/p/edk2/code/trunk/edk2@19037 6f19259b-4bc3-4df7-8a09-765794883524
2015-11-30 19:41:24 +01:00
|
|
|
PcdGet32 (PcdOvmfDecompressionScratchEnd) :
|
|
|
|
PcdGet32 (PcdOvmfDxeMemFvBase) + PcdGet32 (PcdOvmfDxeMemFvSize);
|
2014-03-04 09:02:16 +01:00
|
|
|
MemorySize = LowerMemorySize - MemoryBase;
|
2015-06-26 18:09:39 +02:00
|
|
|
if (MemorySize > PeiMemoryCap) {
|
|
|
|
MemoryBase = LowerMemorySize - PeiMemoryCap;
|
|
|
|
MemorySize = PeiMemoryCap;
|
2014-03-04 09:02:16 +01:00
|
|
|
}
|
2013-12-08 02:36:07 +01:00
|
|
|
}
|
|
|
|
|
2019-09-20 17:07:43 +02:00
|
|
|
//
|
|
|
|
// MEMFD_BASE_ADDRESS separates the SMRAM at the default SMBASE from the
|
|
|
|
// normal boot permanent PEI RAM. Regarding the S3 boot path, the S3
|
|
|
|
// permanent PEI RAM is located even higher.
|
|
|
|
//
|
2022-12-02 14:10:01 +01:00
|
|
|
if (PlatformInfoHob->SmmSmramRequire && PlatformInfoHob->Q35SmramAtDefaultSmbase) {
|
2019-09-20 17:07:43 +02:00
|
|
|
ASSERT (SMM_DEFAULT_SMBASE + MCH_DEFAULT_SMBASE_SIZE <= MemoryBase);
|
|
|
|
}
|
|
|
|
|
2013-12-08 02:36:07 +01:00
|
|
|
//
|
|
|
|
// Publish this memory to the PEI Core
|
|
|
|
//
|
|
|
|
Status = PublishSystemMemory (MemoryBase, MemorySize);
|
|
|
|
ASSERT_EFI_ERROR (Status);
|
|
|
|
|
|
|
|
return Status;
|
|
|
|
}
|
|
|
|
|
2022-03-07 14:54:30 +01:00
|
|
|
/**
|
|
|
|
Publish system RAM and reserve memory regions
|
|
|
|
|
|
|
|
**/
|
|
|
|
VOID
|
|
|
|
InitializeRamRegions (
|
|
|
|
IN EFI_HOB_PLATFORM_INFO *PlatformInfoHob
|
|
|
|
)
|
|
|
|
{
|
2022-01-20 04:04:17 +01:00
|
|
|
if (TdIsEnabled ()) {
|
|
|
|
PlatformTdxPublishRamRegions ();
|
|
|
|
return;
|
|
|
|
}
|
|
|
|
|
2022-03-07 14:54:30 +01:00
|
|
|
PlatformQemuInitializeRam (PlatformInfoHob);
|
|
|
|
|
|
|
|
SevInitializeRam ();
|
|
|
|
|
|
|
|
PlatformQemuInitializeRamForS3 (PlatformInfoHob);
|
|
|
|
}
|