The header file declares the InitLinuxDescriptorTables() and
SetLinuxDescriptorTables() functions, which are called from "Linux.c" and
implemented in "LinuxGdt.c".
The header file also declares the JumpToKernel() and JumpToUefiKernel()
functions, which are similarly called from "Linux.c". They are implemented
(dependent on architecture) in "Ia32/JumpToKernel.nasm" and
"X64/JumpToKernel.nasm".
Cc: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Cc: Jordan Justen <jordan.l.justen@intel.com>
Suggested-by: Michael Kinney <michael.d.kinney@intel.com>
Ref: http://mid.mail-archive.com/E92EE9817A31E24EB0585FDF735412F56327F7D3@ORSMSX113.amr.corp.intel.com
Contributed-under: TianoCore Contribution Agreement 1.1
Signed-off-by: Laszlo Ersek <lersek@redhat.com>
Reviewed-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Reviewed-by: Jordan Justen <jordan.l.justen@intel.com>
Among other things, the header file declares the
InternalMemEncryptSevSetMemoryDecrypted() and
InternalMemEncryptSevSetMemoryEncrypted() functions. The functions are
called from "X64/MemEncryptSevLib.c", and defined in
"X64/VirtualMemory.c".
Cc: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Cc: Brijesh Singh <brijesh.singh@amd.com>
Cc: Jordan Justen <jordan.l.justen@intel.com>
Suggested-by: Michael Kinney <michael.d.kinney@intel.com>
Ref: http://mid.mail-archive.com/E92EE9817A31E24EB0585FDF735412F56327F7D3@ORSMSX113.amr.corp.intel.com
Contributed-under: TianoCore Contribution Agreement 1.1
Signed-off-by: Laszlo Ersek <lersek@redhat.com>
Reviewed-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Reviewed-by: Jordan Justen <jordan.l.justen@intel.com>
The header file declares the InternalAcpiGetTimerTick() function. The
function is called from "AcpiTimerLib.c", which is built into all three
library instances. The function is defined individually per library
instance, in "BaseRomAcpiTimerLib.c", "BaseAcpiTimerLib.c", and
"DxeAcpiTimerLib.c" (enumerated in increasing firmware phase order).
Cc: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Cc: Jordan Justen <jordan.l.justen@intel.com>
Suggested-by: Michael Kinney <michael.d.kinney@intel.com>
Ref: http://mid.mail-archive.com/E92EE9817A31E24EB0585FDF735412F56327F7D3@ORSMSX113.amr.corp.intel.com
Contributed-under: TianoCore Contribution Agreement 1.1
Signed-off-by: Laszlo Ersek <lersek@redhat.com>
Reviewed-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Reviewed-by: Jordan Justen <jordan.l.justen@intel.com>
The header file declares the AmdSevInstallIoMmuProtocol() function, which
is implemented in "AmdSevIoMmu.c" and called from "IoMmuDxe.c".
Cc: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Cc: Brijesh Singh <brijesh.singh@amd.com>
Cc: Jordan Justen <jordan.l.justen@intel.com>
Suggested-by: Michael Kinney <michael.d.kinney@intel.com>
Ref: http://mid.mail-archive.com/E92EE9817A31E24EB0585FDF735412F56327F7D3@ORSMSX113.amr.corp.intel.com
Contributed-under: TianoCore Contribution Agreement 1.1
Signed-off-by: Laszlo Ersek <lersek@redhat.com>
Reviewed-by: Brijesh Singh <brijesh.singh@amd.com>
Reviewed-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Reviewed-by: Jordan Justen <jordan.l.justen@intel.com>
Among other things, the header file provides (extern) declarations for the
EFI_FIRMWARE_VOLUME_BLOCK_PROTOCOL member functions that are defined in
"Fvb.c". This way "mEmuVarsFvb.FwVolBlockInstance" can be initialized near
the top of "Fvb.c", ahead of the member function definitions.
Cc: Anthony Perard <anthony.perard@citrix.com>
Cc: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Cc: Jordan Justen <jordan.l.justen@intel.com>
Cc: Julien Grall <julien.grall@linaro.org>
Suggested-by: Michael Kinney <michael.d.kinney@intel.com>
Ref: http://mid.mail-archive.com/E92EE9817A31E24EB0585FDF735412F56327F7D3@ORSMSX113.amr.corp.intel.com
Contributed-under: TianoCore Contribution Agreement 1.1
Signed-off-by: Laszlo Ersek <lersek@redhat.com>
Reviewed-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Reviewed-by: Jordan Justen <jordan.l.justen@intel.com>
Among other things, the header file provides (extern) declarations for the
EFI_LEGACY_REGION2_PROTOCOL member functions that are defined in
"LegacyRegion.c". This way "mLegacyRegion2" can be initialized near the
top of "LegacyRegion.c", ahead of the member function definitions.
Cc: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Cc: Jordan Justen <jordan.l.justen@intel.com>
Suggested-by: Michael Kinney <michael.d.kinney@intel.com>
Ref: http://mid.mail-archive.com/E92EE9817A31E24EB0585FDF735412F56327F7D3@ORSMSX113.amr.corp.intel.com
Contributed-under: TianoCore Contribution Agreement 1.1
Signed-off-by: Laszlo Ersek <lersek@redhat.com>
Reviewed-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Reviewed-by: Jordan Justen <jordan.l.justen@intel.com>
Among other things, the header file provides (extern) declarations for the
EFI_LEGACY_INTERRUPT_PROTOCOL member functions that are defined in
"LegacyInterrupt.c". This way "mLegacyInterrupt" can be initialized near
the top of "LegacyInterrupt.c", ahead of the member function definitions.
Cc: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Cc: Jordan Justen <jordan.l.justen@intel.com>
Suggested-by: Michael Kinney <michael.d.kinney@intel.com>
Ref: http://mid.mail-archive.com/E92EE9817A31E24EB0585FDF735412F56327F7D3@ORSMSX113.amr.corp.intel.com
Contributed-under: TianoCore Contribution Agreement 1.1
Signed-off-by: Laszlo Ersek <lersek@redhat.com>
Reviewed-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Reviewed-by: Jordan Justen <jordan.l.justen@intel.com>
The header file declares the functions LegacyRegionInit(),
LegacyInterruptInstall() and LegacyBiosPlatformInstall(). They are defined
in "LegacyRegion.c", "LegacyInterrupt.c", and "LegacyPlatform.c",
respectively, and are all called from CsmSupportLibConstructor() in
"CsmSupportLib.c".
Cc: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Cc: Jordan Justen <jordan.l.justen@intel.com>
Suggested-by: Michael Kinney <michael.d.kinney@intel.com>
Ref: http://mid.mail-archive.com/E92EE9817A31E24EB0585FDF735412F56327F7D3@ORSMSX113.amr.corp.intel.com
Contributed-under: TianoCore Contribution Agreement 1.1
Signed-off-by: Laszlo Ersek <lersek@redhat.com>
Reviewed-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Reviewed-by: Jordan Justen <jordan.l.justen@intel.com>
Among other things, the header file declares the
"gBlockMmioToBlockIoComponentName" and "gBlockMmioToBlockIoComponentName2"
protocol instance structures. They are defined and initialized in
"ComponentName.c" and installed in "BlockIo.c".
Cc: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Cc: Jordan Justen <jordan.l.justen@intel.com>
Suggested-by: Michael Kinney <michael.d.kinney@intel.com>
Ref: http://mid.mail-archive.com/E92EE9817A31E24EB0585FDF735412F56327F7D3@ORSMSX113.amr.corp.intel.com
Contributed-under: TianoCore Contribution Agreement 1.1
Signed-off-by: Laszlo Ersek <lersek@redhat.com>
Reviewed-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Reviewed-by: Jordan Justen <jordan.l.justen@intel.com>
"QemuLoader.h" defines the command structures of QEMU's ACPI
linker/loader. The client code is in "QemuFwCfgAcpi.c", which is part of
both builds of this driver.
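For orientation, a hedged, simplified sketch of the kind of command
structure involved (the field names and the file name size are
illustrative, not a verbatim copy of "QemuLoader.h"):

  #define QEMU_LOADER_FNAME_SIZE  56

  typedef struct {
    UINT8   File[QEMU_LOADER_FNAME_SIZE];  // NUL-terminated fw_cfg file name
    UINT32  ResultOffset;                  // where to store the checksum
    UINT32  Start;                         // first byte to checksum
    UINT32  Length;                        // number of bytes to checksum
  } QEMU_LOADER_ADD_CHECKSUM;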
Cc: Anthony Perard <anthony.perard@citrix.com>
Cc: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Cc: Jordan Justen <jordan.l.justen@intel.com>
Cc: Julien Grall <julien.grall@linaro.org>
Cc: Phil Dennis-Jordan <phil@philjordan.eu>
Suggested-by: Michael Kinney <michael.d.kinney@intel.com>
Ref: http://mid.mail-archive.com/E92EE9817A31E24EB0585FDF735412F56327F7D3@ORSMSX113.amr.corp.intel.com
Contributed-under: TianoCore Contribution Agreement 1.1
Signed-off-by: Laszlo Ersek <lersek@redhat.com>
Reviewed-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Reviewed-by: Jordan Justen <jordan.l.justen@intel.com>
We added initial support for QEMU's ACPI linker/loader in commit
a618eaa1f4 ("OvmfPkg: AcpiPlatformDxe: don't rely on unstable QEMU
interface", 2014-06-19). This commit defined the command structures in the
new file "QemuLoader.h", and #included the header in the preexistent
"Qemu.c" file, where the initial command script processing loop was being
implemented.
In commit 14b0faadfc ("OvmfPkg/AcpiPlatformDxe: Split QEMU fw-cfg into a
new file", 2015-02-02), we extracted the -- by then, more advanced --
linker/loader script processing from "Qemu.c" to "QemuFwCfgAcpi.c".
"Qemu.c" was going to need "QemuLoader.h" no longer, but we forgot to make
the #include directive unique to the new "QemuFwCfgAcpi.c" file. Do it
now.
Cc: Anthony Perard <anthony.perard@citrix.com>
Cc: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Cc: Jordan Justen <jordan.l.justen@intel.com>
Cc: Julien Grall <julien.grall@linaro.org>
Cc: Phil Dennis-Jordan <phil@philjordan.eu>
Contributed-under: TianoCore Contribution Agreement 1.1
Signed-off-by: Laszlo Ersek <lersek@redhat.com>
Reviewed-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Reviewed-by: Jordan Justen <jordan.l.justen@intel.com>
Among other things, the header file declares InstallAcpiTables(). This
function is called from AcpiPlatformEntryPoint() -- the entry point of
both INF files, defined in the common "EntryPoint.c" file --, and it is
defined (dependent on INF file) in "AcpiPlatform.c" or
"QemuFwCfgAcpiPlatform.c".
Cc: Anthony Perard <anthony.perard@citrix.com>
Cc: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Cc: Jordan Justen <jordan.l.justen@intel.com>
Cc: Julien Grall <julien.grall@linaro.org>
Cc: Phil Dennis-Jordan <phil@philjordan.eu>
Suggested-by: Michael Kinney <michael.d.kinney@intel.com>
Ref: http://mid.mail-archive.com/E92EE9817A31E24EB0585FDF735412F56327F7D3@ORSMSX113.amr.corp.intel.com
Contributed-under: TianoCore Contribution Agreement 1.1
Signed-off-by: Laszlo Ersek <lersek@redhat.com>
Reviewed-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Reviewed-by: Jordan Justen <jordan.l.justen@intel.com>
This makes it easier to insert future source files, and to see the source
files that are unique to either INF file. No functional changes.
Cc: Anthony Perard <anthony.perard@citrix.com>
Cc: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Cc: Jordan Justen <jordan.l.justen@intel.com>
Cc: Julien Grall <julien.grall@linaro.org>
Cc: Phil Dennis-Jordan <phil@philjordan.eu>
Contributed-under: TianoCore Contribution Agreement 1.1
Signed-off-by: Laszlo Ersek <lersek@redhat.com>
Reviewed-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Reviewed-by: Jordan Justen <jordan.l.justen@intel.com>
- the @file comment block should match between INF and main C file
- rewrap / refill columns to 79 characters
- insert space before opening paren
- prefix and suffix //-style comment block with empty // lines
- fix indentation of arguments in multi-line function call
- general tab spacing (indent step) is 2 in edk2, unlike QEMU's 4
No functional changes.
Cc: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Cc: Jordan Justen <jordan.l.justen@intel.com>
Cc: Marc-André Lureau <marcandre.lureau@redhat.com>
Contributed-under: TianoCore Contribution Agreement 1.1
Signed-off-by: Laszlo Ersek <lersek@redhat.com>
Reviewed-by: Jordan Justen <jordan.l.justen@intel.com>
Commit 24e4ad7 (OvmfPkg: Add AmdSevDxe driver) added a driver which runs
early in the DXE phase and clears the C-bit from the NonExistent entry --
which is later split to accommodate the flash MMIO. When SMM is enabled, we
build two sets of page tables: the first page table is used when executing
code outside of SMM (SMM-less-pgtable), and the second page table is used
when executing code in SMM mode (SMM-pgtable).
During boot, the AmdSevDxe driver clears the C-bit from the
SMM-less-pgtable. But when SMM is enabled, the QEMU flash services are used
from SMM mode.
In this patch we explicitly clear the C-bit from the QEMU flash MMIO range
before we probe the flash. When OVMF is built with SMM_REQUIRE, the call to
initialize the flash services happens after the SMM-pgtable has been
created and the processor has served the first SMI; at that point we have
access to the SMM-pgtable.
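A hedged sketch of the idea (the exact call site and the variable names in
the QEMU flash driver differ; MemEncryptSevClearPageEncMask() and
MemEncryptSevIsEnabled() are the MemEncryptSevLib APIs referenced elsewhere
in this series):

  if (MemEncryptSevIsEnabled ()) {
    //
    // Mark the flash MMIO range as unencrypted in the page table that is
    // active right now (the SMM-pgtable, when SMM_REQUIRE is set), before
    // the first read/write probe of the flash.
    //
    Status = MemEncryptSevClearPageEncMask (
               0,                                   // use the current CR3
               FlashMmioBase,                       // illustrative name
               EFI_SIZE_TO_PAGES (FlashMmioSize),   // illustrative name
               FALSE                                // Flush
               );
    ASSERT_EFI_ERROR (Status);
  }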
Cc: Jordan Justen <jordan.l.justen@intel.com>
Cc: Laszlo Ersek <lersek@redhat.com>
Cc: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Contributed-under: TianoCore Contribution Agreement 1.1
Signed-off-by: Brijesh Singh <brijesh.singh@amd.com>
[lersek@redhat.com: trivial coding style improvements]
Reviewed-by: Laszlo Ersek <lersek@redhat.com>
The library registers a security management handler, to measure images
that are not measured in the PEI phase. For example, with the QEMU PXE ROM:
Loading driver at 0x0003E6C2000 EntryPoint=0x0003E6C9076 8086100e.efi
And the following binary_bios_measurements log entry seems to be
added:
PCR: 2 type: EV_EFI_BOOT_SERVICES_DRIVER size: 0x4e digest: 70a22475e9f18806d2ed9193b48d80d26779d9a4
The following order of operations ensures that 3rd party UEFI modules,
such as PCI option ROMs and other modules possibly loaded from outside
of firmware volumes, are measured into the TPM:
(1) Tcg2Dxe is included in DXEFV, therefore it produces the TCG2
protocol sometime in the DXE phase (assuming a TPM2 chip is present,
reported via PcdTpmInstanceGuid).
(2) The DXE core finds that no more drivers are left to dispatch from
DXEFV, and we enter the BDS phase.
(3) OVMF's PlatformBootManagerLib connects all PCI root bridges
non-recursively, producing PciIo instances and discovering PCI
oproms.
(4) The dispatching of images that don't originate from FVs is deferred
at this point, by
"MdeModulePkg/Universal/SecurityStubDxe/Defer3rdPartyImageLoad.c".
(5) OVMF's PlatformBootManagerLib signals EndOfDxe.
(6) OVMF's PlatformBootManagerLib calls
EfiBootManagerDispatchDeferredImages() -- the images deferred in
step (4) are now dispatched.
(7) Image dispatch invokes the Security / Security2 Arch protocols
(produced by SecurityStubDxe). In this patch, we hook
DxeTpm2MeasureBootLib into SecurityStubDxe, therefore image dispatch
will try to locate the TCG2 protocol, and measure the image into the
TPM2 chip with the protocol. Because of step (1), the TCG2 protocol
will always be found and used (assuming a TPM2 chip is present).
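A hedged sketch of what the measurement in step (7) boils down to
(simplified; the actual logic lives in SecurityPkg's
DxeTpm2MeasureBootLib):

  Status = gBS->LocateProtocol (&gEfiTcg2ProtocolGuid, NULL, (VOID **)&Tcg2);
  if (!EFI_ERROR (Status)) {
    //
    // Hash the PE/COFF image and extend the result into the TPM2 chip,
    // logging an event (e.g. EV_EFI_BOOT_SERVICES_DRIVER) for it.
    //
    Status = Tcg2->HashLogExtendEvent (
                     Tcg2,
                     PE_COFF_IMAGE,
                     (EFI_PHYSICAL_ADDRESS)(UINTN)FileBuffer,
                     FileSize,
                     Tcg2Event
                     );
  }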
Cc: Laszlo Ersek <lersek@redhat.com>
Cc: Stefan Berger <stefanb@linux.vnet.ibm.com>
Contributed-under: TianoCore Contribution Agreement 1.0
Signed-off-by: Marc-André Lureau <marcandre.lureau@redhat.com>
Reviewed-by: Laszlo Ersek <lersek@redhat.com>
This module measures and logs the boot environment. It also produces the
Tcg2 protocol, which allows, for example, reading the log from the OS.
The Linux kernel doesn't yet read EFI_TCG2_EVENT_LOG_FORMAT_TCG_2, which is
required for the crypto-agile log. In fact, only the upcoming 4.16 kernel
adds support for EFI_TCG2_EVENT_LOG_FORMAT_TCG_1_2:
[ 0.000000] efi: EFI v2.70 by EDK II
[ 0.000000] efi: SMBIOS=0x3fa1f000 ACPI=0x3fbb6000 ACPI 2.0=0x3fbb6014 MEMATTR=0x3e7d4318 TPMEventLog=0x3db21018
$ python chipsec_util.py tpm parse_log binary_bios_measurements
[CHIPSEC] Version 1.3.5.dev2
[CHIPSEC] API mode: using OS native API (not using CHIPSEC kernel module)
[CHIPSEC] Executing command 'tpm' with args ['parse_log', '/tmp/binary_bios_measurements']
PCR: 0 type: EV_S_CRTM_VERSION size: 0x2 digest: 1489f923c4dca729178b3e3233458550d8dddf29
+ version:
PCR: 0 type: EV_EFI_PLATFORM_FIRMWARE_BLOB size: 0x10 digest: fd39ced7c0d2a61f6830c78c7625f94826b05bcc
+ base: 0x820000 length: 0xe0000
PCR: 0 type: EV_EFI_PLATFORM_FIRMWARE_BLOB size: 0x10 digest: 39ebc6783b72bc1e73c7d5bcfeb5f54a3f105d4c
+ base: 0x900000 length: 0xa00000
PCR: 7 type: EV_EFI_VARIABLE_DRIVER_CONFIG size: 0x35 digest: 57cd4dc19442475aa82743484f3b1caa88e142b8
PCR: 7 type: EV_EFI_VARIABLE_DRIVER_CONFIG size: 0x24 digest: 9b1387306ebb7ff8e795e7be77563666bbf4516e
PCR: 7 type: EV_EFI_VARIABLE_DRIVER_CONFIG size: 0x26 digest: 9afa86c507419b8570c62167cb9486d9fc809758
PCR: 7 type: EV_EFI_VARIABLE_DRIVER_CONFIG size: 0x24 digest: 5bf8faa078d40ffbd03317c93398b01229a0e1e0
PCR: 7 type: EV_EFI_VARIABLE_DRIVER_CONFIG size: 0x26 digest: 734424c9fe8fc71716c42096f4b74c88733b175e
PCR: 7 type: EV_SEPARATOR size: 0x4 digest: 9069ca78e7450a285173431b3e52c5c25299e473
PCR: 1 type: EV_EFI_VARIABLE_BOOT size: 0x3e digest: 252f8ebb85340290b64f4b06a001742be8e5cab6
PCR: 1 type: EV_EFI_VARIABLE_BOOT size: 0x6e digest: 22a4f6ee9af6dba01d3528deb64b74b582fc182b
PCR: 1 type: EV_EFI_VARIABLE_BOOT size: 0x80 digest: b7811d5bf30a7efd4e385c6179fe10d9290bb9e8
PCR: 1 type: EV_EFI_VARIABLE_BOOT size: 0x84 digest: 425e502c24fc924e231e0a62327b6b7d1f704573
PCR: 1 type: EV_EFI_VARIABLE_BOOT size: 0x9a digest: 0b5d2c98ac5de6148a4a1490ff9d5df69039f04e
PCR: 1 type: EV_EFI_VARIABLE_BOOT size: 0xbd digest: 20bd5f402271d57a88ea314fe35c1705956b1f74
PCR: 1 type: EV_EFI_VARIABLE_BOOT size: 0x88 digest: df5d6605cb8f4366d745a8464cfb26c1efdc305c
PCR: 4 type: EV_EFI_ACTION size: 0x28 digest: cd0fdb4531a6ec41be2753ba042637d6e5f7f256
PCR: 0 type: EV_SEPARATOR size: 0x4 digest: 9069ca78e7450a285173431b3e52c5c25299e473
PCR: 1 type: EV_SEPARATOR size: 0x4 digest: 9069ca78e7450a285173431b3e52c5c25299e473
PCR: 2 type: EV_SEPARATOR size: 0x4 digest: 9069ca78e7450a285173431b3e52c5c25299e473
PCR: 3 type: EV_SEPARATOR size: 0x4 digest: 9069ca78e7450a285173431b3e52c5c25299e473
PCR: 4 type: EV_SEPARATOR size: 0x4 digest: 9069ca78e7450a285173431b3e52c5c25299e473
PCR: 5 type: EV_SEPARATOR size: 0x4 digest: 9069ca78e7450a285173431b3e52c5c25299e473
$ tpm2_pcrlist
sha1 :
0 : 35bd1786b6909daad610d7598b1d620352d33b8a
1 : ec0511e860206e0af13c31da2f9e943fb6ca353d
2 : b2a83b0ebf2f8374299a5b2bdfc31ea955ad7236
3 : b2a83b0ebf2f8374299a5b2bdfc31ea955ad7236
4 : 45a323382bd933f08e7f0e256bc8249e4095b1ec
5 : d16d7e629fd8d08ca256f9ad3a3a1587c9e6cc1b
6 : b2a83b0ebf2f8374299a5b2bdfc31ea955ad7236
7 : 518bd167271fbb64589c61e43d8c0165861431d8
8 : 0000000000000000000000000000000000000000
9 : 0000000000000000000000000000000000000000
10 : 0000000000000000000000000000000000000000
11 : 0000000000000000000000000000000000000000
12 : 0000000000000000000000000000000000000000
13 : 0000000000000000000000000000000000000000
14 : 0000000000000000000000000000000000000000
15 : 0000000000000000000000000000000000000000
16 : 0000000000000000000000000000000000000000
17 : ffffffffffffffffffffffffffffffffffffffff
18 : ffffffffffffffffffffffffffffffffffffffff
19 : ffffffffffffffffffffffffffffffffffffffff
20 : ffffffffffffffffffffffffffffffffffffffff
21 : ffffffffffffffffffffffffffffffffffffffff
22 : ffffffffffffffffffffffffffffffffffffffff
23 : 0000000000000000000000000000000000000000
sha256 :
0 : 9ae903dbae3357ac00d223660bac19ea5c021499a56201104332ab966631ce2c
1 : acc611d90245cf04e77b0ca94901f90e7fa54770f0426f53c3049b532243d1b8
2 : 3d458cfe55cc03ea1f443f1562beec8df51c75e14a9fcf9a7234a13f198e7969
3 : 3d458cfe55cc03ea1f443f1562beec8df51c75e14a9fcf9a7234a13f198e7969
4 : 7a94ffe8a7729a566d3d3c577fcb4b6b1e671f31540375f80eae6382ab785e35
5 : a5ceb755d043f32431d63e39f5161464620a3437280494b5850dc1b47cc074e0
6 : 3d458cfe55cc03ea1f443f1562beec8df51c75e14a9fcf9a7234a13f198e7969
7 : 65caf8dd1e0ea7a6347b635d2b379c93b9a1351edc2afc3ecda700e534eb3068
8 : 0000000000000000000000000000000000000000000000000000000000000000
9 : 0000000000000000000000000000000000000000000000000000000000000000
10 : 0000000000000000000000000000000000000000000000000000000000000000
11 : 0000000000000000000000000000000000000000000000000000000000000000
12 : 0000000000000000000000000000000000000000000000000000000000000000
13 : 0000000000000000000000000000000000000000000000000000000000000000
14 : 0000000000000000000000000000000000000000000000000000000000000000
15 : 0000000000000000000000000000000000000000000000000000000000000000
16 : 0000000000000000000000000000000000000000000000000000000000000000
17 : ffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffff
18 : ffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffff
19 : ffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffff
20 : ffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffff
21 : ffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffff
22 : ffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffff
23 : 0000000000000000000000000000000000000000000000000000000000000000
sha384 :
The PhysicalPresenceLib is required; it sets some variables, but the
firmware doesn't act on them yet.
Laszlo Ersek explained on the list why Tpm2DeviceLib has to be
resolved differently for DXE_DRIVER modules in general and for
"Tcg2Dxe.inf" specifically:
* We have a library class called Tpm2DeviceLib -- this is basically the
set of APIs declared in "SecurityPkg/Include/Library/Tpm2DeviceLib.h".
Its leading comment says "This library abstract how to access TPM2
hardware device".
There are two *sets* of APIs in "Tpm2DeviceLib.h":
(a) functions that deal with the TPM2 device:
- Tpm2RequestUseTpm(),
- Tpm2SubmitCommand()
This set of APIs is supposed to be used by clients that *consume*
the TPM2 device abstraction.
(b) the function Tpm2RegisterTpm2DeviceLib(), which is supposed to be
used by *providers* of various TPM2 device abstractions.
* Then, we have two implementations (instances) of the Tpm2DeviceLib class:
(1) SecurityPkg/Library/Tpm2DeviceLibTcg2/Tpm2DeviceLibTcg2.inf
(2) SecurityPkg/Library/Tpm2DeviceLibRouter/Tpm2DeviceLibRouterDxe.inf
(1) The first library instance ("Tpm2DeviceLibTcg2.inf") implements the
APIs listed under (a), and it does not implement (b) -- see
EFI_UNSUPPORTED. In other words, this lib instance is strictly meant for
drivers that *consume* the TPM2 device abstraction. And, the (a) group
of APIs is implemented by forwarding the requests to the TCG2 protocol.
The idea here is that all the drivers that consume the TPM2 abstraction
do not have to be statically linked with a large TPM2 device library
instance; instead they are only linked (statically) with this "thin"
library instance, and all the actual work is delegated to whichever
driver that provides the singleton TCG2 protocol.
(2) The second library instance ("Tpm2DeviceLibRouterDxe.inf") is meant
for the driver that offers (produces) the TCG2 protocol. This lib
instance implements both (a) and (b) API groups.
* Here's how things fit together:
(i) The "SecurityPkg/Library/Tpm2DeviceLibDTpm/Tpm2InstanceLibDTpm.inf"
library instance (which has no lib class) is linked into "Tcg2Dxe.inf"
via NULL class resolution. This simply means that before the
"Tcg2Dxe.inf" entry point function is entered, the constructor function
of "Tpm2InstanceLibDTpm.inf" will be called.
(ii) This Tpm2InstanceLibDTpmConstructor() function calls API (b), and
registers its own actual TPM2 command implementation with the
"Tpm2DeviceLibRouter" library instance (also linked into the Tcg2Dxe
driver). This provides the back-end for the API set (a).
TCG2 protocol provider (Tcg2Dxe.inf driver) launches
|
v
NULL class: Tpm2InstanceLibDTpm instance construction
|
v
Tpm2DeviceLib class: Tpm2DeviceLibRouter instance
backend registration for API set (a)
(iii) The Tcg2Dxe driver exposes the TCG2 protocol.
(iv) A TPM2 consumer calls API set (a) via lib instance (1). Such calls
land in Tcg2Dxe, via the protocol.
(v) Tcg2Dxe serves the protocol request by forwarding it to API set (a)
from lib instance (2).
(vi) Those functions call the "backend" functions registered by
Tpm2DeviceLibDTpm in step (ii).
TPM 2 consumer driver
|
v
Tpm2DeviceLib class: Tpm2DeviceLibTcg2 instance
|
v
TCG2 protocol interface
|
v
TCG2 protocol provider: Tcg2Dxe.inf driver
|
v
Tpm2DeviceLib class: Tpm2DeviceLibRouter instance
|
v
NULL class: Tpm2InstanceLibDTpm instance
(via earlier registration)
|
v
TPM2 chip (actual hardware)
* So that is the "router" pattern in edk2. Namely,
- Consumers of an abstraction use a thin library instance.
- The thin library instance calls a firmware-global (singleton) service,
i.e. a PPI (in the PEI phase) or protocol (in the DXE phase).
- The PEIM providing the PPI, or the DXE driver providing the protocol,
don't themselves implement the actual service either. Instead they
offer a "registration" service too, and they only connect the incoming
"consumer" calls to the earlier registered back-end(s).
- The "registration service", for back-ends to use, may take various
forms.
It can be exposed globally to the rest of the firmware, as
another member function of the PPI / protocol structure. Then backends
can be provided by separate PEIMs / DXE drivers.
Or else, the registration service can be exposed as just another
library API. In this case, the backends are provided as NULL class
library instances, and a platform DSC file links them into the PEIM /
DXE driver via NULL class resolutions. The backend lib instances call
the registration service in their own respective constructor
functions.
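A minimal, generic sketch of the router pattern described above (all names
are hypothetical, not actual SecurityPkg symbols):

  typedef EFI_STATUS (EFIAPI *ABSTRACTION_SUBMIT) (IN VOID *Request);

  //
  // Router library instance, linked into the protocol provider driver:
  // back-ends register themselves from their NULL-class constructors.
  //
  STATIC ABSTRACTION_SUBMIT  mBackendSubmit;

  EFI_STATUS
  EFIAPI
  AbstractionRegisterBackend (        // API set (b), used by providers
    IN ABSTRACTION_SUBMIT  BackendSubmit
    )
  {
    mBackendSubmit = BackendSubmit;
    return EFI_SUCCESS;
  }

  EFI_STATUS
  EFIAPI
  AbstractionSubmit (                 // API set (a), used by consumers
    IN VOID  *Request
    )
  {
    return mBackendSubmit (Request);  // route to the registered back-end
  }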
Cc: Laszlo Ersek <lersek@redhat.com>
Cc: Stefan Berger <stefanb@linux.vnet.ibm.com>
Contributed-under: TianoCore Contribution Agreement 1.0
Signed-off-by: Marc-André Lureau <marcandre.lureau@redhat.com>
Reviewed-by: Laszlo Ersek <lersek@redhat.com>
This module will initialize the TPM device, and measure the reported FVs
and the BIOS version. We keep both SHA-1 and SHA-256 for TCG 1.2 log format
compatibility, but the SHA-256 measurements and the TCG 2 log format are
now recommended.
Cc: Laszlo Ersek <lersek@redhat.com>
Cc: Stefan Berger <stefanb@linux.vnet.ibm.com>
Contributed-under: TianoCore Contribution Agreement 1.0
Signed-off-by: Marc-André Lureau <marcandre.lureau@redhat.com>
Reviewed-by: Laszlo Ersek <lersek@redhat.com>
The Tcg2ConfigPei module informs the firmware globally about the TPM
device type, by setting the PcdTpmInstanceGuid PCD to the appropriate
GUID value. The original module under SecurityPkg can perform device
detection, or read a cached value from a non-volatile UEFI variable.
OvmfPkg's clone of the module only performs the TPM2 hardware detection.
This is what the module does:
- Check the QEMU hardware for TPM2 availability only
- If found, set the dynamic PCD "PcdTpmInstanceGuid" to
&gEfiTpmDeviceInstanceTpm20DtpmGuid. This is what informs the rest of
the firmware about the TPM type.
- Install the gEfiTpmDeviceSelectedGuid PPI. This action permits the
PEI_CORE to dispatch the Tcg2Pei module, which consumes the above PCD.
In effect, the gEfiTpmDeviceSelectedGuid PPI serializes the setting
and the consumption of the "TPM type" PCD.
- If no TPM2 was found, install gPeiTpmInitializationDonePpiGuid.
(Normally this is performed by Tcg2Pei, but Tcg2Pei doesn't do it if
no TPM2 is available. So in that case our Tcg2ConfigPei must do it.)
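A hedged sketch of the first two actions above (the PPI descriptor name is
illustrative; PcdSetPtrS() and PeiServicesInstallPpi() are the standard
edk2 APIs):

  Size   = sizeof (gEfiTpmDeviceInstanceTpm20DtpmGuid);
  Status = PcdSetPtrS (
             PcdTpmInstanceGuid,
             &Size,
             &gEfiTpmDeviceInstanceTpm20DtpmGuid
             );
  ASSERT_RETURN_ERROR (Status);

  //
  // Permit the PEI_CORE to dispatch Tcg2Pei, which consumes the PCD above.
  //
  Status = PeiServicesInstallPpi (&mTpmSelectedPpi);   // illustrative name
  ASSERT_EFI_ERROR (Status);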
Cc: Laszlo Ersek <lersek@redhat.com>
Cc: Stefan Berger <stefanb@linux.vnet.ibm.com>
Contributed-under: TianoCore Contribution Agreement 1.0
Signed-off-by: Marc-André Lureau <marcandre.lureau@redhat.com>
Reviewed-by: Laszlo Ersek <lersek@redhat.com>
Based on the following patch from Brijesh Singh <brijesh.singh@amd.com>:
[PATCH v2 1/2] OvmfPkg/AmdSevDxe: Clear the C-bit from SMM Saved State
http://mid.mail-archive.com/20180228161415.28723-2-brijesh.singh@amd.com
https://lists.01.org/pipermail/edk2-devel/2018-February/022016.html
Original commit message from Brijesh:
> When OVMF is built with SMM, SMMSaved State area (SMM_DEFAULT_SMBASE +
> SMRAM_SAVE_STATE_MAP_OFFSET) contains data which need to be accessed by
> both guest and hypervisor. Since the data need to be accessed by both
> hence we must map the SMMSaved State area as unencrypted (i.e C-bit
> cleared).
>
> This patch clears the SavedStateArea address before SMBASE relocation.
> Currently, we do not clear the SavedStateArea address after SMBASE is
> relocated due to the following reasons:
>
> 1) Guest BIOS never access the relocated SavedStateArea.
>
> 2) The C-bit works on page-aligned address, but the SavedStateArea
> address is not a page-aligned. Theoretically, we could roundup the
> address and clear the C-bit of aligned address but looking carefully we
> found that some portion of the page contains code -- which will causes a
> bigger issue for the SEV guest. When SEV is enabled, all the code must
> be encrypted otherwise hardware will cause trap.
Changes by Laszlo:
- separate AmdSevDxe bits from SmmCpuFeaturesLib bits;
- spell out PcdLib dependency with #include and in LibraryClasses;
- replace (SMM_DEFAULT_SMBASE + SMRAM_SAVE_STATE_MAP_OFFSET) calculation
with call to new MemEncryptSevLocateInitialSmramSaveStateMapPages()
function;
- consequently, pass page-aligned BaseAddress to
MemEncryptSevClearPageEncMask();
- zero the pages before clearing the C-bit;
- pass Flush=TRUE to MemEncryptSevClearPageEncMask();
- harden the treatment of MemEncryptSevClearPageEncMask() failure.
Cc: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Cc: Brijesh Singh <brijesh.singh@amd.com>
Cc: Jordan Justen <jordan.l.justen@intel.com>
Contributed-under: TianoCore Contribution Agreement 1.1
Signed-off-by: Laszlo Ersek <lersek@redhat.com>
Tested-by: Brijesh Singh <brijesh.singh@amd.com>
Reviewed-by: Brijesh Singh <brijesh.singh@amd.com>
Based on the following patch from Brijesh Singh <brijesh.singh@amd.com>:
[PATCH v2 1/2] OvmfPkg/AmdSevDxe: Clear the C-bit from SMM Saved State
http://mid.mail-archive.com/20180228161415.28723-2-brijesh.singh@amd.com
https://lists.01.org/pipermail/edk2-devel/2018-February/022016.html
Once PiSmmCpuDxeSmm relocates SMBASE for all VCPUs, the pages of the
initial SMRAM save state map can be re-encrypted (including zeroing them
out after setting the C-bit on them), and they can be released to DXE for
general use (undoing the allocation that we did in PlatformPei's
AmdSevInitialize() function).
The decryption of the same pages (which will occur chronologically
earlier) is implemented in the next patch; hence the "re-encryption" part
of this patch is currently a no-op. The series is structured like this in
order to be bisection-friendly. If the decryption patch preceded this
patch, then an info leak would be created while standing between the
patches.
Cc: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Cc: Brijesh Singh <brijesh.singh@amd.com>
Cc: Jordan Justen <jordan.l.justen@intel.com>
Contributed-under: TianoCore Contribution Agreement 1.1
Signed-off-by: Laszlo Ersek <lersek@redhat.com>
Tested-by: Brijesh Singh <brijesh.singh@amd.com>
Reviewed-by: Brijesh Singh <brijesh.singh@amd.com>
In the next two patches, we'll temporarily decrypt the pages containing
the initial SMRAM save state map, for SMBASE relocation. (Unlike the
separate, relocated SMRAM save state map of each VCPU, the original,
shared map behaves similarly to a "common buffer" between guest and host.)
The decryption will occur near the beginning of the DXE phase, in
AmdSevDxe, and the re-encryption will occur in PiSmmCpuDxeSmm, via OVMF's
SmmCpuFeaturesLib instance.
There is a non-trivial time gap between these two points, and the DXE
phase might use the pages overlapping the initial SMRAM save state map for
arbitrary purposes meanwhile. In order to prevent any information leak
towards the hypervisor, make sure the DXE phase puts nothing in those
pages until re-encryption is done.
Creating a memalloc HOB for the area in question is safe:
- the temporary SEC/PEI RAM (stack and heap) is based at
PcdOvmfSecPeiTempRamBase, which is above 8MB,
- the permanent PEI RAM (installed in PlatformPei's PublishPeiMemory()
function) never starts below PcdOvmfDxeMemFvBase, which is also above
8MB.
The allocated pages can be released to the DXE phase after SMBASE
relocation and re-encryption are complete.
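A hedged sketch of the reservation (BuildMemoryAllocationHob() is the
standard HobLib call; the base/count names and the memory type shown are
illustrative):

  //
  // Keep the DXE phase from placing anything in the pages that overlap the
  // initial SMRAM save state map, until they are re-encrypted and released.
  //
  BuildMemoryAllocationHob (
    MapPagesBase,                        // page-aligned base of the map
    EFI_PAGES_TO_SIZE (MapPagesCount),   // whole pages covering the map
    EfiBootServicesData
    );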
Cc: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Cc: Brijesh Singh <brijesh.singh@amd.com>
Cc: Jordan Justen <jordan.l.justen@intel.com>
Contributed-under: TianoCore Contribution Agreement 1.1
Signed-off-by: Laszlo Ersek <lersek@redhat.com>
Tested-by: Brijesh Singh <brijesh.singh@amd.com>
Reviewed-by: Brijesh Singh <brijesh.singh@amd.com>
In the next three patches, we're going to modify three modules under
OvmfPkg. When OVMF is built with -D SMM_REQUIRE and runs in an SEV guest,
each affected module will have to know the page range that covers the
initial (pre-SMBASE relocation) SMRAM save state map. Add a helper
function to MemEncryptSevLib that calculates the "base address" and
"number of pages" constants for this page range.
(In a RELEASE build -- i.e., with assertions disabled and optimization
enabled --, the helper function can be compiled to store two constants
determined at compile time.)
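A hedged sketch of the calculation (the save state map structure name is
illustrative; SMM_DEFAULT_SMBASE and SMRAM_SAVE_STATE_MAP_OFFSET are the
UefiCpuPkg constants referenced elsewhere in this series):

  MapStart       = SMM_DEFAULT_SMBASE + SMRAM_SAVE_STATE_MAP_OFFSET;
  MapEnd         = MapStart + sizeof (SMRAM_SAVE_STATE_MAP);
  *BaseAddress   = MapStart & ~(UINTN)EFI_PAGE_MASK;           // round down
  *NumberOfPages = EFI_SIZE_TO_PAGES (MapEnd - *BaseAddress);  // round up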
Cc: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Cc: Brijesh Singh <brijesh.singh@amd.com>
Cc: Jordan Justen <jordan.l.justen@intel.com>
Contributed-under: TianoCore Contribution Agreement 1.1
Signed-off-by: Laszlo Ersek <lersek@redhat.com>
Tested-by: Brijesh Singh <brijesh.singh@amd.com>
Reviewed-by: Brijesh Singh <brijesh.singh@amd.com>
List those and only those libraries that are used.
Cc: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Cc: Brijesh Singh <brijesh.singh@amd.com>
Cc: Jordan Justen <jordan.l.justen@intel.com>
Contributed-under: TianoCore Contribution Agreement 1.1
Signed-off-by: Laszlo Ersek <lersek@redhat.com>
Tested-by: Brijesh Singh <brijesh.singh@amd.com>
Reviewed-by: Brijesh Singh <brijesh.singh@amd.com>
There are many overlong lines; it's hard to work with the module like
this. Rewrap all files to 79 columns.
Cc: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Cc: Brijesh Singh <brijesh.singh@amd.com>
Cc: Jordan Justen <jordan.l.justen@intel.com>
Contributed-under: TianoCore Contribution Agreement 1.1
Signed-off-by: Laszlo Ersek <lersek@redhat.com>
Tested-by: Brijesh Singh <brijesh.singh@amd.com>
Reviewed-by: Brijesh Singh <brijesh.singh@amd.com>
In edk2, the "static" keyword is spelled "STATIC". Also let "STATIC" stand
alone on a line in function definitions.
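For example, the expected form is (hypothetical function and variable):

  STATIC
  UINT64
  GetEncryptionMask (
    VOID
    )
  {
    return mAddressEncMask;   // hypothetical module-global
  }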
Cc: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Cc: Brijesh Singh <brijesh.singh@amd.com>
Cc: Jordan Justen <jordan.l.justen@intel.com>
Contributed-under: TianoCore Contribution Agreement 1.1
Signed-off-by: Laszlo Ersek <lersek@redhat.com>
Tested-by: Brijesh Singh <brijesh.singh@amd.com>
Reviewed-by: Brijesh Singh <brijesh.singh@amd.com>
The declaration and the definition(s) of the function should have
identical leading comments and/or identical parameter lists. Document the
"Cr3BaseAddress" parameter, and correct several parameter references.
Replace a "clear" reference to the C-bit with a "set" reference.
Cc: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Cc: Brijesh Singh <brijesh.singh@amd.com>
Cc: Jordan Justen <jordan.l.justen@intel.com>
Contributed-under: TianoCore Contribution Agreement 1.1
Signed-off-by: Laszlo Ersek <lersek@redhat.com>
Tested-by: Brijesh Singh <brijesh.singh@amd.com>
Reviewed-by: Brijesh Singh <brijesh.singh@amd.com>
The declaration and the definition(s) of the function should have
identical leading comments and/or identical parameter lists. Document the
"Cr3BaseAddress" parameter, and correct several parameter references.
Replace a "set" reference to the C-bit with a "clear" reference.
Cc: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Cc: Brijesh Singh <brijesh.singh@amd.com>
Cc: Jordan Justen <jordan.l.justen@intel.com>
Contributed-under: TianoCore Contribution Agreement 1.1
Signed-off-by: Laszlo Ersek <lersek@redhat.com>
Tested-by: Brijesh Singh <brijesh.singh@amd.com>
Reviewed-by: Brijesh Singh <brijesh.singh@amd.com>
The declaration and the definition(s) of the function should have
identical leading comments and/or identical parameter lists. Replace any
leftover "clear" references to the C-bit with "set" references. Also
remove any excess space in the comment block, and unindent the trailing
"**/" if necessary. Correct several parameter references.
Cc: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Cc: Brijesh Singh <brijesh.singh@amd.com>
Cc: Jordan Justen <jordan.l.justen@intel.com>
Contributed-under: TianoCore Contribution Agreement 1.1
Signed-off-by: Laszlo Ersek <lersek@redhat.com>
Tested-by: Brijesh Singh <brijesh.singh@amd.com>
Reviewed-by: Brijesh Singh <brijesh.singh@amd.com>
The declaration and the definition(s) of the function should have
identical leading comments and/or identical parameter lists. Also remove
any excess space in the comment block, and unindent the trailing "**/" if
necessary. Correct several parameter references.
Cc: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Cc: Brijesh Singh <brijesh.singh@amd.com>
Cc: Jordan Justen <jordan.l.justen@intel.com>
Contributed-under: TianoCore Contribution Agreement 1.1
Signed-off-by: Laszlo Ersek <lersek@redhat.com>
Tested-by: Brijesh Singh <brijesh.singh@amd.com>
Reviewed-by: Brijesh Singh <brijesh.singh@amd.com>
The declaration and the definition(s) of the function should have
identical leading comments and/or identical parameter lists. Also remove
any excess space in the comment block, and unindent the trailing "**/" if
necessary.
Cc: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Cc: Brijesh Singh <brijesh.singh@amd.com>
Cc: Jordan Justen <jordan.l.justen@intel.com>
Contributed-under: TianoCore Contribution Agreement 1.1
Signed-off-by: Laszlo Ersek <lersek@redhat.com>
Tested-by: Brijesh Singh <brijesh.singh@amd.com>
Reviewed-by: Brijesh Singh <brijesh.singh@amd.com>
There are many overlong lines; it's hard to work with the library like
this. Rewrap all files to 79 columns.
Cc: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Cc: Brijesh Singh <brijesh.singh@amd.com>
Cc: Jordan Justen <jordan.l.justen@intel.com>
Contributed-under: TianoCore Contribution Agreement 1.1
Signed-off-by: Laszlo Ersek <lersek@redhat.com>
Tested-by: Brijesh Singh <brijesh.singh@amd.com>
Reviewed-by: Brijesh Singh <brijesh.singh@amd.com>
These are listed under "ShellPkg/Application/Shell/Shell.inf", but they
have been commented out ever since commit 345a0c8fce ("OvmfPkg: Add
support for UEFI shell", 2011-06-26). No such lib classes exist in edk2.
Cc: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Cc: Jordan Justen <jordan.l.justen@intel.com>
Contributed-under: TianoCore Contribution Agreement 1.1
Signed-off-by: Laszlo Ersek <lersek@redhat.com>
Reviewed-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Reviewed-by: Jaben Carsey <jaben.carsey@intel.com>
https://bugzilla.tianocore.org/show_bug.cgi?id=800
Based on content from the following branch/commits:
https://github.com/Microsoft/MS_UEFI/tree/share/MsCapsuleSupport
33bab4031aca516b1a612b9f111f2e
The BootGraphicsResourceTableDxe module uses the BmpSupportLib
and SafeIntLib to convert a GOP BLT buffer to a BMP graphics image.
Add library mappings for these new library classes.
Cc: Sean Brogan <sean.brogan@microsoft.com>
Cc: Jiewen Yao <jiewen.yao@intel.com>
Cc: Jordan Justen <jordan.l.justen@intel.com>
Cc: Laszlo Ersek <lersek@redhat.com>
Cc: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Contributed-under: TianoCore Contribution Agreement 1.1
Signed-off-by: Michael D Kinney <michael.d.kinney@intel.com>
Reviewed-by: Laszlo Ersek <lersek@redhat.com>
Reviewed-by: Bret Barkelew <Bret.Barkelew@microsoft.com>
"Platform.h" declares the AmdSevInitialize() function without EFIAPI, but
the definition in "AmdSev.c" includes EFIAPI.
GCC toolchains without LTO do not catch this error because "AmdSev.c" does
not include "Platform.h"; i.e. the declaration used by callers such as
"Platform.c" is not actually matched against the function definition at
build time.
With LTO enabled, the mismatch is found -- however, as a warning only, due
to commit f8d0b96629 ("BaseTools GCC5: disable warnings-as-errors for
now", 2016-08-03).
Include the header in the C file (which turns the issue into a hard build
error on all GCC toolchains), plus sync the declaration from the header
file to the C file.
There's been no functional breakage because AmdSevInitialize() takes no
parameters.
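A minimal illustration of the mismatch (simplified):

  //
  // Platform.h -- no EFIAPI
  //
  VOID
  AmdSevInitialize (
    VOID
    );

  //
  // AmdSev.c, before this patch -- EFIAPI does not match the declaration
  //
  VOID
  EFIAPI
  AmdSevInitialize (
    VOID
    )
  {
    // ...
  }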
Cc: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Cc: Brijesh Singh <brijesh.singh@amd.com>
Cc: Jordan Justen <jordan.l.justen@intel.com>
Fixes: 13b5d743c8
Contributed-under: TianoCore Contribution Agreement 1.1
Signed-off-by: Laszlo Ersek <lersek@redhat.com>
Reviewed-by: Brijesh Singh <brijesh.singh@amd.com>
Reviewed-by: Jaben Carsey <jaben.carsey@intel.com>
Acked-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
OVMF appended the options -mno-mmx -mno-sse, but MMX and SSE were enabled
in the OpenSSL build flags, so the combined compiler options become
-mmmx -msse -mno-mmx -mno-sse. This triggers a mac clang compiler hang when
compiling one source file in OpenSSL. The issue is found when
SECURE_BOOT_ENABLE is TRUE and may be a compiler issue. To work around it,
don't add these two options for the XCODE5 tool chain.
Contributed-under: TianoCore Contribution Agreement 1.1
Signed-off-by: Liming Gao <liming.gao@intel.com>
Cc: Andrew Fish <afish@apple.com>
Cc: Laszlo Ersek <lersek@redhat.com>
Reviewed-by: Laszlo Ersek <lersek@redhat.com>
Commit 2ac1730bf2 (MdeModulePkg/DxeIpl: Mark page table as read-only)
sets the memory pages used for the page table as read-only after paging is
set up, and sets CR0.WP so that the CPU cannot modify the read-only pages.
The commit causes a #PF when MemEncryptSevClearPageEncMask() or
MemEncryptSevSetPageEncMask() tries to change the page-table attributes.
This patch takes a similar approach to commit 147fd35c3e
(UefiCpuPkg/CpuDxe: Enable protection for newly added page table).
When page table protection is enabled, we disable it temporarily before
changing the page table attributes.
This patch makes use of the same approach as commit 2ac1730bf2
(MdeModulePkg/DxeIpl: Mark page table as read-only) for allocating
page table memory from the reserved memory pool, which helps to reduce
potential "split" operations.
The patch duplicates code from commit 147fd35c3e. The code duplication
will be removed after we implement a page table manipulation library; see
bugzilla https://bugzilla.tianocore.org/show_bug.cgi?id=847.
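A hedged sketch of the temporary-disable approach (mirroring the CpuDxe
technique; error handling and the surrounding helper are omitted):

  IsWpEnabled = (AsmReadCr0 () & BIT16) != 0;      // CR0.WP
  if (IsWpEnabled) {
    AsmWriteCr0 (AsmReadCr0 () & ~(UINTN)BIT16);   // allow page table writes
  }

  //
  // ... set or clear the C-bit in the page table entries ...
  //

  if (IsWpEnabled) {
    AsmWriteCr0 (AsmReadCr0 () | BIT16);           // restore write protection
  }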
Cc: Jian J Wang <jian.j.wang@intel.com>
Cc: Jiewen Yao <jiewen.yao@intel.com>
Cc: Jordan Justen <jordan.l.justen@intel.com>
Cc: Laszlo Ersek <lersek@redhat.com>
Cc: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Contributed-under: TianoCore Contribution Agreement 1.1
Signed-off-by: Brijesh Singh <brijesh.singh@amd.com>
Acked-by: Laszlo Ersek <lersek@redhat.com>
The TFTP command was converted from a NULL class library instance
to a dynamic shell command in commit 0961002352.
This patch complements commit f9bc2f8763, which only removed the
old library but didn't add the new dynamic command.
Contributed-under: TianoCore Contribution Agreement 1.1
Signed-off-by: Ruiyu Ni <ruiyu.ni@intel.com>
Cc: Jordan Justen <jordan.l.justen@intel.com>
Reviewed-by: Laszlo Ersek <lersek@redhat.com>
Cc: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Cc: Anthony Perard <anthony.perard@citrix.com>
Cc: Julien Grall <julien.grall@linaro.org>
The original code breaks a single assembly instruction across multiple
lines. But when VS CL.exe preprocesses the FixedPcdGet32() macro invocation
into the replacement text, it loses the '\' line continuations, which
causes NASM to fail.
Change the multiple lines into one line to resolve the build failure.
Contributed-under: TianoCore Contribution Agreement 1.1
Signed-off-by: Ruiyu Ni <ruiyu.ni@intel.com>
Reviewed-by: Laszlo Ersek <lersek@redhat.com>
This means that SetBootOrderFromQemu() will preserve all UEFI boot options
matched by any given OFW devpath, such as PXEv4, HTTPv4, PXEv6 and HTTPv6
boot options for the same NIC. Currently we stop the matching / appending
for the OFW devpath coming from the outer loop whenever we find the first
UEFI boot option match in the inner loop.
(The previous patch was about multiple OFW devpaths matching a single UEFI
boot option (which should never happen). This patch is about a single OFW
devpath matching multiple UEFI boot options. With the "break" statement
removed here, the small optimization from the last patch becomes a bit
more relevant, because now the inner loop always counts up to
ActiveCount.)
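A hedged sketch of the resulting loop shape, covering this patch and the
previous one (helper and variable names are illustrative, not the actual
QemuBootOrderLib identifiers):

  for (OfwIdx = 0; OfwIdx < TranslatedCount; OfwIdx++) {     // OFW devpaths
    for (BootIdx = 0; BootIdx < ActiveCount; BootIdx++) {    // UEFI options
      if (Appended[BootIdx]) {
        continue;              // previous patch: skip already matched options
      }
      if (PrefixMatch (Translated[OfwIdx], &ActiveOption[BootIdx])) {
        AppendToNewOrder (&ActiveOption[BootIdx]);
        Appended[BootIdx] = TRUE;
        // this patch: no "break" -- one OFW devpath may match more options
      }
    }
  }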
Cc: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Cc: Jordan Justen <jordan.l.justen@intel.com>
Contributed-under: TianoCore Contribution Agreement 1.1
Signed-off-by: Laszlo Ersek <lersek@redhat.com>
Acked-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
The SetBootOrderFromQemu() function implements a nested loop where
- the outer loop iterates over all OpenFirmware (OFW) device paths in the
QEMU boot order, and translates each to a UEFI device path prefix;
- the inner loop matches the current (translated) prefix against all
active UEFI boot options in turn;
- if the UEFI boot option is matched by the translated prefix, the UEFI
boot option is appended to the "new" UEFI boot order, and marked as
"has been appended".
This patch adds a micro-optimization where already matched / appended UEFI
boot options are skipped in the inner loop. This is not a functional
change. A functional change would be if, as a consequence of the patch,
some UEFI boot options would no longer be *doubly* matched.
For a UEFI boot option to be matched by two translated prefixes, one of
those prefixes would have to be a (proper, or equal) prefix of the other
prefix. The PCI and MMIO OFW translation routines output such only in the
following cases:
- When the original OFW device paths are prefixes of each other. This is
not possible from the QEMU side. (Only leaf devices are bootable.)
- When the translation rules in the routines are incomplete, and don't
look at the OFW device paths for sufficient length (i.e., at nodes where
they would already differ, and the difference would show up in the
translation output).
This would be a shortcoming of the translation routines and should be
fixed in TranslatePciOfwNodes() and TranslateMmioOfwNodes(), whenever
identified.
Even in the second case, this patch would replace the double appending of
a single UEFI boot option (matched by two different OFW device paths) with
a correct, or cross-, matching of two different UEFI boot options. Again,
this is not expected, but arguably it would be more correct than duplicate
boot option appending, should it occur due to any (unexpected, unknown)
lack of detail in the translation routines.
Cc: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Cc: Jordan Justen <jordan.l.justen@intel.com>
Contributed-under: TianoCore Contribution Agreement 1.1
Signed-off-by: Laszlo Ersek <lersek@redhat.com>
Acked-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
When SEV is enabled, every debug message printed by OVMF to the
QEMU debug port traps from the guest to QEMU character by character
because "REP OUTSB" cannot be used by IoWriteFifo8. Furthermore,
when OVMF is built with the DEBUG_VERBOSE bit (value 0x00400000)
enabled in "gEfiMdePkgTokenSpaceGuid.PcdDebugPrintErrorLevel", then the
OvmfPkg/IoMmuDxe driver, and the OvmfPkg/Library/BaseMemEncryptSevLib
library instance that is built into it, produce a huge amount of
log messages. Therefore, in SEV guests, the boot time impact is huge
(about 45 seconds _additional_ time spent writing to the debug port).
While these messages are very useful for analyzing guest behavior,
most of the time the user won't be capturing the OVMF debug log.
In fact libvirt does not provide a method for configuring log capture;
users that wish to do this (or are instructed to do this) have to resort
to <qemu:arg>.
The debug console device provides a handy detection mechanism; when read,
it returns 0xE9 (which is very much unlike the 0xFF that is returned by
an unused port). Use it to skip the possibly expensive OUT instructions
when the debug I/O port isn't plugged anywhere.
For SEC, the debug port has to be read before each full message.
However:
- if the debug port is available, then reading one byte before writing
a full message isn't tragic, especially because SEC doesn't print many
messages
- if the debug port is not available, then reading one byte instead of
writing a full message is still a win.
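A hedged sketch of the detection (PcdDebugIoPort is OVMF's existing PCD;
the function name is illustrative):

  #define BOCHS_DEBUG_PORT_MAGIC  0xE9

  STATIC
  BOOLEAN
  DebugPortIsPresent (
    VOID
    )
  {
    //
    // The QEMU debug console returns 0xE9 when read; an unused port
    // returns 0xFF.
    //
    return IoRead8 (PcdGet16 (PcdDebugIoPort)) == BOCHS_DEBUG_PORT_MAGIC;
  }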
Contributed-under: TianoCore Contribution Agreement 1.0
Cc: Laszlo Ersek <lersek@redhat.com>
Cc: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Cc: Jordan Justen (Intel address) <jordan.l.justen@intel.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Tested-by: Laszlo Ersek <lersek@redhat.com>
Reviewed-by: Laszlo Ersek <lersek@redhat.com>
The next patch will want to add a global variable to
PlatformDebugLibIoPort, but this is not suitable for the SEC
phase, because SEC runs from read-only flash. The solution is
to have two library instances, one for SEC and another
for all other firmware phases. This patch adds the "plumbing"
for the SEC library instance, separating the INF files and
moving the constructor to a separate C source file.
Contributed-under: TianoCore Contribution Agreement 1.1
Cc: Laszlo Ersek <lersek@redhat.com>
Cc: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Cc: Jordan Justen (Intel address) <jordan.l.justen@intel.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Tested-by: Laszlo Ersek <lersek@redhat.com>
Reviewed-by: Laszlo Ersek <lersek@redhat.com>
Remove Uefi.h, which includes UefiSpec.h, and change the
return value to match the RETURN_STATUS type.
Contributed-under: TianoCore Contribution Agreement 1.1
Cc: Laszlo Ersek <lersek@redhat.com>
Cc: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Cc: Jordan Justen (Intel address) <jordan.l.justen@intel.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Tested-by: Laszlo Ersek <lersek@redhat.com>
Reviewed-by: Laszlo Ersek <lersek@redhat.com>
(1) In the PEI phase, the PCD database is maintained in a GUID HOB. In
OVMF, we load the PCD PEIM before any other PEIMs (using APRIORI PEI),
so that all other PEIMs can use dynamic PCDs. Consequently,
- the PCD GUID HOB is initially allocated from the temporary SEC/PEI
heap,
- whenever we introduce a dynamic PCD to a PEIM built into OVMF such
that the PCD is new to OVMF's whole PEI phase, the PCD GUID HOB (and
its temporary heap footprint) grow.
I've noticed that, if we add just one more dynamic PCD to the PEI
phase, then in the X64 build,
- we get very close to half of the temporary heap (i.e., 8192
bytes),
- obscure PEI phase hangs or DXE core initialization failures
(ASSERTs) occur. The symptoms vary between the FD_SIZE_2MB and
FD_SIZE_4MB builds of X64 OVMF.
(2) I've found that commit
2bbd7e2fbd ("UefiCpuPkg/MtrrLib: Update algorithm to calculate
optimal settings", 2017-09-27)
introduced a large (16KB) stack allocation:
> The patch changes existing MtrrSetMemoryAttributeInMtrrSettings() and
> MtrrSetMemoryAttribute() to use the 4-page stack buffer for calculation.
> ...
> +#define SCRATCH_BUFFER_SIZE (4 * SIZE_4KB)
> ...
> @@ -2207,17 +2462,66 @@ MtrrSetMemoryAttributeInMtrrSettings (
> ...
> + UINT8 Scratch[SCRATCH_BUFFER_SIZE];
(3) OVMF's temp SEC/PEI RAM size has been 32KB ever since commit
7cb6b0e068 ("OvmfPkg: Move SEC/PEI Temporary RAM from 0x70000 to
0x810000", 2014-01-21)
Of that, the upper 16KB half is stack (growing down), and the lower
16KB half is heap.
Thus, OvmfPkg/PlatformPei's calls to "UefiCpuPkg/Library/MtrrLib", in
QemuInitializeRam(), cause the Scratch array to overflow the entire
stack (heading towards lower addresses), and corrupt the heap below
the stack. It turns out that the total stack demand is about 24KB, so
the overflow is able to corrupt the upper 8KB of the heap. If that
part of the heap is actually used (for example because we grow the PCD
GUID HOB sufficiently), mayhem ensues.
(4) Right after commit 7cb6b0e068 (see above), there would be no room
left above the 32KB temp SEC/PEI RAM. However, given more recent
commits
45d8708151 ("OvmfPkg/PlatformPei: rebase and resize the permanent
PEI memory for S3", 2016-07-13)
6b04cca4d6 ("OvmfPkg: remove PcdS3AcpiReservedMemoryBase,
PcdS3AcpiReservedMemorySize", 2016-07-12)
we can now restore the temp SEC/PEI RAM size to the original
(pre-7cb6b0e06809) 64KB. This will allow for a 32KB temp SEC/PEI
stack, which accommodates the ~24KB demand mentioned in (3).
(Prior patches in this series will let us monitor the stack usage in
the future.)
Cc: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Cc: Jordan Justen <jordan.l.justen@intel.com>
Ref: https://bugzilla.tianocore.org/show_bug.cgi?id=747
Ref: http://mid.mail-archive.com/a49cc089-12ae-a887-a4d6-4dc509233a74@redhat.com
Ref: http://mid.mail-archive.com/03e369bb-77c4-0134-258f-bdae62cbc8c5@redhat.com
Contributed-under: TianoCore Contribution Agreement 1.1
Signed-off-by: Laszlo Ersek <lersek@redhat.com>
Reviewed-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Reviewed-by: Jordan Justen <jordan.l.justen@intel.com>
XenHypercallLib uses the 'hvc' instruction, which is not implemented
on all ARMv7 CPUs, and so we need to explicitly specify a CPU that
has the virtualization extensions.
This override used to be set at the platform level, but this was removed
in commit 0d36a219c7
('ArmPlatformPkg/PL031RealTimeClockLib: drop ArmPlatformSysConfigLib
reference'), under the assumption that all users of the 'hvc' instruction
had already been fixed.
So fix this for GNU binutils by adding the 'virt' arch extension
directive, and for RVCT by setting the --cpu command line option to a
CPU that is virt capable.
Contributed-under: TianoCore Contribution Agreement 1.1
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Acked-by: Laszlo Ersek <lersek@redhat.com>
I missed the following, both while reviewing and while testing commit
6041ac65ae ("OvmfPkg/PlatformPei: DENY_EXECUTE_ON_SECURITY_VIOLATION
when SEV is active", 2017-10-05):
If "-D SECURE_BOOT_ENABLE" is not passed on the "build" command line, then
OVMF has no dynamic default at all for
"PcdOptionRomImageVerificationPolicy". This means that the PcdSet32S()
call added in the subject commit doesn't even compile:
> OvmfPkg/PlatformPei/AmdSev.c: In function 'AmdSevInitialize':
> OvmfPkg/PlatformPei/AmdSev.c:67:3: error: implicit declaration of
> function '_PCD_SET_MODE_32_S_PcdOptionRomImageVerificationPolicy'
> [-Werror=implicit-function-declaration]
> PcdStatus = PcdSet32S (PcdOptionRomImageVerificationPolicy, 0x4);
> ^
> cc1: all warnings being treated as errors
Make the current, SB-only, 0x00 dynamic default unconditional.
This is the simplest approach, and it reflects the intent of original
commit 1fea9ddb4e ("OvmfPkg: execute option ROM images regardless of
Secure Boot", 2016-01-07). Without SECURE_BOOT_ENABLE,
"SecurityPkg/Library/DxeImageVerificationLib" is not used anyway, so the
PCD is never read.
This issue was first caught and reported by Gerd Hoffmann
<kraxel@redhat.com>'s Jenkins CI. Later it was also reported in
<https://bugzilla.tianocore.org/show_bug.cgi?id=737>.
Cc: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Cc: Brijesh Singh <brijesh.singh@amd.com>
Cc: Jordan Justen <jordan.l.justen@intel.com>
Fixes: 6041ac65ae
Contributed-under: TianoCore Contribution Agreement 1.1
Signed-off-by: Laszlo Ersek <lersek@redhat.com>
[lersek@redhat.com: trim commit message as suggested by Jordan]
Reviewed-by: Jordan Justen <jordan.l.justen@intel.com>
[lersek@redhat.com: add reference to TianoCore BZ#737]
The following commit:
1fea9ddb4e OvmfPkg: execute option ROM images regardless of Secure Boot
sets the OptionRomImageVerificationPolicy to ALWAYS_EXECUTE the expansion
ROMs attached to the emulated PCI devices. An expansion ROM constitutes
another channel through which a cloud provider (i.e. the hypervisor) can
inject code into the guest boot flow to compromise it.
When SEV is enabled, the BIOS code has been verified by the guest owner via
the SEV guest launch sequence before it is executed. When Secure Boot is
enabled, let's make sure that we do not allow the guest BIOS to execute
code that is not signed by the guest owner.
Fixes: https://bugzilla.tianocore.org/show_bug.cgi?id=728
Cc: Chao Zhang <chao.b.zhang@intel.com>
Cc: Jordan Justen <jordan.l.justen@intel.com>
Cc: Laszlo Ersek <lersek@redhat.com>
Cc: Tom Lendacky <thomas.lendacky@amd.com>
Contributed-under: TianoCore Contribution Agreement 1.1
Signed-off-by: Brijesh Singh <brijesh.singh@amd.com>
Reviewed-by: Laszlo Ersek <lersek@redhat.com>
The QemuVideoDxe driver will link the VBE SHIM into page 0. If NULL pointer
detection is enabled, this driver will fail to load. NULL pointer detection
bypassing code is added to prevent this problem during boot.
Please note that Windows 7 will try to access the VBE SHIM during boot if
it's installed, which then causes a boot failure. This can be fixed by
setting BIT7 of PcdNullPointerDetectionPropertyMask to disable NULL pointer
detection after EndOfDxe. As far as we know, no other OS has this issue.
Cc: Laszlo Ersek <lersek@redhat.com>
Cc: Jordan Justen <jordan.l.justen@intel.com>
Cc: Jiewen Yao <jiewen.yao@intel.com>
Cc: Michael Kinney <michael.d.kinney@intel.com>
Cc: Ayellet Wolman <ayellet.wolman@intel.com>
Suggested-by: Ayellet Wolman <ayellet.wolman@intel.com>
Contributed-under: TianoCore Contribution Agreement 1.1
Signed-off-by: Jian J Wang <jian.j.wang@intel.com>
Reviewed-by: Laszlo Ersek <lersek@redhat.com>
Reviewed-by: Jordan Justen <jordan.l.justen@intel.com>
Parse QEMU_PCI_BRIDGE_CAPABILITY_RESOURCE_RESERVATION from the bridges'
conventional config spaces. Translate the fields as follows:
* BusNumbers:
* 0 -- no reservation;
* (-1) -- firmware default, i.e. no reservation;
* otherwise -- reserve the requested value. (NB, bus number reservation
is not supposed to work before
<https://bugzilla.tianocore.org/show_bug.cgi?id=656> is fixed.)
* Io:
* 0 -- no reservation;
* (-1) -- keep our current default (512B);
* otherwise -- round up the requested value and reserve that.
* NonPrefetchable32BitMmio:
* 0 -- no reservation;
* (-1) -- keep our current default (2MB);
* otherwise -- round up the requested value and reserve that.
* Prefetchable32BitMmio:
* 0 -- no reservation, proceed to Prefetchable64BitMmio;
* (-1) -- firmware default, i.e. no reservation, proceed to
Prefetchable64BitMmio;
* otherwise -- round up the requested value and reserve that. (NB, if
Prefetchable32BitMmio is reserved in addition to
NonPrefetchable32BitMmio, then PciBusDxe currently runs into an
assertion failure. Refer to
<https://bugzilla.tianocore.org/show_bug.cgi?id=720>.)
* Prefetchable64BitMmio:
* only reached if Prefetchable32BitMmio was not reserved;
* 0 -- no reservation;
* (-1) -- firmware default, i.e. no reservation;
* otherwise -- round up the requested value and reserve that.
If QEMU_PCI_BRIDGE_CAPABILITY_RESOURCE_RESERVATION is missing, plus any
time the rounding fails, fall back to the current defaults.
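As a rough sketch only (the variable and helper names here are
illustrative; SetIoPadding() and HighBitSetRoundUp64() refer to the helpers
described in the next log entry), the Io hint could be translated along
these lines:

  if (ReservationHint.Io == 0) {
    //
    // no IO reservation requested
    //
  } else if (ReservationHint.Io == MAX_UINT64) {
    //
    // all-bits-one means "firmware default": keep the current 512B padding
    //
    SetIoPadding (IoDescriptor, (UINTN)HighBitSetRoundUp64 (512));
  } else {
    //
    // round the requested size up to a whole power of two, reserve that
    //
    SetIoPadding (IoDescriptor, (UINTN)HighBitSetRoundUp64 (ReservationHint.Io));
  }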
Cc: Jordan Justen <jordan.l.justen@intel.com>
Cc: Marcel Apfelbaum <marcel@redhat.com>
Cc: Ruiyu Ni <ruiyu.ni@intel.com>
Contributed-under: TianoCore Contribution Agreement 1.1
Signed-off-by: Laszlo Ersek <lersek@redhat.com>
Reviewed-by: Jordan Justen <jordan.l.justen@intel.com>
Extract the SetIoPadding() and SetMmioPadding() functions, so that we can
set EFI_ACPI_ADDRESS_SPACE_DESCRIPTOR fields using parameter names and
values that are more friendly than the original field names and their
expected values.
Introduce the HighBitSetRoundUp32() and HighBitSetRoundUp64() functions
for calculating the last parameter ("SizeExponent") of SetIoPadding() and
SetMmioPadding().
Put the new functions to use when requesting the default reservations. (In
order to be consistent with a later patch, "SizeExponent" is calculated
for SetIoPadding() with HighBitSetRoundUp64().)
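A minimal sketch of such a rounding helper, assuming it builds on
HighBitSet32() from BaseLib (which returns the zero-based position of the
highest set bit, or -1 for a zero operand); the actual implementation may
differ:

  STATIC
  INTN
  HighBitSetRoundUp32 (
    IN UINT32 Operand
    )
  {
    INTN HighBit;

    HighBit = HighBitSet32 (Operand);
    if (HighBit == -1) {
      //
      // Operand is zero; there is nothing to round up.
      //
      return -1;
    }
    if ((Operand & (Operand - 1)) != 0) {
      //
      // Operand is not a whole power of two; round up to the next exponent.
      //
      HighBit++;
    }
    return HighBit;
  }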
Cc: Jordan Justen <jordan.l.justen@intel.com>
Cc: Marcel Apfelbaum <marcel@redhat.com>
Cc: Ruiyu Ni <ruiyu.ni@intel.com>
Contributed-under: TianoCore Contribution Agreement 1.1
Signed-off-by: Laszlo Ersek <lersek@redhat.com>
Reviewed-by: Jordan Justen <jordan.l.justen@intel.com>
PciHotPlugInitDxe has a static variable called "mPadding" (of type
RESOURCE_PADDING), which describes two constant resource reservations:
- MmioPadding: 2MB of non-prefetchable (hence 32-bit) MMIO space,
- IoPadding: 512B of IO space.
In the GetResourcePadding() member function of
EFI_PCI_HOT_PLUG_INIT_PROTOCOL, the driver outputs a dynamically allocated
verbatim copy of "mPadding", for PciBusDxe to consume in its
ApplyResourcePadding() function.
In a later patch, we're going to compose the set of resource reservations
dynamically, based on QEMU hints. Generalize the RESOURCE_PADDING
structure so that we may generate (or not generate) each resource type
individually:
- Replace the named "MmioPadding" and "IoPadding" fields in
RESOURCE_PADDING with an array of descriptors (see the sketch below),
- remove "mPadding",
- in GetResourcePadding(), request the same (default) reservations as
before, as if we attempted and failed to fetch the QEMU hints.
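The shape of the generalized structure might look roughly like this (the
descriptor count and member names are illustrative, not the literal code):

  typedef struct {
    //
    // One ACPI QWORD Address Space Descriptor per resource type that may be
    // reserved; entries that are not needed are emitted as "ignored"
    // descriptors rather than being removed from the array.
    //
    EFI_ACPI_ADDRESS_SPACE_DESCRIPTOR Padding[RESOURCE_TYPE_COUNT];
    EFI_ACPI_END_TAG_DESCRIPTOR       EndDesc;
  } RESOURCE_PADDING;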
Cc: Jordan Justen <jordan.l.justen@intel.com>
Cc: Marcel Apfelbaum <marcel@redhat.com>
Cc: Ruiyu Ni <ruiyu.ni@intel.com>
Contributed-under: TianoCore Contribution Agreement 1.1
Signed-off-by: Laszlo Ersek <lersek@redhat.com>
Reviewed-by: Jordan Justen <jordan.l.justen@intel.com>
The non-prefetchable MMIO aperture of a bridge can never fall outside of
the 32-bit address space. Namely, the MemoryBase and MemoryLimit fields in
PCI_BRIDGE_CONTROL_REGISTER have type UINT16, and based on the PCI-to-PCI
Bridge Architecture Spec, Chapter 3.2, the actual MMIO aperture is
determined as in:
NonPrefetchMemoryBase = (((MemoryBase & 0xFFF0u) >> 4) << 20) | 0x00000
NonPrefetchMemoryLimit = (((MemoryLimit & 0xFFF0u) >> 4) << 20) | 0xFFFFF
In "OvmfPkg/PciHotPlugInitDxe", the
"mPadding.MmioPadding.AddrSpaceGranularity" field is currently initialized
to 64. According to the above, this is useless generality: a
non-prefetchable MMIO reservation may only be satisfied from 32-bit
address space. Update the field to 32.
In practice this change makes no difference, because PciBusDxe already
enforces the 32-bit limitation when it sees "non-prefetchable" from
(SpecificFlag==0). Quoting commit 8aba40b792 ("OvmfPkg: add
PciHotPlugInitDxe", 2016-06-30): "regardless of our request for 64-bit
MMIO reservation, it is downgraded to 32-bit".
(See the Platform Init Spec 1.6, Volume 5,
- Table 8. "ACPI 2.0 & 3.0 QWORD Address Space Descriptor Usage", and
- Table 11. "Memory Resource Flag (Resource Type = 0) Usage",
for an explanation of the "mPadding.MmioPadding" fields.)
Cc: Jordan Justen <jordan.l.justen@intel.com>
Cc: Marcel Apfelbaum <marcel@redhat.com>
Cc: Ruiyu Ni <ruiyu.ni@intel.com>
Fixes: 8aba40b792
Contributed-under: TianoCore Contribution Agreement 1.1
Signed-off-by: Laszlo Ersek <lersek@redhat.com>
Reviewed-by: Jordan Justen <jordan.l.justen@intel.com>
The driver always produces an instance of the
EFI_PCI_HOT_PLUG_INIT_PROTOCOL. The "SOMETIMES_PRODUCES" remark is an
oversight from the original v1->v2 patch update; v2 should have stated
"ALWAYS_PRODUCES":
http://mid.mail-archive.com/1468242274-12686-5-git-send-email-lersek@redhat.com
> Notes:
> v2:
> - drop the PcdPciBusHotplugDeviceSupport check, and the PcdLib
> dependency with it [Jordan]
Cc: Jordan Justen <jordan.l.justen@intel.com>
Cc: Marcel Apfelbaum <marcel@redhat.com>
Cc: Ruiyu Ni <ruiyu.ni@intel.com>
Fixes: 8aba40b792
Contributed-under: TianoCore Contribution Agreement 1.1
Signed-off-by: Laszlo Ersek <lersek@redhat.com>
Reviewed-by: Jordan Justen <jordan.l.justen@intel.com>
QEMU has recently gained the ability to provide various hints about its
PCI bridges. The hints take the form of vendor-specific PCI capabilities.
Define macros and types under "OvmfPkg/Include/IndustryStandard" to
describe these capabilities.
The definitions correspond to "docs/pcie_pci_bridge.txt" in the QEMU tree.
Said documentation was added in the last commit of the following series:
a35fe226558a hw/pci: introduce pcie-pci-bridge device
70e1ee59bb94 hw/pci: introduce bridge-only vendor-specific capability to
provide some hints to firmware
226263fb5cda hw/pci: add QEMU-specific PCI capability to the Generic PCI
Express Root Port
c1800a162765 docs: update documentation considering PCIE-PCI bridge
We are going to parse the Resource Reservation Capability in
OvmfPkg/PciHotPlugInitDxe, and return the reservation requests to
PciBusDxe.
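For orientation only, the Resource Reservation capability can be pictured
roughly as follows (field names mirror the translation rules listed
elsewhere in this log; the authoritative layout is in the QEMU document
above and in the header added under OvmfPkg/Include/IndustryStandard):

  #pragma pack (1)
  typedef struct {
    //
    // Standard vendor-specific capability header, followed by a
    // QEMU-defined type byte that identifies the Resource Reservation
    // capability.
    //
    EFI_PCI_CAPABILITY_VENDOR_HDR VendorHdr;
    UINT8                         Type;
    UINT8                         Reserved;
    //
    // Reservation hints: 0 means "no reservation", all-bits-one means
    // "firmware default".
    //
    UINT32                        BusNumbers;
    UINT64                        Io;
    UINT32                        NonPrefetchable32BitMmio;
    UINT32                        Prefetchable32BitMmio;
    UINT64                        Prefetchable64BitMmio;
  } QEMU_PCI_BRIDGE_CAPABILITY_RESOURCE_RESERVATION;
  #pragma pack ()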
Cc: Jordan Justen <jordan.l.justen@intel.com>
Cc: Marcel Apfelbaum <marcel@redhat.com>
Cc: Ruiyu Ni <ruiyu.ni@intel.com>
Contributed-under: TianoCore Contribution Agreement 1.1
Signed-off-by: Laszlo Ersek <lersek@redhat.com>
Reviewed-by: Jordan Justen <jordan.l.justen@intel.com>
The feature is primarily useful for modern AARCH64 guests that have no
built-in virtio block / SCSI drivers; as on "qemu-system-aarch64 -M virt",
there are no IDE or AHCI controllers that could be used as fallback. XHCI
is available in "-M virt" however, and because XHCI predates AARCH64 by
several years, said guests are expected to have built-in drivers for it.
Other device models ("usb-uas", "usb-bot") are out of scope for now,
similarly to USB1.x (UHCI) and USB2 (EHCI) host controllers, and similarly
to USB hubs (which are USB1.1 only). In particular, port mapping between
EHCI and companion UHCI controllers is very complex; it even leads to PCI
slot/function differences between the OpenFirmware device paths exported
by QEMU and the UEFI device paths generated by edk2.
The number of ports on the XHCI controller defaults to 4, but it can be
raised via the "p3" property to 15. In addition, several XHCI controllers
can be grouped into a single-slot, multi-function PCI device. These allow
for a good number of usb-storage devices, while their desired boot order
remains recognizable to this patch.
In the example below, we create two XHCI controllers, grouped into PCI
slot 00:02 as functions 0 and 1. Both controllers are given 15 ports. We
attach a "usb-storage" device to controller 1 at port 3 (ports are 1-based
in QEMU, 0-based in edk2), and attach another "usb-storage" device to
controller 2 at port 9.
QEMU command line options (NB. they apply equally to aarch64/virt and
x86_64/{i440fx,q35}):
-device qemu-xhci,id=xhci1,p3=15,addr=02.0,multifunction=on \
-device qemu-xhci,id=xhci2,p3=15,addr=02.1 \
\
-drive id=disk1,if=none,format=qcow2,$DISK1_OPTIONS \
-drive id=disk2,if=none,format=qcow2,$DISK2_OPTIONS \
\
-device usb-storage,drive=disk1,bus=xhci1.0,port=3,bootindex=1 \
-device usb-storage,drive=disk2,bus=xhci2.0,port=9,bootindex=2 \
Libvirt domain XML fragment:
<controller type='usb' index='1' model='qemu-xhci' ports='15'>
  <address type='pci'
           domain='0x0000' bus='0x00' slot='0x02' function='0x0'
           multifunction='on'/>
</controller>
<controller type='usb' index='2' model='qemu-xhci' ports='15'>
  <address type='pci'
           domain='0x0000' bus='0x00' slot='0x02' function='0x1'/>
</controller>
<disk type='file' device='disk'>
  <driver name='qemu' type='qcow2'/>
  <source file='...'/>
  <target dev='sda' bus='usb'/>
  <boot order='1'/>
  <address type='usb' bus='1' port='3'/>
</disk>
<disk type='file' device='disk'>
  <driver name='qemu' type='qcow2'/>
  <source file='...'/>
  <target dev='sdb' bus='usb'/>
  <boot order='2'/>
  <address type='usb' bus='2' port='9'/>
</disk>
Cc: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Cc: Gerd Hoffmann <kraxel@redhat.com>
Cc: Jordan Justen <jordan.l.justen@intel.com>
Contributed-under: TianoCore Contribution Agreement 1.1
Signed-off-by: Laszlo Ersek <lersek@redhat.com>
Acked-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
In commit db27e9f3d8 ("OvmfPkg/LegacyRegion: Support legacy region
manipulation of Q35", 2016-03-15), Ray extended the
OvmfPkg/Csm/CsmSupportLib PAM register manipulation to Q35. However, we
missed that the same should be done to the QemuVideoDxe VBE Shim as well.
The omission has caused no problems in practice on Q35, because QEMU has
let us write to the ROM area, regardless of the PAM1 setting, all this
time. This has now changed with recent QEMU commit 208fa0e43645 ("pc: make
'pc.rom' readonly when machine has PCI enabled", 2017-07-28). The QEMU
commit exposes the OVMF bug when Windows 7 is started on Q35, using QEMU
2.10 -- the VBE Shim is no longer put in place and Windows 7 cannot find
it.
To remedy this, assign the "Pam1Address" local variable a PciLib address
that matches the board type (i440fx vs. q35).
Regarding the PcdLib dependency: QemuVideoDxe already uses PcdLib, both
directly (see "PcdDriverSupportedEfiVersion") and indirectly (e.g. via the
DxePciLibI440FxQ35 PciLib instance). Add PcdLib to [LibraryClasses] for
completeness.
Cc: Aleksei Kovura <alex3kov@zoho.com>
Cc: Gerd Hoffmann <kraxel@redhat.com>
Cc: Igor Mammedov <imammedo@redhat.com>
Cc: Jordan Justen <jordan.l.justen@intel.com>
Cc: Ruiyu Ni <ruiyu.ni@intel.com>
Ref: https://bugs.launchpad.net/qemu/+bug/1715700
Reported-by: Aleksei Kovura <alex3kov@zoho.com>
Special-thanks-to: Gerd Hoffmann <kraxel@redhat.com>
Contributed-under: TianoCore Contribution Agreement 1.1
Signed-off-by: Laszlo Ersek <lersek@redhat.com>
Tested-by: Aleksei Kovura <alex3kov@zoho.com>
Reviewed-by: Jordan Justen <jordan.l.justen@intel.com>
This clarifies the purpose of the local variable in InstallVbeShim().
Cc: Aleksei Kovura <alex3kov@zoho.com>
Cc: Gerd Hoffmann <kraxel@redhat.com>
Cc: Igor Mammedov <imammedo@redhat.com>
Cc: Jordan Justen <jordan.l.justen@intel.com>
Cc: Ruiyu Ni <ruiyu.ni@intel.com>
Ref: https://bugs.launchpad.net/qemu/+bug/1715700
Contributed-under: TianoCore Contribution Agreement 1.1
Signed-off-by: Laszlo Ersek <lersek@redhat.com>
Tested-by: Aleksei Kovura <alex3kov@zoho.com>
Reviewed-by: Jordan Justen <jordan.l.justen@intel.com>
* Introduce the PIIX4_PAM* and MCH_PAM* macros under
"OvmfPkg/Include/IndustryStandard". These macros capture the PAM
register offsets (in PCI config space) on the respective Memory
Controller B/D/F, from the respective data sheets.
* Under IndustryStandard, introduce the PMC_REGISTER_PIIX4() macro for
PIIX4. (For Q35, we already have DRAMC_REGISTER_Q35().) In both cases,
the B/D/F is 0/0/0.
* Under CsmSupportLib, replace the "PAMRegOffset" field (UINT8) in the
PAM_REGISTER_VALUE structure with "PAMRegPciLibAddress" (UINTN). The new
field contains the return value of the PCI_LIB_ADDRESS() macro.
* Under CsmSupportLib, replace the "mRegisterValues440" elements as
follows:
REG_PAMx_OFFSET_440, ReadEnableData, WriteEnableData
-->
PMC_REGISTER_PIIX4 (PIIX4_PAMx), ReadEnableData, WriteEnableData
* Under CsmSupportLib, replace the "mRegisterValuesQ35" elements as
follows:
REG_PAMx_OFFSET_Q35, ReadEnableData, WriteEnableData
-->
DRAMC_REGISTER_Q35 (MCH_PAMx), ReadEnableData, WriteEnableData
* Under CsmSupportLib, update the register address calculations as follows
(for all of PciOr8(), PciAnd8() and PciRead8()):
PCI_LIB_ADDRESS (
PAM_PCI_BUS,
PAM_PCI_DEV,
PAM_PCI_FUNC,
mRegisterValues[Index].PAMRegOffset
)
-->
mRegisterValues[Index].PAMRegPciLibAddress
* Under CsmSupportLib, remove the PAM_PCI_* and REG_PAM*_OFFSET_* macros.
Technically speaking, these changes could be split into three patches
(IndustryStandard macro additions, CsmSupportLib code updates,
CsmSupportLib macro removals). However, the patch is not big, and in this
case it is actually helpful to present the code movement / refactoring in
one step, for easier verification.
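For reference, the new PIIX4 macro simply bakes the 0/0/0 B/D/F into a
PciLib address, in the same spirit as the existing DRAMC_REGISTER_Q35()
macro (sketch):

  //
  // The PIIX4 Power Management Controller and the Q35 DRAM Controller both
  // live at PCI bus 0, device 0, function 0.
  //
  #define PMC_REGISTER_PIIX4(Offset)  PCI_LIB_ADDRESS (0, 0, 0, (Offset))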
Cc: Aleksei Kovura <alex3kov@zoho.com>
Cc: Gerd Hoffmann <kraxel@redhat.com>
Cc: Igor Mammedov <imammedo@redhat.com>
Cc: Jordan Justen <jordan.l.justen@intel.com>
Cc: Ruiyu Ni <ruiyu.ni@intel.com>
Ref: https://bugs.launchpad.net/qemu/+bug/1715700
Contributed-under: TianoCore Contribution Agreement 1.1
Signed-off-by: Laszlo Ersek <lersek@redhat.com>
Tested-by: Aleksei Kovura <alex3kov@zoho.com>
Reviewed-by: Jordan Justen <jordan.l.justen@intel.com>
The VirtioNetDxe driver has been updated to use the IOMMU-like member
functions of VIRTIO_DEVICE_PROTOCOL to translate system physical addresses
to device addresses. We do not need to do anything special when the
VIRTIO_F_IOMMU_PLATFORM bit is present, hence we treat it in parallel with
VIRTIO_F_VERSION_1.
Cc: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Cc: Jordan Justen <jordan.l.justen@intel.com>
Cc: Tom Lendacky <thomas.lendacky@amd.com>
Cc: Laszlo Ersek <lersek@redhat.com>
Contributed-under: TianoCore Contribution Agreement 1.1
Signed-off-by: Brijesh Singh <brijesh.singh@amd.com>
Reviewed-by: Laszlo Ersek <lersek@redhat.com>
Regression-tested-by: Laszlo Ersek <lersek@redhat.com>
Tested-by: Laszlo Ersek <lersek@redhat.com>
When the device is behind an IOMMU, the driver is required to pass the
device address of the caller-supplied transmit buffer for the bus master
operations.
The patch uses VirtioNetMapTxBuf() to map the caller-supplied Tx packet to
a device address, enqueues the device address in the VRING for transfer,
and performs the reverse mapping when the transfer completes, so that we
can return the caller-supplied buffer.
Cc: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Cc: Jordan Justen <jordan.l.justen@intel.com>
Cc: Tom Lendacky <thomas.lendacky@amd.com>
Cc: Laszlo Ersek <lersek@redhat.com>
Contributed-under: TianoCore Contribution Agreement 1.1
Signed-off-by: Brijesh Singh <brijesh.singh@amd.com>
Reviewed-by: Laszlo Ersek <lersek@redhat.com>
Regression-tested-by: Laszlo Ersek <lersek@redhat.com>
Tested-by: Laszlo Ersek <lersek@redhat.com>
When the device is behind an IOMMU, the driver is required to pass the
device address of TxBuf in the Tx VRING. The patch adds helper functions
and a data structure to map and unmap the TxBuf system physical address to
a device address.
Since TxBuf is returned to the caller from VirtioNetGetStatus(), we use the
OrderedCollection interface to save the TxBuf system physical to device
address mapping. After the TxBuf is successfully transmitted,
VirtioNetUnmapTxBuf() does the reverse lookup in the OrderedCollection data
structure to get the system physical address of TxBuf for a given device
address.
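A rough sketch of the reverse lookup idea (the type, field, and local
variable names are illustrative): each successful VirtioNetMapTxBuf()
stores a small record in the ordered collection keyed by the device
address, and VirtioNetUnmapTxBuf() finds it again by that key.

  typedef struct {
    EFI_PHYSICAL_ADDRESS DeviceAddress; // key: address placed in the ring
    VOID                 *Buffer;       // caller-supplied system address
    VOID                 *Mapping;      // token returned by the Map() call
  } TX_BUF_MAP_INFO;

  //
  // On completion (VirtioNetGetStatus() path): recover the caller's buffer
  // for the device address read back from the used ring.
  //
  Entry = OrderedCollectionFind (Dev->TxBufCollection, &DeviceAddress);
  ASSERT (Entry != NULL);
  MapInfo = OrderedCollectionUserStruct (Entry);

  Status = Dev->VirtIo->UnmapSharedBuffer (Dev->VirtIo, MapInfo->Mapping);
  *Buffer = MapInfo->Buffer;

  OrderedCollectionDelete (Dev->TxBufCollection, Entry, NULL);
  FreePool (MapInfo);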
Cc: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Cc: Jordan Justen <jordan.l.justen@intel.com>
Cc: Tom Lendacky <thomas.lendacky@amd.com>
Cc: Laszlo Ersek <lersek@redhat.com>
Contributed-under: TianoCore Contribution Agreement 1.1
Signed-off-by: Brijesh Singh <brijesh.singh@amd.com>
Reviewed-by: Laszlo Ersek <lersek@redhat.com>
Regression-tested-by: Laszlo Ersek <lersek@redhat.com>
Tested-by: Laszlo Ersek <lersek@redhat.com>
In the next patches we will update the Virtio transmit path to use the
device-mapped address of the caller-supplied packet. This patch documents
the new model.
Cc: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Cc: Jordan Justen <jordan.l.justen@intel.com>
Cc: Tom Lendacky <thomas.lendacky@amd.com>
Cc: Laszlo Ersek <lersek@redhat.com>
Contributed-under: TianoCore Contribution Agreement 1.1
Signed-off-by: Brijesh Singh <brijesh.singh@amd.com>
Reviewed-by: Laszlo Ersek <lersek@redhat.com>
Regression-tested-by: Laszlo Ersek <lersek@redhat.com>
Tested-by: Laszlo Ersek <lersek@redhat.com>
Each network packet is submitted for transmission by pushing the head
descriptor of a two-part descriptor chain to the Available Ring of the
TX queue. VirtioNetInitTx() sets up the descriptor chains for all
queueable packets in advance, and points all the head descriptors to the
same shared, never modified, VIRTIO_1_0_NET_REQ header object (or its
initial VIRTIO_NET_REQ sub-object, dependent on virtio version).
VirtioNetInitTx() currently uses the header object's system physical
address for populating the head descriptors.
When the device is behind an IOMMU, the VirtioNet driver is required to
provide the device address of the VIRTIO_1_0_NET_REQ header. In this patch
we dynamically allocate the header using AllocateSharedPages() and map it
with BusMasterCommonBuffer so that the header can be accessed by both the
processor and the device.
We map the header object for CommonBuffer operation because, in order to
stick with the current code order, we populate the head descriptors with
the header's device address first, and fill in the header itself second.
Cc: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Cc: Jordan Justen <jordan.l.justen@intel.com>
Cc: Tom Lendacky <thomas.lendacky@amd.com>
Cc: Laszlo Ersek <lersek@redhat.com>
Contributed-under: TianoCore Contribution Agreement 1.1
Signed-off-by: Brijesh Singh <brijesh.singh@amd.com>
Reviewed-by: Laszlo Ersek <lersek@redhat.com>
Regression-tested-by: Laszlo Ersek <lersek@redhat.com>
Tested-by: Laszlo Ersek <lersek@redhat.com>
When the device is behind an IOMMU, VirtioNetDxe is required to use device
addresses in bus master operations. RxBuf is allocated using
AllocatePool(), which returns a system physical address.
The patch uses VIRTIO_DEVICE_PROTOCOL.AllocateSharedPages() to allocate
RxBuf and maps it with VirtioMapAllBytesInSharedBuffer() so that we can
obtain the device address of RxBuf.
Cc: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Cc: Jordan Justen <jordan.l.justen@intel.com>
Cc: Tom Lendacky <thomas.lendacky@amd.com>
Cc: Laszlo Ersek <lersek@redhat.com>
Contributed-under: TianoCore Contribution Agreement 1.1
Signed-off-by: Brijesh Singh <brijesh.singh@amd.com>
Reviewed-by: Laszlo Ersek <lersek@redhat.com>
Regression-tested-by: Laszlo Ersek <lersek@redhat.com>
Tested-by: Laszlo Ersek <lersek@redhat.com>
When the device is behind an IOMMU, the driver needs to pass device
addresses when programming the bus master. The patch uses VirtioRingMap()
to map the VRING system physical address(es) to device address(es).
Cc: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Cc: Jordan Justen <jordan.l.justen@intel.com>
Cc: Tom Lendacky <thomas.lendacky@amd.com>
Cc: Laszlo Ersek <lersek@redhat.com>
Contributed-under: TianoCore Contribution Agreement 1.1
Signed-off-by: Brijesh Singh <brijesh.singh@amd.com>
Reviewed-by: Laszlo Ersek <lersek@redhat.com>
Regression-tested-by: Laszlo Ersek <lersek@redhat.com>
Tested-by: Laszlo Ersek <lersek@redhat.com>
When a UEFI_DRIVER attempts to open a protocol interface with BY_DRIVER
attribute that it already has open with BY_DRIVER attribute,
OpenProtocol() returns EFI_ALREADY_STARTED. This is not an error. The
UEFI-2.7 spec currently says,
> EFI_ALREADY_STARTED -- Attributes is BY_DRIVER and there is an item on
> the open list with an attribute of BY_DRIVER
> whose agent handle is the same as AgentHandle.
(In fact it is so much an expected condition that recent USWG Mantis
ticket <https://mantis.uefi.org/mantis/view.php?id=1815> will codify its
additional edk2-specific behavior, namely to output the protocol interface
at once.)
Downgrade the log mask for this one condition to DEBUG_INFO, in
SataControllerStart(). This will match the log mask of the other two
informative messages in this function, "SataControllerStart START", and
"SataControllerStart END status = %r" (at which point Status can only be
EFI_SUCCESS).
Cc: Jordan Justen <jordan.l.justen@intel.com>
Contributed-under: TianoCore Contribution Agreement 1.1
Signed-off-by: Laszlo Ersek <lersek@redhat.com>
"Boot Mode:%x" is an informative message, not an error report. Set its
debug mask to DEBUG_INFO.
Cc: Jordan Justen <jordan.l.justen@intel.com>
Contributed-under: TianoCore Contribution Agreement 1.1
Signed-off-by: Laszlo Ersek <lersek@redhat.com>
"Platform PEIM Loaded" is an informative message, not an error report. Set
its debug mask to DEBUG_INFO.
Cc: Jordan Justen <jordan.l.justen@intel.com>
Contributed-under: TianoCore Contribution Agreement 1.1
Signed-off-by: Laszlo Ersek <lersek@redhat.com>
Since commit 19c6d9feaa ("MdePkg: Expand BaseIoLibIntrinsic (IoLib
class) library", 2017-01-14), IoWriteFifo8() has been widely available to
modules. Use it to print debug messages and assertion failures to the QEMU
debug port, rather than open-coded loops.
In the general case this speeds up logging, because debug messages will
now trap to QEMU once per message (as opposed to once per character), due
to "REP OUTSB" in "MdePkg/Library/BaseIoLibIntrinsic/*/IoFifoSev.nasm".
In SEV guests, there is no speedup (SEV doesn't support the REP prefix).
SEV is detected internally to BaseIoLibIntrinsic.
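For illustration, this is what emitting a whole message to the QEMU debug
console port (0x402 by default in OVMF) looks like with the FIFO API; the
message and the hard-coded port value here are only examples:

  #include <Library/IoLib.h>

  STATIC CONST CHAR8 mMessage[] = "hello from the DebugLib instance\n";

  //
  // One trap to QEMU for the whole buffer (via "REP OUTSB" on the non-SEV
  // path), instead of one OUT instruction -- and one trap -- per character.
  //
  IoWriteFifo8 (0x402, sizeof mMessage - 1, (VOID *)mMessage);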
Cc: Brijesh Singh <brijesh.singh@amd.com>
Cc: Jordan Justen <jordan.l.justen@intel.com>
Contributed-under: TianoCore Contribution Agreement 1.1
Signed-off-by: Laszlo Ersek <lersek@redhat.com>
Acked-by: Brijesh Singh <brijesh.singh@amd.com>
This patch enables UDF file system support by default.
Cc: Jordan Justen <jordan.l.justen@intel.com>
Cc: Laszlo Ersek <lersek@redhat.com>
Contributed-under: TianoCore Contribution Agreement 1.1
Signed-off-by: Paulo Alcantara <pcacjr@zytor.com>
Reviewed-by: Laszlo Ersek <lersek@redhat.com>
Reviewed-by: Ruiyu Ni <ruiyu.ni@intel.com>
Register an ExitBootServices() callback that tears down all IOMMU
mappings, without modifying the UEFI memory map.
The trick is that in the ExitBootServices() callback, we don't immediately
do the work; instead we signal another (private) event.
Normally the dispatch order of ExitBootServices() callbacks is unspecified
(within the same task priority level anyway). By queueing another
function, we delay the unmapping until after all PciIo and Virtio drivers
abort -- in their own ExitBootServices() callbacks -- the pending DMA
operations of their respective controllers.
Furthermore, the fact that IoMmuUnmapWorker() rewrites client-owned memory
when it unmaps a Write or CommonBuffer bus master operation is safe even
in this context. The existence of any given "MapInfo" in "mMapInfos"
implies that the client buffer pointed-to by "MapInfo->CryptedAddress" was
live when ExitBootServices() was entered. And, after entering
ExitBootServices(), nothing must have changed the UEFI memory map, hence
the client buffer at "MapInfo->CryptedAddress" still exists.
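The two-stage event setup can be sketched as follows (the event and
function names are illustrative, error handling omitted):

  //
  // Signaled when ExitBootServices() is entered; it only queues the real
  // work, so that the unmapping runs after all driver callbacks have
  // aborted their pending DMA.
  //
  STATIC
  VOID
  EFIAPI
  IoMmuExitBoot (
    IN EFI_EVENT Event,
    IN VOID      *Context
    )
  {
    gBS->SignalEvent (mUnmapAllMappingsEvent);
  }

  //
  // Registration, at driver startup: create the "worker" event first, then
  // the ExitBootServices() notification that signals it.
  //
  Status = gBS->CreateEvent (
                  EVT_NOTIFY_SIGNAL,
                  TPL_CALLBACK,
                  IoMmuUnmapAllMappings,   // walks the live mappings list
                  NULL,
                  &mUnmapAllMappingsEvent
                  );
  Status = gBS->CreateEvent (
                  EVT_SIGNAL_EXIT_BOOT_SERVICES,
                  TPL_CALLBACK,
                  IoMmuExitBoot,
                  NULL,
                  &mExitBootEvent
                  );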
Cc: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Cc: Brijesh Singh <brijesh.singh@amd.com>
Cc: Jiewen Yao <jiewen.yao@intel.com>
Cc: Jordan Justen <jordan.l.justen@intel.com>
Contributed-under: TianoCore Contribution Agreement 1.1
Signed-off-by: Laszlo Ersek <lersek@redhat.com>
Reviewed-by: Jiewen Yao <jiewen.yao@intel.com>
Reviewed-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Reviewed-by: Brijesh Singh <brijesh.singh@amd.com>
Tested-by: Brijesh Singh <brijesh.singh@amd.com>
IoMmuUnmapWorker() is identical to IoMmuUnmap(), it just takes an
additional BOOLEAN parameter called "MemoryMapLocked". If the memory map
is locked, IoMmuUnmapWorker() does its usual job, but it purposely leaks
memory rather than freeing it. This makes it callable from
ExitBootServices() context.
Turn IoMmuUnmap() into a thin wrapper around IoMmuUnmapWorker() that
passes constant FALSE for "MemoryMapLocked".
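The wrapper itself then reduces to roughly the following:

  EFI_STATUS
  EFIAPI
  IoMmuUnmap (
    IN EDKII_IOMMU_PROTOCOL *This,
    IN VOID                 *Mapping
    )
  {
    //
    // Outside of ExitBootServices() context the memory map is not locked,
    // so IoMmuUnmapWorker() may free memory as before.
    //
    return IoMmuUnmapWorker (
             This,
             Mapping,
             FALSE    // MemoryMapLocked
             );
  }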
Cc: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Cc: Brijesh Singh <brijesh.singh@amd.com>
Cc: Jiewen Yao <jiewen.yao@intel.com>
Cc: Jordan Justen <jordan.l.justen@intel.com>
Contributed-under: TianoCore Contribution Agreement 1.1
Signed-off-by: Laszlo Ersek <lersek@redhat.com>
Reviewed-by: Jiewen Yao <jiewen.yao@intel.com>
Reviewed-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Reviewed-by: Brijesh Singh <brijesh.singh@amd.com>
Tested-by: Brijesh Singh <brijesh.singh@amd.com>
The "mRecycledMapInfos" list implements an internal pool of unused
MAP_INFO structures between the IoMmuUnmap() and IoMmuMap() functions. The
original goal was to allow IoMmuUnmap() to tear down CommonBuffer mappings
without releasing any memory: IoMmuUnmap() would recycle the MAP_INFO
structure to the list, and IoMmuMap() would always check the list first,
before allocating a brand new MAP_INFO structure.
In one of the following patches, we'll change OvmfPkg/IoMmuDxe so that it
unmaps all existent bus master operations (CommonBuffer, Read, Write) at
ExitBootServices(), strictly after the individual device drivers abort
pending DMA on the devices they manage, in their own ExitBootServices()
notification functions.
For this, rename and repurpose the list to track all live mappings.
This means that IoMmuUnmap() will always release a MAP_INFO structure
(even when cleaning up a CommonBuffer operation). That's fine (for now),
because device drivers are no longer expected to call Unmap() in their
ExitBootServices() notification functions.
In theory, we could also move the allocation and freeing of the stash
buffer from IoMmuAllocateBuffer() and IoMmuFreeBuffer(), respectively, to
IoMmuMap() and IoMmuUnmap(). However, this would require allocating and
freeing a stash buffer in *both* IoMmuMap() and IoMmuUnmap(), as
IoMmuMap() performs in-place decryption for CommonBuffer operations, and
IoMmuUnmap() performs in-place encryption for the same.
By keeping the stash buffer allocation as-is, not only do we keep the code
almost fully undisturbed, but
- we also continue to guarantee that IoMmuUnmap() succeeds: allocating a
stash buffer in IoMmuUnmap(), for in-place encryption after a
CommonBuffer operation, could fail;
- we also keep IoMmuUnmap() largely reusable for ExitBootServices()
callback context: allocating a stash buffer in IoMmuUnmap() would simply
be forbidden in that context.
Cc: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Cc: Brijesh Singh <brijesh.singh@amd.com>
Cc: Jiewen Yao <jiewen.yao@intel.com>
Cc: Jordan Justen <jordan.l.justen@intel.com>
Contributed-under: TianoCore Contribution Agreement 1.1
Signed-off-by: Laszlo Ersek <lersek@redhat.com>
Reviewed-by: Jiewen Yao <jiewen.yao@intel.com>
Reviewed-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Reviewed-by: Brijesh Singh <brijesh.singh@amd.com>
Tested-by: Brijesh Singh <brijesh.singh@amd.com>
In one of the following patches, we'll change OvmfPkg/IoMmuDxe so that it
unmaps all existent bus master operations (CommonBuffer, Read, Write) at
ExitBootServices(), strictly after the individual device drivers abort
pending DMA on the devices they manage, in their own ExitBootServices()
notification functions.
In preparation, remove the explicit
VIRTIO_DEVICE_PROTOCOL.UnmapSharedBuffer() call from VirtioScsiExitBoot(),
originally added in commit fc2168feb2 ("OvmfPkg/VirtioScsiDxe: map VRING
using VirtioRingMap()", 2017-08-31).
Add a DEBUG message so we can observe the ordering between
VirtioScsiExitBoot() and the upcoming cleanup of mappings in
OvmfPkg/IoMmuDxe.
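After the change, the callback boils down to the device reset plus the new
DEBUG message, along these lines (sketch):

  STATIC
  VOID
  EFIAPI
  VirtioScsiExitBoot (
    IN EFI_EVENT Event,
    IN VOID      *Context
    )
  {
    VSCSI_DEV *Dev;

    DEBUG ((DEBUG_VERBOSE, "%a: Context=0x%p\n", __FUNCTION__, Context));
    //
    // Reset the device to abort any in-flight DMA; the VRING mapping itself
    // is now left for OvmfPkg/IoMmuDxe to tear down at ExitBootServices().
    //
    Dev = Context;
    Dev->VirtIo->SetDeviceStatus (Dev->VirtIo, 0);
  }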
Cc: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Cc: Brijesh Singh <brijesh.singh@amd.com>
Cc: Jiewen Yao <jiewen.yao@intel.com>
Cc: Jordan Justen <jordan.l.justen@intel.com>
Contributed-under: TianoCore Contribution Agreement 1.1
Signed-off-by: Laszlo Ersek <lersek@redhat.com>
Reviewed-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Reviewed-by: Brijesh Singh <brijesh.singh@amd.com>
Tested-by: Brijesh Singh <brijesh.singh@amd.com>
In one of the following patches, we'll change OvmfPkg/IoMmuDxe so that it
unmaps all existent bus master operations (CommonBuffer, Read, Write) at
ExitBootServices(), strictly after the individual device drivers abort
pending DMA on the devices they manage, in their own ExitBootServices()
notification functions.
In preparation, remove the explicit
VIRTIO_DEVICE_PROTOCOL.UnmapSharedBuffer() call from VirtioRngExitBoot(),
originally added in commit 0a568ccbcb ("OvmfPkg/VirtioRngDxe: map host
address to device address", 2017-08-23).
Add a DEBUG message so we can observe the ordering between
VirtioRngExitBoot() and the upcoming cleanup of mappings in
OvmfPkg/IoMmuDxe.
Cc: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Cc: Brijesh Singh <brijesh.singh@amd.com>
Cc: Jiewen Yao <jiewen.yao@intel.com>
Cc: Jordan Justen <jordan.l.justen@intel.com>
Contributed-under: TianoCore Contribution Agreement 1.1
Signed-off-by: Laszlo Ersek <lersek@redhat.com>
Reviewed-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Reviewed-by: Brijesh Singh <brijesh.singh@amd.com>
Tested-by: Brijesh Singh <brijesh.singh@amd.com>
In one of the following patches, we'll change OvmfPkg/IoMmuDxe so that it
unmaps all existent bus master operations (CommonBuffer, Read, Write) at
ExitBootServices(), strictly after the individual device drivers abort
pending DMA on the devices they manage, in their own ExitBootServices()
notification functions.
In preparation, remove the explicit
VIRTIO_DEVICE_PROTOCOL.UnmapSharedBuffer() calls from VirtioGpuExitBoot(),
originally added in commit 9bc5026c19 ("OvmfPkg/VirtioGpuDxe: map VRING
for bus master common buffer operation", 2017-08-26) and commit
f10ae92366 ("OvmfPkg/VirtioGpuDxe: map backing store to bus master
device address", 2017-08-26).
Add a DEBUG message so we can observe the ordering between
VirtioGpuExitBoot() and the upcoming cleanup of mappings in
OvmfPkg/IoMmuDxe.
Cc: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Cc: Brijesh Singh <brijesh.singh@amd.com>
Cc: Jiewen Yao <jiewen.yao@intel.com>
Cc: Jordan Justen <jordan.l.justen@intel.com>
Contributed-under: TianoCore Contribution Agreement 1.1
Signed-off-by: Laszlo Ersek <lersek@redhat.com>
Reviewed-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Reviewed-by: Brijesh Singh <brijesh.singh@amd.com>
Tested-by: Brijesh Singh <brijesh.singh@amd.com>
In one of the following patches, we'll change OvmfPkg/IoMmuDxe so that it
unmaps all existent bus master operations (CommonBuffer, Read, Write) at
ExitBootServices(), strictly after the individual device drivers abort
pending DMA on the devices they manage, in their own ExitBootServices()
notification functions.
In preparation, remove the explicit
VIRTIO_DEVICE_PROTOCOL.UnmapSharedBuffer() call from VirtioBlkExitBoot(),
originally added in commit 1916513047 ("OvmfPkg/VirtioBlkDxe: map VRING
using VirtioRingMap()", 2017-08-27).
Add a DEBUG message so we can observe the ordering between
VirtioBlkExitBoot() and the upcoming cleanup of mappings in
OvmfPkg/IoMmuDxe.
Cc: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Cc: Brijesh Singh <brijesh.singh@amd.com>
Cc: Jiewen Yao <jiewen.yao@intel.com>
Cc: Jordan Justen <jordan.l.justen@intel.com>
Contributed-under: TianoCore Contribution Agreement 1.1
Signed-off-by: Laszlo Ersek <lersek@redhat.com>
Reviewed-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Reviewed-by: Brijesh Singh <brijesh.singh@amd.com>
Tested-by: Brijesh Singh <brijesh.singh@amd.com>
In the early PEI stage, temporary memory at PcdOvmfSecPeiTempRamBase is
employed as stack and heap. We move them to their new location and do some
relocation fixups when permanent memory becomes available.
TemporaryRamMigration() is responsible for switching the stack.
Before entering TemporaryRamMigration(), Ebp/Rbp is populated with the
content of Esp/Rsp and used as the frame pointer.
After the execution of SetJump/LongJump, the stack migrates to its new
position while the context remains unchanged.
But when TemporaryRamMigration() exits, Esp/Rsp is filled with the content
of Ebp/Rbp to destroy this stack frame. The result is that the stack
switches back to the previous temporary memory.
When permanent memory becomes available, modules that have registered
themselves for shadowing will be scheduled to execute. Some of them need
to consume more memory (heap/stack). In contrast to the temporary stack,
the permanent stack offers much more space.
The potential risk is overflowing the stack if it keeps pointing into
temporary memory. When that happens, the system may crash during S3
resume.
More detailed information:
> (gdb) disassemble /r
> Dump of assembler code for function TemporaryRamMigration:
> 0x00000000fffcd29c <+0>: 55 push %rbp
> 0x00000000fffcd29d <+1>: 48 89 e5 mov %rsp,%rbp
> 0x00000000fffcd2a0 <+4>: 48 81 ec 70 01 00 00 sub $0x170,%rsp
> ...
> ...
> 0x00000000fffcd425 <+393>: e8 80 10 00 00 callq 0xfffce4aa <SaveAndSetDebugTimerInterrupt>
> => 0x00000000fffcd42a <+398>: b8 00 00 00 00 mov $0x0,%eax
> 0x00000000fffcd42f <+403>: c9 leaveq
> 0x00000000fffcd430 <+404>: c3 retq
> End of assembler dump.
See the description of leave (opcode: c9) in the
Intel 64 and IA-32 Architectures Software Developer's Manual, Volume 2A:
"Releases the stack frame set up by an earlier ENTER instruction. The
LEAVE instruction copies the frame pointer (in the EBP register) into
the stack pointer register (ESP), which releases the stack space
allocated to the stack frame. The old frame pointer (the frame pointer
for the calling procedure that was saved by the ENTER instruction) is
then popped from the stack into the EBP register, restoring the calling
procedure’s stack frame."
To solve this, update Ebp/Rbp too when Esp/Rsp is updated.
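Expressed against the jump buffer used around the stack switch, the fix
amounts to moving the saved frame pointer together with the saved stack
pointer, roughly as below (X64 field names shown, the IA32 build adjusts
Esp/Ebp the same way; "StackMigrateOffset" stands for the distance between
the old and the new stack location):

  if (SetJump (&JumpBuffer) == 0) {
    //
    // Shift both the stack pointer and the frame pointer into permanent
    // memory, so that the trailing "leave" in TemporaryRamMigration()
    // restores a stack pointer that also lies in permanent memory.
    //
    JumpBuffer.Rsp += StackMigrateOffset;
    JumpBuffer.Rbp += StackMigrateOffset;
    LongJump (&JumpBuffer, (UINTN)-1);
  }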
Cc: Jordan Justen <jordan.l.justen@intel.com>
Cc: Laszlo Ersek <lersek@redhat.com>
Cc: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Contributed-under: TianoCore Contribution Agreement 1.1
Signed-off-by: Ge Song <ge.song@hxt-semitech.com>
Tested-by: Laszlo Ersek <lersek@redhat.com>
Reviewed-by: Laszlo Ersek <lersek@redhat.com>
Reviewed-by: Jordan Justen <jordan.l.justen@intel.com>
VirtioGpuDxe is now IOMMU-clean; it translates system memory addresses to
bus master device addresses. Negotiate VIRTIO_F_IOMMU_PLATFORM in parallel
with VIRTIO_F_VERSION_1. (Note: the VirtIo GPU device, and this driver,
are virtio-1.0 only (a.k.a. "modern-only").)
Cc: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Cc: Brijesh Singh <brijesh.singh@amd.com>
Cc: Jordan Justen <jordan.l.justen@intel.com>
Cc: Tom Lendacky <thomas.lendacky@amd.com>
Contributed-under: TianoCore Contribution Agreement 1.1
Signed-off-by: Laszlo Ersek <lersek@redhat.com>
Tested-by: Brijesh Singh <brijesh.singh@amd.com>
VirtioGpuDxe is a UEFI Bus driver (not a Device driver). This is because a
UEFI graphics driver is expected to produce its GraphicsOutput protocol
instance(s) on new child handle(s) of the video controller handle, one
child handle (plus GOP) per video output (or, one child handle plus GOP
per combination of multiple video outputs).
In VirtioGpuDxe, we support a single VirtIo GPU head (scanout), namely
head#0. This means that, with regard to a specific VirtIo GPU device, the
driver may be in one of three states, at any time:
[1] VirtioGpuDxe has not bound the device at all,
[2] VirtioGpuDxe has bound the device, but not produced the sole child
handle for head#0,
[3] VirtioGpuDxe has bound the device, and produced the sole child handle
for head#0, with a GOP instance on the child handle.
(Which state the driver is in wrt. a given VirtIo GPU device depends on
the VirtioGpuDriverBindingStart() / VirtioGpuDriverBindingStop()
invocations issued by the ConnectController() / DisconnectController()
boot services. In turn those come from BDS or e.g. the UEFI shell.)
The concept of "current video mode" is technically tied to the GOP (i.e.,
the child handle, state [3] only), not the VirtIo GPU controller handle.
This is why we manage the storage that backs the current video mode in our
EFI_GRAPHICS_OUTPUT_PROTOCOL.SetMode() member implementation.
GopSetMode() is first called *internally*, when we enter state [3] (that
is, when we produce the child handle + GOP on it):
VirtioGpuDriverBindingStart() [DriverBinding.c]
InitVgpuGop() [DriverBinding.c]
VgpuGop->Gop.SetMode() [Gop.c]
When this happens, we allocate the backing store *without* having a
preexistent backing store (due to no preexistent video mode and GOP).
Skipping VirtIo GPU details not relevant for this patch, we just note that
the backing store is exposed *permanently* to the VirtIo GPU device, with
the RESOURCE_ATTACH_BACKING command.
When external clients call the EFI_GRAPHICS_OUTPUT_PROTOCOL.Blt() member
function -- called GopBlt() in this driver --, in state [3], the function
operates on the backing store, and sends only small messages to the VirtIo
GPU device.
When external clients call GopSetMode() for switching between video modes
-- in state [3] --, then
- a new backing store is allocated and exposed to the device (attached to
a new host-side VirtIo GPU resource),
- head#0 is flipped to the new backing store,
- on success, the ReleaseGopResources() function both detaches the
previous backing store from the VirtIo GPU device, and releases it. The
new backing store address and size are saved in our GOP object. (In
other words, we "commit" to the new video mode.)
When the DisconnectController() boot service asks us to leave state [3] --
we can leave it directly only for state [2] --, then the
ReleaseGopResources() function is called on a different path:
VirtioGpuDriverBindingStop() [DriverBinding.c]
UninitVgpuGop() [DriverBinding.c]
ReleaseGopResources() [Gop.c]
In this case, the backing store being released is still in use (we're not
leaving it for a new mode -- head#0 has not been flipped "away" from it),
so in ReleaseGopResources() we disable head#0 first.
(The ReleaseGopResources() function is called the same way on the error
path in InitVgpuGop(), if the first -- internal -- VgpuGop->Gop.SetMode()
call succeeds, but the rest of InitVgpuGop() fails.)
Based on the above, for IOMMU-compatibility,
- in GopSetMode(), don't just allocate, but also map the backing store of
the nascent video mode to a device address, for bus master common buffer
operation,
- (the VirtioGpuAllocateZeroAndMapBackingStore() helper function
introduced in the last patch takes care of zeroing internally,)
- pass the device address to the VirtIo GPU device in the
RESOURCE_ATTACH_BACKING command,
- if GopSetMode() succeeds, save the mapping token,
- if GopSetMode() fails, don't just free but also unmap the still-born
backing store,
- in ReleaseGopResources(), don't just free but also unmap the backing
store -- which is the previous backing store if we're mode-switching,
and the current backing store if we're leaving state [3].
Finally, ExitBootServices() may be called when the driver is in either
state [1], [2] or [3], wrt. a given VirtIo GPU device. (Of course we are
only notified in states [2] and [3].) If we get the notification in state
[3], then the current video mode's backing store has to be unmapped, but
not released. (We must not change the UEFI memory map.)
Cc: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Cc: Brijesh Singh <brijesh.singh@amd.com>
Cc: Jordan Justen <jordan.l.justen@intel.com>
Cc: Tom Lendacky <thomas.lendacky@amd.com>
Contributed-under: TianoCore Contribution Agreement 1.1
Signed-off-by: Laszlo Ersek <lersek@redhat.com>
Tested-by: Brijesh Singh <brijesh.singh@amd.com>
Introduce the VirtioGpuAllocateZeroAndMapBackingStore() and
VirtioGpuUnmapAndFreeBackingStore() helper functions. These functions tie
together the allocation, zeroing and mapping, and unmapping and
deallocation, respectively, of memory that the virtio GPU will permanently
reference after receiving the RESOURCE_ATTACH_BACKING command.
With these functions we can keep the next patch simpler -- the GOP
implementation in "Gop.c" retains its error handling structure, and
remains oblivious to VIRTIO_DEVICE_PROTOCOL and VirtioLib.
Cc: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Cc: Brijesh Singh <brijesh.singh@amd.com>
Cc: Jordan Justen <jordan.l.justen@intel.com>
Cc: Tom Lendacky <thomas.lendacky@amd.com>
Contributed-under: TianoCore Contribution Agreement 1.1
Signed-off-by: Laszlo Ersek <lersek@redhat.com>
Tested-by: Brijesh Singh <brijesh.singh@amd.com>
The RESOURCE_ATTACH_BACKING virtio GPU command assigns guest-side backing
pages to a host-side resource that was created earlier with the
RESOURCE_CREATE_2D command.
We compose the RESOURCE_ATTACH_BACKING command in the
VirtioGpuResourceAttachBacking() function. Currently this function takes
the parameter
IN VOID *FirstBackingPage
This is only appropriate as long as we pass a (guest-phys) system memory
address to the device. In preparation for a mapped bus master device
address, change the above parameter to
IN EFI_PHYSICAL_ADDRESS BackingStoreDeviceAddress
In order to keep the current call site functional, move the (VOID*) to
(UINTN) conversion out of the function, to the call site.
The "Request.Entry.Addr" field already has type UINT64.
This patch is similar to commit 4b725858de ("OvmfPkg/VirtioLib: change
the parameter of VirtioAppendDesc() to UINT64", 2017-08-23).
Cc: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Cc: Brijesh Singh <brijesh.singh@amd.com>
Cc: Jordan Justen <jordan.l.justen@intel.com>
Cc: Tom Lendacky <thomas.lendacky@amd.com>
Contributed-under: TianoCore Contribution Agreement 1.1
Signed-off-by: Laszlo Ersek <lersek@redhat.com>
Tested-by: Brijesh Singh <brijesh.singh@amd.com>
Every virtio GPU command used by VirtioGpuDxe is synchronous and formatted
as a two-descriptor chain: request, response. The internal workhorse
function that all the command-specific functions call for such messaging
is VirtioGpuSendCommand().
In VirtioGpuSendCommand(), map the request from system memory to bus
master device address for BusMasterRead operation, and map the response
from system memory to bus master device address for BusMasterWrite
operation.
Pass the bus master device addresses to VirtioAppendDesc(). (See also
commit 4b725858de, "OvmfPkg/VirtioLib: change the parameter of
VirtioAppendDesc() to UINT64", 2017-08-23.)
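Schematically (an abridged fragment with illustrative local names, error
paths omitted), the two one-shot mappings and the descriptor chain look
like this:

  //
  // Map the request for the device to read, and the response for the device
  // to write, then put the resulting bus master device addresses into the
  // two-descriptor chain.
  //
  Status = VirtioMapAllBytesInSharedBuffer (
             VgpuDev->VirtIo,
             VirtioOperationBusMasterRead,
             Header,                      // request, at a system memory address
             RequestSize,
             &RequestDeviceAddress,
             &RequestMap
             );
  Status = VirtioMapAllBytesInSharedBuffer (
             VgpuDev->VirtIo,
             VirtioOperationBusMasterWrite,
             &Response,
             sizeof Response,
             &ResponseDeviceAddress,
             &ResponseMap
             );

  VirtioPrepare (&VgpuDev->Ring, &Indices);
  VirtioAppendDesc (&VgpuDev->Ring, RequestDeviceAddress,
    (UINT32)RequestSize, VRING_DESC_F_NEXT, &Indices);
  VirtioAppendDesc (&VgpuDev->Ring, ResponseDeviceAddress,
    (UINT32)sizeof Response, VRING_DESC_F_WRITE, &Indices);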
Cc: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Cc: Brijesh Singh <brijesh.singh@amd.com>
Cc: Jordan Justen <jordan.l.justen@intel.com>
Cc: Tom Lendacky <thomas.lendacky@amd.com>
Contributed-under: TianoCore Contribution Agreement 1.1
Signed-off-by: Laszlo Ersek <lersek@redhat.com>
Tested-by: Brijesh Singh <brijesh.singh@amd.com>
VirtioGpuDxe uses one virtio ring, for VIRTIO_GPU_CONTROL_QUEUE.
Map it for bus master common buffer operation with VirtioRingMap(), so
that it can be accessed equally by both guest and hypervisor even if an
IOMMU is used. (VirtioRingInit() already allocates the ring suitably for
this, see commit b0338c5329, "OvmfPkg/VirtioLib: alloc VRING buffer with
AllocateSharedPages()", 2017-08-23).
Pass the resultant translation offset ("RingBaseShift"), from system
memory address to bus master device address, to VIRTIO_SET_QUEUE_ADDRESS.
Unmap the ring in all contexts where the ring becomes unused (these
contexts are mutually exclusive):
- in VirtioGpuInit(): the ring has been mapped, but we cannot complete the
virtio initialization for another reason,
- in VirtioGpuUninit(): the virtio initialization has succeeded, but
VirtioGpuDriverBindingStart() fails for another reason, or
VirtioGpuDriverBindingStop() unbinds the device after use,
- in VirtioGpuExitBoot(): ExitBootServices() is called after
VirtioGpuDriverBindingStart() has successfully bound the device.
(Unmapping the ring does not change the UEFI memory map.)
Cc: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Cc: Brijesh Singh <brijesh.singh@amd.com>
Cc: Jordan Justen <jordan.l.justen@intel.com>
Cc: Tom Lendacky <thomas.lendacky@amd.com>
Contributed-under: TianoCore Contribution Agreement 1.1
Signed-off-by: Laszlo Ersek <lersek@redhat.com>
Tested-by: Brijesh Singh <brijesh.singh@amd.com>
Log all relevant IN parameters on entry. (There are only IN parameters.)
Beautify the format string.
Cc: Brijesh Singh <brijesh.singh@amd.com>
Cc: Jordan Justen <jordan.l.justen@intel.com>
Contributed-under: TianoCore Contribution Agreement 1.1
Signed-off-by: Laszlo Ersek <lersek@redhat.com>
Reviewed-by: Brijesh Singh <brijesh.singh@amd.com>
Tested-by: Brijesh Singh <brijesh.singh@amd.com>
Log all relevant IN and IN OUT parameters on entry.
(Note that the HostAddress parameter is IN OUT rather than OUT due to
historical reasons. The "IN EFI_ALLOCATE_TYPE Type" parameter is now to be
ignored, but historically it could be set to AllocateMaxAddress for
example, and for that HostAddress had to be IN OUT.)
When exiting with success, log all relevant OUT parameters (i.e.,
HostAddress). Also log the new (internal) StashBuffer address, on which
IoMmuMap() and IoMmuUnmap() rely, for BusMasterCommonBuffer operations
(in-place decryption and encryption, respectively).
Cc: Brijesh Singh <brijesh.singh@amd.com>
Cc: Jordan Justen <jordan.l.justen@intel.com>
Contributed-under: TianoCore Contribution Agreement 1.1
Signed-off-by: Laszlo Ersek <lersek@redhat.com>
Reviewed-by: Brijesh Singh <brijesh.singh@amd.com>
Tested-by: Brijesh Singh <brijesh.singh@amd.com>
The only important external information for this function, and for the
human looking at the log, is the Mapping input parameter. Log it on entry.
Stop logging the contents of the MAP_INFO structure pointed-to by Mapping.
Thanks to the previous patch, we can now associate IoMmuUnmap() messages
with IoMmuMap() messages -- and thereby with MAP_INFO contents -- purely
via Mapping.
Cc: Brijesh Singh <brijesh.singh@amd.com>
Cc: Jordan Justen <jordan.l.justen@intel.com>
Contributed-under: TianoCore Contribution Agreement 1.1
Signed-off-by: Laszlo Ersek <lersek@redhat.com>
Reviewed-by: Brijesh Singh <brijesh.singh@amd.com>
Tested-by: Brijesh Singh <brijesh.singh@amd.com>
Log all relevant IN and IN OUT parameters on entry.
When exiting with success, log all relevant OUT and IN OUT parameters.
Don't log OUT and IN OUT parameters that are never set or changed after
entering the function (i.e., *NumberOfBytes).
Cc: Brijesh Singh <brijesh.singh@amd.com>
Cc: Jordan Justen <jordan.l.justen@intel.com>
Contributed-under: TianoCore Contribution Agreement 1.1
Signed-off-by: Laszlo Ersek <lersek@redhat.com>
Reviewed-by: Brijesh Singh <brijesh.singh@amd.com>
Tested-by: Brijesh Singh <brijesh.singh@amd.com>
Debug messages that start as natural (English) language phrases (after the
debug prefix) should uniformly begin with lower-case or upper-case. In
SetMemoryEncDec() we have a mixture now. Stick with lower-case.
(Upper-case is better for full sentences that also end with punctuation.)
Cc: Brijesh Singh <brijesh.singh@amd.com>
Cc: Jordan Justen <jordan.l.justen@intel.com>
Contributed-under: TianoCore Contribution Agreement 1.1
Signed-off-by: Laszlo Ersek <lersek@redhat.com>
Reviewed-by: Brijesh Singh <brijesh.singh@amd.com>
Tested-by: Brijesh Singh <brijesh.singh@amd.com>
In SetMemoryEncDec(), we have four locations where we (a) log a message on
the DEBUG_WARN level that says "ERROR", (b) return the status code
RETURN_NO_MAPPING right after.
These messages clearly describe actual errors (bad PML4, PDPE, PDE, PTE).
Promote their debug levels to DEBUG_ERROR, and remove the word "ERROR"
from the messages.
Cc: Brijesh Singh <brijesh.singh@amd.com>
Cc: Jordan Justen <jordan.l.justen@intel.com>
Contributed-under: TianoCore Contribution Agreement 1.1
Signed-off-by: Laszlo Ersek <lersek@redhat.com>
Reviewed-by: Brijesh Singh <brijesh.singh@amd.com>
Tested-by: Brijesh Singh <brijesh.singh@amd.com>
In the SetMemoryEncDec() function, the way we currently report
PhysicalAddress is not uniform:
- mostly we say "for %lx",
- in one spot we say "at %lx" (even though the 2MB page being split does
not live *at* PhysicalAddress, instead it maps PhysicalAddress),
- in another spot we don't log PhysicalAddress at all (when splitting a
1GB page).
Unify this, using the format string "for Physical=0x%Lx".
Cc: Brijesh Singh <brijesh.singh@amd.com>
Cc: Jordan Justen <jordan.l.justen@intel.com>
Contributed-under: TianoCore Contribution Agreement 1.1
Signed-off-by: Laszlo Ersek <lersek@redhat.com>
Reviewed-by: Brijesh Singh <brijesh.singh@amd.com>
Tested-by: Brijesh Singh <brijesh.singh@amd.com>
None of the DEBUG macro invocations in SetMemoryEncDec() fit on a single
line. Break them to multiple lines, for (a) conforming to the coding style
spec, (b) easier modification in later patches.
Cc: Brijesh Singh <brijesh.singh@amd.com>
Cc: Jordan Justen <jordan.l.justen@intel.com>
Contributed-under: TianoCore Contribution Agreement 1.1
Signed-off-by: Laszlo Ersek <lersek@redhat.com>
Reviewed-by: Brijesh Singh <brijesh.singh@amd.com>
Tested-by: Brijesh Singh <brijesh.singh@amd.com>
The VirtioScsiDxe driver has been updated to use the IOMMU-like member
functions of VIRTIO_DEVICE_PROTOCOL to translate system physical addresses
to device addresses. We do not need to do anything special when the
VIRTIO_F_IOMMU_PLATFORM bit is present, hence we treat it in parallel with
VIRTIO_F_VERSION_1.
Cc: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Cc: Jordan Justen <jordan.l.justen@intel.com>
Cc: Tom Lendacky <thomas.lendacky@amd.com>
Cc: Laszlo Ersek <lersek@redhat.com>
Contributed-under: TianoCore Contribution Agreement 1.1
Signed-off-by: Brijesh Singh <brijesh.singh@amd.com>
Reviewed-by: Laszlo Ersek <lersek@redhat.com>
Regression-tested-by: Laszlo Ersek <lersek@redhat.com>
Tested-by: Laszlo Ersek <lersek@redhat.com>
When the device is behind an IOMMU, the driver is required to pass the
device addresses of the virtio request, the response, and any memory
referenced by them, to the bus master.
The patch uses the IOMMU-like member functions of VIRTIO_DEVICE_PROTOCOL to
map the request and response buffers' system physical addresses to device
addresses.
- If the buffer needs to be accessed by both the processor and a bus
master, then map it with BusMasterCommonBuffer.
- If the buffer needs to be accessed for a write operation by a bus
master, then map it with BusMasterWrite.
However, after a BusMasterWrite Unmap() failure, error reporting via
EFI_EXT_SCSI_PASS_THRU_SCSI_REQUEST_PACKET would be very complex,
therefore we map such buffers too with BusMasterCommonBuffer.
- If the buffer needs to be accessed for a read operation by a bus master,
then map it with BusMasterRead.
Cc: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Cc: Jordan Justen <jordan.l.justen@intel.com>
Cc: Tom Lendacky <thomas.lendacky@amd.com>
Cc: Laszlo Ersek <lersek@redhat.com>
Contributed-under: TianoCore Contribution Agreement 1.1
Signed-off-by: Brijesh Singh <brijesh.singh@amd.com>
[lersek@redhat.com: restore lost sentence/paragraph in commit message]
[lersek@redhat.com: reindent/reflow "InDataBuffer" comment block]
[lersek@redhat.com: cast arg, not result, of EFI_SIZE_TO_PAGES() to UINTN]
Reviewed-by: Laszlo Ersek <lersek@redhat.com>
Regression-tested-by: Laszlo Ersek <lersek@redhat.com>
Tested-by: Laszlo Ersek <lersek@redhat.com>
When the virtio request fails we return EFI_DEVICE_ERROR; per the spec,
the EFI_EXT_SCSI_PASS_THRU_PROTOCOL.PassThru() member function is required
to implement elaborate error reporting.
The patch refactors the entire block of code that creates the host adapter
error into a separate helper function (ReportHostAdapterError()).
Suggested-by: Laszlo Ersek <lersek@redhat.com>
Cc: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Cc: Jordan Justen <jordan.l.justen@intel.com>
Cc: Tom Lendacky <thomas.lendacky@amd.com>
Cc: Laszlo Ersek <lersek@redhat.com>
Contributed-under: TianoCore Contribution Agreement 1.1
Signed-off-by: Brijesh Singh <brijesh.singh@amd.com>
[lersek@redhat.com: fix style & typo in ReportHostAdapterError() comment]
Reviewed-by: Laszlo Ersek <lersek@redhat.com>
Regression-tested-by: Laszlo Ersek <lersek@redhat.com>
Tested-by: Laszlo Ersek <lersek@redhat.com>
When the device is behind an IOMMU, the driver needs to pass device
addresses when programming the bus master. The patch uses VirtioRingMap()
to map the VRING system physical address to a device address.
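The pattern, roughly (local names illustrative, error handling omitted):
map the ring right after VirtioRingInit(), and hand the resulting
translation offset to SetQueueAddress():

  Status = VirtioRingInit (Dev->VirtIo, QueueSize, &Dev->Ring);

  //
  // Map the ring for bus master common buffer operation; RingBaseShift is
  // the offset from system physical addresses to device addresses.
  //
  Status = VirtioRingMap (
             Dev->VirtIo,
             &Dev->Ring,
             &RingBaseShift,
             &RingMap
             );

  //
  // Tell the device where the ring lives, in *its* address space.
  //
  Status = Dev->VirtIo->SetQueueAddress (Dev->VirtIo, &Dev->Ring, RingBaseShift);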
Cc: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Cc: Jordan Justen <jordan.l.justen@intel.com>
Cc: Tom Lendacky <thomas.lendacky@amd.com>
Cc: Laszlo Ersek <lersek@redhat.com>
Contributed-under: TianoCore Contribution Agreement 1.1
Signed-off-by: Brijesh Singh <brijesh.singh@amd.com>
Reviewed-by: Laszlo Ersek <lersek@redhat.com>
Regression-tested-by: Laszlo Ersek <lersek@redhat.com>
Tested-by: Laszlo Ersek <lersek@redhat.com>
when "RequestIsWrite" is FALSE -- i.e., the CPU wants data from
the device, we map "Buffer" for VirtioOperationBusMasterWrite. In
this case, checking the return status of
Dev->VirtIo->UnmapSharedBuffer (Dev->VirtIo, BufferMapping);
is must. If the unmapping fails, then "Buffer" will not contain the
actual data from the device, and we must fail the request with
EFI_DEVICE_ERROR.
Cc: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Cc: Jordan Justen <jordan.l.justen@intel.com>
Cc: Tom Lendacky <thomas.lendacky@amd.com>
Cc: Laszlo Ersek <lersek@redhat.com>
Contributed-under: TianoCore Contribution Agreement 1.1
Signed-off-by: Brijesh Singh <brijesh.singh@amd.com>
[lersek@redhat.com: fix typos in subject]
Reviewed-by: Laszlo Ersek <lersek@redhat.com>
Tested-by: Laszlo Ersek <lersek@redhat.com>
There's a small window between
- AllocFwCfgDmaAccessBuffer() mapping the new FW_CFG_DMA_ACCESS object for
common buffer operation (i.e., decrypting it), and
- InternalQemuFwCfgDmaBytes() setting the fields of the object.
In this window, earlier garbage in the object is "leaked" to the
hypervisor. So zero the object before we decrypt it.
(This commit message references AMD SEV directly, because QemuFwCfgDxeLib
is not *generally* enabled for IOMMU operation just yet, unlike our goal
for the virtio infrastructure. Instead, QemuFwCfgDxeLib uses
MemEncryptSevLib explicitly to detect SEV, and then relies on IOMMU
protocol behavior that is specific to SEV. At this point, this is by
design.)
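The fix itself is a single call before the mapping (sketch; "Access"
stands for the FW_CFG_DMA_ACCESS object that is about to be shared with
the hypervisor):

  //
  // Clear the object while it is still encrypted, i.e. before the common
  // buffer mapping decrypts it in place and exposes its contents to the
  // hypervisor.
  //
  ZeroMem (Access, sizeof *Access);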
Cc: Brijesh Singh <brijesh.singh@amd.com>
Cc: Jordan Justen <jordan.l.justen@intel.com>
Cc: Tom Lendacky <thomas.lendacky@amd.com>
Contributed-under: TianoCore Contribution Agreement 1.1
Signed-off-by: Laszlo Ersek <lersek@redhat.com>
Reviewed-by: Jordan Justen <jordan.l.justen@intel.com>
Reviewed-by: Brijesh Singh <brijesh.singh@amd.com>
The VirtioBlkDxe driver has been updated to use the IOMMU-like member
functions of VIRTIO_DEVICE_PROTOCOL to translate system physical addresses
to device addresses. We do not need to do anything special when the
VIRTIO_F_IOMMU_PLATFORM bit is present, hence we treat it in parallel with
VIRTIO_F_VERSION_1.
Cc: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Cc: Jordan Justen <jordan.l.justen@intel.com>
Cc: Tom Lendacky <thomas.lendacky@amd.com>
Cc: Laszlo Ersek <lersek@redhat.com>
Contributed-under: TianoCore Contribution Agreement 1.1
Signed-off-by: Brijesh Singh <brijesh.singh@amd.com>
Reviewed-by: Laszlo Ersek <lersek@redhat.com>
Regression-tested-by: Laszlo Ersek <lersek@redhat.com>
When the device is behind an IOMMU, the driver is required to pass the
device addresses of the virtio request, the response, and any memory
referenced by them, to the bus master.
The patch uses the IOMMU-like member functions of VIRTIO_DEVICE_PROTOCOL to
map the request and response buffers' system physical addresses to device
addresses.
- If the buffer needs to be accessed by both the processor and a bus
master, then map it with BusMasterCommonBuffer.
- If the buffer needs to be accessed for a write operation by a bus
master, then map it with BusMasterWrite.
- If the buffer needs to be accessed for a read operation by a bus master,
then map it with BusMasterRead.
Cc: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Cc: Jordan Justen <jordan.l.justen@intel.com>
Cc: Tom Lendacky <thomas.lendacky@amd.com>
Cc: Laszlo Ersek <lersek@redhat.com>
Contributed-under: TianoCore Contribution Agreement 1.1
Signed-off-by: Brijesh Singh <brijesh.singh@amd.com>
Reviewed-by: Laszlo Ersek <lersek@redhat.com>
Regression-tested-by: Laszlo Ersek <lersek@redhat.com>
When the device is behind an IOMMU, the driver needs to pass device
addresses when programming the bus master. The patch uses VirtioRingMap()
to map the VRING system physical address to a device address.
Cc: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Cc: Jordan Justen <jordan.l.justen@intel.com>
Cc: Tom Lendacky <thomas.lendacky@amd.com>
Cc: Laszlo Ersek <lersek@redhat.com>
Contributed-under: TianoCore Contribution Agreement 1.1
Signed-off-by: Brijesh Singh <brijesh.singh@amd.com>
Reviewed-by: Laszlo Ersek <lersek@redhat.com>
Regression-tested-by: Laszlo Ersek <lersek@redhat.com>
This reverts commit ca56256d5e0b7e63325b049e90a6bd03f90e3598:
TianoCore BZ#671 <https://bugzilla.tianocore.org/show_bug.cgi?id=671> has
been fixed in commit 2f7f1e73c1 ("BaseTools: Add the missing -pie link
option in GCC tool chain", 2017-08-23), so we can return to the GCC5
toolchain with gcc-7.*.
Cc: Jordan Justen <jordan.l.justen@intel.com>
Contributed-under: TianoCore Contribution Agreement 1.1
Signed-off-by: Laszlo Ersek <lersek@redhat.com>
Reviewed-by: Jordan Justen <jordan.l.justen@intel.com>
The VirtioRngDxe driver has been updated to use the IOMMU-like member
functions of VIRTIO_DEVICE_PROTOCOL to translate system physical
addresses to device addresses. We do not need to do anything special when
the VIRTIO_F_IOMMU_PLATFORM bit is present; hence, treat it in parallel
with VIRTIO_F_VERSION_1.
Cc: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Cc: Jordan Justen <jordan.l.justen@intel.com>
Cc: Tom Lendacky <thomas.lendacky@amd.com>
Cc: Laszlo Ersek <lersek@redhat.com>
Contributed-under: TianoCore Contribution Agreement 1.1
Signed-off-by: Brijesh Singh <brijesh.singh@amd.com>
Reviewed-by: Laszlo Ersek <lersek@redhat.com>
Regression-tested-by: Laszlo Ersek <lersek@redhat.com>
This feature indicates that the device is behind an IOMMU that translates
bus addresses from the device into physical addresses in memory. If this
feature bit is set to 0, then the device emits physical addresses which
are not translated further, even though an IOMMU may be present.
See [1] for more information.
[1] https://lists.oasis-open.org/archives/virtio-dev/201610/msg00121.html
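Hedged sketch of what treating the bit in parallel with VIRTIO_F_VERSION_1
can mean during feature negotiation in a driver (device-specific feature
bits omitted; names as assumed):
  UINT64      Features;
  UINT8       NextDevStat;   // holds ACKNOWLEDGE|DRIVER at this point
  EFI_STATUS  Status;
  Status = VirtIo->GetDeviceFeatures (VirtIo, &Features);
  if (EFI_ERROR (Status)) {
    return Status;
  }
  //
  // Keep VIRTIO_F_IOMMU_PLATFORM in the accepted set exactly like
  // VIRTIO_F_VERSION_1; no extra action is needed, because all address
  // translation already goes through the VIRTIO_DEVICE_PROTOCOL mapping
  // members.
  //
  Features &= VIRTIO_F_VERSION_1 | VIRTIO_F_IOMMU_PLATFORM /* | dev bits */;
  Status    = Virtio10WriteFeatures (VirtIo, Features, &NextDevStat);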
Cc: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Cc: Jordan Justen <jordan.l.justen@intel.com>
Cc: Tom Lendacky <thomas.lendacky@amd.com>
Cc: Laszlo Ersek <lersek@redhat.com>
Contributed-under: TianoCore Contribution Agreement 1.1
Signed-off-by: Brijesh Singh <brijesh.singh@amd.com>
Reviewed-by: Laszlo Ersek <lersek@redhat.com>
Regression-tested-by: Laszlo Ersek <lersek@redhat.com>
The patch maps host addresses to device addresses for the buffers
involved (including the rings, the device-specific request and response
pointed to by vring descriptors, and any further memory referenced by
those requests and responses).
Cc: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Cc: Jordan Justen <jordan.l.justen@intel.com>
Cc: Tom Lendacky <thomas.lendacky@amd.com>
Cc: Laszlo Ersek <lersek@redhat.com>
Contributed-under: TianoCore Contribution Agreement 1.1
Signed-off-by: Brijesh Singh <brijesh.singh@amd.com>
[lersek@redhat.com: return EFI_DEVICE_ERROR if mapping fails in GetRNG]
Reviewed-by: Laszlo Ersek <lersek@redhat.com>
Regression-tested-by: Laszlo Ersek <lersek@redhat.com>
The patch changes the "BufferPhysAddr" parameter of VirtioAppendDesc()
from type UINTN to UINT64.
UINTN is appropriate as long as we pass system memory references. After
the introduction of bus master device addresses, that's no longer the case
in general. Should we implement "real" IOMMU support at some point, UINTN
could break in 32-bit builds of OVMF.
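The resulting declaration is expected to look roughly like the sketch
below (the other parameter types are kept as they are today; treat the
exact prototype as an assumption):
  VOID
  EFIAPI
  VirtioAppendDesc (
    IN OUT VRING         *Ring,
    IN     UINT64        BufferPhysAddr,   // was UINTN before this patch
    IN     UINT32        BufferSize,
    IN     UINT16        Flags,
    IN OUT DESC_INDICES  *Indices
    );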
Suggested-by: Laszlo Ersek <lersek@redhat.com>
Cc: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Cc: Jordan Justen <jordan.l.justen@intel.com>
Cc: Tom Lendacky <thomas.lendacky@amd.com>
Cc: Laszlo Ersek <lersek@redhat.com>
Contributed-under: TianoCore Contribution Agreement 1.1
Signed-off-by: Brijesh Singh <brijesh.singh@amd.com>
[lersek@redhat.com: clarify commit message]
[lersek@redhat.com: balance parens in VirtioAppendDesc() comment blocks]
Reviewed-by: Laszlo Ersek <lersek@redhat.com>
Regression-tested-by: Laszlo Ersek <lersek@redhat.com>
The VRING buffer is a communication area between guest and hypervisor.
Allocate it using VIRTIO_DEVICE_PROTOCOL.AllocateSharedPages() so that
it can be mapped later with VirtioRingMap() for bi-directional access.
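A minimal sketch of the allocation, assuming the AllocateSharedPages()
member introduced earlier in this series (RingSize stands for the byte
size computed for the queue):
  UINTN       RingPages;
  VOID        *RingBase;
  EFI_STATUS  Status;
  RingPages = EFI_SIZE_TO_PAGES (RingSize);
  Status    = VirtIo->AllocateSharedPages (VirtIo, RingPages, &RingBase);
  if (EFI_ERROR (Status)) {
    return Status;
  }
  //
  // Start from a clean slate; the pages will be mapped for bi-directional
  // (BusMasterCommonBuffer) access with VirtioRingMap() later.
  //
  ZeroMem (RingBase, RingSize);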
Cc: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Cc: Jordan Justen <jordan.l.justen@intel.com>
Cc: Tom Lendacky <thomas.lendacky@amd.com>
Cc: Laszlo Ersek <lersek@redhat.com>
Contributed-under: TianoCore Contribution Agreement 1.1
Signed-off-by: Brijesh Singh <brijesh.singh@amd.com>
[lersek@redhat.com: correct typo in VirtioRingInit() comment blocks]
Reviewed-by: Laszlo Ersek <lersek@redhat.com>
Regression-tested-by: Laszlo Ersek <lersek@redhat.com>
Add a function to map the ring buffer with BusMasterCommonBuffer so that
the ring can be accessed by both guest and hypervisor.
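The new function's shape, as a sketch (the exact prototype is an
assumption based on the description):
  EFI_STATUS
  EFIAPI
  VirtioRingMap (
    IN  VIRTIO_DEVICE_PROTOCOL  *VirtIo,
    IN  VRING                   *Ring,
    OUT UINT64                  *RingBaseShift,
    OUT VOID                    **Mapping
    );
A driver would call it right after VirtioRingInit(), keep Mapping for the
eventual UnmapSharedBuffer() call, and pass RingBaseShift on to
SetQueueAddress().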
Cc: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Cc: Jordan Justen <jordan.l.justen@intel.com>
Cc: Tom Lendacky <thomas.lendacky@amd.com>
Cc: Laszlo Ersek <lersek@redhat.com>
Contributed-under: TianoCore Contribution Agreement 1.1
Signed-off-by: Brijesh Singh <brijesh.singh@amd.com>
[lersek@redhat.com: fix typo in commit message]
Reviewed-by: Laszlo Ersek <lersek@redhat.com>
Regression-tested-by: Laszlo Ersek <lersek@redhat.com>
Virtio drivers use VIRTIO_DEVICE_PROTOCOL.MapSharedBuffer() to map the
ring buffer's host address to a device address. If an IOMMU is present,
then RingBaseShift contains the offset from the host address.
Suggested-by: Laszlo Ersek <lersek@redhat.com>
Cc: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Cc: Jordan Justen <jordan.l.justen@intel.com>
Cc: Tom Lendacky <thomas.lendacky@amd.com>
Cc: Laszlo Ersek <lersek@redhat.com>
Contributed-under: TianoCore Contribution Agreement 1.1
Signed-off-by: Brijesh Singh <brijesh.singh@amd.com>
Reviewed-by: Laszlo Ersek <lersek@redhat.com>
Regression-tested-by: Laszlo Ersek <lersek@redhat.com>
For the case when an IOMMU is used for translating system physical
addresses to DMA bus master addresses, the transport-independent
virtio device drivers will be required to map their VRING areas to
bus addresses with VIRTIO_DEVICE_PROTOCOL.MapSharedBuffer() calls.
- The MMIO and legacy virtio transports do not support an IOMMU for
translating addresses; hence RingBaseShift will always be set to zero.
- The modern virtio transport supports IOMMU translation; in the next
patch we will update Virtio10Dxe to use the RingBaseShift offset.
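Sketch of the updated member function type implied by the above (treat it
as an assumption):
  typedef
  EFI_STATUS
  (EFIAPI *VIRTIO_SET_QUEUE_ADDRESS) (
    IN VIRTIO_DEVICE_PROTOCOL  *This,
    IN VRING                   *Ring,
    IN UINT64                  RingBaseShift
    );
MMIO and legacy transport implementations would simply expect a zero
RingBaseShift and ignore it; the modern (1.0) transport will add it to
the ring addresses it programs.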
Suggested-by: Laszlo Ersek <lersek@redhat.com>
Cc: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Cc: Jordan Justen <jordan.l.justen@intel.com>
Cc: Tom Lendacky <thomas.lendacky@amd.com>
Cc: Laszlo Ersek <lersek@redhat.com>
Contributed-under: TianoCore Contribution Agreement 1.1
Signed-off-by: Brijesh Singh <brijesh.singh@amd.com>
[lersek@redhat.com: remove commit msg paragraph with VirtioLib reference]
[lersek@redhat.com: fix typo in VIRTIO_SET_QUEUE_ADDRESS comment block]
Reviewed-by: Laszlo Ersek <lersek@redhat.com>
Regression-tested-by: Laszlo Ersek <lersek@redhat.com>
The function can be used for mapping a system physical address to a
virtio device address with VIRTIO_DEVICE_PROTOCOL.MapSharedBuffer(). The
function helps with centralizing error handling, and it allows the caller
to pass in constant or other evaluated expressions for NumberOfBytes.
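Its prototype is expected to look roughly like this (a sketch, not the
verbatim header):
  EFI_STATUS
  EFIAPI
  VirtioMapAllBytesInSharedBuffer (
    IN  VIRTIO_DEVICE_PROTOCOL  *VirtIo,
    IN  VIRTIO_MAP_OPERATION    Operation,
    IN  VOID                    *HostAddress,
    IN  UINTN                   NumberOfBytes,
    OUT EFI_PHYSICAL_ADDRESS    *DeviceAddress,
    OUT VOID                    **Mapping
    );
Unlike a raw MapSharedBuffer() call, the helper is meant to fail (and to
unmap again) if the transport could not map the full NumberOfBytes, so
callers do not have to re-check the mapped size.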
Cc: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Cc: Jordan Justen <jordan.l.justen@intel.com>
Cc: Tom Lendacky <thomas.lendacky@amd.com>
Cc: Laszlo Ersek <lersek@redhat.com>
Contributed-under: TianoCore Contribution Agreement 1.1
Signed-off-by: Brijesh Singh <brijesh.singh@amd.com>
[lersek@redhat.com: s/This/VirtIo/ in the new function's comment blocks]
Reviewed-by: Laszlo Ersek <lersek@redhat.com>
Regression-tested-by: Laszlo Ersek <lersek@redhat.com>
The patch extends VIRTIO_DEVICE_PROTOCOL with the following new member
functions:
- AllocateSharedPages: allocate a memory region suitable for sharing
between guest and hypervisor (e.g., a ring buffer).
- FreeSharedPages: free memory allocated with AllocateSharedPages().
- MapSharedBuffer: map a host address to a device address suitable for
sharing with the device for bus master operations.
- UnmapSharedBuffer: unmap a device address obtained through
MapSharedBuffer().
We're free to extend the protocol structure without changing the protocol
GUID, or bumping any protocol version fields (of which we currently have
none), because VIRTIO_DEVICE_PROTOCOL is internal to edk2 by design --
see the disclaimers in "VirtioDevice.h".
The patch implements Laszlo's recommendation [1].
[1] http://mid.mail-archive.com/841bec5f-6f6e-8b1f-25ba-0fd37a915b72@redhat.com
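In rough terms (a sketch; the member-function typedef names follow the
list above, and the exact parameter lists are assumptions), the protocol
grows like this:
  typedef
  EFI_STATUS
  (EFIAPI *VIRTIO_MAP_SHARED_BUFFER) (
    IN     VIRTIO_DEVICE_PROTOCOL  *This,
    IN     VIRTIO_MAP_OPERATION    Operation,
    IN     VOID                    *HostAddress,
    IN OUT UINTN                   *NumberOfBytes,
    OUT    EFI_PHYSICAL_ADDRESS    *DeviceAddress,
    OUT    VOID                    **Mapping
    );
  struct _VIRTIO_DEVICE_PROTOCOL {
    //
    // ... existing members (GetDeviceFeatures, SetQueueAddress, ...) ...
    //
    VIRTIO_ALLOCATE_SHARED_PAGES  AllocateSharedPages;
    VIRTIO_FREE_SHARED_PAGES      FreeSharedPages;
    VIRTIO_MAP_SHARED_BUFFER      MapSharedBuffer;
    VIRTIO_UNMAP_SHARED_BUFFER    UnmapSharedBuffer;
  };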
Suggested-by: Laszlo Ersek <lersek@redhat.com>
Cc: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Cc: Jordan Justen <jordan.l.justen@intel.com>
Cc: Tom Lendacky <thomas.lendacky@amd.com>
Cc: Laszlo Ersek <lersek@redhat.com>
Contributed-under: TianoCore Contribution Agreement 1.1
Signed-off-by: Brijesh Singh <brijesh.singh@amd.com>
Reviewed-by: Laszlo Ersek <lersek@redhat.com>
Regression-tested-by: Laszlo Ersek <lersek@redhat.com>
Now that we have dropped QemuVideoDxe from all QEMU targeted builds
under ArmVirtPkg, we can revert the ARM specific changes to it.
This partially reverts commits 84a75f70e9 (SVN 16890) and
05a5379458.
Contributed-under: TianoCore Contribution Agreement 1.1
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Reviewed-by: Laszlo Ersek <lersek@redhat.com>
When UefiCpuPkg/MpInitLib is built for X64 with gcc-7, using the DEBUG
build target and the GCC5 toolchain settings, a C-language assignment is
miscompiled such that the initial AP startup hangs in CpuMpPei (X64) or
CpuDxe (Ia32X64). See <https://bugzilla.tianocore.org/show_bug.cgi?id=671>
for a detailed analysis of the symptoms, and for mailing list links.
This issue has been reported several times (one example is
<https://bugzilla.tianocore.org/show_bug.cgi?id=657>). Until we (or the
upstream gcc developers) figure out how to dissuade gcc-7 from the
miscompilation, pick the GCC49 toolchain in "build.sh" for gcc-7.*.
Cc: Alex Williamson <alex.williamson@redhat.com>
Cc: Jordan Justen <jordan.l.justen@intel.com>
Contributed-under: TianoCore Contribution Agreement 1.1
Signed-off-by: Laszlo Ersek <lersek@redhat.com>
Reviewed-by: Jordan Justen <jordan.l.justen@intel.com>
In commit 8057622527 ("OvmfPkg/AcpiPlatformDxe: save fw_cfg boot script
with QemuFwCfgS3Lib", 2017-02-23), we replaced the explicit S3 boot script
manipulation in TransferS3ContextToBootScript() with a call to
QemuFwCfgS3CallWhenBootScriptReady(). (Passing AppendFwCfgBootScript() as
callback.)
QemuFwCfgS3CallWhenBootScriptReady() checks for fw_cfg DMA up-front, and
bails with RETURN_NOT_FOUND if fw_cfg DMA is missing.
(This is justified as the goal of QemuFwCfgS3Lib is to "enable[] driver
modules [...] to produce fw_cfg DMA operations that are to be replayed at
S3 resume time".)
In turn, if QemuFwCfgS3CallWhenBootScriptReady() fails, then
OvmfPkg/AcpiPlatformDxe rolls back any earlier linker/loader script
processing, and falls back to the built-in ACPI tables.
(This is also justified because failure to save WRITE_POINTER commands for
replaying at S3 resume implies failure to process the linker/loader script
comprehensively.)
Calling QemuFwCfgS3CallWhenBootScriptReady() from
TransferS3ContextToBootScript() *unconditionally* is wrong however. For
the case when the linker/loader script contains no WRITE_POINTER commands,
the call perpetuated an earlier side effect, and introduced another one:
(1) On machine types that provide fw_cfg DMA (i.e., 2.5+),
QemuFwCfgS3CallWhenBootScriptReady() would succeed, and allocate
workspace for the boot script opcodes in reserved memory. However, no
opcodes would actually be produced in the AppendFwCfgBootScript()
callback, due to lack of any WRITE_POINTER commands.
This waste of reserved memory had been introduced in earlier commit
df73df138d ("OvmfPkg/AcpiPlatformDxe: replay
QEMU_LOADER_WRITE_POINTER commands at S3", 2017-02-09).
(2) On machine types that lack fw_cfg DMA (i.e., 2.4 and earlier),
TransferS3ContextToBootScript() would now fail the linker/loader
script for no reason.
(Note that QEMU itself prevents adding devices that depend on
WRITE_POINTER if the machine type lacks fw_cfg DMA:
$ qemu-system-x86_64 -M pc-q35-2.4 -device vmgenid
qemu-system-x86_64: -device vmgenid: vmgenid requires DMA write
support in fw_cfg, which this machine type does not provide)
Short-circuit an empty S3_CONTEXT in TransferS3ContextToBootScript() by
dropping S3_CONTEXT on the floor. This is compatible with the current
contract of the function as it constitutes a transfer of ownership.
Regression (2) was found and reported by Dhiru Kholia as an OSX guest boot
failure on the "pc-q35-2.4" machine type:
http://mid.mail-archive.com/CANO7a6x6EaWNZ8y=MvLU=w_LjRLXserO3NmsgHvaYE0aUCCWzg@mail.gmail.com
Dhiru bisected the issue to commit 8057622527.
Cc: Dhiru Kholia <dhiru.kholia@gmail.com>
Cc: Jordan Justen <jordan.l.justen@intel.com>
Fixes: df73df138d
Fixes: 8057622527
Reported-by: Dhiru Kholia <dhiru.kholia@gmail.com>
Contributed-under: TianoCore Contribution Agreement 1.1
Signed-off-by: Laszlo Ersek <lersek@redhat.com>
Tested-by: Dhiru Kholia <dhiru.kholia@gmail.com>
Reviewed-by: Jordan Justen <jordan.l.justen@intel.com>
In OVMF we currently get the upper (>=4GB) memory size with the
GetSystemMemorySizeAbove4gb() function.
The GetSystemMemorySizeAbove4gb() function is used in two places:
(1) It is the starting point of the calculations in GetFirstNonAddress().
GetFirstNonAddress() in turn
- determines the placement of the 64-bit PCI MMIO aperture,
- provides input for the GCD memory space map's sizing (see
AddressWidthInitialization(), and the CPU HOB in
MiscInitialization()),
- influences the permanent PEI RAM cap (the DXE core's page tables,
built in permanent PEI RAM, grow as the RAM to map grows).
(2) In QemuInitializeRam(), GetSystemMemorySizeAbove4gb() determines the
single memory descriptor HOB that we produce for the upper memory.
Respectively, there are two problems with GetSystemMemorySizeAbove4gb():
(1) It reads a 24-bit count of 64KB RAM chunks from the CMOS, and
therefore cannot return a larger value than one terabyte.
(2) It cannot express discontiguous high RAM.
Starting with version 1.7.0, QEMU has provided the fw_cfg file called
"etc/e820". Refer to the following QEMU commits:
- 0624c7f916b4 ("e820: pass high memory too.", 2013-10-10),
- 7d67110f2d9a ("pc: add etc/e820 fw_cfg file", 2013-10-18)
- 7db16f2480db ("pc: register e820 entries for ram", 2013-10-10)
Ever since these commits in v1.7.0 -- with the last QEMU release being
v2.9.0, and v2.10.0 under development --, the only two RAM entries added
to this E820 map correspond to the below-4GB RAM range, and the above-4GB
RAM range. And, the above-4GB range exactly matches the CMOS registers in
question; see the use of "pcms->above_4g_mem_size":
pc_q35_init() | pc_init1()
pc_memory_init()
e820_add_entry(0x100000000ULL, pcms->above_4g_mem_size, E820_RAM);
pc_cmos_init()
val = pcms->above_4g_mem_size / 65536;
rtc_set_memory(s, 0x5b, val);
rtc_set_memory(s, 0x5c, val >> 8);
rtc_set_memory(s, 0x5d, val >> 16);
Therefore, remedy the above OVMF limitations as follows:
(1) Start off GetFirstNonAddress() by scanning the E820 map for the
highest exclusive >=4GB RAM address. Fall back to the CMOS if the E820
map is unavailable. Base all further calculations (such as 64-bit PCI
MMIO aperture placement, GCD sizing etc) on this value.
At the moment, the only difference this change makes is that we can
have more than 1TB above 4GB -- given that the sole "high RAM" entry
in the E820 map matches the CMOS exactly, modulo the most significant
bits (see above).
However, Igor plans to add discontiguous (cold-plugged) high RAM to
the fw_cfg E820 RAM map later on, and then this scanning will adapt
automatically.
(2) In QemuInitializeRam(), describe the high RAM regions from the E820
map one by one with memory HOBs. Fall back to the CMOS only if the
E820 map is missing.
Again, right now this change only makes a difference if there is at
least 1TB high RAM. Later on it will adapt to discontiguous high RAM
(regardless of its size) automatically.
-*-
Implementation details: introduce the ScanOrAdd64BitE820Ram() function,
which reads the E820 entries from fw_cfg, and finds the highest exclusive
>=4GB RAM address, or produces memory resource descriptor HOBs for RAM
entries that start at or above 4GB. The RAM map is not read in a single
go, because its size can vary, and in PlatformPei we should stay away from
dynamic memory allocation, for the following reasons:
- "Pool" allocations are limited to ~64KB, are served from HOBs, and
cannot be released ever.
- "Page" allocations are seriously limited before PlatformPei installs the
permanent PEI RAM. Furthermore, page allocations can only be released in
DXE, with dedicated code (so the address would have to be passed on with
a HOB or PCD).
- Raw memory allocation HOBs would require the same freeing in DXE.
Therefore we process each E820 entry as soon as it is read from fw_cfg.
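A rough sketch of that per-entry loop, assuming the usual QemuFwCfgLib
calls and the E820 entry layout from the industry-standard header (error
handling trimmed):
  EFI_E820_ENTRY64      E820Entry;
  FIRMWARE_CONFIG_ITEM  FwCfgItem;
  UINTN                 FwCfgSize;
  UINTN                 Processed;
  if (RETURN_ERROR (QemuFwCfgFindFile ("etc/e820", &FwCfgItem, &FwCfgSize)) ||
      FwCfgSize % sizeof E820Entry != 0) {
    return EFI_NOT_FOUND;
  }
  QemuFwCfgSelectItem (FwCfgItem);
  for (Processed = 0; Processed < FwCfgSize; Processed += sizeof E820Entry) {
    QemuFwCfgReadBytes (sizeof E820Entry, &E820Entry);
    if (E820Entry.Type == EfiAcpiAddressRangeMemory &&
        E820Entry.BaseAddr >= BASE_4GB) {
      //
      // Depending on the caller, either track the highest exclusive end
      // address seen so far, or build a memory resource descriptor HOB
      // for this range right away; no buffering of the whole map needed.
      //
    }
  }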
-*-
Considering the impact of high RAM on the DXE core:
A few years ago, installing high RAM as *tested* would cause the DXE core
to inhabit such ranges rather than carving out its home from the permanent
PEI RAM. Fortunately, this was fixed in the following edk2 commit:
3a05b13106, "MdeModulePkg DxeCore: Take the range in resource HOB for
PHIT as higher priority", 2015-09-18
which I regression-tested at the time:
http://mid.mail-archive.com/55FC27B0.4070807@redhat.com
Later on, OVMF was changed to install its high RAM as tested (effectively
"arming" the earlier DXE core change for OVMF), in the following edk2
commit:
035ce3b37c, "OvmfPkg/PlatformPei: Add memory above 4GB as tested",
2016-04-21
which I also regression-tested at the time:
http://mid.mail-archive.com/571E8B90.1020102@redhat.com
Therefore adding more "tested memory" HOBs is safe.
Cc: Jordan Justen <jordan.l.justen@intel.com>
Ref: https://bugzilla.redhat.com/show_bug.cgi?id=1468526
Contributed-under: TianoCore Contribution Agreement 1.1
Signed-off-by: Laszlo Ersek <lersek@redhat.com>
Reviewed-by: Jordan Justen <jordan.l.justen@intel.com>
Commit 09719a01b1 (OvmfPkg/QemuFwCfgLib: Implement SEV internal function
for Dxe phase) uses the IOMMU protocol to allocate and free the
FW_CFG_DMA_ACCESS buffer when SEV is active. In the initial commits we
assumed that IOMMU.AllocateBuffer() would provide a PlainTextAddress
(i.e., with the C-bit cleared). This assumption was wrong: the
AllocateBuffer() protocol member is not expected to produce a buffer that
is immediately usable, and the client is required to call Map()
unconditionally with BusMasterCommonBuffer[64] to get a mapping that is
accessible by both host and device.
The patch refactors the code a bit and adds support for Map()-ing the
FW_CFG_DMA_ACCESS buffer with the BusMasterCommonBuffer operation after
allocation, and for Unmap()-ing it before the free.
The complete discussion about this, and the recommendation from Laszlo,
can be found at [1].
[1] https://lists.01.org/pipermail/edk2-devel/2017-July/012652.html
Suggested-by: Laszlo Ersek <lersek@redhat.com>
Cc: Jordan Justen <jordan.l.justen@intel.com>
Cc: Laszlo Ersek <lersek@redhat.com>
Contributed-under: TianoCore Contribution Agreement 1.0
Signed-off-by: Brijesh Singh <brijesh.singh@amd.com>
[lersek@redhat.com: convert pointers to UINTN before converting to UINT64]
[lersek@redhat.com: fix argument indentation in multi-line function call]
[lersek@redhat.com: explicitly compare pointers to NULL]
Reviewed-by: Laszlo Ersek <lersek@redhat.com>
Regression-tested-by: Laszlo Ersek <lersek@redhat.com>
Contributed-under: TianoCore Contribution Agreement 1.1
Signed-off-by: Laszlo Ersek <lersek@redhat.com>
In order for Unmap() to be callable from ExitBootServices() event handler
context (for cleaning up a BusMasterCommonBuffer[64] operation), we have
to completely liberate the affected path in Unmap() from dynamic memory
management.
The last remaining piece is the release of the MAP_INFO structure. Rather
than freeing it with FreePool(), recycle it to an internal list. Elements
of this "free list" can be reused for any kind of Map() operation, and can
be freed later, or recycled again.
Cc: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Cc: Brijesh Singh <brijesh.singh@amd.com>
Cc: Jordan Justen <jordan.l.justen@intel.com>
Cc: Tom Lendacky <thomas.lendacky@amd.com>
Contributed-under: TianoCore Contribution Agreement 1.0
Signed-off-by: Laszlo Ersek <lersek@redhat.com>
Tested-by: Brijesh Singh <brijesh.singh@amd.com>
Reviewed-by: Brijesh Singh <brijesh.singh@amd.com>
Upon a MemEncryptSevClearPageEncMask() failure in Map(), it wouldn't be
difficult to release the bounce buffer that was implicitly allocated for
BusMasterRead[64] and BusMasterWrite[64] operations. However, undoing any
partial memory encryption mask changes -- partial page splitting and PTE
modifications -- is practically impossible. (For example, restoring the
encryption mask on the entire range has no reason to fare any better than
the MemEncryptSevClearPageEncMask() call itself.)
For this reason, keep ASSERT_EFI_ERROR(), but hang in RELEASE builds too,
if MemEncryptSevClearPageEncMask() or MemEncryptSevSetPageEncMask() fails.
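A hedged sketch of the intended pattern in Map() (the parameter list of
the MemEncryptSevLib call and the MAP_INFO field names are assumptions):
  Status = MemEncryptSevClearPageEncMask (
             0,                           // use the current CR3
             MapInfo->PlainTextAddress,
             MapInfo->NumberOfPages,
             TRUE                         // flush caches
             );
  ASSERT_EFI_ERROR (Status);
  if (EFI_ERROR (Status)) {
    CpuDeadLoop ();                       // hang in RELEASE builds too
  }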
Cc: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Cc: Brijesh Singh <brijesh.singh@amd.com>
Cc: Jordan Justen <jordan.l.justen@intel.com>
Cc: Tom Lendacky <thomas.lendacky@amd.com>
Contributed-under: TianoCore Contribution Agreement 1.0
Signed-off-by: Laszlo Ersek <lersek@redhat.com>
Tested-by: Brijesh Singh <brijesh.singh@amd.com>
Reviewed-by: Brijesh Singh <brijesh.singh@amd.com>
At the moment, we have the following distribution of actions between the
IOMMU protocol member functions:
- AllocateBuffer() allocates pages and clears the memory encryption mask.
- FreeBuffer() re-sets the memory encryption mask, and deallocates pages.
- Map() does nothing at all when BusMasterCommonBuffer[64] is requested
(and AllocateBuffer() was called previously). Otherwise, Map() allocates
pages, and clears the memory encryption mask.
- Unmap() does nothing when cleaning up a BusMasterCommonBuffer[64]
operation. Otherwise, Unmap() clears the encryption mask, and frees the
pages.
This is wrong: the AllocateBuffer() protocol member is not expected to
produce a buffer that is immediately usable, and client code is required
to call Map() unconditionally, even if BusMasterCommonBuffer[64] is the
desired operation. Implement the right distribution of actions as follows:
- AllocateBuffer() allocates pages and does not touch the encryption mask.
- FreeBuffer() deallocates pages and does not touch the encryption mask.
- Map() does not allocate pages when BusMasterCommonBuffer[64] is
requested, and it allocates pages (bounce buffer) otherwise. Regardless
of the BusMaster operation, Map() (and Map() only) clears the memory
encryption mask.
- Unmap() restores the encryption mask unconditionally. If the operation
was BusMasterCommonBuffer[64], then Unmap() does not release the pages.
Otherwise, the pages (bounce buffer) are released.
This approach also ensures that Unmap() can be called from
ExitBootServices() event handlers, for cleaning up
BusMasterCommonBuffer[64] operations. (More specifically, for restoring
the SEV encryption mask on any in-flight buffers, after resetting any
referring devices.) ExitBootServices() event handlers must not change the
UEFI memory map, thus any memory allocation or freeing in Unmap() would
disqualify Unmap() from being called in such a context.
Map()-ing and Unmap()-ing memory for a BusMasterCommonBuffer[64] operation
effectively means in-place decryption and encryption in a SEV context. As
an additional hurdle, section "7.10.8 Encrypt-in-Place" of AMD publication
Nr.24593 implies that we need a separate temporary buffer for decryption
and encryption that will eventually land in-place. Allocating said
temporary buffer in the straightforward way would violate the above
allocation/freeing restrictions on Map()/Unmap(), therefore pre-allocate
this "stash buffer" too in AllocateBuffer(), and free it in FreeBuffer().
To completely rid Unmap() of dynamic memory impact, for
BusMasterCommonBuffer[64] operations, we're going to rework the lifecycle of
the MAP_INFO structures in a later patch.
(The MemEncryptSevSetPageEncMask() call in Unmap() could theoretically
allocate memory internally for page splitting, however this won't happen
in practice: in Unmap() we only restore the memory encryption mask, and
don't genuinely set it. Any page splitting will have occurred in Map()'s
MemEncryptSevClearPageEncMask() call first.)
Cc: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Cc: Brijesh Singh <brijesh.singh@amd.com>
Cc: Jordan Justen <jordan.l.justen@intel.com>
Cc: Tom Lendacky <thomas.lendacky@amd.com>
Contributed-under: TianoCore Contribution Agreement 1.0
Signed-off-by: Laszlo Ersek <lersek@redhat.com>
Tested-by: Brijesh Singh <brijesh.singh@amd.com>
Reviewed-by: Brijesh Singh <brijesh.singh@amd.com>
There are three issues with the current calculations:
- The initial logic that sets up "DmaMemoryTop" and "AllocateType" checks
for the BusMasterCommonBuffer64 operation in two places. The inner check
for BusMasterCommonBuffer64 will never evaluate to TRUE however, because
the outer check excludes BusMasterCommonBuffer64.
- In order to lower "DmaMemoryTop" to (SIZE_4GB - 1), the outer check
requires that the encrypted (original) buffer cross the 4GB mark. This
is wrong: for BusMasterRead[64] and BusMasterWrite[64] operations, we
unconditionally need a bounce buffer (a decrypted memory area), and for
the 32-bit variants, "DmaMemoryTop" should be lowered regardless of the
location of the original (encrypted) buffer.
- The current logic would be hard to extend for the in-place decryption
that we'll implement in the next patch.
Therefore rework the "MapInfo->PlainTextAddress" setup. No functional
changes beyond said bugfixes.
Cc: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Cc: Brijesh Singh <brijesh.singh@amd.com>
Cc: Jordan Justen <jordan.l.justen@intel.com>
Cc: Tom Lendacky <thomas.lendacky@amd.com>
Contributed-under: TianoCore Contribution Agreement 1.0
Signed-off-by: Laszlo Ersek <lersek@redhat.com>
Tested-by: Brijesh Singh <brijesh.singh@amd.com>
Reviewed-by: Brijesh Singh <brijesh.singh@amd.com>
Whenever we release the plaintext bounce buffer pages that were allocated
implicitly in Map() for BusMasterRead[64] and BusMasterWrite[64], we
restore the encryption mask on them. However, we should also rewrite the
area (fill it with zeros) so that the hypervisor is not left with a
plaintext view of the earlier data.
Similarly, whenever we release the plaintext common buffer pages that were
allocated explicitly in AllocateBuffer() for BusMasterCommonBuffer[64], we
restore the encryption mask on them. However, we should also rewrite the
area (fill it with zeros) so that the hypervisor is not left with a
plaintext view of the earlier data.
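One possible ordering for the common cleanup path, as a sketch (field
names assumed; the point is that the scrub happens before the pages are
released):
  //
  // Wipe the plaintext contents while the area is still shared, then
  // restore the encryption mask, then release the pages.
  //
  ZeroMem (
    (VOID *)(UINTN)MapInfo->PlainTextAddress,
    EFI_PAGES_TO_SIZE (MapInfo->NumberOfPages)
    );
  Status = MemEncryptSevSetPageEncMask (
             0,
             MapInfo->PlainTextAddress,
             MapInfo->NumberOfPages,
             TRUE
             );
  ASSERT_EFI_ERROR (Status);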
Cc: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Cc: Brijesh Singh <brijesh.singh@amd.com>
Cc: Jordan Justen <jordan.l.justen@intel.com>
Cc: Tom Lendacky <thomas.lendacky@amd.com>
Contributed-under: TianoCore Contribution Agreement 1.0
Signed-off-by: Laszlo Ersek <lersek@redhat.com>
Tested-by: Brijesh Singh <brijesh.singh@amd.com>
Reviewed-by: Brijesh Singh <brijesh.singh@amd.com>
The following library classes are not used by this module, so remove them
from the INF file's [LibraryClasses] section:
- DxeServicesTableLib
- UefiLib
The following library classes are used by this module, so add them to the
INF file's [LibraryClasses] section:
- BaseMemoryLib (e.g. via CopyMem())
- MemoryAllocationLib (e.g. via AllocatePool())
Sort the list of library classes (in both "IoMmuDxe.inf" and
"AmdSevIoMmu.h").
Remove all non-local #include directives from "IoMmuDxe.c"; both C files
of this module include "AmdSevIoMmu.h", and "AmdSevIoMmu.h" includes all
non-local headers already.
Cc: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Cc: Brijesh Singh <brijesh.singh@amd.com>
Cc: Jordan Justen <jordan.l.justen@intel.com>
Cc: Tom Lendacky <thomas.lendacky@amd.com>
Contributed-under: TianoCore Contribution Agreement 1.0
Signed-off-by: Laszlo Ersek <lersek@redhat.com>
Tested-by: Brijesh Singh <brijesh.singh@amd.com>
Reviewed-by: Brijesh Singh <brijesh.singh@amd.com>
If we cannot install the IOMMU protocol for whatever reason, exit the
driver with an error. The same is already done for the IOMMU Absent
protocol.
Cc: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Cc: Brijesh Singh <brijesh.singh@amd.com>
Cc: Jordan Justen <jordan.l.justen@intel.com>
Cc: Tom Lendacky <thomas.lendacky@amd.com>
Contributed-under: TianoCore Contribution Agreement 1.0
Signed-off-by: Laszlo Ersek <lersek@redhat.com>
Tested-by: Brijesh Singh <brijesh.singh@amd.com>
Reviewed-by: Brijesh Singh <brijesh.singh@amd.com>
The portable way to print UINTN values is to use the %Lx format specifier,
and to convert the values to UINT64. The second step is currently
missing; add it.
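For example (illustrative only, with variable names following the IOMMU
Map() parameters), a debug message in this driver would be written along
these lines:
  DEBUG ((
    DEBUG_VERBOSE,
    "%a: Operation=%d Host=0x%Lx Bytes=0x%Lx\n",
    __FUNCTION__,
    Operation,
    (UINT64)(UINTN)HostAddress,        // UINTN printed portably via %Lx
    (UINT64)*NumberOfBytes
    ));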
Cc: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Cc: Brijesh Singh <brijesh.singh@amd.com>
Cc: Jordan Justen <jordan.l.justen@intel.com>
Cc: Tom Lendacky <thomas.lendacky@amd.com>
Contributed-under: TianoCore Contribution Agreement 1.0
Signed-off-by: Laszlo Ersek <lersek@redhat.com>
Tested-by: Brijesh Singh <brijesh.singh@amd.com>
Reviewed-by: Brijesh Singh <brijesh.singh@amd.com>
As a continuation of the last patch, clarify that the area pointed-to by
"HostAddress" is encrypted and hidden from the hypervisor.
Cc: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Cc: Brijesh Singh <brijesh.singh@amd.com>
Cc: Jordan Justen <jordan.l.justen@intel.com>
Cc: Tom Lendacky <thomas.lendacky@amd.com>
Contributed-under: TianoCore Contribution Agreement 1.0
Signed-off-by: Laszlo Ersek <lersek@redhat.com>
Tested-by: Brijesh Singh <brijesh.singh@amd.com>
Reviewed-by: Brijesh Singh <brijesh.singh@amd.com>
In this particular IOMMU driver, "DeviceAddress" is just as accessible to
the CPU as "HostAddress", the difference is that the area pointed-to by
the former is plain-text and accessible to the hypervisor. Rename
"DeviceAddress" to "PlainTextAddress" in MAP_INFO.
Cc: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Cc: Brijesh Singh <brijesh.singh@amd.com>
Cc: Jordan Justen <jordan.l.justen@intel.com>
Cc: Tom Lendacky <thomas.lendacky@amd.com>
Suggested-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Contributed-under: TianoCore Contribution Agreement 1.0
Signed-off-by: Laszlo Ersek <lersek@redhat.com>
Tested-by: Brijesh Singh <brijesh.singh@amd.com>
Reviewed-by: Brijesh Singh <brijesh.singh@amd.com>
Undo removal of OvmfPkg/License.txt in commit
2a98de0344
Cc: Leif Lindholm <leif.lindholm@linaro.org>
Cc: Andrew Fish <afish@apple.com>
Cc: Jordan Justen <jordan.l.justen@intel.com>
Contributed-under: TianoCore Contribution Agreement 1.1
Signed-off-by: Michael D Kinney <michael.d.kinney@intel.com>
Reviewed-by: Jordan Justen <jordan.l.justen@intel.com>
Reviewed-by: Leif Lindholm <leif.lindholm@linaro.org>
https://bugzilla.tianocore.org/show_bug.cgi?id=642
Add a top-level License.txt file with the BSD 2-Clause
License that is used by the majority of the EDK II open
source project content. Merge copyright statements
from the BSD 2-Clause License files in each package
directory and remove the duplicate License.txt
files from the package directories.
Cc: Leif Lindholm <leif.lindholm@linaro.org>
Cc: Andrew Fish <afish@apple.com>
Cc: Jordan Justen <jordan.l.justen@intel.com>
Contributed-under: TianoCore Contribution Agreement 1.1
Signed-off-by: Michael D Kinney <michael.d.kinney@intel.com>
Reviewed-by: Jordan Justen <jordan.l.justen@intel.com>
Reviewed-by: Leif Lindholm <leif.lindholm@linaro.org>
https://bugzilla.tianocore.org/show_bug.cgi?id=629
Move Contributions.txt that contains the TianoCore
Contribution Agreement 1.0 to the root of the edk2
repository and remove the duplicate Contributions.txt
files from all packages.
Cc: Leif Lindholm <leif.lindholm@linaro.org>
Cc: Andrew Fish <afish@apple.com>
Cc: Jordan Justen <jordan.l.justen@intel.com>
Contributed-under: TianoCore Contribution Agreement 1.1
Signed-off-by: Michael D Kinney <michael.d.kinney@intel.com>
Reviewed-by: Jordan Justen <jordan.l.justen@intel.com>
Reviewed-by: Leif Lindholm <leif.lindholm@linaro.org>
The NumPages variable was introduced in commit 66c548be50, in which we
allocate an intermediate buffer when SEV is enabled. The 'BounceBuffer'
variable points to the intermediate buffer and the NumPages variable
stores its number of pages. Later in the code, the 'BounceBuffer'
variable is checked to see whether we need to free the intermediate
buffer. The code looks correct, so suppress the warning.
Cc: Jordan Justen <jordan.l.justen@intel.com>
Cc: Laszlo Ersek <lersek@redhat.com>
Reported-by: Gerd Hoffmann <kraxel@redhat.com>
Reported-by: Laszlo Ersek <lersek@redhat.com>
Contributed-under: TianoCore Contribution Agreement 1.0
Signed-off-by: Brijesh Singh <brijesh.singh@amd.com>
Reviewed-by: Laszlo Ersek <lersek@redhat.com>
[lersek@redhat.com: s/warnigns/warnings/ in the code comment]
[lersek@redhat.com: add Gerd's Reported-by]
Signed-off-by: Laszlo Ersek <lersek@redhat.com>
This patch enables the PciHostBridgeDxe driver to use the platform IOMMU
detection library, ensuring that PciHostBridgeDxe runs only after the
platform's IoMmuDxe driver has checked whether the platform needs to
install an IOMMU protocol provider.
Cc: Jordan Justen <jordan.l.justen@intel.com>
Cc: Laszlo Ersek <lersek@redhat.com>
Contributed-under: TianoCore Contribution Agreement 1.0
Suggested-by: Laszlo Ersek <lersek@redhat.com>
Signed-off-by: Brijesh Singh <brijesh.singh@amd.com>
Reviewed-by: Laszlo Ersek <lersek@redhat.com>
Reviewed-by: Jordan Justen <jordan.l.justen@intel.com>
When SEV is enabled, use a bounce buffer to perform the DMA operation.
Cc: Jordan Justen <jordan.l.justen@intel.com>
Cc: Laszlo Ersek <lersek@redhat.com>
Contributed-under: TianoCore Contribution Agreement 1.0
Signed-off-by: Brijesh Singh <brijesh.singh@amd.com>
Reviewed-by: Laszlo Ersek <lersek@redhat.com>
Reviewed-by: Jordan Justen <jordan.l.justen@intel.com>
Update InternalQemuFwCfgDmaBytes() to work with a DMA Access pointer.
The change provides the flexibility to allocate "Access" dynamically
when SEV is enabled.
Cc: Jordan Justen <jordan.l.justen@intel.com>
Cc: Laszlo Ersek <lersek@redhat.com>
Contributed-under: TianoCore Contribution Agreement 1.0
Signed-off-by: Brijesh Singh <brijesh.singh@amd.com>
Reviewed-by: Laszlo Ersek <lersek@redhat.com>
Reviewed-by: Jordan Justen <jordan.l.justen@intel.com>
When SEV is enabled, DMA must be performed on unencrypted pages. So,
when asked to perform an fw_cfg DMA read or write, we allocate an
intermediate unencrypted buffer (bounce buffer) and use that buffer for
the DMA read or write.
Cc: Jordan Justen <jordan.l.justen@intel.com>
Cc: Laszlo Ersek <lersek@redhat.com>
Contributed-under: TianoCore Contribution Agreement 1.0
Signed-off-by: Brijesh Singh <brijesh.singh@amd.com>
Reviewed-by: Jordan Justen <jordan.l.justen@intel.com>
Reviewed-by: Laszlo Ersek <lersek@redhat.com>
Add SEV-specific internal functions which will be used while integrating
SEV support into QemuFwCfgLib.
Cc: Jordan Justen <jordan.l.justen@intel.com>
Cc: Laszlo Ersek <lersek@redhat.com>
Contributed-under: TianoCore Contribution Agreement 1.0
Signed-off-by: Brijesh Singh <brijesh.singh@amd.com>
Reviewed-by: Laszlo Ersek <lersek@redhat.com>
Reviewed-by: Jordan Justen <jordan.l.justen@intel.com>
The current QemuFwCfgLib.inf is used in both the PEI and DXE phases. Add
PEI and DXE INF files to provide separate QemuFwCfgLib instances for the
PEI and DXE phases.
Cc: Jordan Justen <jordan.l.justen@intel.com>
Cc: Laszlo Ersek <lersek@redhat.com>
Contributed-under: TianoCore Contribution Agreement 1.0
Signed-off-by: Brijesh Singh <brijesh.singh@amd.com>
Reviewed-by: Laszlo Ersek <lersek@redhat.com>
Reviewed-by: Jordan Justen <jordan.l.justen@intel.com>
The IOMMU protocol driver provides capabilities to set a DMA access
attribute, and methods to allocate, free, map and unmap DMA memory for
the PCI bus devices.
For security reasons, all DMA operations inside an SEV guest must be
performed on shared (i.e., unencrypted) pages. The IOMMU protocol driver
for the SEV guest therefore uses a bounce buffer to map guest DMA
buffers to shared pages, in order to support DMA operations inside the
SEV guest.
The IoMmuDxe driver looks for the SEV capabilities; if they are present,
it installs the real IOMMU protocol, otherwise it installs a placeholder
protocol.
Currently, PciHostBridgeDxe and QemuFwCfgLib need to know about the
existence of the IOMMU protocol. Modules needing to know about the
existence of IOMMU support should add
gEdkiiIoMmuProtocolGuid OR gIoMmuAbsentProtocolGuid
to their depex to ensure that platform IOMMU detection has been
performed.
Cc: Jordan Justen <jordan.l.justen@intel.com>
Cc: Laszlo Ersek <lersek@redhat.com>
Cc: Leo Duran <leo.duran@amd.com>
Cc: Jiewen Yao <jiewen.yao@intel.com>
Contributed-under: TianoCore Contribution Agreement 1.0
Suggested-by: Jiewen Yao <jiewen.yao@intel.com>
Signed-off-by: Brijesh Singh <brijesh.singh@amd.com>
Reviewed-by: Jiewen Yao <jiewen.yao@intel.com>
Acked-by: Laszlo Ersek <lersek@redhat.com>
Acked-by: Jordan Justen <jordan.l.justen@intel.com>
Add the shorter-term library instance outlined in the previous patch to
OvmfPkg, so that we can imbue PciHostBridgeDxe with a protocol dependency
on gEdkiiIoMmuProtocolGuid OR gIoMmuAbsentProtocolGuid.
Cc: Jordan Justen <jordan.l.justen@intel.com>
Cc: Laszlo Ersek <lersek@redhat.com>
Contributed-under: TianoCore Contribution Agreement 1.0
Suggested-by: Laszlo Ersek <lersek@redhat.com>
Signed-off-by: Brijesh Singh <brijesh.singh@amd.com>
Reviewed-by: Laszlo Ersek <lersek@redhat.com>
Reviewed-by: Jordan Justen <jordan.l.justen@intel.com>
Platforms that optionally provide an IOMMU protocol should do so by
including a DXE driver (usually called IoMmuDxe) that produces either
the IOMMU protocol -- if the underlying capabilities are available --,
or gIoMmuAbsentProtocolGuid, to signal that the IOMMU capability
detection completed with negative result (i.e., no IOMMU will be
available in the system).
In turn, DXE drivers (and library instances) that are supposed to use
the IOMMU protocol if it is available should add the following to
their DEPEX:
gEdkiiIoMmuProtocolGuid OR gIoMmuAbsentProtocolGuid
This ensures these client modules will only be dispatched after IOMMU
detection completes (with positive or negative result).
Cc: Jordan Justen <jordan.l.justen@intel.com>
Cc: Laszlo Ersek <lersek@redhat.com>
Cc: Leo Duran <leo.duran@amd.com>
Contributed-under: TianoCore Contribution Agreement 1.0
Suggested-by: Jordan Justen <jordan.l.justen@intel.com>
Suggested-by: Laszlo Ersek <lersek@redhat.com>
Signed-off-by: Brijesh Singh <brijesh.singh@amd.com>
Reviewed-by: Laszlo Ersek <lersek@redhat.com>
Reviewed-by: Jordan Justen <jordan.l.justen@intel.com>
When SEV is enabled, the MMIO memory ranges must be mapped as unencrypted
(i.e., with the C-bit cleared).
We need to clear the C-bit for MMIO GCD entries in order to cover the
ranges that were added during the PEI phase (through memory resource
descriptor HOBs). Additionally, the NonExistent ranges are processed
in order to cover, in advance, MMIO ranges added later in the DXE phase
by various device drivers, via the appropriate DXE memory space services.
The approach is not transparent for later addition of system memory ranges
to the GCD memory space map. (Such ranges should be encrypted.) OVMF does
not do such a thing at the moment, so this approach should be OK.
The driver is added to the DXE APRIORI file so that we clear the C-bit
on MMIO regions before any driver accesses them.
Cc: Jordan Justen <jordan.l.justen@intel.com>
Cc: Laszlo Ersek <lersek@redhat.com>
Cc: Leo Duran <leo.duran@amd.com>
Cc: Jiewen Yao <jiewen.yao@intel.com>
Contributed-under: TianoCore Contribution Agreement 1.0
Suggested-by: Jiewen Yao <jiewen.yao@intel.com>
Signed-off-by: Brijesh Singh <brijesh.singh@amd.com>
Reviewed-by: Jiewen Yao <jiewen.yao@intel.com>
Reviewed-by: Laszlo Ersek <lersek@redhat.com>
Acked-by: Jordan Justen <jordan.l.justen@intel.com>
Secure Encrypted Virtualization (SEV) guest VMs have the concept of
private and shared memory. Private memory is encrypted with the
guest-specific key, while shared memory may be encrypted with the
hypervisor key. Certain types of memory (namely instruction pages and
guest page tables) are always treated as private memory by the hardware.
For data memory, SEV guest VMs can choose which pages they would like
to be private. The choice is made via the C-bit in the standard CPU page
tables. When building the initial page tables, we mark all memory as
private.
The patch sets the memory encryption PCD. The PCD is consumed by the
following edk2 modules, which manipulate page tables:
- PEI phase modules: CapsulePei, DxeIplPeim, S3Resume2Pei.
CapsulePei is not used by OVMF. DxeIplPeim consumes the PCD at the
end of the PEI phase, when it builds the initial page tables for the
DXE core / DXE phase. S3Resume2Pei does not consume the PCD in its
entry point function, only when DxeIplPeim branches to the S3 resume
path at the end of the PEI phase, and calls S3Resume2Pei's
EFI_PEI_S3_RESUME2_PPI.S3RestoreConfig2() member function.
Therefore it is safe to set the PCD for these modules in PlatformPei.
- DXE phase modules: BootScriptExecutorDxe, CpuDxe, PiSmmCpuDxeSmm.
They are all dispatched after the PEI phase, so setting the PCD for
them in PlatformPei is safe. (BootScriptExecutorDxe is launched "for
real" in the PEI phase during S3 resume, but it caches the PCD into a
static variable when its entry point is originally invoked in DXE.)
Cc: Jordan Justen <jordan.l.justen@intel.com>
Cc: Laszlo Ersek <lersek@redhat.com>
Contributed-under: TianoCore Contribution Agreement 1.0
Signed-off-by: Brijesh Singh <brijesh.singh@amd.com>
Reviewed-by: Laszlo Ersek <lersek@redhat.com>
Reviewed-by: Jordan Justen <jordan.l.justen@intel.com>
Add Secure Encrypted Virtualization (SEV) helper library.
The library provides the routines to:
- set or clear memory encryption bit for a given memory region.
- query whether SEV is enabled.
Cc: Jordan Justen <jordan.l.justen@intel.com>
Cc: Laszlo Ersek <lersek@redhat.com>
Contributed-under: TianoCore Contribution Agreement 1.0
Signed-off-by: Brijesh Singh <brijesh.singh@amd.com>
Reviewed-by: Laszlo Ersek <lersek@redhat.com>
Acked-by: Jordan Justen <jordan.l.justen@intel.com>
When SEV is enabled, we must unroll the rep String I/O instructions. The
patch updates the DSC files to use the SEV version of the IoLib INF. The
main difference between BaseIoLibIntrinsic.inf and
BaseIoLibIntrinsicSev.inf is that the SEV version checks whether it is
running in an SEV-enabled guest; if so, it unrolls the String I/O
(REP INS/OUTS), otherwise it falls back to rep ins/outs.
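In C terms, the unrolled path amounts to something like the sketch below
(the real fallback lives in the library's assembly files; the SEV-check
helper name is made up here, and Port, Buffer, Count are the usual FIFO
write parameters):
  if (SevNoRepIo ()) {                  // hypothetical SEV check
    //
    // Unrolled path: one I/O instruction per element, no REP prefix.
    //
    for (Index = 0; Index < Count; Index++) {
      IoWrite8 (Port, Buffer[Index]);
    }
  } else {
    //
    // Fast path: genuine REP OUTSB (assembly in the real library).
    //
  }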
Cc: Jordan Justen <jordan.l.justen@intel.com>
Cc: Laszlo Ersek <lersek@redhat.com>
Contributed-under: TianoCore Contribution Agreement 1.0
Signed-off-by: Brijesh Singh <brijesh.singh@amd.com>
Reviewed-by: Laszlo Ersek <lersek@redhat.com>
Reviewed-by: Jordan Justen <jordan.l.justen@intel.com>
SEV guest VMs have the concept of private and shared memory. Private
memory is encrypted with the guest-specific key, while shared memory
may be encrypted with the hypervisor key. Certain types of memory (namely
instruction pages and guest page tables) are always treated as private
memory by the hardware. The C-bit in a PTE indicates whether the page is
private or shared. The C-bit position for the PTE can be obtained from
CPUID Fn8000_001F[EBX].
When SEV is active, the BIOS is encrypted by the QEMU launch sequence, so
we must set the C-bit when building the page tables.
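Expressed in C (the real reset-vector / page-table code is partly
assembly; this is a sketch only):
  UINT32  Ebx;
  UINT64  EncryptionMask;
  //
  // CPUID Fn8000_001F[EBX], bits 5:0, give the C-bit position in the page
  // table entries; turn it into a PTE mask that the page-table-building
  // code can OR into every entry.
  //
  AsmCpuid (0x8000001F, NULL, &Ebx, NULL, NULL);
  EncryptionMask = LShiftU64 (1, Ebx & 0x3F);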
Cc: Jordan Justen <jordan.l.justen@intel.com>
Cc: Laszlo Ersek <lersek@redhat.com>
Cc: Tom Lendacky <Thomas.Lendacky@amd.com>
Contributed-under: TianoCore Contribution Agreement 1.0
Signed-off-by: Brijesh Singh <brijesh.singh@amd.com>
Reviewed-by: Laszlo Ersek <lersek@redhat.com>
Reviewed-by: Jordan Justen <jordan.l.justen@intel.com>
PlatformPei can now overwrite PcdQ35TsegMbytes; document this in
"OvmfPkg/OvmfPkg.dec".
Cc: Jordan Justen <jordan.l.justen@intel.com>
Contributed-under: TianoCore Contribution Agreement 1.0
Signed-off-by: Laszlo Ersek <lersek@redhat.com>
Reviewed-by: Jordan Justen <jordan.l.justen@intel.com>
Recognize an extended TSEG when available in
Q35TsegMbytesInitialization(), and set both PcdQ35TsegMbytes (for
OvmfPkg/SmmAccess) and "mQ35TsegMbytes" (for PlatformPei's own use)
accordingly. The new logic interfaces with the QEMU feature added in QEMU
commit 2f295167e0c4 ("q35/mch: implement extended TSEG sizes",
2017-06-08).
At this point we have to explicitly restrict Q35TsegMbytesInitialization()
to the Q35 board, but that's OK, because Q35TsegMbytesInitialization() is
only called when PcdSmmSmramRequire is set, and for that Q35 is already an
enforced requirement.
Cc: Jordan Justen <jordan.l.justen@intel.com>
Suggested-by: Jordan Justen <jordan.l.justen@intel.com>
Contributed-under: TianoCore Contribution Agreement 1.0
Signed-off-by: Laszlo Ersek <lersek@redhat.com>
Reviewed-by: Jordan Justen <jordan.l.justen@intel.com>
In SmmAccessPeiEntryPoint(), map TSEG megabyte counts different from 1, 2
and 8 to the MCH_ESMRAMC_TSEG_EXT bit pattern (introduced in the previous
patch), for the ESMRAMC.TSEG_SZ bit-field register. (Suggested by Jordan.)
In SmramAccessGetCapabilities() -- backing both
PEI_SMM_ACCESS_PPI.GetCapabilities() and
EFI_SMM_ACCESS2_PROTOCOL.GetCapabilities() --, map the
MCH_ESMRAMC_TSEG_EXT bit pattern found in the ESMRAMC.TSEG_SZ bit-field
register to a byte count of (mQ35TsegMbytes * SIZE_1MB).
(MCH_ESMRAMC_TSEG_EXT is the only possible pattern if none of
MCH_ESMRAMC_TSEG_1MB, MCH_ESMRAMC_TSEG_2MB, and MCH_ESMRAMC_TSEG_8MB
match.)
The new code paths are not exercised just yet; for that, PlatformPei is
going to have to set PcdQ35TsegMbytes (and consequently, SmramInternal's
"mQ35TsegMbytes") to a value different from 1, 2, and 8.
Cc: Jordan Justen <jordan.l.justen@intel.com>
Contributed-under: TianoCore Contribution Agreement 1.0
Signed-off-by: Laszlo Ersek <lersek@redhat.com>
Reviewed-by: Jordan Justen <jordan.l.justen@intel.com>
We can now make PcdQ35TsegMbytes dynamic, in preparation for the extended
TSEG size feature. At the moment we only move the declaration in
OvmfPkg.dec from [PcdsFixedAtBuild] to [PcdsDynamic, PcdsDynamicEx], and
provide the dynamic defaults (with the same value, 8) in the DSC files if
SMM_REQUIRE is TRUE.
Cc: Jordan Justen <jordan.l.justen@intel.com>
Suggested-by: Jordan Justen <jordan.l.justen@intel.com>
Contributed-under: TianoCore Contribution Agreement 1.0
Signed-off-by: Laszlo Ersek <lersek@redhat.com>
Reviewed-by: Jordan Justen <jordan.l.justen@intel.com>
In one of the next patches we'll turn PcdQ35TsegMbytes into a dynamic PCD,
to be set by PlatformPei.
Jordan suggested to use gEfiPeiMemoryDiscoveredPpiGuid as SmmAccessPei's
DEPEX for making sure that PlatformPei sets the PCD before SmmAccessPei
consumes it. (PlatformPei installs the permanent PEI RAM.) Such a DEPEX is
supposed to mirror physical firmware, where anything related to SMRAM
cannot run before said platform's physical RAM is discovered (signaled by
the presence of gEfiPeiMemoryDiscoveredPpiGuid).
Introduce the InitQ35TsegMbytes() function and the "mQ35TsegMbytes" extern
variable to "SmramInternal.h" and "SmramInternal.c":
- Both SmmAccess modules (PEIM and DXE driver) are supposed to call
InitQ35TsegMbytes() in their respective entry point functions, saving
PcdQ35TsegMbytes into "mQ35TsegMbytes". This way dynamic PCD fetches can
be kept out of PEI_SMM_ACCESS_PPI and EFI_SMM_ACCESS2_PROTOCOL member
functions later (when we add support for extended TSEG size).
- We can thus replace the current PcdQ35TsegMbytes fetches in
SmmAccessPei's entry point function as well, with reads from
"mQ35TsegMbytes".
Cc: Jordan Justen <jordan.l.justen@intel.com>
Contributed-under: TianoCore Contribution Agreement 1.0
Signed-off-by: Laszlo Ersek <lersek@redhat.com>
Reviewed-by: Jordan Justen <jordan.l.justen@intel.com>
In one of the next patches we'll turn PcdQ35TsegMbytes into a dynamic PCD,
to be set by PlatformPei. Introduce the Q35TsegMbytesInitialization()
function and the "mQ35TsegMbytes" global variable to support this.
Q35TsegMbytesInitialization() manages the PCD and caches its final value
into "mQ35TsegMbytes". Call Q35TsegMbytesInitialization() from
InitializePlatform() just in time for the current PCD consumers,
PublishPeiMemory(), InitializeRamRegions() and QemuInitializeRam() --
which is called from InitializeRamRegions() -- to be rebased on top of
"mQ35TsegMbytes".
Call Q35TsegMbytesInitialization() only when PcdSmmSmramRequire is TRUE,
given that PcdQ35TsegMbytes is consumed in that case only.
Cc: Jordan Justen <jordan.l.justen@intel.com>
Contributed-under: TianoCore Contribution Agreement 1.0
Signed-off-by: Laszlo Ersek <lersek@redhat.com>
Reviewed-by: Jordan Justen <jordan.l.justen@intel.com>
Widen PcdQ35TsegMbytes to UINT16, in preparation for setting it
dynamically to the QEMU-advertized extended TSEG size (which is 16-bits
wide).
Cc: Jordan Justen <jordan.l.justen@intel.com>
Contributed-under: TianoCore Contribution Agreement 1.0
Signed-off-by: Laszlo Ersek <lersek@redhat.com>
Reviewed-by: Jordan Justen <jordan.l.justen@intel.com>
Jiaxin reports that the OvmfPkg/README instructions for downloading the
Intel PROEFI drivers, and the filenames in OvmfPkg/OvmfPkg*.fdf for
incorporating the same in the OVMF binaries, are no longer up to date; the
download link has stopped working.
Additionally, the IA32 driver binary is no longer distributed by Intel.
Update OvmfPkg/README with new download instructions, and adapt the OVMF
FDF files.
With this driver in use for QEMU's e1000 NIC, the DH shell command prints,
as Controller Name, "Intel(R) PRO/1000 MT Network Connection". I
successfully tested DHCP and ping from the UEFI shell.
Cc: Jiaxin Wu <jiaxin.wu@intel.com>
Cc: Jordan Justen <jordan.l.justen@intel.com>
Reported-by: Jiaxin Wu <jiaxin.wu@intel.com>
Ref: https://bugzilla.tianocore.org/show_bug.cgi?id=613
Contributed-under: TianoCore Contribution Agreement 1.0
Signed-off-by: Laszlo Ersek <lersek@redhat.com>
Tested-by: Wu Jiaxin <jiaxin.wu@intel.com>
Reviewed-by: Wu Jiaxin <jiaxin.wu@intel.com>
Acked-by: Jordan Justen <jordan.l.justen@intel.com>
When the GenFv utility from BaseTools composes a firmware volume, it
checks whether modules in the firmware volume are subject to build-time
relocation. The primary indication for relocation is whether the firmware
volume has a nonzero base address, according to the [FD] section(s) in the
FDF file that refer to the firmware volume.
The idea behind build-time relocation is that XIP (execute in place)
modules will not be relocated at boot-time:
- Pre-DXE phase modules generally execute in place.
(OVMF is no exception, despite the fact that we have writeable memory
even in SEC: PEI_CORE and PEIMs run in-place from PEIFV, after SEC
decompresses PEIFV and DXEFV from FVMAIN_COMPACT (flash) to RAM.
PEI_CORE and the PEIMs are relocated at boot-time only after PlatformPei
installs the permanent PEI RAM, and the RAM migration occurs.)
- Modules dispatched by the DXE Core are generally relocated at boot-time.
However, this is not necessarily so. Quoting Liming from
<https://lists.01.org/pipermail/edk2-devel/2017-July/012053.html>:
> PI spec has no limitation that XIP is for PEIM only. DXE driver may be
> built as XIP for other purpose. For example, if DXE driver image address
> is not zero, DxeCore will try allocating the preferred address and load
> it. In another case, once DXE driver is relocated at build time, DxeCore
> will dispatch it and start it directly without loading, it may save boot
> performance.
Therefore GenFv relocates even DXE and UEFI driver modules if the
containing firmware volume has a nonzero base address.
In OVMF, this is the case for both PEIFV and DXEFV:
> [FD.MEMFD]
> BaseAddress = $(MEMFD_BASE_ADDRESS)
> Size = 0xB00000
> ErasePolarity = 1
> BlockSize = 0x10000
> NumBlocks = 0xB0
> ...
> 0x020000|0x0E0000
> gUefiOvmfPkgTokenSpaceGuid.PcdOvmfPeiMemFvBase|gUefiOvmfPkgTokenSpaceGuid.PcdOvmfPeiMemFvSize
> FV = PEIFV
>
> 0x100000|0xA00000
> gUefiOvmfPkgTokenSpaceGuid.PcdOvmfDxeMemFvBase|gUefiOvmfPkgTokenSpaceGuid.PcdOvmfDxeMemFvSize
> FV = DXEFV
While the build-time relocation certainly makes sense for PEIFV (see
above), the reasons for which we specify DXEFV under [FD.MEMFD] are
weaker:
- we set the PcdOvmfDxeMemFvBase and PcdOvmfDxeMemFvSize PCDs here,
- and we ascertain that DXEFV, when decompressed by SEC from
FVMAIN_COMPACT, will fit into the area allotted here, at build time.
In other words, the build-time relocation of the modules in DXEFV is a
waste of resources. But, it gets worse:
Build-time relocation of an executable is only possible if the on-disk and
in-memory layouts are identical, i.e., if the sections of the PE/COFF
image adhere to the same alignment on disk and in memory. Put differently,
the FileAlignment and SectionAlignment headers must be equal.
For boot-time modules that we build as part of edk2, both alignment values
are 0x20 bytes. For runtime modules that we build as part of edk2, both
alignment values are 0x1000 bytes. This is why the DXEFV relocation,
albeit wasteful, is also successful every time.
Unfortunately, if we try to include a PE/COFF binary in DXEFV that
originates from outside of edk2, the DXEFV relocation can fail due to the
binary having unmatched FileAlignment and SectionAlignment headers. This
is precisely the case with the E3522X2.EFI network driver for the e1000
NIC, from Intel's BootUtil / PREBOOT.EXE distribution.
The solution is to use the FvForceRebase=FALSE override under [FV.DXEFV].
This tells GenFv not to perform build-time relocation on the firmware
volume, despite the FV having a nonzero base address.
In DXEFV we also have SMM drivers. Those are relocated at boot-time (into
SMRAM) unconditionally; SMRAM is always discovered at boot-time.
Kudos to Ard and Liming for the PE/COFF sections & relocations
explanation, and for the FvForceRebase=FALSE tip.
I regression-tested this change in the following configurations (all with
normal boot and S3 suspend/resume):
IA32, q35, SMM, Linux
IA32X64, q35, SMM, Linux
IA32X64, q35, SMM, Windows-8.1
X64, i440fx, no-SMM, Linux
Cc: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Cc: Jordan Justen <jordan.l.justen@intel.com>
Cc: Liming Gao <liming.gao@intel.com>
Ref: https://bugzilla.tianocore.org/show_bug.cgi?id=613
Ref: https://bugzilla.tianocore.org/show_bug.cgi?id=615
Suggested-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Suggested-by: Liming Gao <liming.gao@intel.com>
Contributed-under: TianoCore Contribution Agreement 1.0
Signed-off-by: Laszlo Ersek <lersek@redhat.com>
Reviewed-by: Liming Gao <liming.gao@intel.com>
Acked-by: Jordan Justen <jordan.l.justen@intel.com>
Commit 4275f38507 ("OvmfPkg/AcpiPlatformDxe: alloc blobs from 64-bit
space unless restricted") introduced a variable which is [incorrectly]
identified by GCC as being potentially uninitialized. So let's just set
it to NULL before use.
Contributed-under: TianoCore Contribution Agreement 1.0
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Reviewed-by: Laszlo Ersek <lersek@redhat.com>
... by narrower than 8-byte ADD_POINTER references.
Introduce the CollectAllocationsRestrictedTo32Bit() function, which
iterates over the linker/loader script, and collects the names of the
fw_cfg blobs that are referenced by QEMU_LOADER_ADD_POINTER.PointeeFile
fields, such that QEMU_LOADER_ADD_POINTER.PointerSize is less than 8. This
means that the pointee blob's address will have to be patched into a
narrower-than-8 byte pointer field, hence the pointee blob must not be
allocated from 64-bit address space.
In ProcessCmdAllocate(), consult these restrictions when setting the
maximum address for gBS->AllocatePages(). The default is now MAX_UINT64,
unless restricted like described above to the pre-patch MAX_UINT32 limit.
In combination with Ard's QEMU commit cb51ac2ffe36 ("hw/arm/virt: generate
64-bit addressable ACPI objects", 2017-04-10), this patch enables
OvmfPkg/AcpiPlatformDxe to work entirely above the 4GB mark.
(An upcoming / planned aarch64 QEMU machine type will have no RAM under
4GB at all. Plus, moving the allocations higher is beneficial to the
current "virt" machine type as well; in Ard's words: "having all firmware
allocations inside the same 1 GB (or 512 MB for 64k pages) frame reduces
the TLB footprint".)
Cc: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Cc: Gerd Hoffmann <kraxel@redhat.com>
Cc: Igor Mammedov <imammedo@redhat.com>
Cc: Jordan Justen <jordan.l.justen@intel.com>
Suggested-by: Igor Mammedov <imammedo@redhat.com>
Suggested-by: Gerd Hoffmann <kraxel@redhat.com>
Contributed-under: TianoCore Contribution Agreement 1.0
Signed-off-by: Laszlo Ersek <lersek@redhat.com>
Tested-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Reviewed-by: Jordan Justen <jordan.l.justen@intel.com>
Xen gained support for the 4MB flash image in Xen commit 0d6968635ce5
("hvmloader: avoid tests when they would clobber used memory",
2017-05-19), which is part of Xen 4.9.0-rc6.
The previous default of 2MB can be explicitly selected with
-D FD_SIZE_2MB
or
-D FD_SIZE_IN_KB=2048
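(Assuming the usual edk2 build invocation, that means appending the define
to the options otherwise in use, for example:

  build -a X64 -p OvmfPkg/OvmfPkgX64.dsc -t GCC5 -D FD_SIZE_2MB

where the architecture and tool chain tag are only placeholders.)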
Cc: Jordan Justen <jordan.l.justen@intel.com>
Suggested-by: Jordan Justen <jordan.l.justen@intel.com>
Contributed-under: TianoCore Contribution Agreement 1.0
Signed-off-by: Laszlo Ersek <lersek@redhat.com>
(cherry picked from commit bba8dfbec3)
Reviewed-by: Jordan Justen <jordan.l.justen@intel.com>
[lersek@redhat.com: reference Xen commit in commit message]
Signed-off-by: Laszlo Ersek <lersek@redhat.com>
Ref: https://bugzilla.tianocore.org/show_bug.cgi?id=559
The XCODE5 tool chain has a FAMILY of GCC. The GCC statements in the
[BuildOptions] section add flags that are not compatible with XCODE5. Add
empty XCODE5 statements in [BuildOptions] sections to prevent the use of
the GCC flags in XCODE5 builds.
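A hypothetical illustration of the pattern (the GCC flag shown is made up;
only the empty XCODE5 assignment reflects the change described here):

  [BuildOptions]
    # GCC family flag that an XCODE5 build would otherwise inherit,
    # because XCODE5's FAMILY is GCC:
    GCC:*_*_*_CC_FLAGS    = -fno-omit-frame-pointer
    # empty XCODE5 statement, keeping the GCC flags out of XCODE5 builds:
    XCODE5:*_*_*_CC_FLAGS =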
Cc: Laszlo Ersek <lersek@redhat.com>
Cc: Andrew Fish <afish@apple.com>
Contributed-under: TianoCore Contribution Agreement 1.0
Signed-off-by: Michael D Kinney <michael.d.kinney@intel.com>
Reviewed-by: Laszlo Ersek <lersek@redhat.com>
EmuVariableFvbRuntimeDxe now uses a 4KB (EFI_PAGE_SIZE) block size.
Cc: Jordan Justen <jordan.l.justen@intel.com>
Contributed-under: TianoCore Contribution Agreement 1.0
Signed-off-by: Laszlo Ersek <lersek@redhat.com>
Regression-tested-by: Gary Lin <glin@suse.com>
Reviewed-by: Jordan Justen <jordan.l.justen@intel.com>
EmuVariableFvbRuntimeDxe currently produces a Firmware Volume Block
protocol that is based on a block map of two blocks, each block having
PcdFlashNvStorageFtwSpareSize for size.
(The total size is 2 * PcdFlashNvStorageFtwSpareSize.)
FaultTolerantWriteDxe in turn expects the block size to be a power of two.
In the 4MB build of OVMF, PcdFlashNvStorageFtwSpareSize is 264KB, which is
not a power of two. In order to equip EmuVariableFvbRuntimeDxe for this
build, shrink the block size to 4KB (EFI_PAGE_SIZE), and grow the block
count from 2 to EFI_SIZE_TO_PAGES(2 * PcdFlashNvStorageFtwSpareSize). The
total size remains
  2 * PcdFlashNvStorageFtwSpareSize
  --------------------------------- * EFI_PAGE_SIZE
           EFI_PAGE_SIZE
Right now EmuVariableFvbRuntimeDxe open-codes the block count of 2 in
various limit checks, so introduce a few new macros:
- EMU_FVB_NUM_TOTAL_BLOCKS, for the left-hand factor of the above product,
- EMU_FVB_NUM_SPARE_BLOCKS, for half of that.
Also rework the FVB protocol members to support an arbitrary count of
blocks.
Keep the invariant intact that the first half of the firmware volume hosts
the variable store and the FTW working block, and that the second half
maps the FTW spare area.
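One possible shape of the new macros, sketched under the assumption that
the PCD is read with FixedPcdGet32(); EMU_FVB_BLOCK_SIZE and EMU_FVB_SIZE
are illustrative helper names, and the in-tree definitions may differ in
detail:

  #define EMU_FVB_BLOCK_SIZE        EFI_PAGE_SIZE
  #define EMU_FVB_NUM_TOTAL_BLOCKS  \
            EFI_SIZE_TO_PAGES (2 * FixedPcdGet32 (PcdFlashNvStorageFtwSpareSize))
  #define EMU_FVB_NUM_SPARE_BLOCKS  (EMU_FVB_NUM_TOTAL_BLOCKS / 2)
  #define EMU_FVB_SIZE              (EMU_FVB_NUM_TOTAL_BLOCKS * EMU_FVB_BLOCK_SIZE)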
Cc: Jordan Justen <jordan.l.justen@intel.com>
Ref: https://bugzilla.tianocore.org/show_bug.cgi?id=527
Contributed-under: TianoCore Contribution Agreement 1.0
Signed-off-by: Laszlo Ersek <lersek@redhat.com>
Regression-tested-by: Gary Lin <glin@suse.com>
Reviewed-by: Jordan Justen <jordan.l.justen@intel.com>
According to the PI spec, Volume 3,
EFI_FIRMWARE_VOLUME_BLOCK2_PROTOCOL.EraseBlocks():
> The variable argument list is a list of tuples. Each tuple describes a
> range of LBAs to erase and consists of the following:
> * An EFI_LBA that indicates the starting LBA
> * A UINTN that indicates the number of blocks to erase
(NB, in edk2, EFI_FIRMWARE_VOLUME_BLOCK_PROTOCOL is a typedef to
EFI_FIRMWARE_VOLUME_BLOCK2_PROTOCOL.)
In this driver, the NumOfLba local variable is defined with type UINTN,
but the TYPE argument passed to VA_ARG() is UINT32. Fix the mismatch.
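A sketch of the corrected argument consumption (local variable
declarations omitted; EFI_LBA_LIST_TERMINATOR ends the tuple list):

  VA_START (Args, This);
  for (;;) {
    StartingLba = VA_ARG (Args, EFI_LBA);
    if (StartingLba == EFI_LBA_LIST_TERMINATOR) {
      break;
    }
    //
    // Per the PI spec, the block count in each tuple is a UINTN, so the
    // TYPE argument of VA_ARG() must be UINTN as well (it used to be
    // UINT32).
    //
    NumOfLba = VA_ARG (Args, UINTN);
    // ... erase NumOfLba blocks starting at StartingLba ...
  }
  VA_END (Args);

The same corrected pattern applies to the analogous fix below.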
Cc: Jordan Justen <jordan.l.justen@intel.com>
Reported-by: Jordan Justen <jordan.l.justen@intel.com>
Contributed-under: TianoCore Contribution Agreement 1.0
Signed-off-by: Laszlo Ersek <lersek@redhat.com>
Reviewed-by: Jordan Justen <jordan.l.justen@intel.com>
According to the PI spec, Volume 3,
EFI_FIRMWARE_VOLUME_BLOCK2_PROTOCOL.EraseBlocks():
> The variable argument list is a list of tuples. Each tuple describes a
> range of LBAs to erase and consists of the following:
> * An EFI_LBA that indicates the starting LBA
> * A UINTN that indicates the number of blocks to erase
(NB, in edk2, EFI_FIRMWARE_VOLUME_BLOCK_PROTOCOL is a typedef to
EFI_FIRMWARE_VOLUME_BLOCK2_PROTOCOL.)
In this driver, the NumOfLba local variable is defined with type UINTN,
but the TYPE argument passed to VA_ARG() is UINT32. Fix the mismatch.
Cc: Jordan Justen <jordan.l.justen@intel.com>
Contributed-under: TianoCore Contribution Agreement 1.0
Signed-off-by: Laszlo Ersek <lersek@redhat.com>
Reviewed-by: Jordan Justen <jordan.l.justen@intel.com>
In the previous patch we had to add two explicit Null resolutions; here,
in contrast, we can remove five PeiPcdLib resolutions, after making
PeiPcdLib the default.
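For context, a rough sketch of the kind of DSC change described, assuming
the library class in question is PcdLib resolved to MdePkg's PeiPcdLib
instance (the module name and section are placeholders, not actual OvmfPkg
content):

  [LibraryClasses.common.PEIM]
    # new default resolution for PEIMs
    PcdLib|MdePkg/Library/PeiPcdLib/PeiPcdLib.inf

  [Components]
    SomePkg/SomeModulePei/SomeModulePei.inf {
      <LibraryClasses>
        # module-scope overrides like this one become redundant:
        # PcdLib|MdePkg/Library/PeiPcdLib/PeiPcdLib.inf
    }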
Cc: Jordan Justen <jordan.l.justen@intel.com>
Contributed-under: TianoCore Contribution Agreement 1.0
Signed-off-by: Laszlo Ersek <lersek@redhat.com>
Reviewed-by: Jordan Justen <jordan.l.justen@intel.com>