Ensure that when a #VC exception happens, the instruction at the
instruction pointer matches the instruction that is expected given the
error code. This mitigates the Ahoi WeSee attack [1], which could allow
a hypervisor to breach the integrity and confidentiality of the
firmware by maliciously injecting interrupts. This change is a port of
Linux patch e3ef461af35a ("x86/sev: Harden #VC instruction emulation
somewhat").
[1] https://ahoi-attacks.github.io/wesee/
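As a rough illustration of the kind of check being added (function and
constant names below are illustrative, not the actual OVMF code): the
handler refuses emulation when the opcode fetched from the guest RIP does
not match the exit reason reported in the #VC error code.

  STATIC
  BOOLEAN
  VcOpcodeMatchesExitCode (
    IN CONST UINT8  *InsnBytes,  // opcode bytes fetched from the guest RIP
    IN UINT64       ExitCode     // SVM exit code from the #VC error code
    )
  {
    switch (ExitCode) {
      case SVM_EXIT_CPUID:
        // CPUID is encoded as 0F A2
        return (InsnBytes[0] == 0x0F) && (InsnBytes[1] == 0xA2);
      case SVM_EXIT_MSR:
        // WRMSR is 0F 30, RDMSR is 0F 32
        return (InsnBytes[0] == 0x0F) &&
               ((InsnBytes[1] == 0x30) || (InsnBytes[1] == 0x32));
      default:
        // Other exit codes are left unchecked in this sketch
        return TRUE;
    }
  }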
Cc: Borislav Petkov (AMD) <bp@alien8.de>
Cc: Tom Lendacky <thomas.lendacky@amd.com>
Signed-off-by: Adam Dunlap <acdunlap@google.com>
Reviewed-by: Tom Lendacky <thomas.lendacky@amd.com>
Reviewed-by: Gerd Hoffmann <kraxel@redhat.com>
BZ: https://bugzilla.tianocore.org/show_bug.cgi?id=4654
Currently, an SEV-SNP guest will terminate if it is not running at VMPL0.
The requirement for running at VMPL0 is removed if an SVSM is present.
Update the current VMPL0 check to additionally check for the presence of
an SVSM if the guest is not running at VMPL0.
Cc: Ard Biesheuvel <ardb+tianocore@kernel.org>
Cc: Erdem Aktas <erdemaktas@google.com>
Cc: Gerd Hoffmann <kraxel@redhat.com>
Cc: Jiewen Yao <jiewen.yao@intel.com>
Cc: Laszlo Ersek <lersek@redhat.com>
Cc: Michael Roth <michael.roth@amd.com>
Cc: Min Xu <min.m.xu@intel.com>
Acked-by: Gerd Hoffmann <kraxel@redhat.com>
Signed-off-by: Tom Lendacky <thomas.lendacky@amd.com>
BZ: https://bugzilla.tianocore.org/show_bug.cgi?id=4654
The SVSM specification documents an alternative method of discovery for
the SVSM using a reserved CPUID bit and a reserved MSR.
For the CPUID support, the #VC handler of an SEV-SNP guest should modify
the returned value in the EAX register for the 0x8000001f CPUID function
by setting bit 28 when an SVSM is present.
For the MSR support, a new reserved MSR 0xc001f000 has been defined. A #VC
exception should be generated when accessing this MSR. The #VC handler is
expected to ignore writes to this MSR and to return the physical calling
area address (CAA) on reads of this MSR.
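A minimal sketch of the intended #VC handler behavior; helper names such
as SvsmIsPresent() and SvsmGetCaaPa() are assumptions, not the real OVMF
routines, and the fragment is illustrative only:

  #define SVSM_CPUID_BIT  BIT28         // CPUID 0x8000001F:EAX[28]
  #define SVSM_MSR_CAA    0xC001F000    // reserved SVSM CAA MSR

  //
  // CPUID 0x8000001F: set EAX bit 28 when an SVSM is present.
  //
  if ((CpuidFn == 0x8000001F) && SvsmIsPresent ()) {
    Eax |= SVSM_CPUID_BIT;
  }

  //
  // MSR 0xC001F000: ignore writes; reads return the physical CAA.
  //
  if (MsrIndex == SVSM_MSR_CAA) {
    if (MsrIsRead) {
      MsrValue = SvsmGetCaaPa ();
    }
  }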
Cc: Ard Biesheuvel <ardb+tianocore@kernel.org>
Cc: Erdem Aktas <erdemaktas@google.com>
Cc: Gerd Hoffmann <kraxel@redhat.com>
Cc: Jiewen Yao <jiewen.yao@intel.com>
Cc: Laszlo Ersek <lersek@redhat.com>
Cc: Michael Roth <michael.roth@amd.com>
Cc: Min Xu <min.m.xu@intel.com>
Acked-by: Gerd Hoffmann <kraxel@redhat.com>
Signed-off-by: Tom Lendacky <thomas.lendacky@amd.com>
BZ: https://bugzilla.tianocore.org/show_bug.cgi?id=4654
The RMPADJUST instruction is used to alter the VMSA attribute of a page,
but the VMSA attribute can only be changed when running at VMPL0. When
an SVSM is present, use the SVSM_CORE_CREATE_VCPU and SVSM_CORE_DELETE_VCPU
calls to add or remove the VMSA attribute on a page instead of issuing
the RMPADJUST instruction directly.
Implement the AmdSvsmSnpVmsaRmpAdjust() API to perform the proper operation
to update the VMSA attribute.
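Roughly, the new API dispatches as follows (the prototype and helper names
below are illustrative, not the exact library code):

  EFI_STATUS
  EFIAPI
  AmdSvsmSnpVmsaRmpAdjust (
    IN SEV_ES_SAVE_AREA  *Vmsa,
    IN UINT32            ApicId,
    IN BOOLEAN           SetVmsa
    )
  {
    if (AmdSvsmIsSvsmPresent ()) {
      //
      // Not running at VMPL0: ask the SVSM to add/remove the VMSA attribute
      //
      return SetVmsa ? SvsmCoreCreateVcpu (Vmsa, ApicId)
                     : SvsmCoreDeleteVcpu (Vmsa);
    }

    //
    // Running at VMPL0: RMPADJUST can be issued directly
    //
    return RmpAdjustVmsaAttribute (Vmsa, SetVmsa);
  }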
Cc: Ard Biesheuvel <ardb+tianocore@kernel.org>
Cc: Gerd Hoffmann <kraxel@redhat.com>
Cc: Jiewen Yao <jiewen.yao@intel.com>
Cc: Laszlo Ersek <lersek@redhat.com>
Acked-by: Gerd Hoffmann <kraxel@redhat.com>
Signed-off-by: Tom Lendacky <thomas.lendacky@amd.com>
BZ: https://bugzilla.tianocore.org/show_bug.cgi?id=4654
Similar to the Page State Change optimization added previously, also take
into account the possibility of using the SVSM for PVALIDATE instructions.
Conditionally adjust the maximum number of entries based on how many
entries the SVSM calling area can support.
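Something along these lines, where the structure and constant names are
assumptions used for illustration:

  //
  // Cap the number of entries to what the SVSM calling area can hold.
  //
  MaxEntries = GHCB_MAX_PSC_ENTRIES;
  if (AmdSvsmIsSvsmPresent ()) {
    SvsmMaxEntries = (CallingAreaSize - sizeof (SVSM_PVALIDATE_HEADER)) /
                     sizeof (SVSM_PVALIDATE_ENTRY);
    if (SvsmMaxEntries < MaxEntries) {
      MaxEntries = SvsmMaxEntries;
    }
  }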
Cc: Ard Biesheuvel <ardb+tianocore@kernel.org>
Cc: Erdem Aktas <erdemaktas@google.com>
Cc: Gerd Hoffmann <kraxel@redhat.com>
Cc: Jiewen Yao <jiewen.yao@intel.com>
Cc: Laszlo Ersek <lersek@redhat.com>
Cc: Michael Roth <michael.roth@amd.com>
Cc: Min Xu <min.m.xu@intel.com>
Acked-by: Gerd Hoffmann <kraxel@redhat.com>
Signed-off-by: Tom Lendacky <thomas.lendacky@amd.com>
BZ: https://bugzilla.tianocore.org/show_bug.cgi?id=4654
The PVALIDATE instruction can only be performed at VMPL0. An SVSM will
be present when running at VMPL1 or higher.
When an SVSM is present, use the SVSM_CORE_PVALIDATE call to perform
memory validation instead of issuing the PVALIDATE instruction directly.
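Conceptually, the validation path becomes (hedged sketch;
SvsmCorePvalidate() and AsmPvalidate() stand in for the real call paths):

  if (AmdSvsmIsSvsmPresent ()) {
    //
    // PVALIDATE cannot be used at VMPL1 or higher, so request the
    // validation through the SVSM_CORE_PVALIDATE call instead.
    //
    Ret = SvsmCorePvalidate (Address, PageSize, Validate);
  } else {
    //
    // Running at VMPL0: issue PVALIDATE directly.
    //
    Ret = AsmPvalidate (PageSize, Validate, Address);
  }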
Cc: Ard Biesheuvel <ardb+tianocore@kernel.org>
Cc: Gerd Hoffmann <kraxel@redhat.com>
Cc: Jiewen Yao <jiewen.yao@intel.com>
Cc: Laszlo Ersek <lersek@redhat.com>
Acked-by: Gerd Hoffmann <kraxel@redhat.com>
Signed-off-by: Tom Lendacky <thomas.lendacky@amd.com>
BZ: https://bugzilla.tianocore.org/show_bug.cgi?id=4654
The PVALIDATE instruction is used to change the SNP validation of a page,
but that can only be done when running at VMPL0. To prepare for running at
a less privileged VMPL, use the AmdSvsmLib library API to perform the
PVALIDATE. The AmdSvsmLib library will perform the proper operation on
behalf of the caller.
Cc: Ard Biesheuvel <ardb+tianocore@kernel.org>
Cc: Erdem Aktas <erdemaktas@google.com>
Cc: Gerd Hoffmann <kraxel@redhat.com>
Cc: Jiewen Yao <jiewen.yao@intel.com>
Cc: Laszlo Ersek <lersek@redhat.com>
Cc: Michael Roth <michael.roth@amd.com>
Cc: Min Xu <min.m.xu@intel.com>
Signed-off-by: Tom Lendacky <thomas.lendacky@amd.com>
Acked-by: Gerd Hoffmann <kraxel@redhat.com>
BZ: https://bugzilla.tianocore.org/show_bug.cgi?id=4654
Add initial support for the new AmdSvsmLib library to OvmfPkg. The initial
implementation fully implements the library interfaces.
The SVSM presence check, AmdSvsmIsSvsmPresent(), determines the presence
of an SVSM by checking if an SVSM has been advertised in the SEV-SNP
Secrets Page.
The VMPL API, AmdSvsmSnpGetVmpl(), returns the VMPL level at which OVMF is
currently running.
The CAA API, AmdSvsmSnpGetCaa(), returns the Calling Area Address when an
SVSM is present, 0 otherwise.
The PVALIDATE API, AmdSvsmSnpPvalidate(), copies the PVALIDATE logic from
the BaseMemEncryptSevLib library for the initial implementation. The
BaseMemEncryptSevLib library will be changed to use this new API so that
the decision as to whether the SVSM is needed to perform the operation
can be isolated to this library.
The VMSA API, AmdSvsmSnpVmsaRmpAdjust(), copies the RMPUPDATE logic from
the MpInitLib library for the initial implementation. The MpInitLib
library will be changed to use this new API so that the decision as to
whether the SVSM is needed to perform the operation can be isolated to
this library.
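In outline, the interfaces described above look roughly like the following
declarations (an approximation for illustration; the exact prototypes live
in the library's header and may differ):

  BOOLEAN    EFIAPI AmdSvsmIsSvsmPresent (VOID);  // SVSM in the secrets page?
  UINT8      EFIAPI AmdSvsmSnpGetVmpl (VOID);     // VMPL OVMF is running at
  UINT64     EFIAPI AmdSvsmSnpGetCaa (VOID);      // physical CAA, or 0
  VOID       EFIAPI AmdSvsmSnpPvalidate (IN OUT SNP_PAGE_STATE_CHANGE_INFO *Info);
  EFI_STATUS EFIAPI AmdSvsmSnpVmsaRmpAdjust (IN SEV_ES_SAVE_AREA  *Vmsa,
                                             IN UINT32            ApicId,
                                             IN BOOLEAN           SetVmsa);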
Cc: Anatol Belski <anbelski@linux.microsoft.com>
Cc: Ard Biesheuvel <ardb+tianocore@kernel.org>
Cc: Erdem Aktas <erdemaktas@google.com>
Cc: Gerd Hoffmann <kraxel@redhat.com>
Cc: Jianyong Wu <jianyong.wu@arm.com>
Cc: Jiewen Yao <jiewen.yao@intel.com>
Cc: Laszlo Ersek <lersek@redhat.com>
Cc: Michael Roth <michael.roth@amd.com>
Cc: Min Xu <min.m.xu@intel.com>
Signed-off-by: Tom Lendacky <thomas.lendacky@amd.com>
Acked-by: Gerd Hoffmann <kraxel@redhat.com>
BZ: https://bugzilla.tianocore.org/show_bug.cgi?id=4654
When building the Page State Change entries for a range of memory, it can
happen that multiple calls to BuildPageStateBuffer() need to be made. If
the input work area passed to BuildPageStateBuffer() holds more entries
than can be passed to the hypervisor using the GHCB shared buffer, the
Page State Change VMGEXIT support will issue multiple VMGEXITs to process
all entries in the buffer.
However, the final VMGEXIT for each round of Page State Changes may cover
only a small number of entries, while subsequent VMGEXITs may still be
issued to handle the full range of memory requested. To maximize the
number of entries processed during each Page State Change VMGEXIT, limit
BuildPageStateBuffer() so that it does not build more entries than can be
handled in a single Page State Change VMGEXIT.
Cc: Ard Biesheuvel <ardb+tianocore@kernel.org>
Cc: Erdem Aktas <erdemaktas@google.com>
Cc: Gerd Hoffmann <kraxel@redhat.com>
Cc: Jiewen Yao <jiewen.yao@intel.com>
Cc: Laszlo Ersek <lersek@redhat.com>
Cc: Michael Roth <michael.roth@amd.com>
Cc: Min Xu <min.m.xu@intel.com>
Reviewed-by: Gerd Hoffmann <kraxel@redhat.com>
Signed-off-by: Tom Lendacky <thomas.lendacky@amd.com>
BZ: https://bugzilla.tianocore.org/show_bug.cgi?id=4654
In preparation for running under an SVSM at VMPL1 or higher (higher
numerically, lower privilege), re-organize the way a page state change
is performed in order to free up the GHCB for use by the SVSM support.
Currently, the page state change logic directly uses the GHCB shared
buffer to build the page state change structures. However, this will be
in conflict with the use of the GHCB should an SVSM call be required.
Instead, use a separate buffer (an area in the workarea during SEC and
an allocated page during PEI/DXE) to hold the page state change request
and only update the GHCB shared buffer as needed.
Since the information is now copied to, and operated on in, the GHCB
shared buffer, this has the added benefit of not requiring the start and
end entries to be saved for use when validating the memory during the
page state change sequence.
Cc: Ard Biesheuvel <ardb+tianocore@kernel.org>
Cc: Erdem Aktas <erdemaktas@google.com>
Cc: Gerd Hoffmann <kraxel@redhat.com>
Cc: Jiewen Yao <jiewen.yao@intel.com>
Cc: Laszlo Ersek <lersek@redhat.com>
Cc: Michael Roth <michael.roth@amd.com>
Cc: Min Xu <min.m.xu@intel.com>
Signed-off-by: Tom Lendacky <thomas.lendacky@amd.com>
Acked-by: Gerd Hoffmann <kraxel@redhat.com>
BZ: https://bugzilla.tianocore.org/show_bug.cgi?id=4654
Calculate the amount of memory that can be used to build the Page State
Change data (SNP_PAGE_STATE_CHANGE_INFO) instead of using a hard-coded
size. This allows for changes to the GHCB shared buffer size without
having to make changes to the page state change code.
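For example, the entry count can be derived from the shared buffer size
rather than hard-coded (the structure names follow the GHCB definitions,
but treat the exact expression as an assumption):

  //
  // Number of page state change entries that fit in the available buffer.
  //
  MaxEntries = (SharedBufferSize - sizeof (SNP_PAGE_STATE_HEADER)) /
               sizeof (SNP_PAGE_STATE_ENTRY);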
Cc: Ard Biesheuvel <ardb+tianocore@kernel.org>
Cc: Erdem Aktas <erdemaktas@google.com>
Cc: Gerd Hoffmann <kraxel@redhat.com>
Cc: Jiewen Yao <jiewen.yao@intel.com>
Cc: Laszlo Ersek <lersek@redhat.com>
Cc: Michael Roth <michael.roth@amd.com>
Cc: Min Xu <min.m.xu@intel.com>
Reviewed-by: Gerd Hoffmann <kraxel@redhat.com>
Signed-off-by: Tom Lendacky <thomas.lendacky@amd.com>
BZ: https://bugzilla.tianocore.org/show_bug.cgi?id=4654
In preparation for follow-on patches, fix an area of the code that does
not meet the uncrustify coding standards.
Cc: Ard Biesheuvel <ardb+tianocore@kernel.org>
Cc: Erdem Aktas <erdemaktas@google.com>
Cc: Gerd Hoffmann <kraxel@redhat.com>
Cc: Jiewen Yao <jiewen.yao@intel.com>
Cc: Laszlo Ersek <lersek@redhat.com>
Cc: Michael Roth <michael.roth@amd.com>
Cc: Min Xu <min.m.xu@intel.com>
Reviewed-by: Gerd Hoffmann <kraxel@redhat.com>
Signed-off-by: Tom Lendacky <thomas.lendacky@amd.com>
BZ: https://bugzilla.tianocore.org/show_bug.cgi?id=4654
The AsmRmpAdjust() function returns a UINT32, however in SevSnpIsVmpl0()
the return value is checked with EFI_ERROR() when it should just be
compared to 0. Fix the error check.
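Illustratively, the check becomes a plain zero comparison (variable names
are placeholders):

  Status = AsmRmpAdjust (Rax, Rcx, Rdx);
  //
  // AsmRmpAdjust() returns the RMPADJUST instruction return code (a
  // UINT32), not an EFI_STATUS, so success is indicated by zero; it was
  // previously (incorrectly) checked with EFI_ERROR().
  //
  return (Status == 0);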
Cc: Ard Biesheuvel <ardb+tianocore@kernel.org>
Cc: Erdem Aktas <erdemaktas@google.com>
Cc: Gerd Hoffmann <kraxel@redhat.com>
Cc: Jiewen Yao <jiewen.yao@intel.com>
Cc: Laszlo Ersek <lersek@redhat.com>
Cc: Michael Roth <michael.roth@amd.com>
Cc: Min Xu <min.m.xu@intel.com>
Reviewed-by: Gerd Hoffmann <kraxel@redhat.com>
Signed-off-by: Tom Lendacky <thomas.lendacky@amd.com>
BZ: https://bugzilla.tianocore.org/show_bug.cgi?id=4752
This library is a copy of SecurityPkg/Library/HashLibTdx. It is designed
for Intel TDX enlightened OVMF, so move it from SecurityPkg to OvmfPkg.
To avoid breaking the build, the move is split into two patches.
SecurityPkg/Library/HashLibTdx will be deleted in the next patch.
Cc: Ard Biesheuvel <ardb+tianocore@kernel.org>
Cc: Jiewen Yao <jiewen.yao@intel.com>
Cc: Gerd Hoffmann <kraxel@redhat.com>
Signed-off-by: Min Xu <min.m.xu@intel.com>
Reviewed-by: Jiewen Yao <jiewen.yao@intel.com>
REF: https://bugzilla.tianocore.org/show_bug.cgi?id=4696
Per the [GHCI] spec, TDVF should clear BIT5 (RBP) in the mask.
Reference:
[GHCI]: TDX Guest-Host-Communication Interface v1.5
https://cdrdv2.intel.com/v1/dl/getContent/726792
Cc: Erdem Aktas <erdemaktas@google.com>
Cc: James Bottomley <jejb@linux.ibm.com>
Cc: Jiewen Yao <jiewen.yao@intel.com>
Cc: Min Xu <min.m.xu@intel.com>
Cc: Tom Lendacky <thomas.lendacky@amd.com>
Cc: Michael Roth <michael.roth@amd.com>
Cc: Gerd Hoffmann <kraxel@redhat.com>
Cc: Isaku Yamahata <isaku.yamahata@intel.com>
Signed-off-by: Ceping Sun <cepingx.sun@intel.com>
Reviewed-by: Jiewen Yao <jiewen.yao@intel.com>
Reviewed-by: Min Xu <min.m.xu@intel.com>
Adjust physical address space logic for la57 mode (5-level paging).
With a larger logical address space we can identity-map a larger
physical address space.
Signed-off-by: Gerd Hoffmann <kraxel@redhat.com>
Reviewed-by: Laszlo Ersek <lersek@redhat.com>
Acked-by: Ard Biesheuvel <ardb@kernel.org>
Message-Id: <20240222105407.75735-4-kraxel@redhat.com>
Cc: Michael Roth <michael.roth@amd.com>
Cc: Jiewen Yao <jiewen.yao@intel.com>
Cc: Liming Gao <gaoliming@byosoft.com.cn>
Cc: Laszlo Ersek <lersek@redhat.com>
Cc: Tom Lendacky <thomas.lendacky@amd.com>
Cc: Paolo Bonzini <pbonzini@redhat.com>
Cc: Ard Biesheuvel <ardb+tianocore@kernel.org>
Cc: Gerd Hoffmann <kraxel@redhat.com>
Cc: Min Xu <min.m.xu@intel.com>
Cc: Erdem Aktas <erdemaktas@google.com>
Cc: Oliver Steffen <osteffen@redhat.com>
Cc: Ard Biesheuvel <ardb@kernel.org>
[lersek@redhat.com: turn the "Cc:" message headers from Gerd's on-list
posting into "Cc:" tags in the commit message, in order to pacify
"PatchCheck.py"]
XenRealTimeClockLib is used to back the runtime services time functions,
so align the description of the function return values with the
defined values for these services as described in UEFI Spec 2.10.
REF: UEFI Spec 2.10, Section 8, Services - Runtime Services
Cc: Ard Biesheuvel <ardb+tianocore@kernel.org>
Cc: Jiewen Yao <jiewen.yao@intel.com>
Cc: Laszlo Ersek <lersek@redhat.com>
Signed-off-by: Suqiang Ren <suqiangx.ren@intel.com>
Reviewed-by: Jiewen Yao <jiewen.yao@intel.com>
Reviewed-by: Liming Gao <gaoliming@byosoft.com.cn>
Move the PlatformBootManagerLib to OvmfPkg and rename it to
PlatformBootManagerLibLight for easy use by other architectures.
Build-tested only (with "ArmVirtQemu.dsc and OvmfPkgX64.dsc").
BZ: https://bugzilla.tianocore.org/show_bug.cgi?id=4663
Cc: Ard Biesheuvel <ardb+tianocore@kernel.org>
Cc: Laszlo Ersek <lersek@redhat.com>
Cc: Leif Lindholm <quic_llindhol@quicinc.com>
Cc: Sami Mujawar <sami.mujawar@arm.com>
Cc: Gerd Hoffmann <kraxel@redhat.com>
Cc: Jiewen Yao <jiewen.yao@intel.com>
Signed-off-by: Chao Li <lichao@loongson.cn>
Reviewed-by: Laszlo Ersek <lersek@redhat.com>
Move the FdtSerialPortAddressLib to OvmfPkg so that other architectures
can easily use it.
Build-tested only (with "ArmVirtQemu.dsc and OvmfPkgX64.dsc").
BZ: https://bugzilla.tianocore.org/show_bug.cgi?id=4584
Cc: Ard Biesheuvel <ardb+tianocore@kernel.org>
Cc: Laszlo Ersek <lersek@redhat.com>
Cc: Leif Lindholm <quic_llindhol@quicinc.com>
Cc: Sami Mujawar <sami.mujawar@arm.com>
Cc: Gerd Hoffmann <kraxel@redhat.com>
Cc: Jiewen Yao <jiewen.yao@intel.com>
Signed-off-by: Chao Li <lichao@loongson.cn>
Reviewed-by: Laszlo Ersek <lersek@redhat.com>
In addition to initializing the PhysMemAddressWidth and
FirstNonAddress fields in PlatformInfoHob, the
PlatformAddressWidthInitialization function is responsible
for initializing the PcdPciMmio64Base and PcdPciMmio64Size
fields.
Currently, for CloudHv guests, the PcdPciMmio64Base is
placed immediately after either the 4G boundary or the
last RAM region, whichever is greater. We do not change
this behavior.
Previously, when booting CloudHv guests with greater than
1TiB of high memory, the PlatformAddressWidthInitialization
function incorrectly calculated the amount of RAM because it
used the overflowed 24-bit CMOS register.
Now, update the PlatformAddressWidthInitialization
behavior on CloudHv to scan the E820 entries to detect
the amount of RAM. This allows CloudHv guests to boot with
greater than 1TiB of RAM.
Signed-off-by: Thomas Barrett <tbarrett@crusoeenergy.com>
Acked-by: Gerd Hoffmann <kraxel@redhat.com>
The PlatformScanE820 utility function is not currently compatible
with CloudHv since it relies on the presence of the "etc/e820"
QemuFwCfg file. Update PlatformScanE820 to iterate through the
PVH e820 entries when running on a CloudHv guest.
Signed-off-by: Thomas Barrett <tbarrett@crusoeenergy.com>
Acked-by: Gerd Hoffmann <kraxel@redhat.com>
The struct used for GHCB-based page-state change requests uses a 40-bit
bit-field for the GFN, which is shifted by PAGE_SHIFT to generate a
64-bit address. However, anything beyond 40-bits simply gets shifted off
when doing this, which will cause issues when dealing with 1TB+
addresses. Fix this by casting the 40-bit GFN values to 64-bit ones
prior to shifting them by PAGE_SHIFT.
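The fix amounts to widening the GFN before the shift, roughly as follows
(field names are assumptions based on the description above):

  //
  // GuestFrameNumber is a 40-bit bit-field; cast it to UINT64 before the
  // shift so that addresses above 1TB are not truncated.
  //
  GuestAddress = (UINT64)Info->Entry[Index].GuestFrameNumber << EFI_PAGE_SHIFT;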
Fixes: ade62c18f4 ("OvmfPkg/MemEncryptSevLib: add support to validate system RAM")
Signed-off-by: Michael Roth <michael.roth@amd.com>
Message-Id: <20231115175153.813213-1-michael.roth@amd.com>
Reviewed-by: Gerd Hoffmann <kraxel@redhat.com>
REF: https://bugzilla.tianocore.org/show_bug.cgi?id=4572
According to section 3.2 of the [GHCI] document, if the return status
of MapGPA is "TDG.VP.VMCALL_RETRY", TD must retry this operation for the
pages in the region starting at the GPA specified in R11.
In this patch, when a retry state is detected, TDVF needs to retry the
mapping with the specified address from the output results of TdVmCall.
Reference:
[GHCI]: TDX Guest-Host-Communication Interface v1.0
https://cdrdv2.intel.com/v1/dl/getContent/726790
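A hedged sketch of the retry handling; the constant, variable, and result
field names are illustrative, with the retry GPA taken from the TdVmCall
output as described above:

  do {
    TdStatus = TdVmCall (TDVMCALL_MAPGPA, MapAddress, MapLength, 0, 0, &Results);
    if (TdStatus == TDVMCALL_STATUS_RETRY) {
      //
      // Resume the MapGPA request at the GPA the VMM returned (R11) and
      // shrink the remaining length accordingly.
      //
      RetryAddress = Results.R11;
      MapLength   -= (RetryAddress - MapAddress);
      MapAddress   = RetryAddress;
    }
  } while (TdStatus == TDVMCALL_STATUS_RETRY);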
Cc: Erdem Aktas <erdemaktas@google.com>
Cc: James Bottomley <jejb@linux.ibm.com>
Cc: Jiewen Yao <jiewen.yao@intel.com>
Cc: Min Xu <min.m.xu@intel.com>
Cc: Tom Lendacky <thomas.lendacky@amd.com>
Cc: Michael Roth <michael.roth@amd.com>
Cc: Gerd Hoffmann <kraxel@redhat.com>
Acked-by: Gerd Hoffmann <kraxel@redhat.com>
Reviewed-by: Jiewen Yao <jiewen.yao@intel.com>
Signed-off-by: Ceping Sun <cepingx.sun@intel.com>
"OvmfPkg/Include/IndustryStandard/Virtio095.h" defines the macro
VIRTIO_SUBSYSTEM_CONSOLE with value 3; other locations in the tree already
use it (such as ArmVirtPkg/PlatformBootManagerLib,
OvmfPkg/VirtioSerialDxe). We should use it in
OvmfPkg/PlatformBootManagerLib too, rather than the naked constant 3.
Signed-off-by: Laszlo Ersek <lersek@redhat.com>
Acked-by: Ard Biesheuvel <ardb@kernel.org>
The qemu/kvm SMM emulation uses the AMD SaveState layout.
Now that AMD SaveState support is merged, we can just use
Amd/SmramSaveStateMap.h; QemuSmramSaveStateMap.h is not needed any more.
Signed-off-by: Gerd Hoffmann <kraxel@redhat.com>
Acked-by: Ard Biesheuvel <ardb@kernel.org>
Add a DetectAndPreparePlatformVirtioDevicePath() helper function
to set up virtio-mmio devices. Start with virtio-serial support.
This makes virtio console usable with microvm.
Signed-off-by: Gerd Hoffmann <kraxel@redhat.com>
Acked-by: Ard Biesheuvel <ardb@kernel.org>
SECURE_BOOT_FEATURE_ENABLED was dropped by commit 92da8a154f, but
PeilessStartupLib was not updated to use PcdSecureBootSupported, which
made Secure Boot no longer work in IntelTdxX64.
Fix this by replacing SECURE_BOOT_FEATURE_ENABLED with
PcdSecureBootSupported in PeilessStartupLib.
Cc: Erdem Aktas <erdemaktas@google.com>
Cc: James Bottomley <jejb@linux.ibm.com>
Cc: Jiewen Yao <jiewen.yao@intel.com>
Cc: Gerd Hoffmann <kraxel@redhat.com>
Cc: Min Xu <min.m.xu@intel.com>
Cc: Tom Lendacky <thomas.lendacky@amd.com>
Cc: Michael Roth <michael.roth@amd.com>
Signed-off-by: Ceping Sun <cepingx.sun@intel.com>
Acked-by: Gerd Hoffmann <kraxel@redhat.com>
Reviewed-by: Jiewen Yao <jiewen.yao@intel.com>
Remove the code that sets AddressEncMask for non-leaf entries when
modifying the SMM page table in MemEncryptSevLib. In the FvbServicesSmm
driver, MemEncryptSevClearMmioPageEncMask is called to clear the
AddressEncMask bit in the page table for a specific range. With the
AMD SEV feature, this AddressEncMask bit in the page table indicates
whether the memory is guest private memory or shared memory. However,
all memory accessed by the hardware page table walker is treated as
encrypted, regardless of whether the encryption bit is present, so
removing the code that sets the EncMask bit for SMM non-leaf entries
does not impact the AMD SEV feature.
The encryption mask should not be set for non-leaf entries because
CpuPageTableLib does not consume the encryption mask PCD. In the
PiSmmCpuDxeSmm module, CpuPageTableLib will be used to modify the SMM
page table in the next patch. The encryption mask overlaps with the
PageTableBaseAddress field of non-leaf page table entries. If the
encryption mask is set for SMM non-leaf page table entries, an issue
occurs when the CpuPageTableLib code uses the non-leaf entry
PageTableBaseAddress field, with the encryption mask set, to find the
next level page table.
Signed-off-by: Dun Tan <dun.tan@intel.com>
Cc: Ard Biesheuvel <ardb+tianocore@kernel.org>
Cc: Jiewen Yao <jiewen.yao@intel.com>
Cc: Jordan Justen <jordan.l.justen@intel.com>
Cc: Gerd Hoffmann <kraxel@redhat.com>
Reviewed-by: Tom Lendacky <thomas.lendacky@amd.com>
Reviewed-by: Ray Ni <ray.ni@intel.com>
This makes the InstallQemuFwCfgTables function reusable by bhyve.
Signed-off-by: Corvin Köhne <corvink@FreeBSD.org>
Acked-by: Peter Grehan <grehan@freebsd.org>
This is required to move InstallQemuFwCfgTables into AcpiPlatformLib.
Signed-off-by: Corvin Köhne <corvink@FreeBSD.org>
Acked-by: Peter Grehan <grehan@freebsd.org>
Bhyve supports providing ACPI tables via FwCfg. Therefore,
InstallQemuFwCfgTables should be moved to AcpiPlatformLib to reuse the
code. As a first step, move PciEncoding into AcpiPlatformLib.
Signed-off-by: Corvin Köhne <corvink@FreeBSD.org>
Acked-by: Peter Grehan <grehan@freebsd.org>
This makes the function reusable by bhyve.
Signed-off-by: Corvin Köhne <corvink@FreeBSD.org>
Reviewed-by: Anthony PERARD <anthony.perard@citrix.com>
Acked-by: Gerd Hoffmann <kraxel@redhat.com>
Xen and bhyve place ACPI tables into system memory, so they can share
the same code. Therefore, create a new library which searches for and
installs ACPI tables from system memory.
Signed-off-by: Corvin Köhne <corvink@FreeBSD.org>
Reviewed-by: Anthony PERARD <anthony.perard@citrix.com>
Acked-by: Gerd Hoffmann <kraxel@redhat.com>
Edk2 was failing, rather than creating more PML4 entries, when they
weren't present in the initial memory acceptance flow. Because of that,
VMs with more than 512G of memory were crashing, since a single PML4
entry only maps 512 GiB of address space. This code fixes that.
This change affects only SEV-SNP VMs.
The code was tested by successfully booting a 512G SEV-SNP VM.
Signed-off-by: Mikolaj Lisik <lisik@google.com>
Acked-by: Gerd Hoffmann <kraxel@redhat.com>
Acked-by: Tom Lendacky <thomas.lendacky@amd.com>
Older Linux kernels have problems with phys-bits larger than 46;
Ubuntu 18.04 (kernel 4.15) has been reported to be affected.
Reduce the phys-bits limit from 47 to 46.
Reported-by: Fiona Ebner <f.ebner@proxmox.com>
Signed-off-by: Gerd Hoffmann <kraxel@redhat.com>
If PcdUse1GPageTable is not enabled, restrict the physical address space
used to 1TB, to limit the amount of memory needed for identity mapping
page tables.
The same already happens in case the processor has no support for
gigabyte pages.
Signed-off-by: Gerd Hoffmann <kraxel@redhat.com>
Acked-by: Ard Biesheuvel <ardb@kernel.org>
Not used any more, remove.
Signed-off-by: Gerd Hoffmann <kraxel@redhat.com>
Acked-by: Jiewen Yao <Jiewen.yao@intel.com>
Acked-by: Ard Biesheuvel <ardb@kernel.org>
In case PcdBootRestrictToFirmware is set, disable loading EFI variables
from NvVars file.
Signed-off-by: Gerd Hoffmann <kraxel@redhat.com>
Reviewed-by: Ard Biesheuvel <ardb@kernel.org>
Add a new PCD, PcdBootRestrictToFirmware. When set to TRUE, restrict
boot options to EFI applications embedded into the firmware image.
Behavior should be identical to the PlatformBootManagerLibGrub
library variant.
Signed-off-by: Gerd Hoffmann <kraxel@redhat.com>
Acked-by: Jiewen Yao <Jiewen.yao@intel.com>
Acked-by: Ard Biesheuvel <ardb@kernel.org>
At TPL_HIGH_LEVEL, CPU interrupts are disabled (as per the UEFI
specification) and so we should never encounter a situation in which
an interrupt occurs at TPL_HIGH_LEVEL. The specification also
restricts usage of TPL_HIGH_LEVEL to the firmware itself.
However, nothing actually prevents a UEFI application from calling
gBS->RaiseTPL(TPL_HIGH_LEVEL) and then violating the invariant by
enabling interrupts via the STI or equivalent instruction. Some
versions of the Microsoft Windows bootloader are known to do this.
NestedInterruptTplLib maintains the invariant that interrupts are
disabled at TPL_HIGH_LEVEL (even when performing the dark art of
deliberately manipulating the stack so that IRET will return with
interrupts still disabled), but does not itself rely on external code
maintaining this invariant.
Relax the assertion that the interrupted TPL is below TPL_HIGH_LEVEL
to an error message, to allow UEFI applications such as these versions
of the Microsoft Windows bootloader to continue to function.
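Conceptually, the change replaces a hard assertion with a logged error,
for example (the message text and variable name are illustrative):

  if (InterruptedTPL >= TPL_HIGH_LEVEL) {
    //
    // The UEFI spec forbids this, but some Windows bootloaders enable
    // interrupts at TPL_HIGH_LEVEL anyway; log it instead of asserting.
    //
    DEBUG ((DEBUG_ERROR,
            "Interrupt occurred at forbidden TPL %u\n",
            (UINT32)InterruptedTPL));
  }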
Debugged-by: Gerd Hoffmann <kraxel@redhat.com>
Debugged-by: Laszlo Ersek <lersek@redhat.com>
Ref: https://bugzilla.redhat.com/show_bug.cgi?id=2189136
Signed-off-by: Michael Brown <mcb30@ipxe.org>
Acked-by: Laszlo Ersek <lersek@redhat.com>
Reviewed-by: Gerd Hoffmann <kraxel@redhat.com>
NestedInterruptTplLib relies on CPU interrupts being disabled to
guarantee exclusive (and hence atomic) access to the shared state in
IsrState. Nothing in the calling interrupt handler should have
re-enabled interrupts before calling NestedInterruptRestoreTPL(), and
the loop in NestedInterruptRestoreTPL() itself maintains the invariant
that interrupts are disabled at the start of each iteration.
Add assertions to clarify this invariant, and expand the comments
around the calls to RestoreTPL() and DisableInterrupts() to clarify
the expectations around enabling and disabling interrupts.
Signed-off-by: Michael Brown <mcb30@ipxe.org>
Acked-by: Laszlo Ersek <lersek@redhat.com>
Currently OVMF tries to rely on the base size advertised via the CPUID
table entries corresponding to leaf 0xD, sub-leafs 0x0/0x1. This will
generally work for KVM guests, but might not for other SEV-SNP
hypervisor implementations. Make the handling more robust by simply
using the base area size documented by the APM.
Reviewed-by: Tom Lendacky <thomas.lendacky@amd.com>
Acked-by: Jiewen Yao <jiewen.yao@intel.com>
Acked-by: Gerd Hoffmann <kraxel@redhat.com>
Signed-off-by: Michael Roth <michael.roth@amd.com>
CPUID leaf 0xD sub-leafs 0x0 and 0x1 contain cumulative sizes for the
enabled XSave areas. Those sizes are calculated by tallying up all the
other sub-leafs that contain per-area size information for XSave areas
that are currently enabled in XCr0/XSS. The current check has the logic
inverted. Fix that.
This doesn't seem to cause problems currently, but could in the future
if OVMF made more extensive use of XSave areas. It was noticed while
implementing SNP-related tests for KVM Unit Tests, which re-uses the
OVMF #VC handler in some cases.
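For reference, the intended accumulation looks roughly like this (helper
and variable names are placeholders); the bug was that the enabled-bit
test below was inverted:

  XSaveSize = LegacyRegionSize + XSaveHeaderSize;   // base area
  for (Index = 2; Index < 64; Index++) {
    //
    // Only XSave areas currently enabled in XCR0/XSS contribute to the
    // cumulative sizes reported by leaf 0xD sub-leafs 0x0/0x1.
    //
    if (((XCr0 | Xss) & LShiftU64 (1, Index)) != 0) {
      XSaveSize += GetXSaveAreaSize (Index);        // per-area size sub-leaf
    }
  }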
Reported-by: Pavan Kumar Paluri <papaluri@amd.com>
Cc: Pavan Kumar Paluri <papaluri@amd.com>
Reviewed-by: Tom Lendacky <thomas.lendacky@amd.com>
Acked-by: Jiewen Yao <jiewen.yao@intel.com>
Acked-by: Gerd Hoffmann <kraxel@redhat.com>
Signed-off-by: Michael Roth <michael.roth@amd.com>
__FUNCTION__ is a pre-standard extension that gcc and Visual C++ among
others support, while __func__ was standardized in C99.
Since it's more standard, replace __FUNCTION__ with __func__ throughout
OvmfPkg.
Signed-off-by: Rebecca Cran <rebecca@bsdio.com>
Reviewed-by: Michael D Kinney <michael.d.kinney@intel.com>
Reviewed-by: Ard Biesheuvel <ardb@kernel.org>
Reviewed-by: Sunil V L <sunilvl@ventanamicro.com>
This patch substitutes the macros that were renamed in the second
patch with the new, shared alignment macros.
Signed-off-by: Gerd Hoffmann <kraxel@redhat.com>
Reviewed-by: Michael D Kinney <michael.d.kinney@intel.com>
Reviewed-by: Jiewen Yao <Jiewen.yao@intel.com>
Acked-by: Tom Lendacky <thomas.lendacky@amd.com>
This patch is a preparation for the patches that follow. The
subsequent patches will introduce and integrate new alignment-related
macros, which collide with existing definitions in OvmfPkg.
Temporarily rename them to avoid build failure, till they can be
substituted with the new, shared definitions.
Signed-off-by: Gerd Hoffmann <kraxel@redhat.com>
Reviewed-by: Michael D Kinney <michael.d.kinney@intel.com>
Reviewed-by: Jiewen Yao <Jiewen.yao@intel.com>
Acked-by: Tom Lendacky <thomas.lendacky@amd.com>