BZ: https://bugzilla.tianocore.org/show_bug.cgi?id=4654
Similar to the Page State Change optimization added previously, also take
into account the possibility of using the SVSM for PVALIDATE instructions.
Conditionally adjust the maximum number of entries based on how many
entries the SVSM calling area can support.
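As a rough sketch of the adjustment (the identifiers below are assumed
names for illustration, not the exact OvmfPkg code), the per-pass entry
limit becomes the smaller of the Page State Change buffer capacity and the
number of PVALIDATE entries the SVSM calling area can hold:

  MaxEntries = PscBufferEntryCount;
  if (AmdSvsmIsSvsmPresent ()) {
    //
    // Assumed SVSM calling-area limit: it caps how many PVALIDATE
    // operations can be batched into a single SVSM call.
    //
    if (MaxEntries > SvsmCallingAreaEntryCount) {
      MaxEntries = SvsmCallingAreaEntryCount;
    }
  }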
Cc: Ard Biesheuvel <ardb+tianocore@kernel.org>
Cc: Erdem Aktas <erdemaktas@google.com>
Cc: Gerd Hoffmann <kraxel@redhat.com>
Cc: Jiewen Yao <jiewen.yao@intel.com>
Cc: Laszlo Ersek <lersek@redhat.com>
Cc: Michael Roth <michael.roth@amd.com>
Cc: Min Xu <min.m.xu@intel.com>
Acked-by: Gerd Hoffmann <kraxel@redhat.com>
Signed-off-by: Tom Lendacky <thomas.lendacky@amd.com>
BZ: https://bugzilla.tianocore.org/show_bug.cgi?id=4654
The PVALIDATE instruction is used to change the SNP validation of a page,
but that can only be done when running at VMPL0. To prepare for running at
a less privileged VMPL, use the AmdSvsmLib library API to perform the
PVALIDATE. The AmdSvsmLib library will perform the proper operation on
behalf of the caller.
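A minimal sketch of the call-site change, assuming an
AmdSvsmSnpPvalidate()-style entry point (the names are illustrative rather
than a quote of the library interface):

  //
  // Previously the page was validated directly, which is only legal at
  // VMPL0:
  //
  //   Ret = AsmPvalidate (PageSize, Validate, Address);
  //
  // Route the request through AmdSvsmLib instead; the library issues
  // PVALIDATE itself when running at VMPL0 or forwards the request to
  // the SVSM otherwise.
  //
  AmdSvsmSnpPvalidate (Info);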
Cc: Ard Biesheuvel <ardb+tianocore@kernel.org>
Cc: Erdem Aktas <erdemaktas@google.com>
Cc: Gerd Hoffmann <kraxel@redhat.com>
Cc: Jiewen Yao <jiewen.yao@intel.com>
Cc: Laszlo Ersek <lersek@redhat.com>
Cc: Michael Roth <michael.roth@amd.com>
Cc: Min Xu <min.m.xu@intel.com>
Signed-off-by: Tom Lendacky <thomas.lendacky@amd.com>
Acked-by: Gerd Hoffmann <kraxel@redhat.com>
BZ: https://bugzilla.tianocore.org/show_bug.cgi?id=4654
When building the Page State Change entries for a range of memory, it can
happen that multiple calls to BuildPageStateBuffer() need to be made. If
the input work area passed to BuildPageStateBuffer() holds more entries
than can be passed to the hypervisor using the GHCB shared buffer, the
Page State Change VMGEXIT support will issue multiple VMGEXITs to process
all entries in the buffer.
However, the final VMGEXIT of each round of Page State Changes may cover
only a small number of entries, while further VMGEXITs are still needed to
handle the full range of memory requested. To maximize the number of
entries processed during each Page State Change VMGEXIT, limit
BuildPageStateBuffer() so that it never builds more entries than can be
handled in a single Page State Change VMGEXIT.
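Sketched out (field and constant names are approximations of the GHCB
definitions, not a quote of the patch), the buffer is simply capped at the
single-VMGEXIT maximum so that every VMGEXIT goes out as full as possible:

  Entries = 0;
  while ((BaseAddress < EndAddress) && (Entries < SNP_PAGE_STATE_MAX_ENTRY)) {
    //
    // Record one 4KB entry for this address (2MB entries are handled
    // analogously) and move on to the next page.
    //
    Info->Entry[Entries].GuestFrameNumber = BaseAddress >> EFI_PAGE_SHIFT;
    Entries++;
    BaseAddress += EFI_PAGE_SIZE;
  }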
Cc: Ard Biesheuvel <ardb+tianocore@kernel.org>
Cc: Erdem Aktas <erdemaktas@google.com>
Cc: Gerd Hoffmann <kraxel@redhat.com>
Cc: Jiewen Yao <jiewen.yao@intel.com>
Cc: Laszlo Ersek <lersek@redhat.com>
Cc: Michael Roth <michael.roth@amd.com>
Cc: Min Xu <min.m.xu@intel.com>
Reviewed-by: Gerd Hoffmann <kraxel@redhat.com>
Signed-off-by: Tom Lendacky <thomas.lendacky@amd.com>
BZ: https://bugzilla.tianocore.org/show_bug.cgi?id=4654
In preparation for running under an SVSM at VMPL1 or higher (higher
numerically, lower privilege), re-organize the way a page state change
is performed in order to free up the GHCB for use by the SVSM support.
Currently, the page state change logic directly uses the GHCB shared
buffer to build the page state change structures. However, this will be
in conflict with the use of the GHCB should an SVSM call be required.
Instead, use a separate buffer (an area in the workarea during SEC and
an allocated page during PEI/DXE) to hold the page state change request
and only update the GHCB shared buffer as needed.
Since the information is copied to, and operated on in, the GHCB shared
buffer (leaving the private copy untouched), this has the added benefit
that the start and end entries no longer need to be saved for use when
validating the memory during the page state change sequence.
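In outline (a hedged sketch using the GHCB structure names, not a quote of
the new code), the request is built in the private buffer and only copied
into the GHCB shared buffer when a VMGEXIT is actually issued:

  //
  // Build the request in the private work area (a field of the SEC
  // workarea, or an allocated page during PEI/DXE).
  //
  Info = (SNP_PAGE_STATE_CHANGE_INFO *)PscWorkBuffer;
  // ... fill Info->Header and Info->Entry[] ...

  //
  // Copy just this batch into the GHCB shared buffer for the VMGEXIT.
  // The private copy stays intact, so it can be reused when issuing
  // PVALIDATE for the same entries afterwards.
  //
  CopyMem (Ghcb->SharedBuffer, Info, InfoSize);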
Cc: Ard Biesheuvel <ardb+tianocore@kernel.org>
Cc: Erdem Aktas <erdemaktas@google.com>
Cc: Gerd Hoffmann <kraxel@redhat.com>
Cc: Jiewen Yao <jiewen.yao@intel.com>
Cc: Laszlo Ersek <lersek@redhat.com>
Cc: Michael Roth <michael.roth@amd.com>
Cc: Min Xu <min.m.xu@intel.com>
Signed-off-by: Tom Lendacky <thomas.lendacky@amd.com>
Acked-by: Gerd Hoffmann <kraxel@redhat.com>
BZ: https://bugzilla.tianocore.org/show_bug.cgi?id=4654
Calculate the amount of memory that can be used to build the Page State
Change data (SNP_PAGE_STATE_CHANGE_INFO) instead of using a hard-coded
size. This allows for changes to the GHCB shared buffer size without
having to make changes to the page state change code.
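For example, the entry count can be derived from the shared buffer size at
build time instead of being hard-coded (a sketch; the exact macro and type
names used by the patch may differ):

  #define SNP_PAGE_STATE_MAX_ENTRY                                          \
    ((sizeof (((GHCB *)0)->SharedBuffer) - sizeof (SNP_PAGE_STATE_HEADER)) / \
     sizeof (SNP_PAGE_STATE_ENTRY))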
Cc: Ard Biesheuvel <ardb+tianocore@kernel.org>
Cc: Erdem Aktas <erdemaktas@google.com>
Cc: Gerd Hoffmann <kraxel@redhat.com>
Cc: Jiewen Yao <jiewen.yao@intel.com>
Cc: Laszlo Ersek <lersek@redhat.com>
Cc: Michael Roth <michael.roth@amd.com>
Cc: Min Xu <min.m.xu@intel.com>
Reviewed-by: Gerd Hoffmann <kraxel@redhat.com>
Signed-off-by: Tom Lendacky <thomas.lendacky@amd.com>
BZ: https://bugzilla.tianocore.org/show_bug.cgi?id=4654
In preparation for follow-on patches, fix an area of the code that does not meet
the uncrustify coding standards.
Cc: Ard Biesheuvel <ardb+tianocore@kernel.org>
Cc: Erdem Aktas <erdemaktas@google.com>
Cc: Gerd Hoffmann <kraxel@redhat.com>
Cc: Jiewen Yao <jiewen.yao@intel.com>
Cc: Laszlo Ersek <lersek@redhat.com>
Cc: Michael Roth <michael.roth@amd.com>
Cc: Min Xu <min.m.xu@intel.com>
Reviewed-by: Gerd Hoffmann <kraxel@redhat.com>
Signed-off-by: Tom Lendacky <thomas.lendacky@amd.com>
The struct used for GHCB-based page-state change requests uses a 40-bit
bit-field for the GFN, which is shifted by PAGE_SHIFT to generate a
64-bit address. However, anything beyond 40 bits simply gets shifted off
when doing this, which will cause issues when dealing with addresses at or
above 1TB. Fix this by casting the 40-bit GFN values to 64-bit ones prior
to shifting them by PAGE_SHIFT.
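The fix amounts to promoting the bit-field before the shift (an
illustrative fragment; the field name follows the SNP_PAGE_STATE_ENTRY
definition):

  //
  // Before: the 40-bit GuestFrameNumber bit-field is shifted as-is,
  // losing address bits at or above 1TB.
  //
  //   Address = Info->Entry[Index].GuestFrameNumber << EFI_PAGE_SHIFT;
  //
  // After: cast to 64 bits first so no address bits are dropped.
  //
  Address = ((UINT64)Info->Entry[Index].GuestFrameNumber) << EFI_PAGE_SHIFT;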
Fixes: ade62c18f4 ("OvmfPkg/MemEncryptSevLib: add support to validate system RAM")
Signed-off-by: Michael Roth <michael.roth@amd.com>
Message-Id: <20231115175153.813213-1-michael.roth@amd.com>
Reviewed-by: Gerd Hoffmann <kraxel@redhat.com>
__FUNCTION__ is a pre-standard extension that gcc and Visual C++ among
others support, while __func__ was standardized in C99.
Since __func__ is the standard form, replace __FUNCTION__ with __func__ throughout
OvmfPkg.
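A representative call site before and after (the DEBUG line itself is just
an example of the pattern):

  // Before: relies on the pre-standard __FUNCTION__ extension.
  DEBUG ((DEBUG_INFO, "%a: validating system RAM\n", __FUNCTION__));

  // After: uses the identifier standardized in C99.
  DEBUG ((DEBUG_INFO, "%a: validating system RAM\n", __func__));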
Signed-off-by: Rebecca Cran <rebecca@bsdio.com>
Reviewed-by: Michael D Kinney <michael.d.kinney@intel.com>
Reviewed-by: Ard Biesheuvel <ardb@kernel.org>
Reviewed-by: Sunil V L <sunilvl@ventanamicro.com>
This patch substitutes the macros that were renamed in the second
patch with the new, shared alignment macros.
Signed-off-by: Gerd Hoffmann <kraxel@redhat.com>
Reviewed-by: Michael D Kinney <michael.d.kinney@intel.com>
Reviewed-by: Jiewen Yao <Jiewen.yao@intel.com>
Acked-by: Tom Lendacky <thomas.lendacky@amd.com>
This patch is a preparation for the patches that follow. The
subsequent patches will introduce and integrate new alignment-related
macros, which collide with existing definitions in OvmfPkg.
Temporarily rename them to avoid build failures, until they can be
substituted with the new, shared definitions.
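For instance (the macro below is purely illustrative, not one of the
actual definitions involved), a module-local helper such as

  #define IS_ALIGNED(Value, Alignment)  (((Value) & ((Alignment) - 1U)) == 0U)

would collide with a same-named macro arriving in the shared headers, so
the local copy is given a temporary, module-prefixed name and is dropped
once its call sites switch to the shared definition.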
Signed-off-by: Gerd Hoffmann <kraxel@redhat.com>
Reviewed-by: Michael D Kinney <michael.d.kinney@intel.com>
Reviewed-by: Jiewen Yao <Jiewen.yao@intel.com>
Acked-by: Tom Lendacky <thomas.lendacky@amd.com>
BZ: https://bugzilla.tianocore.org/show_bug.cgi?id=4123
The APIs defined in CcExitLib.h are given the CcExit prefix. This makes
the APIs' names more meaningful.
This change impacts OvmfPkg/UefiCpuPkg.
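For example (a hedged illustration of the naming pattern, built on the
existing VmgExit() interface):

  //
  // Before: Status = VmgExit (Ghcb, SVM_EXIT_SNP_PAGE_STATE_CHANGE, 0, 0);
  //
  // After: the same call, with the API now carrying the CcExit prefix.
  //
  Status = CcExitVmgExit (Ghcb, SVM_EXIT_SNP_PAGE_STATE_CHANGE, 0, 0);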
Cc: Eric Dong <eric.dong@intel.com>
Cc: Ray Ni <ray.ni@intel.com>
Cc: Brijesh Singh <brijesh.singh@amd.com>
Cc: Erdem Aktas <erdemaktas@google.com>
Cc: Gerd Hoffmann <kraxel@redhat.com>
Cc: James Bottomley <jejb@linux.ibm.com>
Cc: Jiewen Yao <jiewen.yao@intel.com>
Cc: Tom Lendacky <thomas.lendacky@amd.com>
Reviewed-by: Jiewen Yao <jiewen.yao@intel.com>
Reviewed-by: Ray Ni <ray.ni@intel.com>
Signed-off-by: Min Xu <min.m.xu@intel.com>
BZ: https://bugzilla.tianocore.org/show_bug.cgi?id=4123
VmgExitLib was originally designed to provide interfaces to support the #VC
handler and to issue the VMGEXIT instruction. After TDVF (which enables the
TDX feature in OVMF) was introduced, this library was updated to support
#VE as well. The name VmgExitLib therefore no longer reflects what the
library does.
This patch renames VmgExitLib to CcExitLib (Cc meaning Confidential
Computing). This is a simple renaming with no logic changes.
After the renaming, all VmgExitLib-related code is updated to use
CcExitLib. These changes are in OvmfPkg/UefiCpuPkg/UefiPayloadPkg.
Cc: Guo Dong <guo.dong@intel.com>
Cc: Sean Rhodes <sean@starlabs.systems>
Cc: James Lu <james.lu@intel.com>
Cc: Gua Guo <gua.guo@intel.com>
Cc: Eric Dong <eric.dong@intel.com>
Cc: Ray Ni <ray.ni@intel.com>
Cc: Brijesh Singh <brijesh.singh@amd.com>
Cc: Erdem Aktas <erdemaktas@google.com>
Cc: Gerd Hoffmann <kraxel@redhat.com>
Cc: James Bottomley <jejb@linux.ibm.com>
Cc: Jiewen Yao <jiewen.yao@intel.com>
Cc: Tom Lendacky <thomas.lendacky@amd.com>
Reviewed-by: James Lu <james.lu@intel.com>
Reviewed-by: Gua Guo <gua.guo@intel.com>
Reviewed-by: Jiewen Yao <jiewen.yao@intel.com>
Reviewed-by: Ray Ni <ray.ni@intel.com>
Signed-off-by: Min Xu <min.m.xu@intel.com>
BZ: https://bugzilla.tianocore.org/show_bug.cgi?id=3275
The Virtual Machine Privilege Level (VMPL) feature in the SEV-SNP
architecture allows a guest VM to divide its address space into four
levels. The levels can be used to provide hardware-isolated abstraction
layers within a VM. VMPL0 is the most privileged level and VMPL3 is the
least privileged. Certain operations must be done by the VMPL0 software,
such as:
* Validate or invalidate memory range (PVALIDATE instruction)
* Allocate VMSA page (RMPADJUST instruction when VMSA=1)
The initial SEV-SNP support assumes that the guest is running at VMPL0.
Add a function to MemEncryptSevLib that can be used to check whether the
guest is booted at VMPL0.
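A minimal sketch of how such a check can work, assuming the BaseLib
RMPADJUST wrapper (the function and parameter names here are illustrative):

  STATIC
  BOOLEAN
  SevSnpIsVmpl0 (
    IN VOID  *ScratchPage    // illustrative: a page-aligned buffer
    )
  {
    UINT32  Status;

    //
    // RMPADJUST can only modify the permissions of a less privileged
    // VMPL.  Requesting VMPL1 permissions (RDX value 1: target VMPL1,
    // permission mask 0, VMSA=0) for a 4KB page therefore succeeds only
    // when the guest itself is running at VMPL0.
    //
    Status = AsmRmpAdjust ((UINT64)(UINTN)ScratchPage, 0, 1);

    return (Status == 0);
  }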
Cc: Michael Roth <michael.roth@amd.com>
Cc: James Bottomley <jejb@linux.ibm.com>
Cc: Min Xu <min.m.xu@intel.com>
Cc: Jiewen Yao <jiewen.yao@intel.com>
Cc: Tom Lendacky <thomas.lendacky@amd.com>
Cc: Jordan Justen <jordan.l.justen@intel.com>
Cc: Ard Biesheuvel <ardb+tianocore@kernel.org>
Cc: Erdem Aktas <erdemaktas@google.com>
Cc: Gerd Hoffmann <kraxel@redhat.com>
Acked-by: Gerd Hoffmann <kraxel@redhat.com>
Signed-off-by: Brijesh Singh <brijesh.singh@amd.com>
BZ: https://bugzilla.tianocore.org/show_bug.cgi?id=3275
Many of the integrity guarantees of SEV-SNP are enforced through the
Reverse Map Table (RMP). Each RMP entry contains the GPA at which a
particular page of DRAM should be mapped. The guest can request the
hypervisor to add pages to the RMP table via the Page State Change VMGEXIT
defined in sections 2.5.1 and 4.1.6 of the GHCB specification. Inside each RMP
entry is a Validated flag; this flag is automatically cleared to 0 by the
CPU hardware when a new RMP entry is created for a guest. Each VM page
can be either validated or invalidated, as indicated by the Validated
flag in the RMP entry. Memory access to a private page that is not
validated generates a #VC. A VM can use the PVALIDATE instruction to
validate the private page before using it.
During guest creation, the boot ROM memory is pre-validated by the AMD SEV
firmware. MemEncryptSevSnpValidateSystemRam() can be called during the SEC
and PEI phases to validate the detected system RAM.
One of the fields in the Page State Change NAE is the RMP page size. The
page size input parameter indicates whether a 4KB or 2MB page should be
used when adding the RMP entry. During validation, when possible,
MemEncryptSevSnpValidateSystemRam() will use a 2MB entry. A hypervisor
backing the memory may choose to use a different page size in the RMP
entry. In those cases, the PVALIDATE instruction should return
SIZEMISMATCH. If a SIZEMISMATCH is detected, validate all 512 4KB pages
constituting the 2MB region.
Upon completion, the PVALIDATE instruction sets rFLAGS.CF to 0 if the
instruction changed the RMP entry and to 1 if it did not. rFLAGS.CF will
be 1 only when a memory region has already been validated. Memory should
not be validated twice, as that could lead to a security compromise, so
if double validation is detected, terminate the boot.
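Sketching the flow described above (a hedged outline assuming the BaseLib
PVALIDATE wrapper; SnpPageStateFailureTerminate() is an illustrative name
for the terminate path):

  Ret = AsmPvalidate (PvalidatePageSize2MB, TRUE, Address);
  if (Ret == PVALIDATE_RET_SIZE_MISMATCH) {
    //
    // The hypervisor backed this range with 4KB RMP entries, so validate
    // each of the 512 4KB pages in the 2MB region instead.
    //
    for (Offset = 0; Offset < SIZE_2MB; Offset += SIZE_4KB) {
      Ret = AsmPvalidate (PvalidatePageSize4K, TRUE, Address + Offset);
      if (Ret != PVALIDATE_RET_SUCCESS) {
        break;
      }
    }
  }

  if (Ret == PVALIDATE_RET_NO_RMPUPDATE) {
    //
    // rFlags.CF was set: the page was already validated.  Treat the
    // double validation as a potential attack and stop the boot.
    //
    SnpPageStateFailureTerminate ();
  }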
Cc: Michael Roth <michael.roth@amd.com>
Cc: James Bottomley <jejb@linux.ibm.com>
Cc: Min Xu <min.m.xu@intel.com>
Cc: Jiewen Yao <jiewen.yao@intel.com>
Cc: Tom Lendacky <thomas.lendacky@amd.com>
Cc: Jordan Justen <jordan.l.justen@intel.com>
Cc: Ard Biesheuvel <ardb+tianocore@kernel.org>
Cc: Erdem Aktas <erdemaktas@google.com>
Cc: Gerd Hoffmann <kraxel@redhat.com>
Acked-by: Jiewen Yao <Jiewen.yao@intel.com>
Acked-by: Gerd Hoffmann <kraxel@redhat.com>
Signed-off-by: Brijesh Singh <brijesh.singh@amd.com>