BZ: https://bugzilla.tianocore.org/show_bug.cgi?id=4654
The SVSM specification relies on a specific register calling convention to
hold the parameters that are associated with the SVSM request. The SVSM is
invoked by requesting the hypervisor to run the VMPL0 VMSA of the guest
using the GHCB MSR Protocol or a GHCB NAE event.
Create a new VMGEXIT invocation that adheres to this calling
convention, loading the SVSM function arguments into the proper
registers before issuing the VMGEXIT instruction. On return, perform
the atomic exchange on the SVSM call pending value as specified in the
SVSM specification.
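As an illustration only (not the code added by this patch), the completion
check the SVSM specification describes could be sketched in C as follows;
the SVSM_CALLING_AREA layout and the SvsmCallPending field name are
assumptions made for the example:
  #include <stdint.h>

  typedef struct {
    volatile uint8_t SvsmCallPending;   /* assumed field name for the example */
    /* ... remainder of the calling area ... */
  } SVSM_CALLING_AREA;

  /* After VMGEXIT returns, atomically read-and-clear the pending byte:
     a previous value of 0 means the SVSM consumed the request. */
  static int
  SvsmCallCompleted (SVSM_CALLING_AREA *CallingArea)
  {
    uint8_t  Pending;

    Pending = __atomic_exchange_n (&CallingArea->SvsmCallPending, 0,
                                   __ATOMIC_SEQ_CST);
    return Pending == 0;
  }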
Cc: Liming Gao <gaoliming@byosoft.com.cn>
Cc: Michael D Kinney <michael.d.kinney@intel.com>
Cc: Zhiguang Liu <zhiguang.liu@intel.com>
Acked-by: Gerd Hoffmann <kraxel@redhat.com>
Signed-off-by: Tom Lendacky <thomas.lendacky@amd.com>
REF: https://bugzilla.tianocore.org/show_bug.cgi?id=4696
Per the [GHCI] spec, TDVF should clear BIT5 (RBP) in the mask.
Reference:
[GHCI]: TDX Guest-Host-Communication Interface v1.5
https://cdrdv2.intel.com/v1/dl/getContent/726792
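For illustration, the fix amounts to masking out one bit; BIT5 is the
standard edk2 bit macro from Base.h, and the helper below is hypothetical:
  #include <Base.h>

  /* Clear the RBP bit (BIT5) from a GPR mask. */
  UINT64
  ClearRbpBit (
    IN UINT64  Mask
    )
  {
    return Mask & ~(UINT64)BIT5;
  }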
Cc: Liming Gao <gaoliming@byosoft.com.cn>
Cc: Michael D Kinney <michael.d.kinney@intel.com>
Cc: Erdem Aktas <erdemaktas@google.com>
Cc: James Bottomley <jejb@linux.ibm.com>
Cc: Jiewen Yao <jiewen.yao@intel.com>
Cc: Min Xu <min.m.xu@intel.com>
Cc: Tom Lendacky <thomas.lendacky@amd.com>
Cc: Michael Roth <michael.roth@amd.com>
Cc: Isaku Yamahata <isaku.yamahata@intel.com>
Signed-off-by: Ceping Sun <cepingx.sun@intel.com>
Reviewed-by: Liming Gao <gaoliming@byosoft.com.cn>
Add IoCsrRead8, IoCsrRead16, IoCsrRead32, IoCsrRead64, IoCsrWrite8,
IoCsrWrite16, IoCsrWrite32, and IoCsrWrite64 to operate on the IOCSR
registers of the LoongArch architecture.
BZ: https://bugzilla.tianocore.org/show_bug.cgi?id=4584
Cc: Michael D Kinney <michael.d.kinney@intel.com>
Cc: Liming Gao <gaoliming@byosoft.com.cn>
Cc: Zhiguang Liu <zhiguang.liu@intel.com>
Signed-off-by: Chao Li <lichao@loongson.cn>
Acked-by: Michael D Kinney <michael.d.kinney@intel.com>
Reviewed-by: Liming Gao <gaoliming@byosoft.com.cn>
Add CsrRead, CsrWrite and CsrXChg functions for LoongArch, and use them
to operate on the CSR registers of the LoongArch architecture.
BZ: https://bugzilla.tianocore.org/show_bug.cgi?id=4584
Cc: Michael D Kinney <michael.d.kinney@intel.com>
Cc: Liming Gao <gaoliming@byosoft.com.cn>
Cc: Zhiguang Liu <zhiguang.liu@intel.com>
Signed-off-by: Chao Li <lichao@loongson.cn>
Co-authored-by: Bibo Mao <maobibo@loongson.cn>
Acked-by: Michael D Kinney <michael.d.kinney@intel.com>
Reviewed-by: Liming Gao <gaoliming@byosoft.com.cn>
Add LoongArch AsmCpucfg function and Cpucfg definitions.
Also added Include/Register/LoongArch64/Cpucfg.h to IgnoreFiles of
EccCheck.
BZ: https://bugzilla.tianocore.org/show_bug.cgi?id=4584
Cc: Michael D Kinney <michael.d.kinney@intel.com>
Cc: Liming Gao <gaoliming@byosoft.com.cn>
Cc: Zhiguang Liu <zhiguang.liu@intel.com>
Signed-off-by: Chao Li <lichao@loongson.cn>
Acked-by: Michael D Kinney <michael.d.kinney@intel.com>
Reviewed-by: Liming Gao <gaoliming@byosoft.com.cn>
Add the LoongArch local interrupt function set, which is used to
enable or disable local interrupts while the global interrupt is
enabled.
BZ: https://bugzilla.tianocore.org/show_bug.cgi?id=4584
Cc: Michael D Kinney <michael.d.kinney@intel.com>
Cc: Liming Gao <gaoliming@byosoft.com.cn>
Cc: Zhiguang Liu <zhiguang.liu@intel.com>
Signed-off-by: Chao Li <lichao@loongson.cn>
Acked-by: Michael D Kinney <michael.d.kinney@intel.com>
Reviewed-by: Liming Gao <gaoliming@byosoft.com.cn>
Add SetExceptionBaseAddress and SetTlbRebaseAddress functions
for LoongArch64.
BZ: https://bugzilla.tianocore.org/show_bug.cgi?id=4584
Cc: Michael D Kinney <michael.d.kinney@intel.com>
Cc: Liming Gao <gaoliming@byosoft.com.cn>
Cc: Zhiguang Liu <zhiguang.liu@intel.com>
Signed-off-by: Chao Li <lichao@loongson.cn>
Acked-by: Michael D Kinney <michael.d.kinney@intel.com>
Reviewed-by: Liming Gao <gaoliming@byosoft.com.cn>
In the call sequence
HandOffToDxeCore()->SwitchStack(DxeCoreEntryPoint)->
InternalSwitchStack()->LongJump(), the variable HobList.Raw
is passed (from *Context1 to register a0) to
DxeMain() in parameter *HobStart.
However, LongJump() overrides register a0 with a1 (-1) since commit
ea628f28e5 ("RISCV: Fix InternalLongJump to return correct value"),
which causes a hang.
Fix this by replacing the LongJump() call with a new
InternalSwitchStackAsm() that passes the address held in register s0 to
register a0 (mirroring the solution in
MdePkg/Library/BaseLib/AArch64/SwitchStack.S).
Signed-off-by: Yang Wang <wangyang@bosc.ac.cn>
Cc: Bamvor Jian ZHANG <zhangjian@bosc.ac.cn>
Cc: Andrei Warkentin <andrei.warkentin@intel.com>
Cc: Liming Gao <gaoliming@byosoft.com.cn>
Cc: Michael D Kinney <michael.d.kinney@intel.com>
Cc: Sunil V L <sunilvl@ventanamicro.com>
Cc: Zhiguang Liu <zhiguang.liu@intel.com>
Reviewed-by: Ran Wang <wangran@bosc.ac.cn>
Reviewed-by: Andrei Warkentin <andrei.warkentin@intel.com>
stimecmp is a CSR available only when the Sstc extension is supported
by the platform. This register can be used to set the timer interrupt
directly in S-mode instead of going through an SBI call. Add a function
to update this register.
Cc: Michael D Kinney <michael.d.kinney@intel.com>
Cc: Liming Gao <gaoliming@byosoft.com.cn>
Cc: Zhiguang Liu <zhiguang.liu@intel.com>
Cc: Andrei Warkentin <andrei.warkentin@intel.com>
Signed-off-by: Sunil V L <sunilvl@ventanamicro.com>
Reviewed-by: Andrei Warkentin <andrei.warkentin@intel.com>
Implement Cache Management Operations (CMO) defined by
RISC-V spec https://github.com/riscv/riscv-CMOs.
Notes:
1. CMO only supports block-based operations, meaning cache
   flush/invalidate/clean operations are not available for an entire
   range in one step. In that case we fall back on the fence.i
   instruction (a sketch of the block loop follows the test notes).
2. Operations are implemented using raw opcodes to keep them compiler
   independent; binutils 2.39+ supports the CMO instructions.
Test:
1. Ensured the correct instructions are reflected in the generated
   assembly.
2. QEMU implements basic support for CMO operations in that it allows
   the instructions to execute without exceptions. Verified it works
   properly in that sense.
3. SG2042Pkg implements CMO-like instructions. It was verified that
   CpuFlushCpuDataCache works fine. This more or less confirms that
   the framework is sound.
4. TODO: Once silicon with the exact instructions is available, we will
   verify this further.
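A rough C sketch of the block-based pattern described in note 1; the names
here are placeholders rather than the edk2 API, and the per-block primitive
is assumed to be emitted as a raw opcode elsewhere:
  #include <stddef.h>
  #include <stdint.h>

  /* Placeholder for the opcode-encoded per-block cache clean operation. */
  extern void RiscVCpuCacheCleanBlock (uintptr_t Address);

  void
  CleanDataCacheRange (uintptr_t Start, size_t Length, size_t BlockSize)
  {
    uintptr_t Addr = Start & ~(uintptr_t)(BlockSize - 1);  /* align down */
    uintptr_t End  = Start + Length;

    while (Addr < End) {
      RiscVCpuCacheCleanBlock (Addr);   /* one cache block at a time */
      Addr += BlockSize;
    }
  }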
Cc: Michael D Kinney <michael.d.kinney@intel.com>
Cc: Liming Gao <gaoliming@byosoft.com.cn>
Cc: Zhiguang Liu <zhiguang.liu@intel.com>
Cc: Sunil V L <sunilvl@ventanamicro.com>
Cc: Daniel Schaefer <git@danielschaefer.me>
Cc: Laszlo Ersek <lersek@redhat.com>
Cc: Pedro Falcato <pedro.falcato@gmail.com>
Signed-off-by: Dhaval Sharma <dhaval@rivosinc.com>
Reviewed-by: Laszlo Ersek <lersek@redhat.com>
Reviewed-by: Sunil V L <sunilvl@...>
Reviewed-by: Jingyu Li <jingyu.li01@...>
There are different ways to manage cache on RISC-V Processors.
One way is to use the fence instruction. Another way is to use the
CPU-specific cache management operation instructions ratified in the
RISC-V ISA specification, to be introduced in future patches. The
current method is fence-instruction based, so rename the function
accordingly to make that clear.
Cc: Michael D Kinney <michael.d.kinney@intel.com>
Cc: Liming Gao <gaoliming@byosoft.com.cn>
Cc: Zhiguang Liu <zhiguang.liu@intel.com>
Cc: Sunil V L <sunilvl@ventanamicro.com>
Cc: Daniel Schaefer <git@danielschaefer.me>
Cc: Laszlo Ersek <lersek@redhat.com>
Cc: Pedro Falcato <pedro.falcato@gmail.com>
Signed-off-by: Dhaval Sharma <dhaval@rivosinc.com>
Reviewed-by: Laszlo Ersek <lersek@redhat.com>
REF: https://bugzilla.tianocore.org/show_bug.cgi?id=4609
The current CalculateCrc16Ansi implementation does the following:
1) Invert the passed checksum
2) Calculate the new checksum by going through data and using the
lookup table
3) Invert it back again
This mirrored my design for CalculateCrc32c, where 0 is
passed as the initial checksum and the result is inverted at the end.
However, CRC16 does not invert the checksum on input and output,
so this is incorrect.
Fix the problem by not inverting the input or output checksums.
Callers should now pass CRC16ANSI_INIT as the initial value instead of
"0". This is a breaking change.
This problem was found out-of-list when older ext4 filesystems
(which use crc16 checksums) failed to mount with "corruption".
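A hedged usage sketch of the updated API (the CalculateCrc16Ansi prototype
used below reflects my reading of BaseLib.h and should be verified there):
  #include <Base.h>
  #include <Library/BaseLib.h>

  UINT16
  ChecksumExtent (
    IN CONST VOID  *Buffer,
    IN UINTN       Length
    )
  {
    /* Seed with CRC16ANSI_INIT now that the implementation no longer
       inverts the input or output checksum. */
    return CalculateCrc16Ansi (Buffer, Length, CRC16ANSI_INIT);
  }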
Cc: Liming Gao <gaoliming@byosoft.com.cn>
Cc: Michael D Kinney <michael.d.kinney@intel.com>
Cc: Zhiguang Liu <zhiguang.liu@intel.com>
Signed-off-by: Pedro Falcato <pedro.falcato@gmail.com>
Reviewed-by: Michael D Kinney <michael.d.kinney@intel.com>
REF: https://bugzilla.tianocore.org/show_bug.cgi?id=4572
According to section 3.2 of the [GHCI] spec, if the return status
of MapGPA is "TDG.VP.VMCALL_RETRY", TD must retry this operation
for the pages in the region starting at the GPA specified in R11.
Currently, TDVF does not handle the retry result and always clears
R11 on an unsuccessful return status. To handle MapGPA retries,
TdVmcall needs to output the value of R11 when the return status is
unsuccessful.
Reference:
[GHCI]: TDX Guest-Host-Communication Interface v1.0
https://cdrdv2.intel.com/v1/dl/getContent/726790
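For illustration, a caller could handle the retry status roughly as below;
TdVmCallMapGpa, the TDG_VP_VMCALL_RETRY value and the output parameter are
hypothetical names used only for this sketch:
  #include <Base.h>

  #define TDG_VP_VMCALL_RETRY  1ULL   /* illustrative value only */

  /* Hypothetical wrapper around the MapGPA TDVMCALL. */
  extern UINT64 TdVmCallMapGpa (UINT64 Gpa, UINT64 Length, UINT64 *FailedGpa);

  UINT64
  MapGpaWithRetry (
    IN UINT64  Gpa,
    IN UINT64  Length
    )
  {
    UINT64  Status;
    UINT64  FailedGpa;

    do {
      Status = TdVmCallMapGpa (Gpa, Length, &FailedGpa);
      if (Status == TDG_VP_VMCALL_RETRY) {
        /* Resume at the GPA the VMM reported back in R11. */
        Length -= (FailedGpa - Gpa);
        Gpa     = FailedGpa;
      }
    } while (Status == TDG_VP_VMCALL_RETRY);

    return Status;
  }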
Cc: Liming Gao <gaoliming@byosoft.com.cn>
Cc: Michael D Kinney <michael.d.kinney@intel.com>
Cc: Erdem Aktas <erdemaktas@google.com>
Cc: James Bottomley <jejb@linux.ibm.com>
Cc: Min Xu <min.m.xu@intel.com>
Cc: Tom Lendacky <thomas.lendacky@amd.com>
Cc: Michael Roth <michael.roth@amd.com>
Acked-by: Gerd Hoffmann <kraxel@redhat.com>
Reviewed-by: Jiewen Yao <jiewen.yao@intel.com>
Reviewed-by: Liming Gao <gaoliming@byosoft.com.cn>
Signed-off-by: Ceping Sun <cepingx.sun@intel.com>
The ARM implementation of InternalLongJump always returned Value
unchanged - but it is not supposed to ever return 0. Add a test to
prevent that, and return 1 if Value is 0 - as is already done for
AArch64.
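For reference, the SetJump()/LongJump() contract this enforces can be
exercised as in the hedged sketch below (only SetJump(), LongJump() and
BASE_LIBRARY_JUMP_BUFFER are the BaseLib API; the rest is invented):
  #include <Base.h>
  #include <Library/BaseLib.h>

  STATIC BASE_LIBRARY_JUMP_BUFFER  mJumpBuffer;

  VOID
  Example (
    VOID
    )
  {
    UINTN  Value;

    Value = SetJump (&mJumpBuffer);
    if (Value == 0) {
      /* Direct return from SetJump(); later, jump back: */
      LongJump (&mJumpBuffer, 1);
    } else {
      /* Arrived here via LongJump(); Value is the value passed to
         LongJump() and is guaranteed by the implementation to be
         non-zero. */
    }
  }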
Signed-off-by: Leif Lindholm <quic_llindhol@quicinc.com>
Cc: Ard Biesheuvel <ardb+tianocore@kernel.org>
Cc: Sami Mujawar <sami.mujawar@arm.com>
Reviewed-by: Sami Mujawar <sami.mujawar@arm.com>
Both in SetJump and in InternalLongJump, 32-bit w register views were
used for the UINTN return value. In SetJump, this did not cause errors;
it was only counterintuitive. But in InternalLongJump, it meant the top
32 bits of Value were stripped off.
Change all of these to use the 64-bit x register views.
Signed-off-by: Leif Lindholm <quic_llindhol@quicinc.com>
Reanimated-by: Andrei Warkentin <andrei.warkentin@intel.com>
Cc: Ard Biesheuvel <ardb+tianocore@kernel.org>
Cc: Sami Mujawar <sami.mujawar@arm.com>
Reviewed-by: Sami Mujawar <sami.mujawar@arm.com>
Reviewed-by: Andrei Warkentin <andrei.warkentin@intel.com>
There may be architectures on which there are benefits to
eor r0, r0(, r0)
but ARM was never one of them. Change to the more readable
mov r0, #0
instead.
Signed-off-by: Leif Lindholm <quic_llindhol@quicinc.com>
Cc: Ard Biesheuvel <ardb+tianocore@kernel.org>
Cc: Sami Mujawar <sami.mujawar@arm.com>
Reviewed-by: Sami Mujawar <sami.mujawar@arm.com>
The SetJump comment header states that:
If JumpBuffer is NULL, then ASSERT().
However, this was not being done.
Add a call to InternalAssertJumpBuffer.
Signed-off-by: Leif Lindholm <quic_llindhol@quicinc.com>
Cc: Ard Biesheuvel <ardb+tianocore@kernel.org>
Cc: Sami Mujawar <sami.mujawar@arm.com>
Reviewed-by: Sami Mujawar <sami.mujawar@arm.com>
Drop redundant comment about IPF (clearly copied across from now deleted
code).
Also change
"Instead is resumes execution" ->
"Instead it resumes execution"
Signed-off-by: Leif Lindholm <quic_llindhol@quicinc.com>
Cc: Ard Biesheuvel <ardb+tianocore@kernel.org>
Cc: Sami Mujawar <sami.mujawar@arm.com>
Reviewed-by: Sami Mujawar <sami.mujawar@arm.com>
InternalLongJump was not returning the 2nd parameter passed
to LongJump (Value) as the return value from SetJump.
Seen with code compiled with -Os, where a LongJump (Buffer, -1)
somehow translated to SetJump returning 0...
Cc: Yong Li <yong.li@intel.com>
Cc: Sunil V L <sunilvl@ventanamicro.com>
Cc: Tuan Phan <tphan@ventanamicro.com>
Cc: Daniel Schaefer <git@danielschaefer.me>
Signed-off-by: Andrei Warkentin <andrei.warkentin@intel.com>
Reviewed-by: Sunil V L <sunilvl@ventanamicro.com>
Add an API to retrieve satp register value.
Signed-off-by: Tuan Phan <tphan@ventanamicro.com>
Reviewed-by: Andrei Warkentin <andrei.warkentin@intel.com>
Reviewed-by: Sunil V L <sunilvl@ventanamicro.com>
Implement SpeculationBarrier using fence instructions, which provide
finer-grained memory ordering.
Perform Data Barrier in RiscV: fence rw,rw
Perform Instruction Barrier in RiscV: fence.i; fence r,r
More detail is in Appendix A: RVWMO Explanatory Material in
https://github.com/riscv/riscv-isa-manual
This API was first introduced for IA32 and X64 in the commits
d9f1cac51be83d841fdc
and for ARM and AArch64 in commit
c0959b4426
This commit adds the RiscV64 implementation, which will be used by the
variable service under Variable/RuntimeDxe.
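A minimal usage sketch (the bounds-check scenario is invented for the
example; only SpeculationBarrier() itself is the BaseLib API):
  #include <Base.h>
  #include <Library/BaseLib.h>

  UINT8
  ReadCommBufferByte (
    IN CONST UINT8  *CommBuffer,
    IN UINTN        Index,
    IN UINTN        Size
    )
  {
    if (Index >= Size) {
      return 0;
    }
    /* Ensure the bounds check above completes before the dependent load,
       so it cannot be bypassed speculatively. */
    SpeculationBarrier ();
    return CommBuffer[Index];
  }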
Cc: Andrei Warkentin <andrei.warkentin@intel.com>
Cc: Evan Chai <evan.chai@intel.com>
Cc: Sunil V L <sunilvl@ventanamicro.com>
Cc: Tuan Phan <tphan@ventanamicro.com>
Signed-off-by: Yong Li <yong.li@intel.com>
Reviewed-by: Sunil V L <sunilvl@ventanamicro.com>
Update BaseLib host-based unit test INF file to only list
VALID_ARCHITECTURES of IA32 and X64 to align with all other
host-based unit test INF files. The UnitTestFrameworkPkg only
provides build support of host-based unit tests to OS applications
for IA32 and X64.
Cc: Liming Gao <gaoliming@byosoft.com.cn>
Cc: Zhiguang Liu <zhiguang.liu@intel.com>
Signed-off-by: Michael D Kinney <michael.d.kinney@intel.com>
Reviewed-by: Liming Gao <gaoliming@byosoft.com.cn>
Reviewed-by: Oliver Smith-Denny <osde@linux.microsoft.com>
Fixes CodeQL alerts for CWE-457:
https://cwe.mitre.org/data/definitions/457.html
Note that this change affects the actual return value of the
following functions. Their documentation states that if an integer
overflow occurs, MAX_UINTN is returned, but they were implemented to
actually return an undefined value from the stack.
This change makes the functions follow their descriptions. However,
this is technically different from what callers may have previously
expected. A usage sketch follows the function list below.
MdePkg/Library/BaseLib/String.c:
- StrDecimalToUintn()
- StrDecimalToUint64()
- StrHexToUintn()
- StrHexToUint64()
- AsciiStrDecimalToUintn()
- AsciiStrDecimalToUint64()
- AsciiStrHexToUintn()
- AsciiStrHexToUint64()
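A hedged sketch of how a caller can now rely on the documented overflow
behaviour (ParseDecimal is an invented helper; MAX_UINTN comes from Base.h):
  #include <Base.h>
  #include <Library/BaseLib.h>

  BOOLEAN
  ParseDecimal (
    IN  CONST CHAR16  *String,
    OUT UINTN         *Result
    )
  {
    UINTN  Value;

    Value = StrDecimalToUintn (String);
    if (Value == MAX_UINTN) {
      /* Either the string encoded MAX_UINTN itself or the conversion
         overflowed; treat both as failure for this example. */
      return FALSE;
    }

    *Result = Value;
    return TRUE;
  }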
Cc: Erich McMillan <emcmillan@microsoft.com>
Cc: Liming Gao <gaoliming@byosoft.com.cn>
Cc: Michael D Kinney <michael.d.kinney@intel.com>
Cc: Michael Kubacki <mikuback@linux.microsoft.com>
Cc: Zhiguang Liu <zhiguang.liu@intel.com>
Co-authored-by: Erich McMillan <emcmillan@microsoft.com>
Signed-off-by: Michael Kubacki <michael.kubacki@microsoft.com>
Reviewed-by: Liming Gao <gaoliming@byosoft.com.cn>
Reviewed-by: Oliver Smith-Denny <osd@smith-denny.com>
Add the BTI instructions and the associated note to make the AArch64 asm
objects compatible with BTI enforcement.
Signed-off-by: Ard Biesheuvel <ardb@kernel.org>
Reviewed-by: Leif Lindholm <quic_llindhol@quicinc.com>
Reviewed-by: Oliver Smith-Denny <osd@smith-denny.com>
Currently, the AArch64 implementation of LongJump() avoids using the RET
instruction to perform the jump, even though the target address is held
in the link register X30, as the nature of a long jump implies that the
ordinary return address prediction machinery will not be able to make a
correct prediction.
However, LongJump() is rarely used, and the return stack will be out of
sync in any case, so this optimization has little value in practice, and
given that indirect calls other than function returns require a BTI
landing pad at the call site, this optimization is not compatible with
BTI. So let's just use RET instead.
Signed-off-by: Ard Biesheuvel <ardb@kernel.org>
Reviewed-by: Leif Lindholm <quic_llindhol@quicinc.com>
Reviewed-by: Oliver Smith-Denny <osd@smith-denny.com>
__FUNCTION__ is a pre-standard extension that gcc and Visual C++ among
others support, while __func__ was standardized in C99.
Since it's more standard, replace __FUNCTION__ with __func__ throughout
MdePkg.
Visual Studio versions before VS 2015 don't support __func__ and so
will fail to compile. A workaround is to define __func__ as
__FUNCTION__:
#define __func__ __FUNCTION__
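Typical usage after the conversion, printing the function name with the
%a (ASCII) format specifier (assuming Library/DebugLib.h is included):
  DEBUG ((DEBUG_INFO, "%a: entry\n", __func__));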
Signed-off-by: Rebecca Cran <rebecca@quicinc.com>
Reviewed-by: Michael D Kinney <michael.d.kinney@intel.com>
Reviewed-by: Sunil V L <sunilvl@ventanamicro.com>
InternalSwitchStack may be called with a TPL high
enough for a DebugLib implementation to assert.
Other arch implementations don't log either.
Cc: Daniel Schaefer <git@danielschaefer.me>
Cc: Michael D Kinney <michael.d.kinney@intel.com>
Cc: Liming Gao <gaoliming@byosoft.com.cn>
Cc: Zhiguang Liu <zhiguang.liu@intel.com>
Reviewed-by: Sunil V L <sunilvl@ventanamicro.com>
Signed-off-by: Andrei Warkentin <andrei.warkentin@intel.com>
REF: https://bugzilla.tianocore.org/show_bug.cgi?id=4076
A few of the basic helper functions required for any
RISC-V CPU were added in edk2-platforms. To support
QEMU virt, they need to be added to BaseLib.
Cc: Michael D Kinney <michael.d.kinney@intel.com>
Cc: Liming Gao <gaoliming@byosoft.com.cn>
Cc: Zhiguang Liu <zhiguang.liu@intel.com>
Cc: Daniel Schaefer <git@danielschaefer.me>
Signed-off-by: Sunil V L <sunilvl@ventanamicro.com>
Acked-by: Abner Chang <abner.chang@amd.com>
Reviewed-by: Andrei Warkentin <andrei.warkentin@intel.com>
Reviewed-by: Michael D Kinney <michael.d.kinney@intel.com>
Remove the HOST_APPLICATION limitation from UnitTestHostBaseLib so that
this library can be used as BaseLib by the Emulator.
Also, add some missing files.
Reviewed-by: Michael D Kinney <michael.d.kinney@intel.com>
Cc: Liming Gao <gaoliming@byosoft.com.cn>
Signed-off-by: Ray Ni <ray.ni@intel.com>
Signed-off-by: Zhiguang Liu <zhiguang.liu@intel.com>
There was an OOB access in the *StrHexTo* functions when passed strings
like "XDEADBEEF".
OpenCore folks established an ASAN-equipped project to fuzz Ext4Dxe,
which was able to catch these (mostly harmless) issues.
Cc: Vitaly Cheptsov <vit9696@protonmail.com>
Cc: Marvin Häuser <mhaeuser@posteo.de>
Cc: Michael D Kinney <michael.d.kinney@intel.com>
Cc: Liming Gao <gaoliming@byosoft.com.cn>
Cc: Zhiguang Liu <zhiguang.liu@intel.com>
Signed-off-by: Pedro Falcato <pedro.falcato@gmail.com>
Acked-by: Michael D Kinney <michael.d.kinney@intel.com>
Reviewed-by: Jiewen Yao <Jiewen.yao@Intel.com>
Reviewed-by: Liming Gao <gaoliming@byosoft.com.cn>
BZ: https://bugzilla.tianocore.org/show_bug.cgi?id=3871
Add the CRC16-ANSI and CRC32C implementations previously found at
Features/Ext4Pkg/Ext4Dxe/Crc{16,32c}.c to BaseLib.
Cc: Michael D Kinney <michael.d.kinney@intel.com>
Cc: Liming Gao <gaoliming@byosoft.com.cn>
Cc: Zhiguang Liu <zhiguang.liu@intel.com>
Signed-off-by: Pedro Falcato <pedro.falcato@gmail.com>
Reviewed-by: Liming Gao <gaoliming@byosoft.com.cn>
RVCT is obsolete and no longer used.
Remove support for it.
Signed-off-by: Rebecca Cran <quic_rcran@quicinc.com>
Reviewed-by: Ard Biesheuvel <ardb@kernel.org>
REF: https://bugzilla.tianocore.org/show_bug.cgi?id=3790
Replace hard-coded opcode bytes with the corresponding instructions.
The code changes have been verified with CompareBuild.py tool, which
can be used to compare the results of two different EDK II builds to
determine if they generate the same binaries.
(tool link: https://github.com/mdkinney/edk2/tree/sandbox/CompareBuild)
Signed-off-by: Jason Lou <yun.lou@intel.com>
Cc: Michael D Kinney <michael.d.kinney@intel.com>
Reviewed-by: Liming Gao <gaoliming@byosoft.com.cn>
Cc: Zhiguang Liu <zhiguang.liu@intel.com>
REF: https://bugzilla.tianocore.org/show_bug.cgi?id=3737
Apply uncrustify changes to .c/.h files in the MdePkg package
Cc: Andrew Fish <afish@apple.com>
Cc: Leif Lindholm <leif@nuviainc.com>
Cc: Michael D Kinney <michael.d.kinney@intel.com>
Signed-off-by: Michael Kubacki <michael.kubacki@microsoft.com>
Reviewed-by: Liming Gao <gaoliming@byosoft.com.cn>
REF: https://bugzilla.tianocore.org/show_bug.cgi?id=3760
Update all use of ', OPTIONAL' to ' OPTIONAL,' for function params.
Cc: Andrew Fish <afish@apple.com>
Cc: Leif Lindholm <leif@nuviainc.com>
Cc: Michael Kubacki <michael.kubacki@microsoft.com>
Signed-off-by: Michael D Kinney <michael.d.kinney@intel.com>
Reviewed-by: Liming Gao <gaoliming@byosoft.com.cn>
REF: https://bugzilla.tianocore.org/show_bug.cgi?id=3688
* Use DEBUG_LINE_NUMBER instead of __LINE__.
* Use DEBUG_EXPRESSION_STRING instead of #Expression.
Cc: Liming Gao <gaoliming@byosoft.com.cn>
Cc: Zhiguang Liu <zhiguang.liu@intel.com>
Cc: Michael Kubacki <michael.kubacki@microsoft.com>
Signed-off-by: Michael D Kinney <michael.d.kinney@intel.com>
Tested-by: Michael Kubacki <michael.kubacki@microsoft.com>
Reviewed-by: Liming Gao <gaoliming@byosoft.com.cn>
REF: https://bugzilla.tianocore.org/show_bug.cgi?id=3675
Add QuickSort function into BaseLib
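A usage sketch of the new API from memory; the exact QuickSort prototype
(notably the compare-function typedef and the one-element scratch buffer)
should be checked against BaseLib.h:
  #include <Base.h>
  #include <Library/BaseLib.h>

  STATIC
  INTN
  EFIAPI
  CompareUint32 (
    IN CONST VOID  *Left,
    IN CONST VOID  *Right
    )
  {
    UINT32  L = *(CONST UINT32 *)Left;
    UINT32  R = *(CONST UINT32 *)Right;

    return (L < R) ? -1 : ((L > R) ? 1 : 0);
  }

  VOID
  SortValues (
    IN OUT UINT32  *Values,
    IN     UINTN   Count
    )
  {
    UINT32  Scratch;   /* QuickSort needs a caller-supplied one-element buffer */

    QuickSort (Values, Count, sizeof (UINT32), CompareUint32, &Scratch);
  }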
Reviewed-by: Ray Ni <ray.ni@intel.com>
Cc: Michael D Kinney <michael.d.kinney@intel.com>
Reviewed-by: Liming Gao <gaoliming@byosoft.com.cn>
Cc: Zhiguang Liu <zhiguang.liu@intel.com>
Signed-off-by: IanX Kuo <ianx.kuo@intel.com>
BZ: https://bugzilla.tianocore.org/show_bug.cgi?id=3275
The RMPADJUST instruction will be used by the SEV-SNP guest to modify the
RMP permissions for a guest page. See AMD APM volume 3 for further
details.
Cc: James Bottomley <jejb@linux.ibm.com>
Cc: Min Xu <min.m.xu@intel.com>
Cc: Jiewen Yao <jiewen.yao@intel.com>
Cc: Tom Lendacky <thomas.lendacky@amd.com>
Cc: Jordan Justen <jordan.l.justen@intel.com>
Cc: Ard Biesheuvel <ardb+tianocore@kernel.org>
Cc: Laszlo Ersek <lersek@redhat.com>
Cc: Erdem Aktas <erdemaktas@google.com>
Cc: Michael D Kinney <michael.d.kinney@intel.com>
Cc: Liming Gao <gaoliming@byosoft.com.cn>
Cc: Zhiguang Liu <zhiguang.liu@intel.com>
Reviewed-by: Laszlo Ersek <lersek@redhat.com>
Reviewed-by: Liming Gao <gaoliming@byosoft.com.cn>
Signed-off-by: Tom Lendacky <thomas.lendacky@amd.com>
Signed-off-by: Brijesh Singh <brijesh.singh@amd.com>
Message-Id: <20210519181949.6574-9-brijesh.singh@amd.com>
BZ: https://bugzilla.tianocore.org/show_bug.cgi?id=3275
The PVALIDATE instruction validates or rescinds validation of a guest
page RMP entry. Upon completion, a return code is stored in EAX; the
rFLAGS bits OF, ZF, AF, PF and SF are set based on this return code. If
the instruction completed successfully, the rFLAGS bit CF indicates
whether the contents of the RMP entry were changed or not.
For more information about the instruction see AMD APM volume 3.
Cc: James Bottomley <jejb@linux.ibm.com>
Cc: Min Xu <min.m.xu@intel.com>
Cc: Jiewen Yao <jiewen.yao@intel.com>
Cc: Tom Lendacky <thomas.lendacky@amd.com>
Cc: Jordan Justen <jordan.l.justen@intel.com>
Cc: Ard Biesheuvel <ardb+tianocore@kernel.org>
Cc: Laszlo Ersek <lersek@redhat.com>
Cc: Erdem Aktas <erdemaktas@google.com>
Cc: Michael D Kinney <michael.d.kinney@intel.com>
Cc: Liming Gao <gaoliming@byosoft.com.cn>
Cc: Zhiguang Liu <zhiguang.liu@intel.com>
Reviewed-by: Liming Gao <gaoliming@byosoft.com.cn>
Reviewed-by: Laszlo Ersek <lersek@redhat.com>
Signed-off-by: Brijesh Singh <brijesh.singh@amd.com>
Message-Id: <20210519181949.6574-8-brijesh.singh@amd.com>
REF: https://bugzilla.tianocore.org/show_bug.cgi?id=3325
1. AsmReadMsr64() in X64/GccInlinePriv.c
AsmReadMsr64 can return an uninitialized value if FilterBeforeMsrRead
returns FALSE. This causes a build error with the CLANG toolchain.
2. AsmWriteMsr64() in X64/GccInlinePriv.c
In the case that FilterBeforeMsrWrite changes Value and returns TRUE,
the original Value, not the changed Value, is written to the MSR.
This behavior is different from that of AsmWriteMsr64() in
X64/WriteMsr64.c for the MSFT toolchain.
Signed-off-by: Takuto Naito <naitaku@gmail.com>
Cc: Michael D Kinney <michael.d.kinney@intel.com>
Cc: Liming Gao <gaoliming@byosoft.com.cn>
Cc: Zhiguang Liu <zhiguang.liu@intel.com>
Reviewed-by: Liming Gao <gaoliming@byosoft.com.cn>
*v2: refine the coding format.
https://bugzilla.tianocore.org/show_bug.cgi?id=3284
This patch adds support for the XSETBV instruction so that the
Extended Control Register (XCR) can be written.
Extended Control Register (XCR) reads are already supported via the
XGETBV instruction, added in commit:
9b3ca509ab
Cc: Michael D Kinney <michael.d.kinney@intel.com>
Cc: Liming Gao <gaoliming@byosoft.com.cn>
Cc: Zhiguang Liu <zhiguang.liu@intel.com>
Cc: Ni Ray <ray.ni@intel.com>
Cc: Yao Jiewen <jiewen.yao@intel.com>
Signed-off-by: Jiaxin Wu <Jiaxin.wu@intel.com>
Signed-off-by: Zhang Hongbin1 <hongbin1.zhang@intel.com>
Reviewed-by: Ray Ni <ray.ni@intel.com>
Reviewed-by: Liming Gao <gaoliming@byosoft.com.cn>
CpuPause() might allow the CPU to go into a lower power state
while we spin.
On X86, CpuPause() executes a PAUSE instruction which the Intel
and AMD specs describe as follows:
Intel:
"PAUSE: An additional function of the PAUSE instruction is to reduce
the power consumed by a processor while executing a spin loop. A
processor can execute a spin-wait loop extremely quickly, causing the
processor to consume a lot of power while it waits for the resource it
is spinning on to become available. Inserting a pause instruction in a
spin-wait loop greatly reduces the processor's power consumption."
AMD:
"PAUSE: Improves the performance of spin loops, by providing a hint to
the processor that the current code is in a spin loop. The processor
may use this to optimize power consumption while in the spin loop.
Architecturally, this instruction behaves like a NOP instruction."
On RISC-V and ARM64, CpuPause() executes a NOP, which is no worse than
the tight loop we have.
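The spin-wait pattern this enables, as a hedged sketch (the flag variable
and helper are invented; CpuPause() is the real BaseLib API):
  #include <Base.h>
  #include <Library/BaseLib.h>

  VOID
  WaitForFlag (
    IN CONST volatile BOOLEAN  *Flag
    )
  {
    while (!*Flag) {
      /* Hint that we are spinning so the CPU can reduce power
         (PAUSE on x86; a NOP on RISC-V and ARM64). */
      CpuPause ();
    }
  }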
Cc: Liming Gao <gaoliming@byosoft.com.cn>
Signed-off-by: Ankur Arora <ankur.a.arora@oracle.com>
Reviewed-by: Michael D Kinney <michael.d.kinney@intel.com>
Reviewed-by: Laszlo Ersek <lersek@redhat.com>
Reviewed-by: Liming Gao <gaoliming@byosoft.com.cn>