Commit Graph

55 Commits

Liming Gao a364928195 UefiCpuPkg PiSmmCpuDxeSmm: Update SmiEntry function to run at the same position
BZ: https://bugzilla.tianocore.org/show_bug.cgi?id=1191

Before commit e21e355e2c, the "jmp _SmiHandler" instruction was commented
out. In the code below, the address of ASM_PFX(CpuSmmDebugEntry) is moved
into rax and then called. This does not work with the XCODE5 tool chain,
because XCODE5 does not generate absolute addresses in the EFI image; rax
therefore holds a relative address, which breaks as soon as this logic is
copied to, and run from, another location.
;   jmp     _SmiHandler                 ; instruction is not needed
...
    mov     rax, ASM_PFX(CpuSmmDebugEntry)
    call    rax

Commit e21e355e2c added XCODE5 support and picked a tricky way to fix
this: although the SmiEntry logic is copied to another location and runs
there, "jmp _SmiHandler" is re-enabled to jump back to the original code
location, where ASM_PFX(CpuSmmDebugEntry) can then be called through a
relative address.
    mov     rax, strict qword 0         ;   mov     rax, _SmiHandler
_SmiHandlerAbsAddr:
    jmp     rax
...
    call    ASM_PFX(CpuSmmDebugEntry)

Now, BZ 1191 raises the issue that SmiHandler must run at the copied
address and can no longer run at the common address. So "jmp _SmiHandler"
has to be removed, the code keeps running at the copied address, and the
relative address has to be fixed up to an absolute address at boot time.
The change should not affect the behavior of platforms that already
consume PiSmmCpuDxeSmm. OVMF SMM boot to shell has been verified with the
VS2017, GCC5 and XCODE5 tool chains.
...
    mov     rax, strict qword 0         ;   call    ASM_PFX(CpuSmmDebugEntry)
CpuSmmDebugEntryAbsAddr:
    call    rax

Contributed-under: TianoCore Contribution Agreement 1.1
Signed-off-by: Liming Gao <liming.gao@intel.com>
Cc: Laszlo Ersek <lersek@redhat.com>
Cc: Eric Dong <eric.dong@intel.com>
Cc: Jiewen Yao <jiewen.yao@intel.com>
Reviewed-by: Laszlo Ersek <lersek@redhat.com>
Tested-by: Laszlo Ersek <lersek@redhat.com>
Reviewed-by: Jiewen Yao <jiewen.yao@intel.com>
2018-09-25 08:25:41 +08:00
Jian J Wang 09afd9a42a UefiCpuPkg/PiSmmCpuDxeSmm: implement non-stop mode for SMM
Since SMM profile feature has already implemented non-stop mode if #PF
occurred, this patch just makes use of the existing implementation to
accommodate heap guard and NULL pointer detection feature.

Cc: Eric Dong <eric.dong@intel.com>
Cc: Laszlo Ersek <lersek@redhat.com>
Cc: Ruiyu Ni <ruiyu.ni@intel.com>
Contributed-under: TianoCore Contribution Agreement 1.1
Signed-off-by: Jian J Wang <jian.j.wang@intel.com>
Reviewed-by: Eric Dong <eric.dong@intel.com>
Acked-by: Laszlo Ersek <lersek@redhat.com>
2018-08-30 07:22:30 +08:00
Hao Wu 02f7fd158e UefiCpuPkg/PiSmmCpuDxeSmm: [CVE-2017-5715] Stuff RSB before RSM
REF: https://bugzilla.tianocore.org/show_bug.cgi?id=1093

Return Stack Buffer (RSB) is used to predict the target of RET
instructions. When the RSB underflows, some processors may fall back to
using branch predictors. This might impact software using the retpoline
mitigation strategy on those processors.

This commit will add RSB stuffing logic before returning from SMM (the RSM
instruction) to avoid interfering with non-SMM usage of the retpoline
technique.

After the stuffing, RSB entries will contain a trap like:

@SpecTrap:
    pause
    lfence
    jmp     @SpecTrap
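
The commit pulls the actual sequence in from a "StuffRsb.inc" include
file; the fragment below is only a simplified, illustrative sketch (X64
flavor, a 32-entry RSB and free use of rcx assumed), not the file's
contents:

    mov     ecx, 32                     ; assume a 32-entry RSB
StuffRsbLoop:
    call    StuffRsbNext                ; each CALL records one RSB entry
SpecTrapStuff:
    pause                               ; reached only speculatively
    lfence
    jmp     SpecTrapStuff
StuffRsbNext:
    dec     ecx
    jnz     StuffRsbLoop
    add     rsp, 8 * 32                 ; drop the 32 return addresses pushed above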

A more detailed explanation of the purpose of this commit can be found
under the 'Branch target injection mitigation' section of the link below:
https://software.intel.com/security-software-guidance/insights/host-firmware-speculative-execution-side-channel-mitigation

Please note that this commit requires further actions (BZ 1091) to remove
the duplicated 'StuffRsb.inc' files and merge them into one under a
UefiCpuPkg package-level directory (such as UefiCpuPkg/Include/).

REF: https://bugzilla.tianocore.org/show_bug.cgi?id=1091

Cc: Jiewen Yao <jiewen.yao@intel.com>
Cc: Michael D Kinney <michael.d.kinney@intel.com>
Contributed-under: TianoCore Contribution Agreement 1.1
Signed-off-by: Hao Wu <hao.a.wu@intel.com>
Acked-by: Laszlo Ersek <lersek@redhat.com>
Regression-tested-by: Laszlo Ersek <lersek@redhat.com>
Reviewed-by: Eric Dong <eric.dong@intel.com>
2018-08-21 16:10:42 +08:00
Liming Gao 7367cc6c24 UefiCpuPkg: Clean up source files
1. Do not use tab characters
2. No trailing white space at the end of a line
3. All files must end with CRLF

Contributed-under: TianoCore Contribution Agreement 1.1
Signed-off-by: Liming Gao <liming.gao@intel.com>
2018-06-28 11:19:53 +08:00
Laszlo Ersek d22c995a48 UefiCpuPkg/PiSmmCpuDxeSmm: use mnemonics for FXSAVE(64)/FXRSTOR(64)
NASM introduced FXSAVE / FXRSTOR support in commit 900fa5b26b8f ("NASM
0.98p3-hpa", 2002-04-30), which commit stands for the nasm-0.98p3-hpa
release.

NASM introduced FXSAVE64 / FXRSTOR64 support in commit 3a014348ca15
("insns: add FXSAVE64/FXRSTOR64, drop np prefix", 2010-07-07), which was
part of the "nasm-2.09" release.

Edk2 requires nasm-2.10 or later for use with the GCC toolchain family,
and nasm-2.12.01 or later for use with all other toolchain families.
Replace the binary encoding of the FXSAVE(64)/FXRSTOR(64) instructions
with mnemonics.
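
As a hedged illustration of the change (the operand register is
arbitrary, not taken from the commit), a hand-encoded form such as

    DB      0x48, 0x0F, 0xAE, 0x07      ; fxsave64 [rdi]

can now be written directly with the mnemonics:

    fxsave64  [rdi]
    ...
    fxrstor64 [rdi]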

I verified that the "Ia32/SmiException.obj", "X64/SmiEntry.obj" and
"X64/SmiException.obj" files are rebuilt after this patch, without any
change in content.

This patch removes the last instructions encoded with DBs from
PiSmmCpuDxeSmm.

Cc: Eric Dong <eric.dong@intel.com>
Cc: Michael D Kinney <michael.d.kinney@intel.com>
Ref: https://bugzilla.tianocore.org/show_bug.cgi?id=866
Contributed-under: TianoCore Contribution Agreement 1.1
Signed-off-by: Laszlo Ersek <lersek@redhat.com>
Reviewed-by: Liming Gao <liming.gao@intel.com>
2018-04-04 16:44:27 +02:00
Laszlo Ersek 9686a4678d UefiCpuPkg/PiSmmCpuDxeSmm: remove DBs from SmmRelocationSemaphoreComplete32()
(1) SmmRelocationSemaphoreComplete32() runs in 32-bit mode, so wrap it in
    a (BITS 32 ... BITS 64) bracket.

(2) SmmRelocationSemaphoreComplete32() currently compiles to:

> 000002AE  C6050000000001    mov byte [dword 0x0],0x1
> 000002B5  FF2500000000      jmp dword [dword 0x0]

    where the first instruction is patched with the contents of
    "mRebasedFlag" (so that (*mRebasedFlag) is set to 1), and the second
    instruction is patched with the address of
    "mSmmRelocationOriginalAddress" (so that we jump to
    "mSmmRelocationOriginalAddress").

    In its current form the first instruction could not be patched with
    PatchInstructionX86(), given that the operand to patch is not encoded
    in the trailing bytes of the instruction. Therefore, adopt an
    EAX-based version, inspired by both the IA32 and X64 variants of
    SmmRelocationSemaphoreComplete():

> 000002AE  50                push eax
> 000002AF  B800000000        mov eax,0x0
> 000002B4  C60001            mov byte [eax],0x1
> 000002B7  58                pop eax
> 000002B8  FF2500000000      jmp dword [dword 0x0]

    Here both instructions can be patched with PatchInstructionX86(), and
    the DBs can be replaced with native NASM syntax (one possible
    rendering is sketched after this list).

(3) Turn the "mRebasedFlagAddr32" and "mSmmRelocationOriginalAddressPtr32"
    variables into markers that suit PatchInstructionX86().
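
A possible NASM rendering of the patchable sequence from point (2); the
patch label names here are illustrative guesses, not necessarily the ones
the commit introduces:

BITS 32
    push    eax
    mov     eax, strict dword 0         ; patched with the address of mRebasedFlag
ASM_PFX(gPatchRebasedFlagAddr32):
    mov     byte [eax], 1
    pop     eax
    ; operand patched with the address of mSmmRelocationOriginalAddress:
    jmp     dword [dword 0]
ASM_PFX(gPatchSmmRelocationOriginalAddressPtr32):
BITS 64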

Cc: Eric Dong <eric.dong@intel.com>
Cc: Michael D Kinney <michael.d.kinney@intel.com>
Ref: https://bugzilla.tianocore.org/show_bug.cgi?id=866
Contributed-under: TianoCore Contribution Agreement 1.1
Signed-off-by: Laszlo Ersek <lersek@redhat.com>
Reviewed-by: Liming Gao <liming.gao@intel.com>
2018-04-04 16:44:25 +02:00
Laszlo Ersek 5830d2c399 UefiCpuPkg/PiSmmCpuDxeSmm: patch "gSmmInitStack" with PatchInstructionX86()
Rename the variable to "gPatchSmmInitStack" so that its association with
PatchInstructionX86() is clear from the declaration, change its type to
X86_ASSEMBLY_PATCH_LABEL, and patch it with PatchInstructionX86(). This
lets us remove the binary (DB) encoding of some instructions in
"SmmInit.nasm".

The size of the patched source operand is (sizeof (UINTN)).
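
A minimal sketch of the resulting pattern (IA32 flavor; the exact
instruction is an illustrative assumption, not a quote from the commit):

    mov     esp, strict dword 0         ; source operand patched at runtime
ASM_PFX(gPatchSmmInitStack):            ; label marks the end of the instruction

C code can then patch the operand backwards from the label, along the
lines of PatchInstructionX86 (gPatchSmmInitStack, Stack, sizeof (UINTN)),
where "Stack" stands for whatever stack value the driver computes; the
exact call site is not shown in the commit message.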

Cc: Eric Dong <eric.dong@intel.com>
Cc: Michael D Kinney <michael.d.kinney@intel.com>
Ref: https://bugzilla.tianocore.org/show_bug.cgi?id=866
Contributed-under: TianoCore Contribution Agreement 1.1
Signed-off-by: Laszlo Ersek <lersek@redhat.com>
Reviewed-by: Liming Gao <liming.gao@intel.com>
2018-04-04 16:44:23 +02:00
Laszlo Ersek 456c4ccab2 UefiCpuPkg/PiSmmCpuDxeSmm: eliminate "gSmmJmpAddr" and related DBs
The IA32 version of "SmmInit.nasm" does not need "gSmmJmpAddr" at all (its
PiSmmCpuSmmInitFixupAddress() variant doesn't do anything either). We can
simply use the NASM syntax for the following Mixed-Size Jump:

> jmp PROTECT_MODE_CS : dword @32bit

The generated object code for the instruction is unchanged:

> 00000182  66EA5A0000000800  jmp dword 0x8:0x5a

(The NASM manual explains that putting the DWORD prefix after the colon
":" reflects the intent better, since it is the offset that is a DWORD.
Thus, that's what I used. However, both syntaxes are interchangeable,
hence the ndisasm output.)

The X64 version of "SmmInit.nasm" appears to require "gSmmJmpAddr";
however that's accidental, not inherent:

- Bring LONG_MODE_CODE_SEGMENT from
  "UefiCpuPkg/PiSmmCpuDxeSmm/PiSmmCpuDxeSmm.h" to "SmmInit.nasm" as
  LONG_MODE_CS, same as PROTECT_MODE_CODE_SEGMENT was brought to the IA32
  version as PROTECT_MODE_CS earlier.

- Apply the NASM-native Mixed-Size Jump syntax again, but jump to the
  fixed zero offset in LONG_MODE_CS. This will produce no relocation
  record at all. Add a label after the instruction.

- Modify PiSmmCpuSmmInitFixupAddress() to patch the jump target backwards
  from the label. Because we modify the DWORD offset with a DWORD access,
  the segment selector is unharmed in the instruction, and we need not set
  it from PiCpuSmmEntry().

According to "objdump --reloc", the X64 version undergoes only the
following relocations, after this patch:

> RELOCATION RECORDS FOR [.text]:
> OFFSET           TYPE              VALUE
> 0000000000000095 R_X86_64_PC32     SmmInitHandler-0x0000000000000004
> 00000000000000e0 R_X86_64_PC32     mRebasedFlag-0x0000000000000004
> 00000000000000ea R_X86_64_PC32     mSmmRelocationOriginalAddress-0x0000000000000004

Therefore the patch does not regress
<https://bugzilla.tianocore.org/show_bug.cgi?id=849> ("Enable XCODE5 tool
chain for UefiCpuPkg with nasm source code").

Cc: Eric Dong <eric.dong@intel.com>
Cc: Michael D Kinney <michael.d.kinney@intel.com>
Ref: https://bugzilla.tianocore.org/show_bug.cgi?id=866
Contributed-under: TianoCore Contribution Agreement 1.1
Signed-off-by: Laszlo Ersek <lersek@redhat.com>
Reviewed-by: Liming Gao <liming.gao@intel.com>
2018-04-04 16:44:19 +02:00
Laszlo Ersek f0053e837a UefiCpuPkg/PiSmmCpuDxeSmm: patch "gSmmCr0" with PatchInstructionX86()
Like "gSmmCr4" in the previous patch, "gSmmCr0" is not only used for
machine code patching, but also as a means to communicate the initial CR0
value from SmmRelocateBases() to InitSmmS3ResumeState(). In other words,
the last four bytes of the "mov eax, Cr0Value" instruction's binary
representation are utilized as normal data too.

In order to get rid of the DB for "mov eax, Cr0Value", we have to split
both roles, patching and data flow. Introduce the "mSmmCr0" global (SMRAM)
variable for the data flow purpose. Rename the "gSmmCr0" variable to
"gPatchSmmCr0" so that its association with PatchInstructionX86() is clear
from the declaration, change its type to X86_ASSEMBLY_PATCH_LABEL, and
patch it with PatchInstructionX86(), to the value now contained in
"mSmmCr0".

This lets us remove the binary (DB) encoding of "mov eax, Cr0Value" in
"SmmInit.nasm".

Cc: Eric Dong <eric.dong@intel.com>
Cc: Michael D Kinney <michael.d.kinney@intel.com>
Ref: https://bugzilla.tianocore.org/show_bug.cgi?id=866
Contributed-under: TianoCore Contribution Agreement 1.1
Signed-off-by: Laszlo Ersek <lersek@redhat.com>
Reviewed-by: Liming Gao <liming.gao@intel.com>
2018-04-04 16:44:18 +02:00
Laszlo Ersek 351b49c1a7 UefiCpuPkg/PiSmmCpuDxeSmm: patch "gSmmCr4" with PatchInstructionX86()
Unlike "gSmmCr3" in the previous patch, "gSmmCr4" is not only used for
machine code patching, but also as a means to communicate the initial CR4
value from SmmRelocateBases() to InitSmmS3ResumeState(). In other words,
the last four bytes of the "mov eax, Cr4Value" instruction's binary
representation are utilized as normal data too.

In order to get rid of the DB for "mov eax, Cr4Value", we have to split
both roles, patching and data flow. Introduce the "mSmmCr4" global (SMRAM)
variable for the data flow purpose. Rename the "gSmmCr4" variable to
"gPatchSmmCr4" so that its association with PatchInstructionX86() is clear
from the declaration, change its type to X86_ASSEMBLY_PATCH_LABEL, and
patch it with PatchInstructionX86(), to the value now contained in
"mSmmCr4".

This lets us remove the binary (DB) encoding of "mov eax, Cr4Value" in
"SmmInit.nasm".

Cc: Eric Dong <eric.dong@intel.com>
Cc: Michael D Kinney <michael.d.kinney@intel.com>
Ref: https://bugzilla.tianocore.org/show_bug.cgi?id=866
Contributed-under: TianoCore Contribution Agreement 1.1
Signed-off-by: Laszlo Ersek <lersek@redhat.com>
Reviewed-by: Liming Gao <liming.gao@intel.com>
2018-04-04 16:44:16 +02:00
Laszlo Ersek 6b0841c166 UefiCpuPkg/PiSmmCpuDxeSmm: patch "gSmmCr3" with PatchInstructionX86()
Rename the variable to "gPatchSmmCr3" so that its association with
PatchInstructionX86() is clear from the declaration, change its type to
X86_ASSEMBLY_PATCH_LABEL, and patch it with PatchInstructionX86(). This
lets us remove the binary (DB) encoding of some instructions in
"SmmInit.nasm".

Cc: Eric Dong <eric.dong@intel.com>
Cc: Michael D Kinney <michael.d.kinney@intel.com>
Ref: https://bugzilla.tianocore.org/show_bug.cgi?id=866
Contributed-under: TianoCore Contribution Agreement 1.1
Signed-off-by: Laszlo Ersek <lersek@redhat.com>
Reviewed-by: Liming Gao <liming.gao@intel.com>
2018-04-04 16:44:14 +02:00
Laszlo Ersek 00c5eede48 UefiCpuPkg/PiSmmCpuDxeSmm: remove unneeded DBs from X64 SmmStartup()
(This patch is the 64-bit variant of commit e75ee97224,
"UefiCpuPkg/PiSmmCpuDxeSmm: remove unneeded DBs from IA32 SmmStartup()",
2018-01-31.)

The SmmStartup() function executes in SMM, which is very similar to real
mode. Add "BITS 16" before it and "BITS 64" after it (just before the
@LongMode label).

Remove the manual 0x66 operand-size override prefixes, for selecting
32-bit operands -- the sizes of our operands trigger NASM to insert the
prefixes automatically in almost every spot. The one place where we have
to add it back manually is the LGDT instruction. In the LGDT instruction
we also replace the binary 0x2E prefix with the normal NASM syntax for CS
segment override.

The stores to the Control Registers were always 32-bit wide; the source
code only used RAX as source operand because it generated the expected
object code (with NASM compiling the source as if in BITS 64). With BITS
16 added, we can use the actual register width in the source operands
(EAX).

This patch causes NASM to generate byte-identical object code (determined
by disassembling both the pre-patch and post-patch versions, and comparing
the listings), except:

> @@ -231,7 +231,7 @@
>  000001D2  6689D3            mov ebx,edx
>  000001D5  66B800000000      mov eax,0x0
>  000001DB  0F22D8            mov cr3,eax
> -000001DE  662E670F0155F6    o32 lgdt [cs:ebp-0xa]
> +000001DE  2E66670F0155F6    o32 lgdt [cs:ebp-0xa]
>  000001E5  66B800000000      mov eax,0x0
>  000001EB  80CC02            or ah,0x2
>  000001EE  0F22E0            mov cr4,eax

The only difference is the prefix list order, it changes from:

- 0x66, 0x2E, 0x67

to

- 0x2E, 0x66, 0x67

Cc: Eric Dong <eric.dong@intel.com>
Cc: Michael D Kinney <michael.d.kinney@intel.com>
Ref: https://bugzilla.tianocore.org/show_bug.cgi?id=866
Contributed-under: TianoCore Contribution Agreement 1.1
Signed-off-by: Laszlo Ersek <lersek@redhat.com>
Reviewed-by: Liming Gao <liming.gao@intel.com>
2018-04-04 16:44:11 +02:00
Laszlo Ersek 3c5ce64f23 UefiCpuPkg/PiSmmCpuDxeSmm: patch "XdSupported" with PatchInstructionX86()
"mXdSupported" is a global BOOLEAN variable, initialized to TRUE. The
CheckFeatureSupported() function is executed on all processors (not
concurrently though), called from SmmInitHandler(). If XD support is found
to be missing on any CPU, then "mXdSupported" is set to FALSE, and further
processors omit the check. Afterwards, "mXdSupported" is read by several
assembly and C code locations.

The tricky part is *where* "mXdSupported" is allocated (defined):

- Before commit 717fb60443 ("UefiCpuPkg/PiSmmCpuDxeSmm: Add paging
  protection.", 2016-11-17), it used to be a normal global variable,
  defined (allocated) in "SmmProfile.c".

- With said commit, we moved the definition (allocation) of "mXdSupported"
  into "SmiEntry.nasm". The variable was defined over the last byte of a
  "mov al, 1" instruction, so that setting it to FALSE in
  CheckFeatureSupported() would patch the instruction to "mov al, 0". The
  subsequent conditional jump would change behavior, plus all further read
  references to "mXdSupported" (in C and assembly code) would read back
  the source (imm8) operand of the patched MOV instruction as data.

  This trick required that the MOV instruction be encoded with DB.

In order to get rid of the DB, we have to split both roles: we need a
label for the code patching, and "mXdSupported" has to be defined
(allocated) independently of the code patching. Of course, their values
must always remain in sync.

(1) Reinstate the "mXdSupported" definition and initialization in
    "SmmProfile.c" from before commit 717fb60443. Change the assembly
    language definition ("global") to a declaration ("extern").

(2) Define the "gPatchXdSupported" label (type X86_ASSEMBLY_PATCH_LABEL)
    in "SmiEntry.nasm", and add the C-language declaration to
    "SmmProfileInternal.h". Replace the DB with the MOV mnemonic (keeping
    the imm8 source operand with value 1).

(3) In CheckFeatureSupported(), whenever "mXdSupported" is set to FALSE,
    patch the assembly code in sync, with PatchInstructionX86().

Cc: Eric Dong <eric.dong@intel.com>
Cc: Michael D Kinney <michael.d.kinney@intel.com>
Ref: https://bugzilla.tianocore.org/show_bug.cgi?id=866
Contributed-under: TianoCore Contribution Agreement 1.1
Signed-off-by: Laszlo Ersek <lersek@redhat.com>
Reviewed-by: Liming Gao <liming.gao@intel.com>
2018-04-04 16:44:08 +02:00
Laszlo Ersek c455687fd0 UefiCpuPkg/PiSmmCpuDxeSmm: patch "gSmiCr3" with PatchInstructionX86()
Rename the variable to "gPatchSmiCr3" so that its association with
PatchInstructionX86() is clear from the declaration, change its type to
X86_ASSEMBLY_PATCH_LABEL, and patch it with PatchInstructionX86(). This
lets us remove the binary (DB) encoding of some instructions in
"SmiEntry.nasm".

Cc: Eric Dong <eric.dong@intel.com>
Cc: Michael D Kinney <michael.d.kinney@intel.com>
Ref: https://bugzilla.tianocore.org/show_bug.cgi?id=866
Contributed-under: TianoCore Contribution Agreement 1.1
Signed-off-by: Laszlo Ersek <lersek@redhat.com>
Reviewed-by: Liming Gao <liming.gao@intel.com>
2018-04-04 16:44:06 +02:00
Laszlo Ersek fc504fdea7 UefiCpuPkg/PiSmmCpuDxeSmm: patch "gSmiStack" with PatchInstructionX86()
Rename the variable to "gPatchSmiStack" so that its association with
PatchInstructionX86() is clear from the declaration. Also change its type
to X86_ASSEMBLY_PATCH_LABEL.

Unlike "gSmbase" in the previous patch, "gSmiStack"'s patched value is
also de-referenced by C code (in other words, it is read back after
patching): the InstallSmiHandler() function stores "CpuIndex" to the given
CPU's SMI stack through "gSmiStack". Introduce the local variable
"CpuSmiStack" in InstallSmiHandler() for calculating the stack location
separately, then use this variable for both patching into the assembly
code, and for storing "CpuIndex" through it.

It's assumed that "volatile" stood in the declaration of "gSmiStack"
because we used to read "gSmiStack" back for de-referencing; with that use
gone, we can remove "volatile" too. (Note that the *target* of the pointer
was never volatile-qualified.)

Finally, replace the binary (DB) encoding of "mov esp, imm32" in
"SmiEntry.nasm".

Cc: Eric Dong <eric.dong@intel.com>
Cc: Michael D Kinney <michael.d.kinney@intel.com>
Ref: https://bugzilla.tianocore.org/show_bug.cgi?id=866
Contributed-under: TianoCore Contribution Agreement 1.1
Signed-off-by: Laszlo Ersek <lersek@redhat.com>
Reviewed-by: Liming Gao <liming.gao@intel.com>
2018-04-04 16:44:04 +02:00
Laszlo Ersek 5a1bfda4bd UefiCpuPkg/PiSmmCpuDxeSmm: patch "gSmbase" with PatchInstructionX86()
Rename the variable to "gPatchSmbase" so that its association with
PatchInstructionX86() is clear from the declaration, change its type to
X86_ASSEMBLY_PATCH_LABEL, and patch it with PatchInstructionX86(). This
lets us remove the binary (DB) encoding of some instructions in
"SmiEntry.nasm".

Cc: Eric Dong <eric.dong@intel.com>
Cc: Michael D Kinney <michael.d.kinney@intel.com>
Ref: https://bugzilla.tianocore.org/show_bug.cgi?id=866
Contributed-under: TianoCore Contribution Agreement 1.1
Signed-off-by: Laszlo Ersek <lersek@redhat.com>
Reviewed-by: Liming Gao <liming.gao@intel.com>
2018-04-04 16:44:02 +02:00
Laszlo Ersek 38a5df04ef UefiCpuPkg/PiSmmCpuDxeSmm: remove *.S and *.asm assembly files
All edk2 toolchains use NASM for compiling X86 assembly source code. We
plan to remove X86 *.S and *.asm files globally, in order to reduce
maintenance and confusion:

http://mid.mail-archive.com/4A89E2EF3DFEDB4C8BFDE51014F606A14E1B9F76@SHSMSX104.ccr.corp.intel.com
https://lists.01.org/pipermail/edk2-devel/2018-March/022690.html
https://bugzilla.tianocore.org/show_bug.cgi?id=881

Let's start with UefiCpuPkg/PiSmmCpuDxeSmm: remove the *.S and *.asm
dialects (both Ia32 and X64) of the SmmInit, SmiEntry, SmiException and
MpFuncs sources.

Cc: Eric Dong <eric.dong@intel.com>
Cc: Michael D Kinney <michael.d.kinney@intel.com>
Ref: https://bugzilla.tianocore.org/show_bug.cgi?id=866
Contributed-under: TianoCore Contribution Agreement 1.1
Signed-off-by: Laszlo Ersek <lersek@redhat.com>
Reviewed-by: Andrew Fish <afish@apple.com>
Reviewed-by: Liming Gao <liming.gao@intel.com>
2018-04-04 16:44:00 +02:00
Jian J Wang d4d87596c1 UefiCpuPkg/PiSmmCpuDxeSmm: Enable NXE if it's supported
If PcdDxeNxMemoryProtectionPolicy is set to enable protection for memory
of type EfiBootServicesCode and EfiConventionalMemory, the BIOS will hang
at a page fault exception triggered by PiSmmCpuDxeSmm.

The root cause is that PiSmmCpuDxeSmm accesses the default SMM RAM
starting at 0x30000, which is marked as non-executable, but the NX
feature was not enabled during SMM initialization. Accessing memory in a
way that violates its attributes causes a page fault exception. This
patch fixes the issue by checking the NX capability via CPUID and
enabling NXE in the EFER MSR if it is available.
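
A minimal sketch of the check-and-enable sequence described above
(illustrative only; label and register usage are not taken from the
patch):

    mov     eax, 0x80000001
    cpuid
    bt      edx, 20                     ; CPUID.80000001h:EDX[20] = NX supported?
    jnc     NxeDone
    mov     ecx, 0xC0000080             ; IA32_EFER MSR
    rdmsr
    or      eax, 0x800                  ; set NXE (bit 11)
    wrmsr
NxeDone: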

Cc: Jiewen Yao <jiewen.yao@intel.com>
Cc: Ruiyu Ni <ruiyu.ni@intel.com>
Cc: Eric Dong <eric.dong@intel.com>
Cc: Laszlo Ersek <lersek@redhat.com>
Contributed-under: TianoCore Contribution Agreement 1.1
Signed-off-by: Jian J Wang <jian.j.wang@intel.com>
Reviewed-by: Eric Dong <eric.dong@intel.com>
2018-01-18 17:03:24 +08:00
Liming Gao e21e355e2c UefiCpuPkg: Update PiSmmCpuDxeSmm to pass the XCODE5 tool chain
https://bugzilla.tianocore.org/show_bug.cgi?id=849

In V2, use "mov rax, strict qword 0" to replace the hard-coded db.

1. Use the lea instruction instead of mov to get the address.
2. Use a dummy address as the jmp destination, and add logic to fix up
the address to the absolute address at boot time.
3. In MpFuncs.nasm, use ExchangeInfo to record InitializeFloatingPointUnits.
This is the same approach as in MpInitLib.

Contributed-under: TianoCore Contribution Agreement 1.1
Signed-off-by: Liming Gao <liming.gao@intel.com>
Cc: Andrew Fish <afish@apple.com>
Cc: Jiewen Yao <jiewen.yao@intel.com>
Cc: Eric Dong <eric.dong@intel.com>
Cc: Laszlo Ersek <lersek@redhat.com>
Cc: Michael Kinney <michael.d.kinney@intel.com>
Reviewed-by: Jiewen Yao <jiewen.yao@intel.com>
2018-01-16 23:43:08 +08:00
Star Zeng 6e601a4109 UefiCpuPkg PiSmmCpuDxeSmm: Fixed #double fault on #page fault for IA32
When StackGuard is enabled on IA32, a #double fault exception is
reported instead of the #page fault.

This issue does not exist on X64, or on IA32 without StackGuard.

The fix at e4435f710c was incomplete.

AllocateCodePages() is used to allocate the buffer for the GDT and TSS,
and those code pages are set to RO in SetMemMapAttributes(). However,
IA32 Stack Guard needs a task switch to switch stacks, which must write
the GDT and TSS, so AllocateCodePages() cannot be used.

This patch uses AllocatePages() instead of AllocateCodePages() to
allocate the buffer for the GDT and TSS when StackGuard is enabled on
IA32.

Cc: Jiewen Yao <jiewen.yao@intel.com>
Cc: Jian J Wang <jian.j.wang@intel.com>
Cc: Eric Dong <eric.dong@intel.com>
Cc: Laszlo Ersek <lersek@redhat.com>
Contributed-under: TianoCore Contribution Agreement 1.1
Signed-off-by: Star Zeng <star.zeng@intel.com>
Reviewed-by: Jiewen Yao <jiewen.yao@intel.com>
2018-01-15 10:41:15 +08:00
Liming Gao 8764ed57b4 UefiCpuPkg: PiSmmCpuDxeSmm Add the missing ASM_PFX in nasm code
Contributed-under: TianoCore Contribution Agreement 1.1
Signed-off-by: Liming Gao <liming.gao@intel.com>
Reviewed-by: Jiewen Yao <jiewen.yao@intel.com>
Reviewed-by: Eric Dong <eric.dong@intel.com>
2017-12-08 13:29:42 +08:00
Star Zeng 1015fb3c1b UefiCpuPkg PiSmmCpuDxeSmm: SMM profile and static paging mutual exclusion
SMM profile and static paging cannot be enabled at the same time;
this patch adds a check and comments to make sure of that.

Similar comments are also added for the case of static paging and
heap guard for SMM.

Cc: Jiewen Yao <jiewen.yao@intel.com>
Cc: Eric Dong <eric.dong@intel.com>
Cc: Ruiyu Ni <ruiyu.ni@intel.com>
Cc: Jian J Wang <jian.j.wang@intel.com>
Cc: Laszlo Ersek <lersek@redhat.com>
Contributed-under: TianoCore Contribution Agreement 1.1
Signed-off-by: Star Zeng <star.zeng@intel.com>
Reviewed-by: Eric Dong <eric.dong@intel.com>
Reviewed-by: Laszlo Ersek <lersek@redhat.com>
2017-12-08 12:29:24 +08:00
Star Zeng 8bf0380e5e UefiCpuPkg PiSmmCpuDxeSmm: Only DumpCpuContext in error case
Only call DumpCpuContext() in the error case; otherwise there will be too
many debug messages from DumpCpuContext() when the SmmProfile feature is
enabled by setting PcdCpuSmmProfileEnable to TRUE. Those debug messages
are not needed by the SmmProfile feature, as it records that information
to a buffer for later dump.

Cc: Jiewen Yao <jiewen.yao@intel.com>
Cc: Eric Dong <eric.dong@intel.com>
Cc: Laszlo Ersek <lersek@redhat.com>
Cc: Ruiyu Ni <ruiyu.ni@intel.com>
Contributed-under: TianoCore Contribution Agreement 1.1
Signed-off-by: Star Zeng <star.zeng@intel.com>
Reviewed-by: Jiewen Yao <jiewen.yao@intel.com>
Reviewed-by: Eric Dong <eric.dong@intel.com>
Acked-by: Laszlo Ersek <lersek@redhat.com>
2017-12-08 09:12:32 +08:00
Jian J Wang 827330ccd1 UefiCpuPkg: Fix Unix-style EOL
Cc: Wu Hao <hao.a.wu@intel.com>
Cc: Star Zeng <star.zeng@intel.com>
Cc: Eric Dong <eric.dong@intel.com>
Contributed-under: TianoCore Contribution Agreement 1.1
Signed-off-by: Jian J Wang <jian.j.wang@intel.com>
Reviewed-by: Hao Wu <hao.a.wu@intel.com>
2017-11-21 20:24:37 +08:00
Jian J Wang af4f4b3468 UefiCpuPkg/PiSmmCpuDxeSmm: Add SmmMemoryAttribute protocol
Heap guard makes use of the paging mechanism to implement its
functionality, but there is no protocol or library available to change
page attributes in SMM mode. A new protocol,
gEdkiiSmmMemoryAttributeProtocolGuid, is introduced to make that
possible. The protocol provides three interfaces:

struct _EDKII_SMM_MEMORY_ATTRIBUTE_PROTOCOL {
  EDKII_SMM_GET_MEMORY_ATTRIBUTES       GetMemoryAttributes;
  EDKII_SMM_SET_MEMORY_ATTRIBUTES       SetMemoryAttributes;
  EDKII_SMM_CLEAR_MEMORY_ATTRIBUTES     ClearMemoryAttributes;
};

Since the heap guard feature needs to update page attributes, the page
table must not be set to read-only when heap guard is enabled for SMM
mode; otherwise the feature cannot work.

Cc: Eric Dong <eric.dong@intel.com>
Cc: Jiewen Yao <jiewen.yao@intel.com>
Cc: Laszlo Ersek <lersek@redhat.com>
Cc: Ruiyu Ni <ruiyu.ni@intel.com>
Suggested-by: Ayellet Wolman <ayellet.wolman@intel.com>
Contributed-under: TianoCore Contribution Agreement 1.1
Signed-off-by: Jian J Wang <jian.j.wang@intel.com>
Reviewed-by: Jiewen Yao <jiewen.yao@intel.com>
Regression-tested-by: Laszlo Ersek <lersek@redhat.com>
2017-11-17 11:03:18 +08:00
Jian J Wang f8c1133bbb UefiCpuPkg/PiSmmCpuDxeSmm: Implement NULL pointer detection for SMM code
The mechanism behind it is the same as the NULL pointer detection enabled
in the EDK II core. SMM has its own page tables, so page 0 has to be
disabled again in SMM mode.

Cc: Star Zeng <star.zeng@intel.com>
Cc: Eric Dong <eric.dong@intel.com>
Cc: Jiewen Yao <jiewen.yao@intel.com>
Cc: Michael Kinney <michael.d.kinney@intel.com>
Cc: Ayellet Wolman <ayellet.wolman@intel.com>
Suggested-by: Ayellet Wolman <ayellet.wolman@intel.com>
Contributed-under: TianoCore Contribution Agreement 1.1
Signed-off-by: Jian J Wang <jian.j.wang@intel.com>
Reviewed-by: Star Zeng <star.zeng@intel.com>
Reviewed-by: Eric Dong <eric.dong@intel.com>
2017-10-11 16:39:01 +08:00
Star Zeng 51ce27fd8c UefiCpuPkg/PiSmmCpuDxeSmm: Centralize mPhysicalAddressBits definition
Originally (before 714c260301),
mPhysicalAddressBits was only defined in the X64 PageTbl.c. After
714c260301, mPhysicalAddressBits is
also defined in the Ia32 PageTbl.c, and it is used in
ConvertMemoryPageAttributes() for the address check.

This patch centralizes the mPhysicalAddressBits definition in
PiSmmCpuDxeSmm.c, removing it from the Ia32 and X64 PageTbl.c files.

Cc: Jiewen Yao <jiewen.yao@intel.com>
Cc: Laszlo Ersek <lersek@redhat.com>
Cc: Eric Dong <eric.dong@intel.com>
Suggested-by: Laszlo Ersek <lersek@redhat.com>
Contributed-under: TianoCore Contribution Agreement 1.1
Signed-off-by: Star Zeng <star.zeng@intel.com>
Reviewed-by: Jiewen Yao <jiewen.yao@intel.com>
Reviewed-by: Laszlo Ersek <lersek@redhat.com>
2017-08-28 17:19:53 +08:00
Jeff Fan b8caae191c UefiCpuPkg/PiSmmCpuDxeSmm: Consume new APIs
Consume PeCoffSerachImageBase() from PeCoffGetEntryPointLib and
DumpCpuContext() from CpuExceptionHandlerLib to replace the driver's own
implementations.

Cc: Jiewen Yao <jiewen.yao@intel.com>
Cc: Michael Kinney <michael.d.kinney@intel.com>
Cc: Feng Tian <feng.tian@intel.com>
Contributed-under: TianoCore Contribution Agreement 1.0
Signed-off-by: Jeff Fan <jeff.fan@intel.com>
Reviewed-by: Jiewen Yao <jiewen.yao@intel.com>
2017-04-07 09:43:48 +08:00
Leo Duran 241f914975 UefiCpuPkg/PiSmmCpuDxeSmm: Add support for PCD PcdPteMemoryEncryptionAddressOrMask
This PCD holds the address mask for page table entries when memory
encryption is enabled on AMD processors supporting the Secure Encrypted
Virtualization (SEV) feature.

The mask is applied when page tables entriees are created or modified.

CC: Jeff Fan <jeff.fan@intel.com>
Cc: Feng Tian <feng.tian@intel.com>
Cc: Star Zeng <star.zeng@intel.com>
Cc: Laszlo Ersek <lersek@redhat.com>
Cc: Brijesh Singh <brijesh.singh@amd.com>
Contributed-under: TianoCore Contribution Agreement 1.0
Signed-off-by: Leo Duran <leo.duran@amd.com>
Reviewed-by: Jeff Fan <jeff.fan@intel.com>
2017-03-01 12:53:03 +08:00
Jiewen Yao d2fc771113 UefiCpuPkg/PiSmmCpu: Add SMM Comm Buffer Paging Protection.
This patch marks the normal OS buffers (EfiLoaderCode/Data,
EfiBootServicesCode/Data, EfiConventionalMemory, EfiACPIReclaimMemory)
as not present after SmmReadyToLock.

Accessing these regions in the OS runtime phase is not acceptable.

Previously, we did a similar check in SmmMemLib to help SMI handlers do
the check themselves. But if an SMI handler forgets the check, it can
still access these OS regions and introduce risk.

So here we enforce the policy to prevent that from happening.

Cc: Jeff Fan <jeff.fan@intel.com>
Cc: Michael D Kinney <michael.d.kinney@intel.com>
Cc: Laszlo Ersek <lersek@redhat.com>
Contributed-under: TianoCore Contribution Agreement 1.0
Signed-off-by: Jiewen Yao <jiewen.yao@intel.com>
Reviewed-by: Jeff Fan <jeff.fan@intel.com>
2016-12-19 09:37:37 +08:00
Feng Tian b6fea56cb5 UefiCpuPkg/PiSmmCpuDxeSmm: Fix .S & .asm build failure
Cc: Michael D Kinney <michael.d.kinney@intel.com>
Cc: Jeff Fan <jeff.fan@intel.com>
Contributed-under: TianoCore Contribution Agreement 1.0
Signed-off-by: Feng Tian <feng.tian@intel.com>
Reviewed-by: Michael D Kinney <michael.d.kinney@intel.com>
Reviewed-by: Jeff Fan <jeff.fan@intel.com>
2016-12-16 08:27:59 +08:00
Michael Kinney 854c6b80dc UefiCpuPkg/PiSmmCpuDxeSmm: Remove MTRR field from PSD
https://bugzilla.tianocore.org/show_bug.cgi?id=277

The MTRR field was removed from PROCESS_SMM_DESCRIPTOR
structure in commit:

26ab5ac362

However, the references to the MTRR field in assembly
files were not removed.  Remove the extern reference
to gSmiMtrr and set the Reserved14 field
of PROCESS_SMM_DESCRIPTOR to 0.

Cc: Jiewen Yao <jiewen.yao@intel.com>
Cc: Jeff Fan <jeff.fan@intel.com>
Cc: Feng Tian <feng.tian@intel.com>
Contributed-under: TianoCore Contribution Agreement 1.0
Signed-off-by: Michael Kinney <michael.d.kinney@intel.com>
Reviewed-by: Jeff Fan <jeff.fan@intel.com>
2016-12-06 23:34:16 -08:00
Jiewen Yao e4435f710c UefiCpuPkg/PiSmmCpu: Fixed #double fault on #page fault.
This patch fixes https://bugzilla.tianocore.org/show_bug.cgi?id=246

Previously, when SMM exception happens after EndOfDxe,
with StackGuard enabled on IA32, the #double fault exception
is reported instead of #page fault.

Root cause is below:

The current EDK II SMM page protection locks the GDT.
If IA32 stack guard is enabled, the page fault handler performs a task
switch, which needs to write the busy flag in the GDT and to write the TSS.

However, the GDT and TSS are locked at that time, so the
double fault happens.

We decided not to lock the GDT when IA32 StackGuard is enabled.

This issue does not exist on X64, or on IA32 without StackGuard.

Cc: Laszlo Ersek <lersek@redhat.com>
Cc: Jeff Fan <jeff.fan@intel.com>
Cc: Michael D Kinney <michael.d.kinney@intel.com>
Contributed-under: TianoCore Contribution Agreement 1.0
Signed-off-by: Jiewen Yao <jiewen.yao@intel.com>
Reviewed-by: Jeff Fan <jeff.fan@intel.com>
Regression-tested-by: Laszlo Ersek <lersek@redhat.com>
2016-12-07 13:13:55 +08:00
Jiewen Yao 7fa1376c5c UefiCpuPkg/PiSmmCpu: Correct exception message.
This patch fixes the first part of
https://bugzilla.tianocore.org/show_bug.cgi?id=242

Previously, when an SMM exception happened, "stack overflow" was
misreported. This patch checks the #PF address to determine whether it is
a real stack overflow or is caused by SMM page protection.

It dumps the exception data, the #PF address and the module that
triggered the issue.

Cc: Laszlo Ersek <lersek@redhat.com>
Cc: Jeff Fan <jeff.fan@intel.com>
Cc: Michael D Kinney <michael.d.kinney@intel.com>
Contributed-under: TianoCore Contribution Agreement 1.0
Signed-off-by: Jiewen Yao <jiewen.yao@intel.com>
Tested-by: Laszlo Ersek <lersek@redhat.com>
Reviewed-by: Laszlo Ersek <lersek@redhat.com>
Reviewed-by: Jeff Fan <jeff.fan@intel.com>
2016-11-24 10:51:16 +08:00
Jeff Fan 5c88af795d MdeModulePkg/PiSmmCpuDxeSmm: Check RegisterCpuInterruptHandler status
Once a platform selects an incorrect instance, the caller can learn it
from the return status and ASSERT().

Cc: Feng Tian <feng.tian@intel.com>
Cc: Michael D Kinney <michael.d.kinney@intel.com>
Contributed-under: TianoCore Contribution Agreement 1.0
Signed-off-by: Jeff Fan <jeff.fan@intel.com>
Reviewed-by: Michael Kinney <michael.d.kinney@intel.com>
2016-11-18 09:43:52 +08:00
Michael Kinney 672b80c8b7 UefiCpuPkg/PiSmmCpuDxeSmm: TransferApToSafeState() use UINTN params
Update TransferApToSafeState() to use UINTN params to reduce the
number of type casts required in these calls.  Also change
the NumberToFinish parameter from UINT32* to UINTN
NumberToFinishAddress to resolve issues with conversion from
a volatile pointer to a non-volatile pointer.  The assembly
code that receives the NumberToFinishAddress value must treat
that memory location as volatile to track the number of APs.

Cc: Liming Gao <liming.gao@intel.com>
Cc: Laszlo Ersek <lersek@redhat.com>
Cc: Andrew Fish <afish@apple.com>
Cc: Jeff Fan <jeff.fan@intel.com>
Contributed-under: TianoCore Contribution Agreement 1.0
Signed-off-by: Michael Kinney <michael.d.kinney@intel.com>
Reviewed-by: Laszlo Ersek <lersek@redhat.com>
Reviewed-by: Jeff Fan <jeff.fan@intel.com>
2016-11-17 17:37:50 -08:00
Jiewen Yao 717fb60443 UefiCpuPkg/PiSmmCpuDxeSmm: Add paging protection.
PiSmmCpuDxeSmm consumes SmmAttributesTable and sets up the page table:
1) The code region is marked read-only and the data region non-executable,
if the PE image is 4K aligned.
2) Important data structures are set to RO, such as the GDT/IDT.
3) SmmSaveState is set to non-executable,
and SmmEntrypoint is set to read-only.
4) If static paging is supported, the page table itself is read-only.

We use the page table to protect other components, and the page table
itself.

If dynamic paging is used, only *partial* protection can be provided,
and we have to hope the page table is not modified by other components.

The XD enabling code is moved to SmiEntry to let NX take effect.

Cc: Jeff Fan <jeff.fan@intel.com>
Cc: Feng Tian <feng.tian@intel.com>
Cc: Star Zeng <star.zeng@intel.com>
Cc: Michael D Kinney <michael.d.kinney@intel.com>
Cc: Laszlo Ersek <lersek@redhat.com>
Contributed-under: TianoCore Contribution Agreement 1.0
Signed-off-by: Jiewen Yao <jiewen.yao@intel.com>
Tested-by: Laszlo Ersek <lersek@redhat.com>
Reviewed-by: Jeff Fan <jeff.fan@intel.com>
Reviewed-by: Michael D Kinney <michael.d.kinney@intel.com>
2016-11-17 16:30:07 +08:00
Jeff Fan ec8a387700 UefiCpuPkg/PiSmmCpuDxeSmm: Decrease mNumberToFinish in AP safe code
We put the APs into a hlt-loop in safe code, but mNumberToFinish was
decremented before the APs entered that safe code. Paolo pointed out this
gap.

This patch moves the mNumberToFinish decrement into the safe code, which
makes sure the BSP waits until all APs are running in the safe code.

https://bugzilla.tianocore.org/show_bug.cgi?id=216

Reported-by: Paolo Bonzini <pbonzini@redhat.com>
Cc: Laszlo Ersek <lersek@redhat.com>
Cc: Paolo Bonzini <pbonzini@redhat.com>
Cc: Jiewen Yao <jiewen.yao@intel.com>
Cc: Michael D Kinney <michael.d.kinney@intel.com>
Contributed-under: TianoCore Contribution Agreement 1.0
Signed-off-by: Jeff Fan <jeff.fan@intel.com>
Reviewed-by: Paolo Bonzini <pbonzini@redhat.com>
Reviewed-by: Jiewen Yao <jiewen.yao@intel.com>
Tested-by: Laszlo Ersek <lersek@redhat.com>
2016-11-15 09:47:32 +08:00
Jeff Fan 45e3440ac2 UefiCpuPkg/PiSmmCpuDxeSmm: Place AP to 32bit protected mode on S3 path
On the S3 path, we may switch to long mode (if DXE is long mode) to
restore the CPU contexts, with CR3 = SmmS3Cr3 (pointing into SMM). The
APs execute a hlt-loop after the CPU context restoration. If an NMI or
SMI happens, the APs may exit the hlt state and execute the instruction
after the HLT instruction. If the APs are running in long mode, a page
table is required to fetch that instruction; however, CR3 points to a
page table in SMM, so the APs will crash.

This fix disables long mode on the APs and switches them to 32-bit
protected mode to execute the hlt-loop, so CR3 and a page table are no
longer required.
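
A minimal sketch of such a 32-bit protected-mode parking loop
(illustrative only, not the patch's actual code); the backward jump is
what makes a spurious NMI/SMI wakeup harmless, because the AP simply
halts again:

BITS 32
ApHltLoop:
    cli
    hlt
    jmp     ApHltLoop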

https://bugzilla.tianocore.org/show_bug.cgi?id=216

Reported-by: Laszlo Ersek <lersek@redhat.com>
Analyzed-by: Paolo Bonzini <pbonzini@redhat.com>
Analyzed-by: Laszlo Ersek <lersek@redhat.com>
Cc: Laszlo Ersek <lersek@redhat.com>
Cc: Paolo Bonzini <pbonzini@redhat.com>
Cc: Jiewen Yao <jiewen.yao@intel.com>
Cc: Michael D Kinney <michael.d.kinney@intel.com>
Contributed-under: TianoCore Contribution Agreement 1.0
Signed-off-by: Jeff Fan <jeff.fan@intel.com>
Reviewed-by: Paolo Bonzini <pbonzini@redhat.com>
Reviewed-by: Jiewen Yao <jiewen.yao@intel.com>
Tested-by: Laszlo Ersek <lersek@redhat.com>
2016-11-15 09:47:32 +08:00
Jeff Fan 4a0f88dd64 UefiCpuPkg/PiSmmCpuDxeSmm: Put AP into safe hlt-loop code on S3 path
On the S3 path, we wake up the APs to restore the CPU context in the
PiSmmCpuDxeSmm driver. However, after restoring the CPU contexts we park
the APs in a hlt-loop in borrowed space under 1MB. If an NMI or SMI
happens, the APs may exit the hlt state and execute the instruction after
the HLT instruction, but the code under 1MB is no longer safe at that
time.

This fix allocates an ACPI NVS range to hold the AP hlt-loop code. Once
the CPU has finished restoring the CPU contexts, the APs execute in this
ACPI NVS range.

https://bugzilla.tianocore.org/show_bug.cgi?id=216

v2:
  1. Make stack alignment per Laszlo's comment.
  2. Trim whitespace at end of line.
  3. Update year mark in file header.

Reported-by: Laszlo Ersek <lersek@redhat.com>
Analyzed-by: Paolo Bonzini <pbonzini@redhat.com>
Analyzed-by: Laszlo Ersek <lersek@redhat.com>
Cc: Laszlo Ersek <lersek@redhat.com>
Cc: Paolo Bonzini <pbonzini@redhat.com>
Cc: Jiewen Yao <jiewen.yao@intel.com>
Cc: Michael D Kinney <michael.d.kinney@intel.com>
Contributed-under: TianoCore Contribution Agreement 1.0
Signed-off-by: Jeff Fan <jeff.fan@intel.com>
Reviewed-by: Paolo Bonzini <pbonzini@redhat.com>
Reviewed-by: Jiewen Yao <jiewen.yao@intel.com>
Tested-by: Laszlo Ersek <lersek@redhat.com>
2016-11-15 09:44:53 +08:00
Laszlo Ersek ef3e20e3ca UefiCpuPkg: fix ASSERT_EFI_ERROR() typos
A number of code locations use

  ASSERT_EFI_ERROR (BooleanExpression)

instead of

  ASSERT (BooleanExpression)

Fix them.

Cc: Jeff Fan <jeff.fan@intel.com>
Reported-by: Gerd Hoffmann <kraxel@redhat.com>
Contributed-under: TianoCore Contribution Agreement 1.0
Signed-off-by: Laszlo Ersek <lersek@redhat.com>
Reviewed-by: Jeff Fan <jeff.fan@intel.com>
Reviewed-by: Giri P Mudusuru <giri.p.mudusuru@intel.com>
Reviewed-by: Michael Kinney <michael.d.kinney@intel.com>
2016-06-30 17:27:38 +02:00
Liming Gao ba15b97142 UefiCpuPkg PiSmmCpuDxeSmm: Convert X64/SmmInit.asm to NASM
Manually convert X64/SmmInit.asm to X64/SmmInit.nasm

Contributed-under: TianoCore Contribution Agreement 1.0
Signed-off-by: Liming Gao <liming.gao@intel.com>
2016-06-28 09:52:18 +08:00
Liming Gao 9f54832f4b UefiCpuPkg PiSmmCpuDxeSmm: Convert X64/SmiException.asm to NASM
Manually convert X64/SmiException.asm to X64/SmiException.nasm

Contributed-under: TianoCore Contribution Agreement 1.0
Signed-off-by: Liming Gao <liming.gao@intel.com>
2016-06-28 09:52:18 +08:00
Liming Gao 9a36d4dc3f UefiCpuPkg PiSmmCpuDxeSmm: Convert X64/SmiEntry.asm to NASM
Manually convert X64/SmiEntry.asm to X64/SmiEntry.nasm

Contributed-under: TianoCore Contribution Agreement 1.0
Signed-off-by: Liming Gao <liming.gao@intel.com>
2016-06-28 09:52:17 +08:00
Liming Gao e1f0eed1b2 UefiCpuPkg PiSmmCpuDxeSmm: Update X64/MpFuncs.nasm
Use 16-bit and 32-bit assembly code to replace the hard-coded DBs.

In V2: add 0x67 prefixes to far jumps

Without the a32 modifier under FLAT32_JUMP, and the a16 modifier under
LONG_JUMP, nasm doesn't generate the 0x67 prefixes, and the far jumps
don't work. (For the former, KVM returns an emulation failure. For the
latter, KVM performs a triple fault (guest reboot).) By forcing the 0x67
prefixes we end up with the same machine code as the one open-coded in
"MpFuncs.asm".

This bug breaks S3 resume in the Ia32X64 + SMM_REQUIRE build of OVMF.

Contributed-under: TianoCore Contribution Agreement 1.0
Signed-off-by: Liming Gao <liming.gao@intel.com>
Signed-off-by: Laszlo Ersek <lersek@redhat.com>
2016-06-28 09:52:16 +08:00
Liming Gao 78cf66eebb UefiCpuPkg PiSmmCpuDxeSmm: Convert X64/MpFuncs.asm to NASM
The BaseTools/Scripts/ConvertMasmToNasm.py script was used to convert
X64/MpFuncs.asm to X64/MpFuncs.nasm, and the result was then updated
manually to pass the build.

Contributed-under: TianoCore Contribution Agreement 1.0
Signed-off-by: Liming Gao <liming.gao@intel.com>
2016-06-28 09:52:16 +08:00
Jeff Fan fe3a75bc41 UefiCpuPkg/PiSmmCpuDxeSmm: Using global semaphores in aligned buffer
Update all global semaphores to use the ones in the allocated aligned
semaphore buffer.

Cc: Michael Kinney <michael.d.kinney@intel.com>
Cc: Feng Tian <feng.tian@intel.com>
Contributed-under: TianoCore Contribution Agreement 1.0
Signed-off-by: Jeff Fan <jeff.fan@intel.com>
Reviewed-by: Feng Tian <feng.tian@intel.com>
Reviewed-by: Michael Kinney <michael.d.kinney@intel.com>
Regression-tested-by: Laszlo Ersek <lersek@redhat.com>
2016-05-24 15:20:01 -07:00
Yao, Jiewen 53ba3fb8aa UefiCpuPkg/PiSmmCpu: Always set WP in CR0
So that we can use write-protection for code later.

This is a REPOST.
It includes suggestions from Michael Kinney <michael.d.kinney@intel.com>:
- "For IA32 assembly, can we combine into a single OR instruction that
  sets both page enable and WP?"
- "For X64, does it make sense to use single OR instruction instead of 2
  BTS instructions as well?"
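
A minimal sketch of what the combined-OR suggestion above amounts to on
IA32 (illustrative, not the commit's actual code):

    mov     eax, cr0
    or      eax, 0x80010000             ; set PG (bit 31) and WP (bit 16) together
    mov     cr0, eax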

Contributed-under: TianoCore Contribution Agreement 1.0
Signed-off-by: "Yao, Jiewen" <jiewen.yao@intel.com>
Suggested-by: Michael Kinney <michael.d.kinney@intel.com>
Reviewed-by: Michael Kinney <michael.d.kinney@intel.com>
Tested-by: Laszlo Ersek <lersek@redhat.com>
Cc: "Fan, Jeff" <jeff.fan@intel.com>
Cc: "Kinney, Michael D" <michael.d.kinney@intel.com>
Cc: "Laszlo Ersek" <lersek@redhat.com>
Cc: "Paolo Bonzini" <pbonzini@redhat.com>

git-svn-id: https://svn.code.sf.net/p/edk2/code/trunk/edk2@19068 6f19259b-4bc3-4df7-8a09-765794883524
2015-11-30 19:57:45 +00:00
Yao, Jiewen 881520ea67 UefiCpuPkg/PiSmmCpu: Always set RW+P bit for page table by default
So that we can use write-protection for code later.

This is a REPOST.
It includes the bug fix from "Paolo Bonzini" <pbonzini@redhat.com>:

  Title: fix generation of 32-bit PAE page tables

  "Bits 1 and 2 are reserved in 32-bit PAE Page Directory Pointer Table
  Entries (PDPTEs); see Table 4-8 in the SDM.  With VMX extended page
  tables, the processor notices and fails the VM entry as soon as CR0.PG
  is set to 1."

And thanks "Laszlo Ersek" <lersek@redhat.com> to validate the fix.

Contributed-under: TianoCore Contribution Agreement 1.0
Signed-off-by: "Yao, Jiewen" <jiewen.yao@intel.com>
Signed-off-by: "Paolo Bonzini" <pbonzini@redhat.com>
Reviewed-by: Michael Kinney <michael.d.kinney@intel.com>
Tested-by: Laszlo Ersek <lersek@redhat.com>
Cc: "Fan, Jeff" <jeff.fan@intel.com>
Cc: "Kinney, Michael D" <michael.d.kinney@intel.com>
Cc: "Laszlo Ersek" <lersek@redhat.com>
Cc: "Paolo Bonzini" <pbonzini@redhat.com>

git-svn-id: https://svn.code.sf.net/p/edk2/code/trunk/edk2@19067 6f19259b-4bc3-4df7-8a09-765794883524
2015-11-30 19:57:40 +00:00
Laszlo Ersek fc8c919525 Revert "Always set WP in CR0."
This reverts SVN r18960 / git commit
8e496a7abc.

The patch series had been fully reviewed on edk2-devel, but it got
committed as a single squashed patch. Revert it for now.

Link: http://thread.gmane.org/gmane.comp.bios.edk2.devel/4951
Contributed-under: TianoCore Contribution Agreement 1.0
Signed-off-by: Laszlo Ersek <lersek@redhat.com>

git-svn-id: https://svn.code.sf.net/p/edk2/code/trunk/edk2@18977 6f19259b-4bc3-4df7-8a09-765794883524
2015-11-27 12:00:26 +00:00