audk/UefiCpuPkg/PiSmmCpuDxeSmm/X64/SmmInit.nasm


;------------------------------------------------------------------------------ ;
; Copyright (c) 2016 - 2018, Intel Corporation. All rights reserved.<BR>
; This program and the accompanying materials
; are licensed and made available under the terms and conditions of the BSD License
; which accompanies this distribution. The full text of the license may be found at
; http://opensource.org/licenses/bsd-license.php.
;
; THE PROGRAM IS DISTRIBUTED UNDER THE BSD LICENSE ON AN "AS IS" BASIS,
; WITHOUT WARRANTIES OR REPRESENTATIONS OF ANY KIND, EITHER EXPRESS OR IMPLIED.
;
; Module Name:
;
; SmmInit.nasm
;
; Abstract:
;
; Functions for relocating SMBASE's for all processors
;
;-------------------------------------------------------------------------------
%include "StuffRsb.inc"
extern ASM_PFX(SmmInitHandler)
extern ASM_PFX(mRebasedFlag)
extern ASM_PFX(mSmmRelocationOriginalAddress)
global ASM_PFX(gPatchSmmCr3)
global ASM_PFX(gPatchSmmCr4)
global ASM_PFX(gPatchSmmCr0)
global ASM_PFX(gPatchSmmInitStack)
global ASM_PFX(gcSmiInitGdtr)
global ASM_PFX(gcSmmInitSize)
global ASM_PFX(gcSmmInitTemplate)
global ASM_PFX(gPatchRebasedFlagAddr32)
global ASM_PFX(gPatchSmmRelocationOriginalAddressPtr32)
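;
; The gPatch* symbols above are instruction-patch markers rather than data:
; each label marks the end of an instruction whose trailing immediate is
; rewritten at runtime by the C side via PatchInstructionX86(). A minimal
; sketch of one such patch (the "Cr3Value" variable name is hypothetical):
;
;   PatchInstructionX86 (gPatchSmmCr3, Cr3Value, 4);
;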
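;
; LONG_MODE_CS mirrors LONG_MODE_CODE_SEGMENT from
; "UefiCpuPkg/PiSmmCpuDxeSmm/PiSmmCpuDxeSmm.h": the long-mode code segment
; selector used by the far jump below.
;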
%define LONG_MODE_CS 0x38
DEFAULT REL
SECTION .text
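;
; GDT register image for SmmStartup(); the limit and base below are
; placeholders, presumably filled in by the C side before SMBASE relocation.
;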
ASM_PFX(gcSmiInitGdtr):
DW 0
DQ 0
global ASM_PFX(SmmStartup)
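;
; SmmStartup() executes in SMM, an environment very similar to real mode;
; hence BITS 16. On entry, EBP holds SmmStartup()'s offset from the SMM code
; segment base (gcSmmInitTemplate below computes it as the flat address of
; SmmStartup() minus the default SMBASE).
;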
BITS 16
ASM_PFX(SmmStartup):
mov eax, 0x80000001 ; read capability
cpuid
mov ebx, edx ; rdmsr will change edx. keep it in ebx.
mov eax, strict dword 0 ; source operand will be patched
ASM_PFX(gPatchSmmCr3):
mov cr3, eax
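; With the CS override, [cs:ebp + delta] resolves to the flat address of
; gcSmiInitGdtr regardless of where SmmStartup() actually resides.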
o32 lgdt [cs:ebp + (ASM_PFX(gcSmiInitGdtr) - ASM_PFX(SmmStartup))]
mov eax, strict dword 0 ; source operand will be patched
ASM_PFX(gPatchSmmCr4):
or ah, 2 ; enable XMM registers access
mov cr4, eax
mov ecx, 0xc0000080 ; IA32_EFER MSR
rdmsr
or ah, BIT0 ; set LME bit
test ebx, BIT20 ; check NXE capability
jz .1
or ah, BIT3 ; set NXE bit
.1:
wrmsr
mov eax, strict dword 0 ; source operand will be patched
ASM_PFX(gPatchSmmCr0):
mov cr0, eax ; enable protected mode & paging
jmp LONG_MODE_CS : dword 0 ; offset will be patched to @LongMode
@PatchLongModeOffset:
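; PiSmmCpuSmmInitFixupAddress() (at the bottom of this file) patches the
; dword offset of the far jump above to point at @LongMode; keeping the
; target as a fixed zero here avoids a relocation record in the object file.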
BITS 64
@LongMode: ; long-mode starts here
mov rsp, strict qword 0 ; source operand will be patched
ASM_PFX(gPatchSmmInitStack):
and sp, 0xfff0 ; make sure RSP is 16-byte aligned
;
; According to the X64 calling convention, XMM0~5 are volatile; we need to
; save them before calling the C function.
;
sub rsp, 0x60
movdqa [rsp], xmm0
movdqa [rsp + 0x10], xmm1
movdqa [rsp + 0x20], xmm2
movdqa [rsp + 0x30], xmm3
movdqa [rsp + 0x40], xmm4
movdqa [rsp + 0x50], xmm5
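;
; Allocate the 32-byte shadow space that the Microsoft x64 calling
; convention requires the caller to reserve for the callee.
;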
add rsp, -0x20
call ASM_PFX(SmmInitHandler)
add rsp, 0x20
;
; Restore XMM0~5 after calling the C function.
;
movdqa xmm0, [rsp]
movdqa xmm1, [rsp + 0x10]
movdqa xmm2, [rsp + 0x20]
movdqa xmm3, [rsp + 0x30]
movdqa xmm4, [rsp + 0x40]
movdqa xmm5, [rsp + 0x50]
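;
; StuffRsb64 (from StuffRsb.inc) refills the Return Stack Buffer before RSM,
; a mitigation for speculative-execution (branch-target-injection) attacks.
;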
StuffRsb64
rsm
BITS 16
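;
; gcSmmInitTemplate is the code copied to each CPU's SMI entry point at
; SMBASE + 0x8000. On first entry SMBASE is still the architectural default
; of 0x30000, so subtracting 0x30000 from SmmStartup()'s flat address
; (stored at @L1 by PiSmmCpuSmmInitFixupAddress) yields the CS-relative
; offset that the near "jmp ebp" needs.
;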
ASM_PFX(gcSmmInitTemplate):
mov ebp, [cs:@L1 - ASM_PFX(gcSmmInitTemplate) + 0x8000]
sub ebp, 0x30000
jmp ebp
@L1:
DQ 0 ; ASM_PFX(SmmStartup)
ASM_PFX(gcSmmInitSize): DW $ - ASM_PFX(gcSmmInitTemplate)
BITS 64
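;
; Signals that this CPU's SMBASE relocation is complete: sets *mRebasedFlag
; to 1, then jumps to the original address saved in
; mSmmRelocationOriginalAddress.
;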
global ASM_PFX(SmmRelocationSemaphoreComplete)
ASM_PFX(SmmRelocationSemaphoreComplete):
push rax
mov rax, [ASM_PFX(mRebasedFlag)]
mov byte [rax], 1
pop rax
jmp [ASM_PFX(mSmmRelocationOriginalAddress)]
;
; Semaphore code running in 32-bit mode
;
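; The EAX-based sequence below keeps both patchable operands in the trailing
; bytes of their instructions, as PatchInstructionX86() requires: the MOV
; immediate is patched with the address of mRebasedFlag, and the indirect
; JMP operand with the address of mSmmRelocationOriginalAddress.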
BITS 32
global ASM_PFX(SmmRelocationSemaphoreComplete32)
ASM_PFX(SmmRelocationSemaphoreComplete32):
push eax
mov eax, strict dword 0 ; source operand will be patched
ASM_PFX(gPatchRebasedFlagAddr32):
mov byte [eax], 1
pop eax
jmp dword [dword 0] ; destination will be patched
ASM_PFX(gPatchSmmRelocationOriginalAddressPtr32):
BITS 64
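;
; Patches the two values that are unknown at build time: the dword offset of
; the far jump to @LongMode (the instruction's last 6 bytes are the 4-byte
; offset followed by the 2-byte selector, hence "@PatchLongModeOffset - 6";
; the dword store leaves the selector intact), and the 8-byte SmmStartup()
; pointer at @L1.
;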
global ASM_PFX(PiSmmCpuSmmInitFixupAddress)
ASM_PFX(PiSmmCpuSmmInitFixupAddress):
lea rax, [@LongMode]
lea rcx, [@PatchLongModeOffset - 6]
mov dword [rcx], eax
lea rax, [ASM_PFX(SmmStartup)]
lea rcx, [@L1]
mov qword [rcx], rax
ret