x86, kvm: use proper ASM macros for kvm_vcpu_is_preempted
author		Greg Kroah-Hartman <gregkh@linuxfoundation.org>
		Thu, 30 Jun 2022 10:19:47 +0000 (12:19 +0200)
committer	Greg Kroah-Hartman <gregkh@linuxfoundation.org>
		Sat, 2 Jul 2022 14:41:12 +0000 (16:41 +0200)
The build rightfully complains about:
arch/x86/kernel/kvm.o: warning: objtool: __raw_callee_save___kvm_vcpu_is_preempted()+0x12: missing int3 after ret

because the ASM_RET macro is not being used in kvm_vcpu_is_preempted(): the
inline asm ends with a bare "ret;", so no int3 follows the return.
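
For context, ASM_RET is the string macro that the x86 straight-line-speculation
work introduced so that every return is immediately followed by an int3; a
minimal sketch, assuming the usual definition from arch/x86/include/asm/linkage.h:

    /* Assumed definition (arch/x86/include/asm/linkage.h): the int3 traps
     * any straight-line speculation past the ret. */
    #define ASM_RET "ret; int3\n\t"

A bare "ret;" string in the inline asm emits no trailing int3, which is exactly
what objtool is flagging here.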

This was fixed up by hand in the KVM merge commit a4cfff3f0f8c ("Merge branch
'kvm-older-features' into HEAD"), which of course cannot be backported to
stable kernels, so just fix this up directly instead.

Cc: Paolo Bonzini <pbonzini@redhat.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
arch/x86/kernel/kvm.c

index 9e3af56747e8f05678a7a917657b38976c889859..eba6485a59a3926c6e72c69e5b2acaa84b61a2e7 100644
@@ -948,7 +948,7 @@ asm(
 "movq  __per_cpu_offset(,%rdi,8), %rax;"
 "cmpb  $0, " __stringify(KVM_STEAL_TIME_preempted) "+steal_time(%rax);"
 "setne %al;"
-"ret;"
+ASM_RET
 ".size __raw_callee_save___kvm_vcpu_is_preempted, .-__raw_callee_save___kvm_vcpu_is_preempted;"
 ".popsection");