From: Linus Torvalds
Date: Tue, 24 May 2022 17:18:23 +0000 (-0700)
Subject: Merge tag 'locking-core-2022-05-23' of git://git.kernel.org/pub/scm/linux/kernel...
X-Git-Url: http://git.maquefel.me/?a=commitdiff_plain;h=2319be135672f6e45aa937bceaae6c2668c7867c;p=linux.git

Merge tag 'locking-core-2022-05-23' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip

Pull locking updates from Ingo Molnar:

 - rwsem cleanups & optimizations/fixes:
    - Conditionally wake waiters in reader/writer slowpaths
    - Always try to wake waiters in out_nolock path

 - Add try_cmpxchg64() implementation, with arch optimizations - and use
   it to micro-optimize sched_clock_{local,remote}()

 - Various force-inlining fixes to address objdump instrumentation-check
   warnings

 - Add lock contention tracepoints:

    lock:contention_begin
    lock:contention_end

 - Misc smaller fixes & cleanups

* tag 'locking-core-2022-05-23' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip:
  sched/clock: Use try_cmpxchg64 in sched_clock_{local,remote}
  locking/atomic/x86: Introduce arch_try_cmpxchg64
  locking/atomic: Add generic try_cmpxchg64 support
  futex: Remove a PREEMPT_RT_FULL reference.
  locking/qrwlock: Change "queue rwlock" to "queued rwlock"
  lockdep: Delete local_irq_enable_in_hardirq()
  locking/mutex: Make contention tracepoints more consistent wrt adaptive spinning
  locking: Apply contention tracepoints in the slow path
  locking: Add lock contention tracepoints
  locking/rwsem: Always try to wake waiters in out_nolock path
  locking/rwsem: Conditionally wake waiters in reader/writer slowpaths
  locking/rwsem: No need to check for handoff bit if wait queue empty
  lockdep: Fix -Wunused-parameter for _THIS_IP_
  x86/mm: Force-inline __phys_addr_nodebug()
  x86/kvm/svm: Force-inline GHCB accessors
  task_stack, x86/cea: Force-inline stack helpers
---

2319be135672f6e45aa937bceaae6c2668c7867c
diff --cc arch/x86/include/asm/svm.h
index ec623e7da33d5,e255039b36b64..1b07fba11704e
--- a/arch/x86/include/asm/svm.h
+++ b/arch/x86/include/asm/svm.h
@@@ -566,10 -441,10 +566,10 @@@ struct vmcb
  /* GHCB Accessor functions */

  #define GHCB_BITMAP_IDX(field)						\
- 	(offsetof(struct vmcb_save_area, field) / sizeof(u64))
+ 	(offsetof(struct ghcb_save_area, field) / sizeof(u64))

  #define DEFINE_GHCB_ACCESSORS(field)						\
- 	static inline bool ghcb_##field##_is_valid(const struct ghcb *ghcb)	\
+ 	static __always_inline bool ghcb_##field##_is_valid(const struct ghcb *ghcb) \
  	{									\
  		return test_bit(GHCB_BITMAP_IDX(field),			\
  				(unsigned long *)&ghcb->save.valid_bitmap);	\
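
A rough sketch of the retry-loop pattern that the try_cmpxchg64() additions in
the pull summary above enable. The helper and its names below are illustrative,
not the actual sched_clock_{local,remote}() code, which does more work per
iteration; only the try_cmpxchg64() call itself reflects the new API.

	#include <linux/atomic.h>
	#include <linux/types.h>

	/*
	 * Advance *clock to 'now', but never move it backwards.  On failure,
	 * try_cmpxchg64() refreshes 'old' with the current value of *clock,
	 * so the loop needs no explicit re-read.
	 */
	static u64 clock_advance_to(u64 *clock, u64 now)
	{
		u64 old = *clock;

		do {
			if (now <= old)
				return old;	/* another CPU already moved it forward */
		} while (!try_cmpxchg64(clock, &old, now));

		return now;
	}

Compared with an open-coded cmpxchg64() loop, this drops the manual comparison
of the returned value; on x86 the new arch_try_cmpxchg64() can use the
instruction's flags output instead of a separate compare.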
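
The two lock contention tracepoints listed in the pull summary bracket lock
slowpaths roughly as sketched below. The lock type and slowpath function are
made-up names; the sketch assumes the trace_contention_begin()/
trace_contention_end() helpers and LCB_F_* flags generated by
include/trace/events/lock.h.

	#include <trace/events/lock.h>

	struct my_lock { int dummy; };	/* illustrative lock type */

	static void my_lock_slowpath(struct my_lock *lock)
	{
		/* flags describe how we wait, e.g. spinning vs. sleeping */
		trace_contention_begin(lock, LCB_F_SPIN);

		/* ... spin or block until the lock is acquired ... */

		trace_contention_end(lock, 0);
	}

From user space the events can then be recorded with, for example,
perf record -e lock:contention_begin -e lock:contention_end -a.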