From: Thomas Gleixner
Date: Thu, 23 Sep 2021 16:54:37 +0000 (+0200)
Subject: sched: Make cond_resched_*lock() variants consistent vs. might_sleep()
X-Git-Url: http://git.maquefel.me/?a=commitdiff_plain;h=7b5ff4bb9adc53cfbf7ac9ba7820ccf0cd7c070a;p=linux.git

sched: Make cond_resched_*lock() variants consistent vs. might_sleep()

Commit 3427445afd26 ("sched: Exclude cond_resched() from nested sleep
test") removed the task state check of __might_sleep() for
cond_resched_lock() because cond_resched_lock() is not a voluntary
scheduling point which blocks. It's a preemption point which requires the
lock holder to release the spin lock.

The same rationale applies to cond_resched_rwlock_read/write(), but those
were not touched.

Make it consistent and use the non-state checking __might_resched() there
as well.

Signed-off-by: Thomas Gleixner
Signed-off-by: Peter Zijlstra (Intel)
Link: https://lkml.kernel.org/r/20210923165357.991262778@linutronix.de
---

diff --git a/include/linux/sched.h b/include/linux/sched.h
index b38f002334d59..7a989f2487f88 100644
--- a/include/linux/sched.h
+++ b/include/linux/sched.h
@@ -2051,14 +2051,14 @@ extern int __cond_resched_rwlock_write(rwlock_t *lock);
 	__cond_resched_lock(lock);				\
 })
 
-#define cond_resched_rwlock_read(lock) ({			\
-	__might_sleep(__FILE__, __LINE__, PREEMPT_LOCK_OFFSET);	\
-	__cond_resched_rwlock_read(lock);			\
+#define cond_resched_rwlock_read(lock) ({				\
+	__might_resched(__FILE__, __LINE__, PREEMPT_LOCK_OFFSET);	\
+	__cond_resched_rwlock_read(lock);				\
 })
 
-#define cond_resched_rwlock_write(lock) ({			\
-	__might_sleep(__FILE__, __LINE__, PREEMPT_LOCK_OFFSET);	\
-	__cond_resched_rwlock_write(lock);			\
+#define cond_resched_rwlock_write(lock) ({				\
+	__might_resched(__FILE__, __LINE__, PREEMPT_LOCK_OFFSET);	\
+	__cond_resched_rwlock_write(lock);				\
 })
 
 static inline void cond_resched_rcu(void)
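
For context, a minimal usage sketch (not part of the patch) of the pattern
these macros annotate: a long walk under a rwlock held for reading, with
cond_resched_rwlock_read() as an explicit preemption point. If a reschedule
is pending, the helper drops the lock, schedules, and re-acquires it before
continuing; the task never sets a sleeping state here, which is why the
non-state-checking __might_resched() annotation fits. The lock, struct and
helper names below are hypothetical, and the sketch assumes the array and
its length stay valid while the lock is dropped.

#include <linux/spinlock.h>
#include <linux/sched.h>

static DEFINE_RWLOCK(example_lock);		/* hypothetical lock */

static void example_scan(struct example_item *items, unsigned int nr)
{
	unsigned int i;

	read_lock(&example_lock);
	for (i = 0; i < nr; i++) {
		example_process(&items[i]);	/* hypothetical helper */
		/*
		 * Preemption point, not a sleep: may drop example_lock,
		 * schedule, and take it for reading again.
		 */
		cond_resched_rwlock_read(&example_lock);
	}
	read_unlock(&example_lock);
}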