locking/rwsem: Disable preemption in all down_read*() and up_read() code paths
author Waiman Long <longman@redhat.com>
Thu, 26 Jan 2023 00:36:26 +0000 (19:36 -0500)
committer Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Fri, 10 Mar 2023 08:39:03 +0000 (09:39 +0100)
commit 3f5ec3c335ddbfc5b5a823e42e2b5843af9be6c0
tree c53109b5151d044f8b9f5fd15d340760f3b27889
parent ab4d47a343da0dd2378684be91415f63f10a1c0a
locking/rwsem: Disable preemption in all down_read*() and up_read() code paths

[ Upstream commit 3f5245538a1964ae186ab7e1636020a41aa63143 ]

Commit:

  91d2a812dfb9 ("locking/rwsem: Make handoff writer optimistically spin on owner")

... assumes that when the owner field is changed to NULL, the lock will
become free soon. But commit:

  48dfb5d2560d ("locking/rwsem: Disable preemption while trying for rwsem lock")

... disabled preemption when acquiring rwsem for write.

However, preemption has not yet been disabled when acquiring a read lock
on an rwsem. So a reader can add RWSEM_READER_BIAS to the count without
having set the owner field to signal the presence of a reader, and then
get preempted by an RT task. That RT task then spins in the writer
slowpath, since owner remains NULL, leading to a live lock.
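To make the race window concrete, here is a condensed sketch of the reader
fastpath, loosely following kernel/locking/rwsem.c at the time of this patch
(names such as RWSEM_READER_BIAS, RWSEM_READ_FAILED_MASK and
rwsem_set_reader_owned() come from that file; the function name and the
simplified flow are illustrative, not the literal source):

  static inline bool reader_trylock_sketch(struct rw_semaphore *sem)
  {
          long cnt = atomic_long_add_return_acquire(RWSEM_READER_BIAS,
                                                    &sem->count);

          if (!(cnt & RWSEM_READ_FAILED_MASK)) {
                  /*
                   * Preemption window: the reader bias is already in
                   * sem->count, but owner is still NULL.  A handoff
                   * writer (per 91d2a812dfb9) that preempts the reader
                   * here keeps spinning on the NULL owner, expecting the
                   * lock to be released soon; if that writer is an RT
                   * task, the reader never runs again -- a live lock.
                   */
                  rwsem_set_reader_owned(sem);
                  return true;
          }
          return false;            /* reader slowpath, not shown */
  }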

One easy way to fix this problem is to disable preemption in all the
down_read*() and up_read() code paths, as implemented in this patch and
sketched below.
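
The shape of that fix, condensed from the patched kernel/locking/rwsem.c
(debug assertions and some details are dropped, and exact signatures may
differ between trees, so treat this as a sketch rather than the exact
hunks), is to wrap both the read-acquire and the read-release paths in
preempt_disable()/preempt_enable():

  static inline int __down_read_common(struct rw_semaphore *sem, int state)
  {
          int ret = 0;
          long count;

          /* fastpath and slowpath both run with preemption disabled */
          preempt_disable();
          if (!rwsem_read_trylock(sem, &count) &&
              IS_ERR(rwsem_down_read_slowpath(sem, count, state)))
                  ret = -EINTR;
          preempt_enable();
          return ret;
  }

  static inline void __up_read(struct rw_semaphore *sem)
  {
          long tmp;

          preempt_disable();
          rwsem_clear_reader_owned(sem);
          tmp = atomic_long_add_return_release(-RWSEM_READER_BIAS,
                                               &sem->count);
          if (unlikely((tmp & (RWSEM_FLAG_WAITERS | RWSEM_FLAG_HANDOFF)) ==
                       RWSEM_FLAG_WAITERS)) {
                  clear_nonspinnable(sem);
                  rwsem_wake(sem);
          }
          preempt_enable();
  }

With preemption disabled across the whole window, a reader that has added
RWSEM_READER_BIAS can no longer be scheduled out before it sets the owner
field, so an optimistically spinning writer cannot spin indefinitely on a
NULL owner.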

Fixes: 91d2a812dfb9 ("locking/rwsem: Make handoff writer optimistically spin on owner")
Reported-by: Mukesh Ojha <quic_mojha@quicinc.com>
Suggested-by: Peter Zijlstra <peterz@infradead.org>
Signed-off-by: Waiman Long <longman@redhat.com>
Signed-off-by: Ingo Molnar <mingo@kernel.org>
Link: https://lore.kernel.org/r/20230126003628.365092-3-longman@redhat.com
Signed-off-by: Sasha Levin <sashal@kernel.org>
kernel/locking/rwsem.c