Subject: Re: [PATCH v4 14/16] locking/rwsem: Guard against making count negative
From: Waiman Long <>
Date: Thu, 18 Apr 2019 10:08:28 -0400
On 04/18/2019 09:51 AM, Peter Zijlstra wrote:
> On Sat, Apr 13, 2019 at 01:22:57PM -0400, Waiman Long wrote:
>> inline void __down_read(struct rw_semaphore *sem)
>> {
>> +	long count = atomic_long_fetch_add_acquire(RWSEM_READER_BIAS,
>> +						   &sem->count);
>> +
>> +	if (unlikely(count & RWSEM_READ_FAILED_MASK)) {
>> +		rwsem_down_read_failed(sem, count);
>> 		DEBUG_RWSEMS_WARN_ON(!is_rwsem_reader_owned(sem), sem);
>> 	} else {
>> 		rwsem_set_reader_owned(sem);
>
> *groan*, that is not provably correct. It is entirely possible to get
> enough fetch_add()s piled on top of one another to overflow regardless.
>
> Unlikely, yes, impossible, no.
>
> This makes me nervous as heck, I really don't want to ever have to
> debug something like that :-(
The number of fetch_add()s that can pile up is limited by the number of
CPUs available in the system. Yes, if you have a 32k-processor system
with all the CPUs trying to acquire the same read lock, we will have a
problem. Or, as Linus said, if tasks could be kept preempted right
after doing the fetch_add() while newly scheduled tasks keep doing
fetch_add()s on the same lock, we could overflow with fewer CPUs. How
about disabling preemption before the fetch_add() and re-enabling it
afterward (see the sketch below) to address the latter concern? I have
no solution for the first case, though.
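Something like the following is what I have in mind - just a sketch
against the __down_read() quoted above, not a tested patch. Where
exactly the preempt_enable() should sit relative to the slowpath is an
open question; the placement here is only my assumption, since
rwsem_down_read_failed() can sleep and so cannot be called with
preemption disabled:

	/*
	 * Sketch: keep preemption disabled from the fetch_add until the
	 * reader either records itself as owner or heads into the
	 * slowpath, so a freshly-biased reader cannot sit preempted
	 * with its RWSEM_READER_BIAS parked in sem->count while other
	 * tasks keep adding more biases.
	 */
	inline void __down_read(struct rw_semaphore *sem)
	{
		long count;

		preempt_disable();
		count = atomic_long_fetch_add_acquire(RWSEM_READER_BIAS,
						      &sem->count);
		if (unlikely(count & RWSEM_READ_FAILED_MASK)) {
			/* Re-enable before the slowpath; it can sleep. */
			preempt_enable();
			rwsem_down_read_failed(sem, count);
			DEBUG_RWSEMS_WARN_ON(!is_rwsem_reader_owned(sem), sem);
		} else {
			rwsem_set_reader_owned(sem);
			preempt_enable();
		}
	}

That narrows the preempted-with-bias window to the slowpath entry only;
it does not help the 32k-CPU case.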
Cheers,
Longman