Subject: RE: [PATCH v3 3/3] Drivers: hv: vmbus: Fix race condition with new ring_buffer_info mutex
From: Kimberly Brown <kimbrownkd@gmail.com>  Sent: Wednesday, March 20, 2019 8:48 PM
> > > > Adding more locks will solve the problem but it seems like overkill.
> > > > Why not either use a reference count or an RCU style access for the
> > > > ring buffer?
> > >
> > > I agree that a reference count or RCU could also solve this problem.
> > > Using mutex locks seemed like the most straightforward solution, but
> > > I'll certainly switch to a different approach if it's better!
> > >
> > > Are you concerned about the extra memory required for the mutex locks,
> > > read performance, or something else?
> >
> > Locks in control path are ok, but my concern is performance of the
> > data path which puts packets in/out of rings. To keep reasonable performance,
> > no additional locking should be added in those paths.
> >
> > So if data path is using RCU, can/should the control operations also
> > use it?
>
> The data path doesn't use RCU to protect the ring buffers.

My $.02: The mutex is obtained only in the sysfs path and the "delete
ringbuffers" path, neither of which is performance- or concurrency-sensitive.
There's no change to any path that reads or writes data from/to the ring
buffers. It seems like the mutex is the most straightforward solution to
preventing sysfs from accessing the ring buffer info while the memory is
being freed as part of "delete ringbuffers".
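
To make the shape of that concrete, here is a rough sketch of the kind of
locking being described. The names below (example_channel, ring_buffer_mutex,
chan_attr_show, free_ring_buffers) are made up for illustration and are not
the identifiers from the actual patch:

#include <linux/mutex.h>
#include <linux/slab.h>
#include <linux/kernel.h>

/* Placeholder ring buffer bookkeeping, standing in for ring_buffer_info. */
struct example_ring_info {
	u32 read_index;
	u32 write_index;
};

struct example_channel {
	struct example_ring_info *ring_info;	/* freed on "delete ringbuffers" */
	struct mutex ring_buffer_mutex;		/* guards ring_info lifetime */
};

/* sysfs path: take the mutex, check the ring info still exists, then read. */
static ssize_t chan_attr_show(struct example_channel *chan, char *buf)
{
	ssize_t ret;

	mutex_lock(&chan->ring_buffer_mutex);
	if (!chan->ring_info) {
		mutex_unlock(&chan->ring_buffer_mutex);
		return -EINVAL;
	}
	ret = sprintf(buf, "%u\n", chan->ring_info->read_index);
	mutex_unlock(&chan->ring_buffer_mutex);
	return ret;
}

/* "delete ringbuffers" path: free the ring info under the same mutex. */
static void free_ring_buffers(struct example_channel *chan)
{
	mutex_lock(&chan->ring_buffer_mutex);
	kfree(chan->ring_info);
	chan->ring_info = NULL;
	mutex_unlock(&chan->ring_buffer_mutex);
}

The data path that gets/puts packets never takes ring_buffer_mutex, so the
hot paths are unchanged.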

Michael
