Subject: Re: [PATCH v3 3/3] Drivers: hv: vmbus: Fix race condition with new ring_buffer_info mutex
On Thu, Mar 14, 2019 at 03:45:33PM -0700, Stephen Hemminger wrote:
> On Thu, 14 Mar 2019 13:05:15 -0700
> "Kimberly Brown" <kimbrownkd@gmail.com> wrote:
>
> > Fix a race condition that can result in a ring buffer pointer being set
> > to null while a "_show" function is reading the ring buffer's data. This
> > problem was discussed here:
> > https://lkml.org/lkml/2018/10/18/779
> >
> > To fix the race condition, add a new mutex lock to the
> > "hv_ring_buffer_info" struct. Add a new function,
> > "hv_ringbuffer_pre_init()", where a channel's inbound and outbound
> > ring_buffer_info mutex locks are initialized.
> >
> > ... snip ...
>
> Adding more locks will solve the problem but it seems like overkill.
> Why not either use a reference count or an RCU style access for the
> ring buffer?
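
[For comparison, a minimal sketch of the RCU-style access suggested above. The struct, field, and function names below are illustrative placeholders, not taken from the driver:]

#include <linux/rcupdate.h>
#include <linux/vmalloc.h>
#include <linux/kernel.h>

/* Illustrative stand-ins for the ring buffer structures. */
struct ring_sketch {
        u32 write_index;
};

struct ring_info_sketch {
        struct ring_sketch __rcu *ring_buffer;  /* cleared on teardown */
};

/* Reader side, e.g. a _show function: no sleeping lock needed. */
static ssize_t write_index_show_sketch(struct ring_info_sketch *ri, char *buf)
{
        struct ring_sketch *rb;
        ssize_t ret = -EINVAL;

        rcu_read_lock();
        rb = rcu_dereference(ri->ring_buffer);
        if (rb)
                ret = sprintf(buf, "%u\n", rb->write_index);
        rcu_read_unlock();
        return ret;
}

/* Teardown side: publish NULL, wait out readers, then free. */
static void ring_free_sketch(struct ring_info_sketch *ri)
{
        struct ring_sketch *rb = rcu_dereference_protected(ri->ring_buffer, 1);

        RCU_INIT_POINTER(ri->ring_buffer, NULL);
        synchronize_rcu();
        vfree(rb);
}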

I agree that a reference count or RCU could also solve this problem.
Using mutex locks seemed like the most straightforward solution, but
I'll certainly switch to a different approach if it's better!

Are you concerned about the extra memory required for the mutex locks,
read performance, or something else?
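
[For context, a minimal sketch of the mutex approach under discussion. The names hv_ring_buffer_info and hv_ringbuffer_pre_init() come from the patch description above; the struct layouts are trimmed stand-ins for the definitions in include/linux/hyperv.h, and the example _show function is simplified and illustrative:]

#include <linux/mutex.h>
#include <linux/kernel.h>

/* Trimmed stand-ins; the real definitions live in include/linux/hyperv.h. */
struct hv_ring_buffer {
        u32 write_index;
        /* ... */
};

struct hv_ring_buffer_info {
        struct hv_ring_buffer *ring_buffer;     /* set to NULL when the ring is freed */
        /* ... existing fields ... */
        struct mutex ring_buffer_mutex;         /* new: protects ring_buffer */
};

struct vmbus_channel {
        struct hv_ring_buffer_info outbound;
        struct hv_ring_buffer_info inbound;
        /* ... */
};

/* New function: initialize the mutexes before the ring buffers exist. */
void hv_ringbuffer_pre_init(struct vmbus_channel *channel)
{
        mutex_init(&channel->inbound.ring_buffer_mutex);
        mutex_init(&channel->outbound.ring_buffer_mutex);
}

/* A _show-style reader: take the mutex, then re-check the pointer. */
static ssize_t example_show(struct hv_ring_buffer_info *rbi, char *buf)
{
        ssize_t ret = -EINVAL;

        mutex_lock(&rbi->ring_buffer_mutex);
        if (rbi->ring_buffer)
                ret = sprintf(buf, "%u\n", rbi->ring_buffer->write_index);
        mutex_unlock(&rbi->ring_buffer_mutex);
        return ret;
}

/* Teardown would take the same mutex while clearing and freeing ring_buffer,
 * so a reader can never dereference a stale pointer. */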

Thanks,
Kim
