Subject: Re: [Xen-devel] [PATCH v12 2/5] xenbus/backend: Protect xenbus callback with lock
From: Jürgen Groß <jgross@suse.com>
Date: Wed, 18 Dec 2019
On 18.12.19 13:42, SeongJae Park wrote:
> On Wed, 18 Dec 2019 13:27:37 +0100 "Jürgen Groß" <jgross@suse.com> wrote:
>
>> On 18.12.19 11:42, SeongJae Park wrote:
>>> From: SeongJae Park <sjpark@amazon.de>
>>>
>>> The 'reclaim_memory' callback can race with driver code, as the
>>> callback may be invoked from any context in which memory pressure is
>>> detected. To deal with this, this commit adds a spinlock to
>>> 'xenbus_device'. Whenever the 'reclaim_memory' callback is invoked,
>>> the lock of the device passed to the callback as its argument is
>>> held. Drivers registering a 'reclaim_memory' callback should
>>> therefore use this lock themselves to protect any data that might
>>> race with the callback.
>>
>> Any reason you don't take the lock around the .probe() and .remove()
>> calls of the backend (xenbus_dev_probe() and xenbus_dev_remove())?
>> That would eliminate the need to do it in each backend separately.
>
> First of all, I would like to keep the critical section as small as
> possible. In a small test, I could see memory pressure increasing
> slightly as the critical section became wider. Also, some drivers
> might share the data their 'reclaim_memory' callback touches with
> other functions. I think only the driver owners can know what data is
> shared and what the minimum critical section protecting it is.

But this kind of serialization can still be added on top.

And with the trylock in the reclaim path I believe you can even avoid
the irq variants of the spinlock. But I might be wrong, so you should
try that with lockdep enabled. If it works, there is no harm in making
the critical section larger, as memory allocations will work as before.
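
For example (a sketch only, reusing the assumed 'reclaim_lock' from the
sketch above; untested):

/* Variant using trylock in the reclaim path. */
static void xenbus_dev_reclaim_memory(struct xenbus_device *dev)
{
	struct xenbus_driver *drv;

	if (!dev->dev.driver)
		return;
	drv = to_xenbus_driver(dev->dev.driver);

	/*
	 * Skip the device if its lock is currently held. Reclaim then
	 * never blocks on this lock, so drivers can take it with plain
	 * spin_lock() instead of the irq-disabling variants.
	 */
	if (!spin_trylock(&dev->reclaim_lock))
		return;
	if (drv->reclaim_memory)
		drv->reclaim_memory(dev);
	spin_unlock(&dev->reclaim_lock);
}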


Juergen
