Subject: Re: [PATCH 2/4] signal: Make flush_sigqueue() use free_q to release memory

On 03/22/2019 07:16 AM, Oleg Nesterov wrote:
> On 03/21, Matthew Wilcox wrote:
>> On Thu, Mar 21, 2019 at 05:45:10PM -0400, Waiman Long wrote:
>>
>>> To avoid this dire condition and reduce lock hold time of tasklist_lock,
>>> flush_sigqueue() is modified to pass in a freeing queue pointer so that
>>> the actual freeing of memory objects can be deferred until after the
>>> tasklist_lock is released and irq re-enabled.
>> I think this is a really bad solution. It looks kind of generic,
>> but isn't. It's terribly inefficient, and all it's really doing is
>> deferring the debugging code until we've re-enabled interrupts.
> Agreed.

Thanks for looking into that. As I am not knowledgeable enough about the
signal handling code path, I chose the lowest-risk approach of not
changing the code flow while deferring the actual memory deallocation
until after the tasklist_lock is released.
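
For reference, the deferred-freeing pattern described in the patch looks
roughly like the sketch below. The function and variable names here are
illustrative only, not the exact code in the patch:

	/*
	 * Sketch only: instead of calling __sigqueue_free() with
	 * tasklist_lock held and irqs disabled, move each entry onto a
	 * caller-supplied list and free everything after the lock is
	 * dropped.
	 */
	static void flush_sigqueue_deferred(struct sigpending *queue,
					    struct list_head *free_q)
	{
		struct sigqueue *q, *t;

		sigemptyset(&queue->signal);
		list_for_each_entry_safe(q, t, &queue->list, list)
			list_move_tail(&q->list, free_q);
	}

	/* Caller side, e.g. somewhere under release_task(): */
		LIST_HEAD(free_q);
		struct sigqueue *q, *t;

		write_lock_irq(&tasklist_lock);
		flush_sigqueue_deferred(&tsk->pending, &free_q);
		write_unlock_irq(&tasklist_lock);

		/* irqs enabled and no locks held: freeing is safe here */
		list_for_each_entry_safe(q, t, &free_q, list) {
			list_del(&q->list);
			__sigqueue_free(q);
		}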

>> We'd be much better off just having a list_head in the caller
>> and list_splice() the queue->list onto that caller. Then call
>> __sigqueue_free() for each signal on the queue.
> This won't work, note the comment which explains the race with sigqueue_free().
>
> Let me think about it... at least we can do something like
>
> close_the_race_with_sigqueue_free(struct sigpending *queue)
> {
> 	struct sigqueue *q, *t;
>
> 	list_for_each_entry_safe(q, t, ...) {
> 		if (q->flags & SIGQUEUE_PREALLOC)
> 			list_del_init(&q->list);
> 	}
> }
>
> This is called with ->siglock held; tasklist_lock is not needed.
>
> After that flush_sigqueue() can be called lockless in release_task().
>
> I'll try to make the patch tomorrow.
>
> Oleg.
>
I am looking forward to it.
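
If I understand the proposal correctly, the resulting flow would look
roughly like the sketch below; the exact call sites in release_task()
are my assumption:

	/*
	 * My reading of the proposed two-step scheme, sketch only:
	 * 1) detach the SIGQUEUE_PREALLOC entries under ->siglock, so a
	 *    concurrent sigqueue_free() can no longer touch the list;
	 * 2) flush the queues without taking tasklist_lock.
	 */
	spin_lock_irq(&tsk->sighand->siglock);
	close_the_race_with_sigqueue_free(&tsk->pending);
	close_the_race_with_sigqueue_free(&tsk->signal->shared_pending);
	spin_unlock_irq(&tsk->sighand->siglock);

	/* Now safe to run lockless, with irqs enabled. */
	flush_sigqueue(&tsk->pending);
	flush_sigqueue(&tsk->signal->shared_pending);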

Thanks,
Longman

