Subject: Re: [GIT PULL] Please pull proc and exec work for 5.7-rc1


On 4/4/20 8:34 AM, Bernd Edlinger wrote:
>
>
> On 4/4/20 4:28 AM, Linus Torvalds wrote:
>> On Fri, Apr 3, 2020 at 7:02 PM Waiman Long <longman@redhat.com> wrote:
>>>
>>> So in term of priority, my current thinking is
>>>
>>> upgrading unfair reader > unfair reader > reader/writer
>>>
>>> A higher priority locker will block other lockers from acquiring the lock.
>>
>> An alternative option might be to have readers normally be 100% normal
>> (ie with fairness wrt writers), and not really introduce any special
>> "unfair reader" lock.
>>
>> Instead, all the unfairness would come into play only when the special
>> case - execve() - does its special "lock for reading with intent to
>> upgrade".
>>
>> But when it enters that kind of "intent to upgrade" lock state, it
>> would not only block all subsequent writers, it would also guarantee
>> that all other readers can continue to go.
>>
>> So then the new rwsem operations would be
>>
>> - read_with_write_intent_lock_interruptible()
>>
>> This is the beginning of "execve()", and waits for all writers to
>> exit, and puts the lock into "all readers can go" mode.
>>
>> You could think of it as a "I'm queuing myself for a write lock,
>> but I'm allowing readers to go ahead" state.
>>
>> - read_lock_to_write_upgrade()
>>
>> This is the "now this turns into a regular write lock". It needs to
>> wait for all other readers to exit, of course.
>>
>> - read_with_write_intent_unlock()
>>
>> This is the "I'm unqueuing myself, I aborted and will not become a
>> write lock after all" operation.
>>
>> NOTE! In this model, there may be multiple threads that do that
>> initial queuing thing. We only guarantee that only one of them will
>> get to the actual write lock stage, and the others will abort before
>> that happens.
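
To make that concrete, the execve() side could then look roughly like
this. This is a sketch only: the operation names are the ones you
propose above, but the cred_guard_rwsem field, the some_step_fails
placeholder, and everything else around them are made up:

	struct rw_semaphore *sem = &current->signal->cred_guard_rwsem;
	int err;

	/*
	 * Beginning of execve(): queue for the write lock, but let
	 * all readers keep going while we wait for the writers to
	 * leave.
	 */
	err = read_with_write_intent_lock_interruptible(sem);
	if (err)
		return err;	/* interrupted while waiting */

	if (some_step_fails) {	/* placeholder for any pre-commit failure */
		/* Abort: unqueue, we never become the write lock. */
		read_with_write_intent_unlock(sem);
		return -ENOEXEC;
	}

	/*
	 * Point of no return: wait for the remaining readers to
	 * leave and turn this into a regular write lock.  Per the
	 * NOTE above, only one of the queued threads gets here.
	 */
	read_lock_to_write_upgrade(sem);
	/* ... de_thread() etc. under the write lock ... */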
>
> One of the problems that adds to the current situation is that the
> cred_guard_mutex is sometimes taken killable, so the waiter can be
> killed by de_thread. But in other places the cred_guard_mutex is
> taken unkillably, so it can neither be acquired nor can the waiter
> be killed -> deadlock.
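
To spell out the pattern I mean here (these are not the actual call
sites, just an illustration):

	int err;

	/*
	 * Site A: taken killable.  de_thread() can make progress
	 * by killing the waiter.
	 */
	err = mutex_lock_killable(&current->signal->cred_guard_mutex);
	if (err)
		return err;

	/*
	 * Site B: taken unkillably.  If the current holder
	 * (indirectly) waits for this task to die, neither side
	 * can make progress -> deadlock.
	 */
	mutex_lock(&current->signal->cred_guard_mutex);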
>
>
> But Fear Not!
>
> Overall we are pretty much in a good position to defeat the
> enemy now, once and for all.
>
> - We have my ugly-crazy patch that just works.
>
> - We will have Eric's patch that is even better.
>
> - We can try to put something together with creative new rw-type semaphores.
>
> - We can merge ideas from one of the patches to another.
>
>
> So it is impossible that we will not succeed in fixing it this time :-)
>

BTW, there is one other independent thing that came to my attention when
I first tried to fix the ptrace deadlock from user space.

In order to break the deadlock from user space, the strace program would
have to be rewritten to be multi-threaded. I tried that, but the problem
was that an event from the tracee can be received either in the main
thread, in a signal handler, or in another thread. I implemented both of
the latter possibilities; see my strace patches which I pointed out
previously here.

The signal handler approach failed completely, and the second-thread
approach did not fail completely but was just plain insane.

What really makes it impossible to write a multi-threaded strace program
is that *only* the thread that did the PTRACE_ATTACH can use all the
other ptrace APIs, but for a multi-threaded strace, any thread should be
able to call ptrace APIs as long as it is in the same process.
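
A minimal illustration of the restriction (a sketch: error handling is
trimmed, and PTRACE_PEEKUSER at offset 0 is just an arbitrary request):

#include <errno.h>
#include <pthread.h>
#include <signal.h>
#include <stdio.h>
#include <sys/ptrace.h>
#include <sys/wait.h>
#include <unistd.h>

static pid_t tracee;

static void *other_thread(void *arg)
{
	/*
	 * Same process, but a different thread than the tracer:
	 * the kernel rejects this with ESRCH.
	 */
	errno = 0;
	ptrace(PTRACE_PEEKUSER, tracee, 0, 0);
	if (errno)
		perror("ptrace from second thread");
	return NULL;
}

int main(void)
{
	pthread_t t;

	tracee = fork();
	if (tracee == 0) {
		ptrace(PTRACE_TRACEME, 0, 0, 0);
		raise(SIGSTOP);		/* stop so the parent can trace us */
		_exit(0);
	}
	waitpid(tracee, NULL, 0);	/* wait until the tracee stops */

	/* The main thread is the tracer; this request succeeds. */
	errno = 0;
	ptrace(PTRACE_PEEKUSER, tracee, 0, 0);
	printf("from tracer thread: %s\n", errno ? "failed" : "ok");

	pthread_create(&t, NULL, other_thread, NULL);
	pthread_join(t, NULL);

	ptrace(PTRACE_DETACH, tracee, 0, 0);
	return 0;
}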

I don't know if that is really hard to achieve, but it seems like something
that would allow user space much more flexibility.

What do you think?


Thanks
Bernd.


>
> Bernd.
>
>>
>> If that is a more natural state machine, then that should work fine
>> too. And it has some advantages, in that it keeps the readers normally
>> fair, and only turns them unfair when we get to that special
>> read-for-write stage.
>>
>> But whatever is most natural for the rwsem code. Entirely up to you.
>>
>> Linus
>>
