Subject: Re: [PATCH 2/2] pipe: make pipe user buffer limit checks more precise
On 08/16/2016 10:21 PM, Michael Kerrisk (man-pages) wrote:
>>> @@ -1132,8 +1136,8 @@ long pipe_fcntl(struct file *file, unsigned int cmd, unsigned long arg)
>>>  		if (!capable(CAP_SYS_RESOURCE) && size > pipe_max_size) {
>>>  			ret = -EPERM;
>>>  			goto out;
>>> -		} else if ((too_many_pipe_buffers_hard(pipe->user) ||
>>> -			    too_many_pipe_buffers_soft(pipe->user)) &&
>>> +		} else if ((too_many_pipe_buffers_hard(pipe->user, nr_pages) ||
>>> +			    too_many_pipe_buffers_soft(pipe->user, nr_pages)) &&
>>>  			   !capable(CAP_SYS_RESOURCE) &&
>>>  			   !capable(CAP_SYS_ADMIN)) {
>>>  			ret = -EPERM;
>>>
>>
>> Isn't there also a race where two or more concurrent pipe()/fcntl()
>> calls can together push us over the limits before the accounting is done?
>
> I guess there is!
>
>> I think there really ought to be a check after doing the accounting if
>> we want to be meticulous here.
>
> Let me confirm what I understand from your comment: because of the race,
> a user could subvert the checks and allocate an arbitrary amount
> of kernel memory for pipes. Right?
>
> I'm not sure what you mean by "a check after doing the accounting". Isn't the
> only solution here some kind of lock around the check+accounting steps?

Instead of doing atomic_long_read() in the check plus atomic_long_add() for
the accounting, we could do a single speculative atomic_long_add_return() and
then, if the result goes above the limit, lower the counter again with
atomic_long_sub() when aborting the operation (if the result stays within the
limit we don't need to do anything).
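
To make that concrete, here is a minimal userspace sketch of the
speculative-charge pattern, using C11 atomics as stand-ins for the kernel's
atomic_long_* helpers; the counter, limit and function names below are
illustrative, not the actual fs/pipe.c code:

#include <stdatomic.h>
#include <stdbool.h>
#include <stdio.h>

/* Stand-ins for user_struct->pipe_bufs and the soft limit (in pages). */
static atomic_long pipe_bufs = 0;
static const long pipe_soft_limit = 16384;

/*
 * Charge nr_pages speculatively: add first, then check the new total.
 * C11's atomic_fetch_add() returns the old value, so add nr_pages to get
 * what atomic_long_add_return() would return.  On failure, roll the
 * charge back so other callers are not billed for pages we never
 * actually allocated.
 */
static bool charge_pipe_pages(long nr_pages)
{
	long new_total = atomic_fetch_add(&pipe_bufs, nr_pages) + nr_pages;

	if (new_total > pipe_soft_limit) {
		atomic_fetch_sub(&pipe_bufs, nr_pages);	/* abort: undo the charge */
		return false;				/* -EPERM in the kernel */
	}
	return true;
}

static void uncharge_pipe_pages(long nr_pages)
{
	atomic_fetch_sub(&pipe_bufs, nr_pages);
}

int main(void)
{
	if (charge_pipe_pages(16))
		printf("charged 16 pages, total now %ld\n",
		       atomic_load(&pipe_bufs));

	if (!charge_pipe_pages(1L << 20))
		printf("large request rejected, total still %ld\n",
		       atomic_load(&pipe_bufs));

	uncharge_pipe_pages(16);
	return 0;
}

Because each caller's check is made against the value returned by its own
atomic add, two racing callers can no longer both read a pre-charge total
below the limit and together push past it; the rollback on failure keeps
the counter accurate for everyone else.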


Vegard
