Subject: Re: [PATCH v2 3/3] fs: fsnotify: account fsnotify metadata to kmemcg
On Thu, Feb 22, 2018 at 6:48 AM, Jan Kara <jack@suse.cz> wrote:
> On Thu 22-02-18 14:49:44, Michal Hocko wrote:
>> On Tue 20-02-18 19:01:01, Shakeel Butt wrote:
>> > A lot of memory can be consumed by the events generated for huge or
>> > unlimited queues if there is either no listener or a slow one. This
>> > can cause system-level memory pressure or OOMs. So, it's better to
>> > account the fsnotify kmem caches to the memcg of the listener.
>>
>> How much memory are we talking about here?
>
> 32 bytes per event (on 64-bit), which is small, but the number of events
> is not limited in any way (if the creator uses a special flag and has
> CAP_SYS_ADMIN). In the thread [1] a guy from Alibaba wanted this feature,
> so among cloud people there is apparently some demand for a way to limit
> the memory usage of such an application...

Yes, I'm the guy from Alibaba :-) We did run into this issue
occasionally, so I proposed a patch to account fsnotify kmem allocations
to memcg, although we later fixed the bug in the user-space application.
However, such accounting still sounds useful to me.
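
For context, the pattern that bit us is a listener that opens an
unlimited queue (the CAP_SYS_ADMIN-only flag Jan mentions is
FAN_UNLIMITED_QUEUE) and then reads events slowly or not at all. A
minimal user-space sketch of such a listener; the mark mask and the
watched path are illustrative only:

/*
 * Sketch of a "bad" listener: it lifts the default queue limit
 * (16384 events) with FAN_UNLIMITED_QUEUE (CAP_SYS_ADMIN required)
 * and then never reads from the fd, so every queued event pins
 * kernel memory until the fd is closed.
 */
#include <fcntl.h>
#include <stdio.h>
#include <unistd.h>
#include <sys/fanotify.h>

int main(void)
{
	int fd = fanotify_init(FAN_CLASS_NOTIF | FAN_UNLIMITED_QUEUE,
			       O_RDONLY);
	if (fd < 0) {
		perror("fanotify_init");
		return 1;
	}

	/* Watch a whole (busy) mount so events pile up quickly. */
	if (fanotify_mark(fd, FAN_MARK_ADD | FAN_MARK_MOUNT,
			  FAN_OPEN | FAN_CLOSE_WRITE, AT_FDCWD, "/") < 0) {
		perror("fanotify_mark");
		return 1;
	}

	pause();	/* never read(fd, ...) -> the queue grows unbounded */
	return 0;
}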

>
>> > There are seven fsnotify kmem caches, and among them the allocations
>> > from dnotify_struct_cache, dnotify_mark_cache, fanotify_mark_cache and
>> > inotify_inode_mark_cachep happen in the context of a syscall from the
>> > listener. So, SLAB_ACCOUNT is enough for these caches.
>> >
>> > The objects from fsnotify_mark_connector_cachep are not accounted as
>> > they are small compared to the notification marks or events, and it is
>> > unclear whom to account the connector to since it is shared by all
>> > events attached to the inode.
>> >
>> > The allocations from the event caches happen in the context of the event
>> > producer. For such caches we will need to remote charge the allocations
>> > to the listener's memcg. Thus we save the memcg reference in the
>> > fsnotify_group structure of the listener.
>>
>> Is it typical that the listener lives in a different memcg and if yes
>> then cannot this cause one memcg to OOM/DoS the one with the listener?
>
> We have been through these discussions already in [1] back in November :).
> I can understand the wish to limit memory usage of an application using
> unlimited fanotify queues. And yes, it may mean that it will be easier for
> an attacker to get it oom-killed (currently the malicious app would drive
> the whole system oom which will presumably take a bit more effort as there
> is more memory to consume). But then I expect this is what the admin
> prefers when limiting the memory usage of a fanotify listener.
>
> I cannot tell how common it is for the producer and listener to be in
> different memcgs. From the Alibaba request it seems it happens...

For our use case, the producers and listeners were not in different
memcgs (please see the original discussion here:
https://lkml.org/lkml/2017/10/20/819). The cross-memcg accounting
problem was raised by Amir, since the accounting might be unfair if the
listener doesn't consume events, and the choice of whom to charge
becomes heuristic when the producer and listener are in different
memcgs.
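
To make the remote-charge part concrete, here is a rough sketch of the
shape such a change could take: the listener's memcg is captured when
the fsnotify group is created, and event allocations, which run in the
producer's context, are charged against it. The field name and the
memalloc_use_memcg()/memalloc_unuse_memcg() helpers here are
assumptions for illustration, not necessarily what the patch under
review does:

/*
 * Sketch only -- field and helper names are assumptions, not the
 * literal patch.
 */
struct fsnotify_group {
	/* ... existing fields ... */
	struct mem_cgroup *memcg;	/* memcg to charge events to */
};

/* Group creation, running in the listener's context: remember who
 * should pay for the events. */
group->memcg = get_mem_cgroup_from_mm(current->mm);

/* Event allocation, running in the producer's context: temporarily
 * override the charge target so the slab allocation is accounted to
 * the listener's memcg, not the producer's. */
memalloc_use_memcg(group->memcg);
event = kmem_cache_alloc(fanotify_event_cachep, GFP_KERNEL_ACCOUNT);
memalloc_unuse_memcg();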

However, we don't have a strong demand for this from our perspective
for the time being, so I didn't continue to pursue this approach.
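
For the four caches that are only ever allocated from the listener's
own syscalls, no remote charging is needed at all; flagging the cache
SLAB_ACCOUNT is enough. Sketched on the dnotify cache (the actual
patch may differ in detail, the idea is just adding the flag at the
existing cache-creation site):

/* Before: allocations from this cache are not charged to any memcg. */
dnotify_struct_cache = KMEM_CACHE(dnotify_struct, SLAB_PANIC);

/* After: every allocation is charged to the allocating task's memcg,
 * i.e. the dnotify listener issuing the fcntl(F_NOTIFY) syscall. */
dnotify_struct_cache = KMEM_CACHE(dnotify_struct,
				  SLAB_PANIC|SLAB_ACCOUNT);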

Regards,
Yang


>
> Honza
>
> [1] https://lkml.org/lkml/2017/10/27/523
> --
> Jan Kara <jack@suse.com>
> SUSE Labs, CR
