 
Subject: Re: [PATCH] Cgroup: Fix memory accounting scalability in shrink_page_list
On Fri, Jul 20, 2012 at 04:38:48PM +0200, Michal Hocko wrote:
> On Fri 20-07-12 16:16:25, Johannes Weiner wrote:
> > On Fri, Jul 20, 2012 at 03:53:29PM +0200, Michal Hocko wrote:
> > > On Thu 19-07-12 16:34:26, Tim Chen wrote:
> > > [...]
> > > > diff --git a/mm/vmscan.c b/mm/vmscan.c
> > > > index 33dc256..aac5672 100644
> > > > --- a/mm/vmscan.c
> > > > +++ b/mm/vmscan.c
> > > > @@ -779,6 +779,7 @@ static unsigned long shrink_page_list(struct list_head *page_list,
> > > >
> > > > cond_resched();
> > > >
> > > > + mem_cgroup_uncharge_start();
> > > > while (!list_empty(page_list)) {
> > > > enum page_references references;
> > > > struct address_space *mapping;
> > >
> > > Is this safe? We have a scheduling point few lines below. What prevents
> > > from task move while we are in the middle of the batch?
> >
> > The batch is accounted in task_struct, so moving a batching task to
> > another CPU shouldn't be a problem.
>
> But it could also move to a different group, right?

The batch-uncharging task will remember the memcg of the first page it
processes, then pile every subsequent page belonging to the same memcg
on top. It doesn't matter which group the task is in.
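For readers following along, the mechanism described above lives in the
per-task batch state and the uncharge path in mm/memcontrol.c. Here is a
minimal sketch of the idea, heavily simplified from the 3.x-era code
(the real mem_cgroup_do_uncharge() checks several more conditions, e.g.
swap accounting and whether batching is active at all), so treat the
details as illustrative rather than authoritative:

	void mem_cgroup_uncharge_start(void)
	{
		current->memcg_batch.do_batch++;
		/* Nesting is allowed; only reset the outermost batch. */
		if (current->memcg_batch.do_batch == 1) {
			current->memcg_batch.memcg = NULL;
			current->memcg_batch.nr_pages = 0;
		}
	}

	/* Simplified: coalesce uncharges per memcg inside a batch. */
	static void mem_cgroup_do_uncharge(struct mem_cgroup *memcg,
					   unsigned int nr_pages)
	{
		struct memcg_batch_info *batch = &current->memcg_batch;

		/* Remember the memcg of the first page in the batch. */
		if (!batch->memcg)
			batch->memcg = memcg;
		/* A page from a different memcg is uncharged directly. */
		if (batch->memcg != memcg)
			goto direct_uncharge;
		/* Same memcg: just count; flushed in _end(). */
		batch->nr_pages += nr_pages;
		return;
	direct_uncharge:
		res_counter_uncharge(&memcg->res, nr_pages * PAGE_SIZE);
	}

Because the batch lives entirely in task_struct and is keyed on the
memcg of the first uncharged page, neither migrating the task to
another CPU nor moving it to another cgroup invalidates a batch in
flight.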

