Subject: Re: [PATCH] Cgroup: Fix memory accounting scalability in shrink_page_list
On Fri, Jul 20, 2012 at 03:53:29PM +0200, Michal Hocko wrote:
> On Thu 19-07-12 16:34:26, Tim Chen wrote:
> [...]
> > diff --git a/mm/vmscan.c b/mm/vmscan.c
> > index 33dc256..aac5672 100644
> > --- a/mm/vmscan.c
> > +++ b/mm/vmscan.c
> > @@ -779,6 +779,7 @@ static unsigned long shrink_page_list(struct list_head *page_list,
> >
> > cond_resched();
> >
> > + mem_cgroup_uncharge_start();
> > while (!list_empty(page_list)) {
> > enum page_references references;
> > struct address_space *mapping;
>
> Is this safe? We have a scheduling point a few lines below. What
> prevents the task from moving while we are in the middle of the batch?

The batch is accounted in task_struct, so moving a batching task to
another CPU shouldn't be a problem.
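
(For reference, the batching state hangs off current, roughly like the
sketch below. This is simplified from memory of that era's
include/linux/sched.h and mm/memcontrol.c, so take the exact field names
with a grain of salt; the point is only that everything lives in
task_struct rather than in per-CPU data, so preemption or a CPU
migration just carries the half-finished batch along with the task.)

	/* simplified sketch; field names approximate */
	struct memcg_batch_info {
		int do_batch;			/* batching depth, nestable */
		struct mem_cgroup *memcg;	/* memcg the batch belongs to */
		unsigned long nr_pages;		/* pages uncharged so far */
	};

	/* in struct task_struct:  struct memcg_batch_info memcg_batch; */

	void mem_cgroup_uncharge_start(void)
	{
		current->memcg_batch.do_batch++;
		/* only reset on the outermost start; nesting is allowed */
		if (current->memcg_batch.do_batch == 1) {
			current->memcg_batch.memcg = NULL;
			current->memcg_batch.nr_pages = 0;
		}
	}

	void mem_cgroup_uncharge_end(void)
	{
		struct memcg_batch_info *batch = &current->memcg_batch;

		if (!batch->do_batch)
			return;
		if (--batch->do_batch)		/* still nested, keep batching */
			return;
		if (batch->memcg && batch->nr_pages)
			res_counter_uncharge(&batch->memcg->res,
					     batch->nr_pages * PAGE_SIZE);
		batch->memcg = NULL;
	}

The accumulated uncharges are only pushed to the res_counter when the
outermost mem_cgroup_uncharge_end() runs, which is where the reduction
in res_counter traffic, and hence the scalability win the patch is
after, comes from.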

