Subject: Re: [PATCH 5/6] mm/page_alloc: Protect PCP lists with a spinlock
On Tue, Apr 26, 2022 at 12:24:56PM -0700, Minchan Kim wrote:
> > @@ -3450,10 +3496,19 @@ void free_unref_page(struct page *page, unsigned int order)
> > void free_unref_page_list(struct list_head *list)
> > {
> > struct page *page, *next;
> > + struct per_cpu_pages *pcp;
> > + struct zone *locked_zone;
> > unsigned long flags;
> > int batch_count = 0;
> > int migratetype;
> >
> > + /*
> > + * An empty list is possible. Check early so that the later
> > + * lru_to_page() does not potentially read garbage.
> > + */
> > + if (list_empty(list))
> > + return;
> > +
> > /* Prepare pages for freeing */
> > list_for_each_entry_safe(page, next, list, lru) {
> > unsigned long pfn = page_to_pfn(page);
> > @@ -3474,8 +3529,26 @@ void free_unref_page_list(struct list_head *list)
> > }
> > }
> >
> > + VM_BUG_ON(in_hardirq());
>
> You need to check the list_empty here again and bail out if it is.
>

You're right, every page could have failed to prepare or been isolated,
leaving the list empty by that point.
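
Something like this on top of the quoted hunk should cover it (untested
sketch, not necessarily the final fix), re-checking the list after the
prepare loop and before the lru_to_page():

	/*
	 * The prepare loop may have emptied the list entirely, e.g. if
	 * every page failed the prepare step or was freed directly
	 * because its pageblock is isolated. Bail out so the
	 * lru_to_page() below never reads from an empty list head.
	 */
	if (list_empty(list))
		return;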

--
Mel Gorman
SUSE Labs
