Date: 2014-11-03
From: Joonsoo Kim <iamjoonsoo.kim@lge.com>
Subject: Re: [PATCH for v3.18] mm/compaction: skip the range until proper target pageblock is met
On Fri, Oct 31, 2014 at 12:42:42PM +0100, Vlastimil Babka wrote:
> On 10/31/2014 08:23 AM, Joonsoo Kim wrote:
> >Commit 7d49d8868336 ("mm, compaction: reduce zone checking frequency
> >in the migration scanner") has the side-effect of changing the
> >iteration range calculation. Before that change, block_end_pfn was
> >calculated using start_pfn; now, pageblock_nr_pages is blindly added
> >to the previous value.
> >
> >This causes isolate_start_pfn to be larger than block_end_pfn when we
> >isolate a page with more than pageblock order. In this case, the
> >isolation fails due to the invalid range parameters.
> >
> >To prevent this, this patch implements skipping the range until the
> >proper target pageblock is met. Without this patch, CMA with more
> >than pageblock order always fails; with this patch, it succeeds.
>
> Well, that's a shame; this is the third fix you've sent for my
> series... And only the first was caught before going mainline. I
> guess the -rcX phase is intended for this, but how could we do better
> and catch these in -next?
> Anyway, thanks!

Yeah, I'd like to catch these in -next. :)
It'd be better to have CMA test cases in the kernel tree or in
mmtests. I have a CMA test program, but it is really ad-hoc, so I
can't submit it as-is. If time allows, I will update it and try to
submit it.

>
> >Signed-off-by: Joonsoo Kim <iamjoonsoo.kim@lge.com>
> >---
> > mm/compaction.c | 6 ++++--
> > 1 file changed, 4 insertions(+), 2 deletions(-)
> >
> >diff --git a/mm/compaction.c b/mm/compaction.c
> >index ec74cf0..212682a 100644
> >--- a/mm/compaction.c
> >+++ b/mm/compaction.c
> >@@ -472,18 +472,20 @@ isolate_freepages_range(struct compact_control *cc,
> > pfn = start_pfn;
> > block_end_pfn = ALIGN(pfn + 1, pageblock_nr_pages);
> >
> >- for (; pfn < end_pfn; pfn += isolated,
> >- block_end_pfn += pageblock_nr_pages) {
> >+ for (; pfn < end_pfn; block_end_pfn += pageblock_nr_pages) {
> > /* Protect pfn from changing by isolate_freepages_block */
> > unsigned long isolate_start_pfn = pfn;
> >
> > block_end_pfn = min(block_end_pfn, end_pfn);
> >+ if (pfn >= block_end_pfn)
> >+ continue;
>
> Without any comment, this will surely confuse anyone reading the
> code. Also, I wonder if just recalculating block_end_pfn wouldn't be
> cheaper cpu-wise (not that it matters much?) and easier to understand
> than the conditional. IIRC, backward jumps (i.e. continue) are by
> default predicted as "likely" when there's no history in the branch
> predictor cache, but this one is rather unlikely?

I also think that a comment is needed, and that recalculating
block_end_pfn would be clearer than the bare continue above. I will
rework it.
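Something along these lines, perhaps (an untested sketch of the
recalculation with a comment; not necessarily what the next version
will look like):

	if (pfn >= block_end_pfn) {
		/*
		 * pfn can pass block_end_pfn when the previous call
		 * isolated more than a pageblock at once (e.g. CMA
		 * with order > pageblock_order). Realign the scanning
		 * range to the pageblock that contains pfn.
		 */
		block_end_pfn = ALIGN(pfn + 1, pageblock_nr_pages);
		block_end_pfn = min(block_end_pfn, end_pfn);
	}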

> > if (!pageblock_pfn_to_page(pfn, block_end_pfn, cc->zone))
> > break;
> >
> > isolated = isolate_freepages_block(cc, &isolate_start_pfn,
> > block_end_pfn, &freelist, true);
> >+ pfn += isolated;
>
> Moving the "pfn += isolated" here doesn't change anything, or does
> it? Do you just find it nicer?

When skipping, we should not do 'pfn += isolated'. There are two ways
to achieve that: 1) reset isolated to 0, or 2) the above change. I
just chose 2). Maybe the next version will use approach 1); see the
sketch below.
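For reference, option 1) would look roughly like this (an untested
sketch): keep 'pfn += isolated' in the for-statement and clear
'isolated' on the skip path so a skipped iteration does not advance
pfn.

	if (pfn >= block_end_pfn) {
		/* Do not advance pfn for a skipped pageblock. */
		isolated = 0;
		continue;
	}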

Thanks.

