Subject: [PATCH v10 06/16] mm/damon: Split regions into 4 subregions if possible
From: SeongJae Park <sjpark@amazon.de>

Suppose that DAMON has identified two regions, one very wide and cold
and the other hot. Then, a small region in the middle of the wide and
cold region becomes hot. DAMON will not be able to identify this new
region, because the adaptive regions adjustment mechanism splits each
region into only two subregions.

This commit modifies the logic to split each region into 4 subregions
if possible, so that such a problematic region can eventually be
identified.
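
To see the splitting policy in isolation, below is a minimal,
stand-alone user-space sketch of the loop this patch adds. The
damon_rand(), ALIGN_DOWN(), and MIN_REGION definitions here are
simplified stand-ins for the kernel versions (MIN_REGION's value is an
arbitrary assumption), and split_region() only prints the piece sizes
instead of calling damon_split_region_at():

#include <stdio.h>
#include <stdlib.h>

#define MIN_REGION 4096UL
#define ALIGN_DOWN(x, a) ((x) & ~((a) - 1))

/* Pick a random number in [l, r]; stand-in for the kernel's damon_rand(). */
static unsigned long damon_rand(unsigned long l, unsigned long r)
{
	return l + (unsigned long)rand() % (r - l + 1);
}

/*
 * Split a region of sz_region bytes into at most nr_subs pieces,
 * mirroring the loop added by this patch: peel off a randomly sized
 * piece (the split point is 10%-90% of the current size, aligned to
 * MIN_REGION) up to nr_subs - 1 times, as long as what remains is
 * still large enough to split.
 */
static void split_region(unsigned long sz_region, int nr_subs)
{
	unsigned long sz_sub;
	int i;

	for (i = 0; i < nr_subs - 1 && sz_region > 2 * MIN_REGION; i++) {
		sz_sub = ALIGN_DOWN(damon_rand(1, 10) * sz_region / 10,
				MIN_REGION);
		/* Do not allow a blank region. */
		if (sz_sub == 0 || sz_sub >= sz_region)
			continue;

		/* The kernel calls damon_split_region_at() here. */
		printf("split off %lu bytes\n", sz_region - sz_sub);
		sz_region = sz_sub;
	}
	printf("remaining left-most piece: %lu bytes\n", sz_region);
}

int main(void)
{
	srand(42);
	split_region(1UL << 30, 4);	/* a 1 GiB region, up to 4 pieces */
	return 0;
}

A single run shows one wide region ending up as up to four pieces of
unequal, randomly chosen sizes, which is what lets a small hot area in
the middle of a wide and cold region eventually land in a region of its
own.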

Suggested-by: Jonathan Cameron <Jonathan.Cameron@Huawei.com>
Signed-off-by: SeongJae Park <sjpark@amazon.de>
---
mm/damon.c | 44 +++++++++++++++++++++++++++-----------------
1 file changed, 27 insertions(+), 17 deletions(-)

diff --git a/mm/damon.c b/mm/damon.c
index cec946197e13..342f905927a0 100644
--- a/mm/damon.c
+++ b/mm/damon.c
@@ -650,26 +650,32 @@ static void damon_split_region_at(struct damon_ctx *ctx,
damon_insert_region(new, r, damon_next_region(r));
}

-/* Split every region in the given task into two randomly-sized regions */
-static void damon_split_regions_of(struct damon_ctx *ctx, struct damon_task *t)
+/* Split every region in the given task into 'nr_subs' regions */
+static void damon_split_regions_of(struct damon_ctx *ctx,
+ struct damon_task *t, int nr_subs)
{
struct damon_region *r, *next;
- unsigned long sz_orig_region, sz_left_region;
+ unsigned long sz_region, sz_sub = 0;
+ int i;

damon_for_each_region_safe(r, next, t) {
- sz_orig_region = r->vm_end - r->vm_start;
-
- /*
- * Randomly select size of left sub-region to be at least
- * 10 percent and at most 90% of original region
- */
- sz_left_region = ALIGN_DOWN(damon_rand(1, 10) * sz_orig_region
- / 10, MIN_REGION);
- /* Do not allow blank region */
- if (sz_left_region == 0 || sz_left_region >= sz_orig_region)
- continue;
-
- damon_split_region_at(ctx, r, sz_left_region);
+ sz_region = r->vm_end - r->vm_start;
+
+ for (i = 0; i < nr_subs - 1 &&
+ sz_region > 2 * MIN_REGION; i++) {
+ /*
+ * Randomly select size of left sub-region to be at
+ * least 10 percent and at most 90% of original region
+ */
+ sz_sub = ALIGN_DOWN(damon_rand(1, 10) *
+ sz_region / 10, MIN_REGION);
+ /* Do not allow blank region */
+ if (sz_sub == 0 || sz_sub >= sz_region)
+ continue;
+
+ damon_split_region_at(ctx, r, sz_sub);
+ sz_region = sz_sub;
+ }
}
}

@@ -687,14 +693,18 @@ static void kdamond_split_regions(struct damon_ctx *ctx)
{
struct damon_task *t;
unsigned int nr_regions = 0;
+ int nr_subregions = 2;

damon_for_each_task(ctx, t)
nr_regions += nr_damon_regions(t);
if (nr_regions > ctx->max_nr_regions / 2)
return;

+ if (nr_regions < ctx->max_nr_regions / 4)
+ nr_subregions = 4;
+
damon_for_each_task(ctx, t)
- damon_split_regions_of(ctx, t);
+ damon_split_regions_of(ctx, t, nr_subregions);
}

/*
--
2.17.1