Subject: [PATCH] mm/damon: Make the sampling more accurate

When sampling physical addresses with DAMON to migrate pages on a tiered
memory system, I found that it mistakenly demotes some regions as cold.
Currently we choose a physical address in the region at random, but if the
corresponding page is not an online LRU page, we ignore the access status
for this sampling cycle, so the region is effectively treated as
non-accessed. As a result, a region that includes some non-LRU pages will
be treated as a cold region with high probability and may be merged with
adjacent cold regions, even though it may contain pages that are actually
accessed but that we missed.

So instead of ignoring the access status of the region when the current
sampling address does not point to a valid page, fall back to the last
valid sampling address. This makes the sampling more accurate and lets us
make a better decision.
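
Just to illustrate the fallback idea (this is not part of the patch), here
is a small standalone user-space sketch; struct region, pick_addr() and
page_is_online_lru() below are simplified stand-ins for struct damon_region,
damon_rand() and damon_get_page(), not the kernel API itself:

#include <stdbool.h>
#include <stdio.h>
#include <stdlib.h>

struct region {
	unsigned long start, end;
	unsigned long sampling_addr;
	unsigned long last_sampling_addr;	/* 0: no valid sample seen yet */
};

/* Stand-in for damon_rand(): random address in [start, end). */
static unsigned long pick_addr(unsigned long start, unsigned long end)
{
	return start + (unsigned long)rand() % (end - start);
}

/* Stand-in for damon_get_page(): pretend odd-numbered pages are not LRU. */
static bool page_is_online_lru(unsigned long addr)
{
	return ((addr >> 12) & 1) == 0;
}

/* Mirrors the shape of the patched __damon_pa_prepare_access_check(). */
static void prepare_access_check(struct region *r)
{
	r->sampling_addr = pick_addr(r->start, r->end);

	if (page_is_online_lru(r->sampling_addr)) {
		/* remember the last address that hit a valid page */
		r->last_sampling_addr = r->sampling_addr;
	} else if (r->last_sampling_addr) {
		/* fall back to the last valid sampling address */
		r->sampling_addr = r->last_sampling_addr;
	} else {
		printf("no valid sample yet, skipping this cycle\n");
		return;
	}

	printf("checking access at %#lx\n", r->sampling_addr);
}

int main(void)
{
	struct region r = { .start = 0x100000, .end = 0x200000 };

	for (int i = 0; i < 5; i++)
		prepare_access_check(&r);
	return 0;
}

When no valid sample has been seen yet, the region is simply skipped for
that cycle, as before.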

Signed-off-by: Baolin Wang <baolin.wang@linux.alibaba.com>
---
 include/linux/damon.h |  2 ++
 mm/damon/core.c       |  2 ++
 mm/damon/paddr.c      | 15 ++++++++++++---
 3 files changed, 16 insertions(+), 3 deletions(-)

diff --git a/include/linux/damon.h b/include/linux/damon.h
index f23cbfa..3311e15 100644
--- a/include/linux/damon.h
+++ b/include/linux/damon.h
@@ -38,6 +38,7 @@ struct damon_addr_range {
  * struct damon_region - Represents a monitoring target region.
  * @ar:			The address range of the region.
  * @sampling_addr:	Address of the sample for the next access check.
+ * @last_sampling_addr:	Last valid address of the sampling.
  * @nr_accesses:	Access frequency of this region.
  * @list:		List head for siblings.
  * @age:		Age of this region.
@@ -50,6 +51,7 @@ struct damon_addr_range {
 struct damon_region {
 	struct damon_addr_range ar;
 	unsigned long sampling_addr;
+	unsigned long last_sampling_addr;
 	unsigned int nr_accesses;
 	struct list_head list;
 
diff --git a/mm/damon/core.c b/mm/damon/core.c
index c1e0fed..957704f 100644
--- a/mm/damon/core.c
+++ b/mm/damon/core.c
@@ -108,6 +108,7 @@ struct damon_region *damon_new_region(unsigned long start, unsigned long end)
 	region->ar.start = start;
 	region->ar.end = end;
 	region->nr_accesses = 0;
+	region->last_sampling_addr = 0;
 	INIT_LIST_HEAD(&region->list);
 
 	region->age = 0;
@@ -848,6 +849,7 @@ static void damon_split_region_at(struct damon_ctx *ctx,
 		return;
 
 	r->ar.end = new->ar.start;
+	r->last_sampling_addr = 0;
 
 	new->age = r->age;
 	new->last_nr_accesses = r->last_nr_accesses;
diff --git a/mm/damon/paddr.c b/mm/damon/paddr.c
index 21474ae..5f15068 100644
--- a/mm/damon/paddr.c
+++ b/mm/damon/paddr.c
@@ -31,10 +31,9 @@ static bool __damon_pa_mkold(struct folio *folio, struct vm_area_struct *vma,
 	return true;
 }
 
-static void damon_pa_mkold(unsigned long paddr)
+static void damon_pa_mkold(struct page *page)
 {
 	struct folio *folio;
-	struct page *page = damon_get_page(PHYS_PFN(paddr));
 	struct rmap_walk_control rwc = {
 		.rmap_one = __damon_pa_mkold,
 		.anon_lock = folio_lock_anon_vma_read,
@@ -66,9 +65,19 @@ static void damon_pa_mkold(unsigned long paddr)
 static void __damon_pa_prepare_access_check(struct damon_ctx *ctx,
 		struct damon_region *r)
 {
+	struct page *page;
+
 	r->sampling_addr = damon_rand(r->ar.start, r->ar.end);
 
-	damon_pa_mkold(r->sampling_addr);
+	page = damon_get_page(PHYS_PFN(r->sampling_addr));
+	if (page) {
+		r->last_sampling_addr = r->sampling_addr;
+	} else if (r->last_sampling_addr) {
+		r->sampling_addr = r->last_sampling_addr;
+		page = damon_get_page(PHYS_PFN(r->last_sampling_addr));
+	}
+
+	damon_pa_mkold(page);
 }

static void damon_pa_prepare_access_checks(struct damon_ctx *ctx)
--
1.8.3.1