From: Zhaoyang Huang <zhaoyang.huang@unisoc.com>
Subject: [PATCH] mm: get the folio's refcnt before clearing PG_lru in folio_isolate_lru

The following race happens when the caller of folio_isolate_lru relies
only on the refcount held by the page cache. Move folio_get ahead of
folio_test_clear_lru to make the function more robust.

0. Thread_isolate calls folio_isolate_lru while relying only on the
refcount held by the page cache, and is preempted before folio_get.

    folio_isolate_lru
        VM_BUG_ON(!folio->refcnt)
        if (folio_test_clear_lru(folio))
            <preempted here, before folio_get>
            folio_get()
1. Thread_release calls release_pages after the folio's page cache
refcount has already been dropped, so folio_put_testzero sees the last
remaining reference.

    release_pages
        <folio has been removed from the page cache>
        folio_put_testzero(folio) == true
            <the reference taken by the collection is the only one
             left and is dropped here>
        if (folio_test_clear_lru(folio))
            <fails: PG_lru was already cleared by Thread_isolate>
            lruvec_del_folio(folio)
            <skipped, so the folio is never deleted from the LRU list>
        list_add(folio, pages_to_free);
        <LRU integrity is broken by the list_add above>
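
Purely for illustration (and explicitly not kernel code), below is a
minimal user-space C model that replays the interleaving above. The
struct obj with its refcnt/on_lru/linked fields and the release()/main()
helpers are assumptions made for this sketch: refcnt stands in for
folio_ref_count(), on_lru for PG_lru, linked for membership of the LRU
list, and the "preemption" is simulated by the call ordering in a single
thread.

#include <stdatomic.h>
#include <stdbool.h>
#include <stdio.h>

struct obj {
	atomic_int refcnt;	/* stands in for folio_ref_count() */
	atomic_bool on_lru;	/* stands in for PG_lru */
	bool linked;		/* still on the LRU list */
};

/* Models the Thread_release side: tear down when the last reference drops. */
static void release(struct obj *o)
{
	if (atomic_fetch_sub(&o->refcnt, 1) == 1) {	/* folio_put_testzero() */
		if (atomic_exchange(&o->on_lru, false))	/* LRU flag still set? */
			o->linked = false;		/* lruvec_del_folio() */
		if (o->linked)
			printf("BUG: freeing an object that is still linked on the LRU\n");
		else
			printf("ok: object unlinked before being freed\n");
	}
}

int main(void)
{
	/* One reference, the one held by the page cache in step 0 above. */
	struct obj o = { .refcnt = 1, .on_lru = true, .linked = true };

	/* Old ordering in folio_isolate_lru(): clear the LRU flag first... */
	bool was_lru = atomic_exchange(&o.on_lru, false);

	/*
	 * ...Thread_isolate is "preempted" here, the page cache reference
	 * goes away, and Thread_release sees the last refcount.
	 */
	release(&o);		/* prints the BUG line: o.linked is still true */

	/* Isolation resumes on an object that has already been torn down. */
	if (was_lru) {
		atomic_fetch_add(&o.refcnt, 1);	/* folio_get(), too late */
		o.linked = false;		/* lruvec_del_folio() */
	}
	return 0;
}

In this model, the patch corresponds to doing the atomic_fetch_add before
the atomic_exchange: release() then reads a previous refcount of 2, the
teardown path is not taken, and the isolating side finishes the unlink
while holding the last reference.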

Signed-off-by: Zhaoyang Huang <zhaoyang.huang@unisoc.com>
---
mm/vmscan.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/mm/vmscan.c b/mm/vmscan.c
index 3ef654addd44..42f15ca06e09 100644
--- a/mm/vmscan.c
+++ b/mm/vmscan.c
@@ -1731,10 +1731,10 @@ bool folio_isolate_lru(struct folio *folio)
 
 	VM_BUG_ON_FOLIO(!folio_ref_count(folio), folio);
 
+	folio_get(folio);
 	if (folio_test_clear_lru(folio)) {
 		struct lruvec *lruvec;
 
-		folio_get(folio);
 		lruvec = folio_lruvec_lock_irq(folio);
 		lruvec_del_folio(lruvec, folio);
 		unlock_page_lruvec_irq(lruvec);
--
2.25.1
