Subject: Re: [PATCH mm-unstable] mm: clarify folio_set_compound_order() zero support
From:    John Hubbard
On 12/7/22 17:42, Sidhartha Kumar wrote:
>> Wouldn't it be better to instead just create a new function for that
>> case, such as:
>>
>>      dissolve_large_folio()
>>
>
> Prior to the folio conversion, the helper function
> __destroy_compound_gigantic_page() did:
>
>     set_compound_order(page, 0);
> #ifdef CONFIG_64BIT
>     page[1].compound_nr = 0;
> #endif
>
> as part of dissolving the page. My goal for this patch was to create a
> function that would encapsulate that segment of code with a single call
> of folio_set_compound_order(folio, 0). set_compound_order() does not set
> compound_nr to 0 when 0 is passed in to the order argument, so
> explicitly setting it is required. I don't think a separate
> dissolve_large_folio() function for the hugetlb case is needed, as
> __destroy_compound_gigantic_folio() is pretty concise as it is.
>

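For reference, IIRC the pre-folio helper being described above was
something like this (paraphrasing from memory, so the exact details may
differ):

    static inline void set_compound_order(struct page *page,
                                          unsigned int order)
    {
            page[1].compound_order = order;
    #ifdef CONFIG_64BIT
            page[1].compound_nr = 1U << order;
    #endif
    }

Since 1U << 0 is 1, not 0, passing order == 0 leaves compound_nr set to
1, which is why the explicit "page[1].compound_nr = 0;" was needed on
top of the set_compound_order(page, 0) call.
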
Instead of "this is abusing function X()" comments, we should prefer
well-named functions that do something understandable. And you can get
that by noticing that folio_set_compound_order() collapses down to
nearly nothing in the special "order 0" case. So just inline that code
directly into __destroy_compound_gigantic_folio(), taking a moment to
fill in and consolidate the missing CONFIG_64BIT pieces in mm.h.
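
For reference, the mm-unstable version of that function is, as I read
it, roughly:

    static inline void folio_set_compound_order(struct folio *folio,
                                                unsigned int order)
    {
            folio->_folio_order = order;
    #ifdef CONFIG_64BIT
            folio->_folio_nr_pages = order ? 1U << order : 0;
    #endif
    }

Plug in order == 0 and all that survives is "_folio_order = 0" plus, on
64-bit, "_folio_nr_pages = 0", which is exactly what the hugetlb hunk
below open-codes.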

And now you can get rid of this cruft and the "abuse" comment, and
instead end up with two simple lines of code that are crystal clear--as
they should be in a "__destroy" function. Like this:


diff --git a/include/linux/mm.h b/include/linux/mm.h
index 105878936485..cf227ed00945 100644
--- a/include/linux/mm.h
+++ b/include/linux/mm.h
@@ -1754,6 +1754,7 @@ static inline void set_page_links(struct page *page, enum zone_type zone,
 #endif
 }
 
+#ifdef CONFIG_64BIT
 /**
  * folio_nr_pages - The number of pages in the folio.
  * @folio: The folio.
@@ -1764,13 +1765,32 @@ static inline long folio_nr_pages(struct folio *folio)
 {
         if (!folio_test_large(folio))
                 return 1;
-#ifdef CONFIG_64BIT
         return folio->_folio_nr_pages;
+}
+
+static inline void folio_set_nr_pages(struct folio *folio, long nr_pages)
+{
+        folio->_folio_nr_pages = nr_pages;
+}
 #else
+/**
+ * folio_nr_pages - The number of pages in the folio.
+ * @folio: The folio.
+ *
+ * Return: A positive power of two.
+ */
+static inline long folio_nr_pages(struct folio *folio)
+{
+        if (!folio_test_large(folio))
+                return 1;
         return 1L << folio->_folio_order;
-#endif
 }
 
+static inline void folio_set_nr_pages(struct folio *folio, long nr_pages)
+{
+}
+#endif
+
 /**
  * folio_next - Move to the next physical folio.
  * @folio: The folio we're currently operating on.
diff --git a/mm/hugetlb.c b/mm/hugetlb.c
index e3500c087893..b507a98063e6 100644
--- a/mm/hugetlb.c
+++ b/mm/hugetlb.c
@@ -1344,7 +1344,8 @@ static void __destroy_compound_gigantic_folio(struct folio *folio,
                 set_page_refcounted(p);
         }
 
-        folio_set_compound_order(folio, 0);
+        folio->_folio_order = 0;
+        folio_set_nr_pages(folio, 0);
         __folio_clear_head(folio);
 }
 

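(Note that the !CONFIG_64BIT stub of folio_set_nr_pages() above is
deliberately empty: 32-bit has no _folio_nr_pages field, and
folio_nr_pages() computes the count from _folio_order instead, so there
is nothing to store.)
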
Yes?

thanks,
--
John Hubbard
NVIDIA
