Subject: Re: [PATCH 2/2] mm/page_alloc: Leave IRQs enabled for per-cpu page allocations
On 11/21/22 13:01, Mel Gorman wrote:
> On Fri, Nov 18, 2022 at 03:30:57PM +0100, Vlastimil Babka wrote:
>> On 11/18/22 11:17, Mel Gorman wrote:
>
> While I think you're right, it's a bit subtle: the batch reset would need
> to move and be rechecked within the "Different zone, different pcp lock."
> block, and it would be easy to forget exactly why it's structured that way
> in the future. Rather than being a fix, it could be a standalone patch so
> it would be obvious in git blame, but I don't feel particularly strongly
> about it.
>
> For the actual fixes to the patch, how about this? It's boot-tested only,
> as I find it hard to believe it would make a difference to performance.

Looks good. Shouldn't make a difference indeed.

>
> --8<--
> mm/page_alloc: Leave IRQs enabled for per-cpu page allocations -fix
>
> As noted by Vlastimil Babka, the migratetype passed to free_one_page() may
> be stale if the PCP trylock fails, so read the migratetype early. Similarly,
> the !pcp case is generally unlikely, so explicitly annotating it with
> unlikely() makes sense.
>
> This is a fix for the mm-unstable patch
> mm-page_alloc-leave-irqs-enabled-for-per-cpu-page-allocations.patch
>
> Reported-by: Vlastimil Babka <vbabka@suse.cz>
> Signed-off-by: Mel Gorman <mgorman@techsingularity.net>

Acked-by: Vlastimil Babka <vbabka@suse.cz>

> ---
> mm/page_alloc.c | 4 ++--
> 1 file changed, 2 insertions(+), 2 deletions(-)
>
> diff --git a/mm/page_alloc.c b/mm/page_alloc.c
> index 323fec05c4c6..445066617204 100644
> --- a/mm/page_alloc.c
> +++ b/mm/page_alloc.c
> @@ -3516,6 +3516,7 @@ void free_unref_page_list(struct list_head *list)
> struct zone *zone = page_zone(page);
>
> list_del(&page->lru);
> + migratetype = get_pcppage_migratetype(page);
>
> /* Different zone, different pcp lock. */
> if (zone != locked_zone) {
> @@ -3530,7 +3531,7 @@ void free_unref_page_list(struct list_head *list)
> */
> pcp_trylock_prepare(UP_flags);
> pcp = pcp_spin_trylock(zone->per_cpu_pageset);
> - if (!pcp) {
> + if (unlikely(!pcp)) {
> pcp_trylock_finish(UP_flags);
> free_one_page(zone, page, page_to_pfn(page),
> 0, migratetype, FPI_NONE);
> @@ -3545,7 +3546,6 @@ void free_unref_page_list(struct list_head *list)
> * Non-isolated types over MIGRATE_PCPTYPES get added
> * to the MIGRATE_MOVABLE pcp list.
> */
> - migratetype = get_pcppage_migratetype(page);
> if (unlikely(migratetype >= MIGRATE_PCPTYPES))
> migratetype = MIGRATE_MOVABLE;
>
>
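For context, the per-page loop in free_unref_page_list() ends up shaped
roughly as below once the fix is applied. This is a condensed sketch
reconstructed only from the hunks above, not the verbatim kernel code: the
unlock of a previously held pcp, the locked_zone/batch bookkeeping and the
final step that commits the page to the pcp list are elided or assumed.

	list_for_each_entry_safe(page, next, list, lru) {
		struct zone *zone = page_zone(page);

		list_del(&page->lru);
		/*
		 * Read the migratetype before the trylock so the fallback
		 * free_one_page() path sees this page's type rather than a
		 * stale value from a previous iteration.
		 */
		migratetype = get_pcppage_migratetype(page);

		/* Different zone, different pcp lock. */
		if (zone != locked_zone) {
			/* ... unlock/finish any previously held pcp ... */

			pcp_trylock_prepare(UP_flags);
			pcp = pcp_spin_trylock(zone->per_cpu_pageset);
			if (unlikely(!pcp)) {
				/* Trylock failed: free this one page directly. */
				pcp_trylock_finish(UP_flags);
				free_one_page(zone, page, page_to_pfn(page),
					      0, migratetype, FPI_NONE);
				/* assumed bookkeeping, not shown in the hunks */
				locked_zone = NULL;
				continue;
			}
			locked_zone = zone;
		}

		/*
		 * Non-isolated types over MIGRATE_PCPTYPES get added
		 * to the MIGRATE_MOVABLE pcp list.
		 */
		if (unlikely(migratetype >= MIGRATE_PCPTYPES))
			migratetype = MIGRATE_MOVABLE;

		/* ... add the page to the pcp list ... */
	}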
