SubjectRe: [PATCH v3] mm: fix tick timer stall during deferred page init
Hi, Pasha,

On 11.03.2020 20:45, Pavel Tatashin wrote:
> On Wed, Mar 11, 2020 at 8:39 AM Shile Zhang
> <shile.zhang@linux.alibaba.com> wrote:
>>
>> When 'CONFIG_DEFERRED_STRUCT_PAGE_INIT' is set, the 'pgdatinit' kthread
>> initialises the deferred pages with local interrupts disabled. This was
>> introduced by commit 3a2d7fa8a3d5 ("mm: disable interrupts while
>> initializing deferred pages").
>>
>> On machines with NCPUS <= 2, the 'pgdatinit' kthread can be bound to
>> the boot CPU, which can cause the tick timer to stall for a long time,
>> so the system jiffies are not updated in time.
>>
>> The dmesg shows:
>>
>> [ 0.197975] node 0 initialised, 32170688 pages in 1ms
>>
>> Obviously, 1ms is unreasonable.
>>
>> Now, fix it by restoring interrupts for every 32*1024 pages (128MB)
>> initialised, giving the tick timer a chance to update the system
>> jiffies. The dmesg then looks reasonable:
>>
>> [ 1.069306] node 0 initialised, 32203456 pages in 894ms
>
> Sorry for joining late to this thread. I wonder if we could use
> sched_clock() to print these statistics. Or not print the statistics at
> all?

This won't work in all cases, since sched_clock() may fall back to a jiffies-based
implementation, which gives a wrong result when interrupts are disabled.
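
For reference, the generic weak fallback looks roughly like the following (I'm
paraphrasing kernel/sched/clock.c from memory): when an architecture does not
provide its own sched_clock(), the value is derived from jiffies and therefore
stops advancing together with the tick:

	/*
	 * Jiffies-based fallback: does not advance while the tick is
	 * stalled, e.g. with interrupts disabled on the timekeeping CPU.
	 */
	unsigned long long __weak sched_clock(void)
	{
		return (unsigned long long)(jiffies - INITIAL_JIFFIES)
						* (NSEC_PER_SEC / HZ);
	}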

And the bigger problem is not the statistics, but advancing jiffies itself. Some parallel
thread may expect jiffies to be incrementing, and a stall will be a surprise for that
other component.
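
To illustrate, a hypothetical polling loop elsewhere in the kernel (the
some_condition() helper is made up) which depends on jiffies advancing would
never hit its timeout while the tick is stalled:

	/*
	 * Hypothetical example: with jiffies frozen, time_after() never
	 * becomes true, so this loop cannot time out.
	 */
	unsigned long timeout = jiffies + msecs_to_jiffies(100);

	while (!some_condition()) {
		if (time_after(jiffies, timeout))
			return -ETIMEDOUT;
		cpu_relax();
	}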

So, this fix is more about modularity and about not introducing a new corner case.

> ==============
> diff --git a/mm/page_alloc.c b/mm/page_alloc.c
> index 3c4eb750a199..5958f599aced 100644
> --- a/mm/page_alloc.c
> +++ b/mm/page_alloc.c
> @@ -1770,7 +1770,7 @@ static int __init deferred_init_memmap(void *data)
> const struct cpumask *cpumask = cpumask_of_node(pgdat->node_id);
> unsigned long spfn = 0, epfn = 0, nr_pages = 0;
> unsigned long first_init_pfn, flags;
> - unsigned long start = jiffies;
> + unsigned long start = sched_clock();
> struct zone *zone;
> int zid;
> u64 i;
> @@ -1817,8 +1817,8 @@ static int __init deferred_init_memmap(void *data)
> /* Sanity check that the next zone really is unpopulated */
> WARN_ON(++zid < MAX_NR_ZONES && populated_zone(++zone));
>
> - pr_info("node %d initialised, %lu pages in %ums\n",
> - pgdat->node_id, nr_pages, jiffies_to_msecs(jiffies - start));
> + pr_info("node %d initialised, %lu pages in %lldns\n",
> + pgdat->node_id, nr_pages, sched_clock() - start);
>
> pgdat_init_report_one_done();
> return 0;
> ==============
>
> [ 1.245331] node 0 initialised, 10256176 pages in 373565742ns
>
> Pasha
>
>
>
>> Fixes: 3a2d7fa8a3d5 ("mm: disable interrupts while initializing deferred pages").
>>
>> Co-developed-by: Kirill Tkhai <ktkhai@virtuozzo.com>
>> Signed-off-by: Kirill Tkhai <ktkhai@virtuozzo.com>
>> Signed-off-by: Shile Zhang <shile.zhang@linux.alibaba.com>
>> ---
>> mm/page_alloc.c | 25 ++++++++++++++++++++++---
>> 1 file changed, 22 insertions(+), 3 deletions(-)
>>
>> diff --git a/mm/page_alloc.c b/mm/page_alloc.c
>> index 3c4eb750a199..a3a47845e150 100644
>> --- a/mm/page_alloc.c
>> +++ b/mm/page_alloc.c
>> @@ -1763,12 +1763,17 @@ deferred_init_maxorder(u64 *i, struct zone *zone, unsigned long *start_pfn,
>> return nr_pages;
>> }
>>
>> +/*
>> + * Release the pending interrupts for every TICK_PAGE_COUNT pages.
>> + */
>> +#define TICK_PAGE_COUNT (32 * 1024)
>> +
>> /* Initialise remaining memory on a node */
>> static int __init deferred_init_memmap(void *data)
>> {
>> pg_data_t *pgdat = data;
>> const struct cpumask *cpumask = cpumask_of_node(pgdat->node_id);
>> - unsigned long spfn = 0, epfn = 0, nr_pages = 0;
>> + unsigned long spfn = 0, epfn = 0, nr_pages = 0, prev_nr_pages = 0;
>> unsigned long first_init_pfn, flags;
>> unsigned long start = jiffies;
>> struct zone *zone;
>> @@ -1779,6 +1784,7 @@ static int __init deferred_init_memmap(void *data)
>> if (!cpumask_empty(cpumask))
>> set_cpus_allowed_ptr(current, cpumask);
>>
>> +again:
>> pgdat_resize_lock(pgdat, &flags);
>> first_init_pfn = pgdat->first_deferred_pfn;
>> if (first_init_pfn == ULONG_MAX) {
>> @@ -1790,7 +1796,6 @@ static int __init deferred_init_memmap(void *data)
>> /* Sanity check boundaries */
>> BUG_ON(pgdat->first_deferred_pfn < pgdat->node_start_pfn);
>> BUG_ON(pgdat->first_deferred_pfn > pgdat_end_pfn(pgdat));
>> - pgdat->first_deferred_pfn = ULONG_MAX;
>>
>> /* Only the highest zone is deferred so find it */
>> for (zid = 0; zid < MAX_NR_ZONES; zid++) {
>> @@ -1809,9 +1814,23 @@ static int __init deferred_init_memmap(void *data)
>> * that we can avoid introducing any issues with the buddy
>> * allocator.
>> */
>> - while (spfn < epfn)
>> + while (spfn < epfn) {
>> nr_pages += deferred_init_maxorder(&i, zone, &spfn, &epfn);
>> + /*
>> + * Release the interrupts for every TICK_PAGE_COUNT pages
>> + * (128MB) to give tick timer the chance to update the
>> + * system jiffies.
>> + */
>> + if ((nr_pages - prev_nr_pages) > TICK_PAGE_COUNT) {
>> + prev_nr_pages = nr_pages;
>> + pgdat->first_deferred_pfn = spfn;
>> + pgdat_resize_unlock(pgdat, &flags);
>> + goto again;
>> + }
>> + }
>> +
>> zone_empty:
>> + pgdat->first_deferred_pfn = ULONG_MAX;
>> pgdat_resize_unlock(pgdat, &flags);
>>
>> /* Sanity check that the next zone really is unpopulated */
>> --
>> 2.24.0.rc2
>>
