Date: 14 Dec 2020
From: Hugh Dickins
Subject: Re: [PATCH 11/11] mm: enlarge the "int nr_pages" parameter of update_lru_size()

On Mon, 7 Dec 2020, Yu Zhao wrote:

> update_lru_sizes() defines an unsigned long argument and passes it as
> nr_pages to update_lru_size(). Though this isn't causing any overflows
> I'm aware of, it's a bad idea to go through the demotion, given that we
> have recently stumbled on a related type promotion problem fixed by
> commit 2da9f6305f30 ("mm/vmscan: fix NR_ISOLATED_FILE corruption on 64-bit").
>
> Note that the underlying counters are already in long. This is another
> reason we shouldn't have the demotion.
>
> This patch enlarges all relevant parameters on the path to the final
> underlying counters:
> update_lru_size(int -> long)
>   if memcg:
>     __mod_lruvec_state(int -> long)
>       if smp:
>         __mod_node_page_state(long)
>       else:
>         __mod_node_page_state(int -> long)
>       __mod_memcg_lruvec_state(int -> long)
>         __mod_memcg_state(int -> long)
>   else:
>     __mod_lruvec_state(int -> long)
>       if smp:
>         __mod_node_page_state(long)
>       else:
>         __mod_node_page_state(int -> long)
>
>   __mod_zone_page_state(long)
>
>   if memcg:
>     mem_cgroup_update_lru_size(int -> long)
>
> Note that __mod_node_page_state() for the smp case and
> __mod_zone_page_state() already use long. So this change also fixes
> the inconsistency.
>
> Signed-off-by: Yu Zhao <yuzhao@google.com>
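
For context, here is a standalone sketch, not kernel code, of the narrowing
("demotion") the quoted message is worried about: a count held in an unsigned
long is passed through an "int nr_pages" parameter on its way to a counter
that is already a long. The function names below are made up for illustration,
and, as the message itself says, no overflow is known today; this only shows
what the patch is trying to future-proof against.

#include <stdio.h>

static long lru_size;				/* the underlying counter is already a long */

static void update_int(int nr_pages)		/* today's "int nr_pages" parameter */
{
        lru_size += nr_pages;
}

static void update_long(long nr_pages)		/* the patch's "long nr_pages" parameter */
{
        lru_size += nr_pages;
}

int main(void)
{
        unsigned long nr_taken = 3UL << 30;	/* > INT_MAX, but fits easily in a long */

        lru_size = 0;
        update_int(nr_taken);			/* narrowed to int: implementation-defined result */
        printf("int parameter:  %ld\n", lru_size);

        lru_size = 0;
        update_long(nr_taken);			/* preserved on 64-bit: prints 3221225472 */
        printf("long parameter: %ld\n", lru_size);

        return 0;
}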

NAK from me to this 11/11: I'm running happily with your 1-10 on top of
mmotm (I'll review them in a few days, but currently more concerned with
Rik's shmem huge gfp_mask), but had to leave this one out.

You think you are future-proofing with this, but it is present-breaking.

It looks plausible (though it seems random: why do these particular functions
use long but others not? why __mod_memcg_state() long, but mod_memcg_state()
int?), and I was fooled; but fortunately I was still testing with memcg
moving, for Alex's patchset.

I soon got stuck waiting in balance_dirty_pages(), with /proc/vmstat showing
nr_anon_pages 2263142822377729
nr_mapped 125095217474159
nr_file_pages 225421358649526
nr_dirty 8589934592
nr_writeback 1202590842920
nr_shmem 40501541678768
nr_anon_transparent_hugepages 51539607554

That last one (anon THPs) has nothing to do with this patch, but it illustrates
what Muchun is fixing in his 1/7 "mm: memcontrol: fix NR_ANON_THPS
accounting in charge moving".

The rest of them could be fixed by changing mem_cgroup_move_account()'s
"unsigned int nr_pages" to "long nr_pages" in this patch, but I think
it's safer just to drop the patch: the promotion of "unsigned int" to
"long" does not work as you would like it to.

I see that mm/vmscan.c contains several "unsigned int" counts of pages;
everything works fine at present so far as I know, and those appeared
to work even with your patch; but I am not confident in my test coverage,
and not confident that we can outlaw unsigned int page counts in future.

Hugh
