Subject: Re: [PATCH] mm/sparse: never partially remove memmap for early section
On Wed, Jun 24, 2020 at 09:47:37AM +0800, Baoquan He wrote:
>On 06/23/20 at 05:21pm, Dan Williams wrote:
>> On Tue, Jun 23, 2020 at 2:43 AM Wei Yang
>> <richard.weiyang@linux.alibaba.com> wrote:
>> >
>> > For early sections, we assume their memmap will never be partially
>> > removed. But the current behavior breaks this.
>>
>> Where do we assume that?
>>
>> The primary use case for this was mapping pmem that collides with
>> System-RAM in the same 128MB section. That collision will certainly be
>> depopulated on-demand depending on the state of the pmem device. So,
>> I'm not understanding the problem or the benefit of this change.
>
>I was also confused when reviewing this patch; the patch log is a little
>short and simple. From the current code, with SPARSE_VMEMMAP enabled, we
>build the memmap for the whole memory section during boot, even though
>some sections may be only partially populated. We just mark the subsection
>map for the present pages.
>
>Later, if a pmem device is mapped into such a partially populated boot
>memory section, we just fill the relevant subsection map and return
>directly from section_activate(), without building the memmap for it,
>because the memmap for the not-present part of the section is already
>there. I guess this is what Wei is trying to do to keep the behaviour
>consistent for pmem device adding, or pmem device removing and later
>adding again.
>
>Please correct me if I am wrong.

You are right here.
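
To make that concrete for anyone skimming the thread, here is a minimal,
self-contained sketch of what "mark the subsection map for present pages"
means. It is not the kernel code; the constants only mirror the x86_64
defaults (128MB sections, 2MB subsections), and mark_present() is a made-up
helper standing in for subsection_map_init()/fill_subsection_map():

	#include <stdint.h>
	#include <stdio.h>

	#define PAGES_PER_SECTION     32768UL  /* 128MB section, 4K pages */
	#define PAGES_PER_SUBSECTION  512UL    /* 2MB subsection */

	static uint64_t subsection_map;        /* one bit per 2MB subsection */

	/* Mark the subsections covering [pfn, pfn + nr_pages) as present. */
	static void mark_present(unsigned long pfn, unsigned long nr_pages)
	{
		unsigned long start = (pfn % PAGES_PER_SECTION) / PAGES_PER_SUBSECTION;
		unsigned long end = ((pfn % PAGES_PER_SECTION) + nr_pages - 1) /
				    PAGES_PER_SUBSECTION;

		for (unsigned long i = start; i <= end; i++)
			subsection_map |= 1ULL << i;
	}

	int main(void)
	{
		/* Boot RAM covers only the first 96MB of this 128MB section. */
		mark_present(0, 24576);
		printf("after boot:     %016llx\n",
		       (unsigned long long)subsection_map);

		/* A pmem namespace is later hot-added into the last 32MB. */
		mark_present(24576, 8192);
		printf("after pmem add: %016llx\n",
		       (unsigned long long)subsection_map);
		return 0;
	}

The memmap for the whole 128MB section was built at boot either way; only
the bitmap above records which 2MB chunks actually hold present memory.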

>
>To me, fixing it looks good. But a clear doc or code comment is
>necessary so that people can understand the code in less time.
>Leaving it as is doesn't cause harm either. I personally tend to choose
>the former.
>

By "the former", you mean adding a clear doc or comment?

> paging_init()
>  -> sparse_init()
>      -> sparse_init_nid()
>         {
>             ...
>             for_each_present_section_nr(pnum_begin, pnum) {
>                 ...
>                 map = __populate_section_memmap(pfn, PAGES_PER_SECTION,
>                                                 nid, NULL);
>                 ...
>             }
>         }
>  ...
>  -> zone_sizes_init()
>      -> free_area_init()
>         {
>             for_each_mem_pfn_range(i, MAX_NUMNODES, &start_pfn, &end_pfn, &nid) {
>                 subsection_map_init(start_pfn, end_pfn - start_pfn);
>             }
>         }
>
> __add_pages()
>  -> sparse_add_section()
>      -> section_activate()
>         {
>             ...
>             fill_subsection_map();
>             if (nr_pages < PAGES_PER_SECTION && early_section(ms)) <----------*********
>                 return pfn_to_page(pfn);
>             ...
>         }
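
For readers not staring at section_activate(), the marked check boils down
to the decision below. This is a deliberately stripped-down toy (the struct
and activate() here are stand-ins, not the kernel's real code); the real
function also fills the subsection map and handles vmemmap allocation and
error paths:

	#include <stdbool.h>
	#include <stdio.h>

	#define PAGES_PER_SECTION 32768UL   /* 128MB section, 4K pages */

	struct toy_mem_section {
		bool early;   /* memmap built for the whole section at boot */
	};

	static const char *activate(struct toy_mem_section *ms,
				    unsigned long nr_pages)
	{
		/* fill_subsection_map() would record the new subsections here. */

		/*
		 * A partial add into an early (boot) section reuses the memmap
		 * that sparse_init_nid() already built for the whole section.
		 */
		if (nr_pages < PAGES_PER_SECTION && ms->early)
			return "reuse boot-time memmap (early return)";

		return "populate a new memmap for this range";
	}

	int main(void)
	{
		struct toy_mem_section early = { .early = true };
		struct toy_mem_section hotplugged = { .early = false };

		/* 32MB (8192 pages) of pmem added into a boot section: */
		printf("early section:   %s\n", activate(&early, 8192));
		/* the same add into a section that was entirely hot-added: */
		printf("hotplug section: %s\n", activate(&hotplugged, 8192));
		return 0;
	}

As I read it, this is also why the removal side needs to leave the
boot-time memmap alone: a later re-add of the pmem range relies on it
still being there.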

--
Wei Yang
Help you, Help me
