Date: Thu, 28 Mar 2019
From: Thomas Gleixner
Subject: Re: [PATCH 4/6] x86, mm: make split_mem_range() more easy to read
Wei,

On Thu, 28 Mar 2019, Wei Yang wrote:

Please trim your replies. It's annoying if one has to search for the content
in the middle of a large useless quote.

> On Sun, Mar 24, 2019 at 03:29:04PM +0100, Thomas Gleixner wrote:
> >Wei,
> >-static int __meminit split_mem_range(struct map_range *mr, int nr_range,
> >-				     unsigned long start,
> >-				     unsigned long end)
> >-{
> >-	unsigned long start_pfn, end_pfn, limit_pfn;
> >-	unsigned long pfn;
> >-	int i;
> >+	if (!IS_ALIGNED(mr->end, mi->size)) {
> >+		/* Try to fit as much as possible */
> >+		len = round_down(mr->end - mr->start, mi->size);
> >+		if (!len)
> >+			return false;
> >+		mr->end = mr->start + len;
> >+	}
> >
> >-	limit_pfn = PFN_DOWN(end);
> >+	/* Store the effective page size mask */
> >+	mr->page_size_mask = mi->mask;
>
> I don't get the point here. Why store the effective page size mask just for
> the "middle" range?
> 
> The original behavior will set the "head" and "tail" ranges with a
> lower-level page size mask.

What has this to do with the middle range? Nothing. This is the path where
the current map level (1g, 2m, 4k) is applied from mr->start to
mr->end. That's the effective mapping of this map_range entry.
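
For illustration, here is a minimal user-space sketch of the clamping step
quoted above. IS_ALIGNED()/round_down() are simplified stand-ins for the
kernel macros, and struct map_info, clamp_to_level() and the example mask
value are illustrative assumptions, not the actual kernel definitions:

/* Minimal user-space sketch -- not the kernel code. */
#include <stdbool.h>
#include <stdio.h>

/* Simplified stand-ins for the kernel macros (power-of-two sizes only). */
#define IS_ALIGNED(x, a)	(((x) & ((a) - 1)) == 0)
#define round_down(x, a)	((x) & ~((a) - 1))

/* Illustrative structures, not the real kernel definitions. */
struct map_info {
	unsigned long size;		/* page size of this level: 4K, 2M, 1G */
	unsigned long mask;		/* page_size_mask bit(s) for this level */
};

struct map_range {
	unsigned long start;
	unsigned long end;
	unsigned long page_size_mask;
};

/*
 * Clamp mr->end so that [mr->start, mr->end) covers only whole pages of
 * mi->size (mr->start is assumed to be aligned already), then record the
 * page size this entry is effectively mapped with.  Returns false if not
 * even one full page of this size fits.
 */
static bool clamp_to_level(struct map_range *mr, const struct map_info *mi)
{
	unsigned long len;

	if (!IS_ALIGNED(mr->end, mi->size)) {
		/* Try to fit as much as possible */
		len = round_down(mr->end - mr->start, mi->size);
		if (!len)
			return false;
		mr->end = mr->start + len;
	}

	/* Store the effective page size mask */
	mr->page_size_mask = mi->mask;
	return true;
}

int main(void)
{
	/* A 2M-level entry whose end is not 2M aligned. */
	struct map_info mi = { .size = 1UL << 21, .mask = 0x2 /* example bit */ };
	struct map_range mr = { .start = 1UL << 21, .end = (1UL << 22) + 0x1000 };

	if (clamp_to_level(&mr, &mi))
		printf("mapped [%#lx, %#lx) with mask %#lx\n",
		       mr.start, mr.end, mr.page_size_mask);
	return 0;
}

With mr->start already aligned to the current level, rounding the length down
leaves the unaligned remainder for the next, smaller page size, and
page_size_mask records which level this entry is effectively mapped with.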

Thanks,

tglx
