Subject: Re: [PATCH v4] arm64: mm: fix linear mem mapping access performance degradation
On Tue, Jul 05, 2022 at 06:02:02PM +0300, Mike Rapoport wrote:
> +void __init remap_crashkernel(void)
> +{
> +#ifdef CONFIG_KEXEC_CORE
> +	phys_addr_t start, end, size;
> +	phys_addr_t aligned_start, aligned_end;
> +
> +	if (can_set_direct_map() || IS_ENABLED(CONFIG_KFENCE))
> +		return;
> +
> +	if (!crashk_res.end)
> +		return;
> +
> +	start = crashk_res.start & PAGE_MASK;
> +	end = PAGE_ALIGN(crashk_res.end);
> +
> +	aligned_start = ALIGN_DOWN(crashk_res.start, PUD_SIZE);
> +	aligned_end = ALIGN(end, PUD_SIZE);
> +
> +	/* Clear PUDs containing crash kernel memory */
> +	unmap_hotplug_range(__phys_to_virt(aligned_start),
> +			    __phys_to_virt(aligned_end), false, NULL);

What I don't understand is what happens if there's valid kernel data
between aligned_start and crashk_res.start (or the other end of the
range).
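
To make the concern concrete, here is a quick user-space sketch of the
same alignment arithmetic. PUD_SIZE and the reservation below are
made-up example values (1GiB PUDs, a 512MiB crashkernel starting
mid-PUD), not taken from the patch:

/*
 * Sketch of the PUD rounding done by remap_crashkernel() above.
 * All constants here are illustrative, not from the patch.
 */
#include <stdint.h>
#include <stdio.h>

#define PUD_SIZE		(1ULL << 30)	/* 1GiB, example value */
#define ALIGN_DOWN(x, a)	((x) & ~((a) - 1))
#define ALIGN(x, a)		(((x) + (a) - 1) & ~((a) - 1))

int main(void)
{
	/* hypothetical 512MiB reservation starting mid-PUD */
	uint64_t crashk_start = 0x58000000;	/* 1408MiB */
	uint64_t crashk_end = crashk_start + (512ULL << 20);

	uint64_t aligned_start = ALIGN_DOWN(crashk_start, PUD_SIZE);
	uint64_t aligned_end = ALIGN(crashk_end, PUD_SIZE);

	/*
	 * Everything in [aligned_start, crashk_start) and
	 * [crashk_end, aligned_end) gets unmapped as well, even
	 * though it may hold live kernel data.
	 */
	printf("unmapped below reservation: %llu MiB\n",
	       (unsigned long long)(crashk_start - aligned_start) >> 20);
	printf("unmapped above reservation: %llu MiB\n",
	       (unsigned long long)(aligned_end - crashk_end) >> 20);
	return 0;
}

With these numbers, 384MiB below and 128MiB above the reservation are
torn out of the linear map until the remapping completes, and any
concurrent access to data there would fault.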

--
Catalin
