Subject: Re: [PATCH] arm64/mm: Validate hotplug range before creating linear mapping
From: Anshuman Khandual <>
Date: Mon, 19 Oct 2020 16:53:40 +0530
On 10/07/2020 02:09 PM, David Hildenbrand wrote:
>>> We do have __add_pages()->check_hotplug_memory_addressable() where we
>>> already check against MAX_PHYSMEM_BITS.
>>
>> Initially, I thought about check_hotplug_memory_addressable() but the
>> existing check that asserts end of hotplug wrt MAX_PHYSMEM_BITS, is
>> generic in nature. AFAIK the linear mapping problem is arm64 specific,
>> hence I was not sure whether to add an arch specific callback which
>> will give platform an opportunity to weigh in for these ranges.
>
> Also on s390x, the range where you can create an identity mapping depends on
> - early kernel setup
> - kasan
>
> (I assume it's the same for all archs)
>
> See arch/s390/mm/vmem.c:vmem_add_mapping(), which contains similar
> checks (VMEM_MAX_PHYS).
Once there is a high level function, all these platform specific checks should move into the respective arch_get_mappable_range() callbacks instead.
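For illustration, a minimal sketch of what such an override could look like on s390x, assuming the memhp_range/arch_get_mappable_range() interface proposed further below; the exact form of the VMEM_MAX_PHYS check here is just an assumption modelled on vmem_add_mapping():

/*
 * Hypothetical s390x override (a sketch, not actual kernel code):
 * ranges that need an identity mapping are bounded by VMEM_MAX_PHYS,
 * mirroring the check in vmem_add_mapping().
 */
#define arch_get_mappable_range arch_get_mappable_range
static inline struct memhp_range arch_get_mappable_range(bool need_mapping)
{
	struct memhp_range range = {
		.start = 0,
		/* inclusive end; unbounded when no identity mapping is needed */
		.end = need_mapping ? VMEM_MAX_PHYS - 1 : ULLONG_MAX,
	};
	return range;
}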
>
>>
>> But hold on, check_hotplug_memory_addressable() only gets called from
>> __add_pages() after linear mapping creation in arch_add_memory(). How
>> would it help ? We need some thing for add_memory(), its variants and
>> also possibly for memremap_pages() when it calls arch_add_memory().
>>
>
> Good point. We chose that place for simplicity when adding it (I was
> favoring calling it at two places back then). Now, we might have good
> reason to move the checks further up the call chain.
The check_hotplug_memory_addressable() check in __add_pages() does not add much, as linear mapping creation will already have completed by then. Moving this check into the single high level function should be better.

But checking against MAX_PHYSMEM_BITS might then no longer be required, as the range would already have been validated against the applicable memhp_range.
>
> Most probably,
>
> struct range memhp_get_addressable_range(bool need_mapping)
> {
> 	...
> }
Something like this...
+struct memhp_range {
+	u64 start;
+	u64 end;
+};
+
+#ifndef arch_get_mappable_range
+static inline struct memhp_range arch_get_mappable_range(bool need_mapping)
+{
+	struct memhp_range range = {
+		.start = 0UL,
+		.end = (1ull << (MAX_PHYSMEM_BITS + 1)) - 1,
+	};
+	return range;
+}
+#endif
+
+static inline struct memhp_range memhp_get_mappable_range(bool need_mapping)
+{
+	const u64 max_phys = (1ull << (MAX_PHYSMEM_BITS + 1)) - 1;
+	struct memhp_range range = arch_get_mappable_range(need_mapping);
+
+	if (range.start > max_phys) {
+		range.start = 0;
+		range.end = 0;
+	}
+	range.end = min_t(u64, range.end, max_phys);
+	return range;
+}
+
+static inline bool memhp_range_allowed(u64 start, u64 end, bool need_mapping)
+{
+	struct memhp_range range = memhp_get_mappable_range(need_mapping);
+
+	return (start <= end) && (start >= range.start) && (end <= range.end);
+}
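With that in place, the arm64 linear map check from this patch could then live entirely in the arch callback. A hedged sketch only; deriving the bounds via __pa(_PAGE_OFFSET(vabits_actual)) and __pa(PAGE_END - 1) is my assumption here, not something settled in this thread:

/*
 * Hypothetical arm64 override (a sketch, not the actual patch): when a
 * linear mapping is needed, only physical addresses which the kernel
 * linear map can cover are acceptable.
 */
static inline struct memhp_range arch_get_mappable_range(bool need_mapping)
{
	struct memhp_range range = {
		.start = 0UL,
		.end = (1ull << (MAX_PHYSMEM_BITS + 1)) - 1,
	};

	if (need_mapping) {
		/* PAs addressable through the kernel linear mapping */
		range.start = __pa(_PAGE_OFFSET(vabits_actual));
		range.end = __pa(PAGE_END - 1);
	}
	return range;
}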
>
> Would make sense, to deal with memremap_pages() without identity mappings.
>
> We have two options:
>
> 1. Generalize the checks, check early in applicable functions. Have a
> single way to get applicable ranges, both in callers, and inside the
> functions.

Inside the functions, i.e. check_hotplug_memory_addressable() in
__add_pages()? We could just drop that. A single generalized check with
an arch callback makes more sense IMHO.
>
> 2. Keep the checks where they are. Add memhp_get_addressable_range() so
> callers can figure limits out. It's less clear what the relation between
> the different checks is. And it's likely if things change at one place
> that we miss the other place.
Right, does not sound like a good idea :)
>
>>> struct range memhp_get_addressable_range(void)
>>> {
>>> 	const u64 max_phys = (1ull << (MAX_PHYSMEM_BITS + 1)) - 1;
>>> 	struct range range = arch_get_mappable_range();
>>
>> What would you suggest as the default fallback range if a platform
>> does not define this callback.
>
> Just the largest possible range until we implement them. IIRC, an s390x
> version should be easy to add.
[0UL...(1ull << (MAX_PHYSMEM_BITS + 1)) - 1] is the largest possible hotplug range.
>
>>
>>>
>>> if (range.start > max_phys) {
>>> 	range.start = 0;
>>> 	range.end = 0;
>>> }
>>> range.end = max_t(u64, range.end, max_phys);
>>
>> min_t instead ?
>
> Yeah :)
>
>>
>>>
>>> return range;
>>> }
>>>
>>>
>>> That, we can use in check_hotplug_memory_addressable(), and also allow
>>> add_memory*() users to make use of it.
>>
>> So this check would happen twice during a hotplug ?
>
> Right now it's like calling a function with wrong arguments - you just
> don't have a clue what valid arguments are, because non-obvious errors
> (besides -ENOMEM, which is a temporary error) pop up deep down the call
> chain.
>
> For example, virtio-mem would use it to detect during device
> initialization the usable device range, and warn the user accordingly.
> It currently manually checks for MAX_PHYSMEM_BITS, but that's just ugly.
> Failing at random add_memory() calls (permanently!) is not so nice.
>
> In case of DIMMs, we could use it to detect if adding parts of a DIMM
> won't work (and warn the user early). We could try to add as much as
> possible.
Agreed.
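As a usage example, a sketch of how a driver like virtio-mem might query the helper once at device initialization instead of open-coding a MAX_PHYSMEM_BITS check; the function name and warning text below are made up for illustration:

/* Hypothetical early sanity check for a hotplug driver's device range. */
static void virtio_mem_check_addressable(struct device *dev,
					 u64 dev_start, u64 dev_end)
{
	struct memhp_range range = memhp_get_mappable_range(true);

	if (dev_start < range.start || dev_end > range.end)
		dev_warn(dev, "device range [%#llx-%#llx] partially unaddressable\n",
			 dev_start, dev_end);
}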
Planning to add the memhp_range_allowed() check in add_memory(), __add_memory(), add_memory_driver_managed() and memremap_pages(). Depending on the hotplug path, the check might get called twice. Wondering if it needs to be added anywhere else?
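A rough sketch of how that could look at those call sites; the helper name, its exact placement and the -ERANGE return are assumptions on my part:

/*
 * Hypothetical prevalidation helper, called early from add_memory(),
 * __add_memory(), add_memory_driver_managed() and memremap_pages(),
 * before any linear mapping gets created.
 */
static int memhp_prevalidate(u64 start, u64 size, bool need_mapping)
{
	/* memhp_range_allowed() expects an inclusive [start, end] range */
	if (!memhp_range_allowed(start, start + size - 1, need_mapping))
		return -ERANGE;
	return 0;
}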