Subject: Re: [PATCH 1/1] x86/mm: Fix limit mmap() of /dev/mem to valid physical addresses

On 3/23/19 12:02 PM, Thomas Gleixner wrote:
> Ralph,
>
> On Mon, 18 Mar 2019, rcampbell@nvidia.com wrote:
>> From: Ralph Campbell <rcampbell@nvidia.com>
>>
>> If CONFIG_DEBUG_VIRTUAL is enabled, a read or write to /dev/mem can
>> trigger a VIRTUAL_BUG_ON() depending on the value of high_memory.
>> For example:
>>
>> read_mem()
>> valid_phys_addr_range(p=401f1550, count=8)
>> __pa(high_memory)
>> __phys_addr(x=ffffc88000000000)
>> // __START_KERNEL_map = ffffffff80000000
>> // y = ffffc88000000000 - ffffffff80000000
>> VIRTUAL_BUG_ON(phys_addr_valid(400000000000))
>> // boot_cpu_data.x86_phys_bits=46
>
> I have no idea why all the irrelevant information in this example would be
> helpful, but after extracting the meat I think I know what you want to say.
>
>> Since by design high_memory is outside the range of valid physical
>> addresses, use the non-error checking version __pa_nodebug(high_memory).
>
> high_memory is not outside the range of valid physical addresses by
> design. It's only outside when memory is populated right at the end of the
> physical address space.
>
> So what you really want to say in the changelog is:
>
> valid_phys_addr_range() is used to sanity check the physical address range
> of an operation, e.g. access to /dev/mem. It uses __pa(high_memory)
> internally.
>
> If memory is populated at the end of the physical address space, then
> __pa(high_memory) is outside of the physical address space because:
>
> high_memory = (void *)__va(max_pfn * PAGE_SIZE - 1) + 1;
>
> For the comparison in valid_phys_addr_range() this is not an issue, but if
> CONFIG_DEBUG_VIRTUAL is enabled, __pa() maps to __phys_addr(), which
> verifies that the resulting physical address is within the valid physical
> address space of the CPU. So in the case that memory is populated at the
> end of the physical address space, this is not true and triggers a
> VIRTUAL_BUG_ON().
>
> Use ... instead, because ...
>
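To make the failure mode concrete, here is a toy userspace illustration of
that arithmetic (illustrative only, not the kernel implementation), using the
numbers from the trace in the changelog: x86_phys_bits = 46 and RAM populated
right up to the architectural limit.

#include <stdio.h>

int main(void)
{
	unsigned long long page_size = 4096ULL;
	unsigned int phys_bits = 46;

	/* End of RAM coincides with the end of the physical address space: */
	unsigned long long max_pfn = (1ULL << phys_bits) / page_size;

	/* high_memory = __va(max_pfn * PAGE_SIZE - 1) + 1, so __pa() of it is: */
	unsigned long long pa_high_memory = max_pfn * page_size;

	/* Largest physical address the CPU can generate: */
	unsigned long long max_valid_pa = (1ULL << phys_bits) - 1;

	printf("__pa(high_memory) = %#llx\n", pa_high_memory);  /* 0x400000000000 */
	printf("largest valid PA  = %#llx\n", max_valid_pa);    /* 0x3fffffffffff */
	printf("phys_addr_valid?  = %d\n",
	       pa_high_memory <= max_valid_pa);                  /* 0 */
	return 0;
}

That prints 0x400000000000, i.e. exactly the value passed to the
VIRTUAL_BUG_ON() in the trace: one byte past the last valid physical address.
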
>> Fixes: be62a32044061cb4a3b70a10598e093f1319102e ("x86/mm: Limit mmap() of
>
> Please limit the sha1 to the first 12 characters.
>
>> /dev/mem to valid physical addresses")
>>
>
> No newline between Fixes and the rest please.
>
>> Signed-off-by: Ralph Campbell <rcampbell@nvidia.com>
>

Thanks for the comments. I'll apply them and send a v2 when ready.

>> --- a/arch/x86/mm/mmap.c
>> +++ b/arch/x86/mm/mmap.c
>> @@ -230,7 +230,7 @@ bool mmap_address_hint_valid(unsigned long addr, unsigned long len)
>> /* Can we access it for direct reading/writing? Must be RAM: */
>> int valid_phys_addr_range(phys_addr_t addr, size_t count)
>> {
>> - return addr + count <= __pa(high_memory);
>> + return addr + count <= __pa_nodebug(high_memory);
>
> This lacks a comment. Aside from that, I think there is no point in using
> __pa(high_memory) here. This is all about the physical address range. So
> this can be simply expressed via:
>
> return addr + count <= max_pfn * PAGE_SIZE;
>
> which is much more obvious.
>
> Thanks,
>
> tglx
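
If I am reading that right, the end result would be something like this
(untested sketch, with the comment you asked for filled in):

/* Can we access it for direct reading/writing? Must be RAM: */
int valid_phys_addr_range(phys_addr_t addr, size_t count)
{
	/*
	 * max_pfn * PAGE_SIZE is the end of populated physical memory,
	 * so bound the access to that without going through
	 * __pa(high_memory) and its CONFIG_DEBUG_VIRTUAL checks.
	 */
	return addr + count <= max_pfn * PAGE_SIZE;
}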

Using max_pfn looks OK to me for x86_64, but looking at arch/x86/mm/init_32.c,
initmem_init() sets high_memory from either highstart_pfn or max_low_pfn,
depending on CONFIG_HIGHMEM. Would comparing against max_pfn still work in
that case?
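
The relevant lines there are roughly (condensed, eliding the surrounding
setup, so not a verbatim quote):

#ifdef CONFIG_HIGHMEM
	high_memory = (void *) __va(highstart_pfn * PAGE_SIZE - 1) + 1;
#else
	high_memory = (void *) __va(max_low_pfn * PAGE_SIZE - 1) + 1;
#endif

So on 32-bit with HIGHMEM, __pa(high_memory) stops at the end of lowmem while
max_pfn * PAGE_SIZE runs to the end of all RAM, which is what prompts the
question.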
