Subject: Re: [PATCH v5 4/7] mm, devm_memremap_pages: Add MEMORY_DEVICE_PRIVATE support
Below is v2 of this patch; with v2 it works properly:


    From: Dan Williams <dan.j.williams@intel.com>
    Date: Wed, 12 Sep 2018 19:22:22 -0700
    Subject: [PATCH] mm, devm_memremap_pages: Add MEMORY_DEVICE_PRIVATE support v2
    MIME-Version: 1.0
    Content-Type: text/plain; charset=UTF-8
    Content-Transfer-Encoding: 8bit

    In preparation for consolidating all ZONE_DEVICE enabling via
    devm_memremap_pages(), teach it how to handle the constraints of
    MEMORY_DEVICE_PRIVATE ranges.
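
    For illustration only (not part of the patch): a rough sketch of how a
    MEMORY_DEVICE_PRIVATE user would exercise the new path, assuming the
    struct dev_pagemap fields used by this series (res, ref, type,
    altmap_valid). A real driver (e.g. the HMM helpers in mm/hmm.c) also
    wires up page_fault/page_free callbacks and percpu_ref shutdown, which
    are omitted here.

	/* Sketch: map a device-private range via devm_memremap_pages(). */
	static int example_map_private(struct device *dev, struct resource *res,
				       struct percpu_ref *ref)
	{
		struct dev_pagemap *pgmap;
		void *ret;

		pgmap = devm_kzalloc(dev, sizeof(*pgmap), GFP_KERNEL);
		if (!pgmap)
			return -ENOMEM;

		pgmap->res = *res;			/* device memory range */
		pgmap->ref = ref;			/* caller-managed lifetime */
		pgmap->type = MEMORY_DEVICE_PRIVATE;	/* takes the add_pages() path */
		pgmap->altmap_valid = false;

		ret = devm_memremap_pages(dev, pgmap);
		return IS_ERR(ret) ? PTR_ERR(ret) : 0;
	}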

    Changed since v1:
    - move_pfn_range_to_zone() for private device memory too

    Cc: Christoph Hellwig <hch@lst.de>
    Reviewed-by: Jérôme Glisse <jglisse@redhat.com>
    Reported-by: Logan Gunthorpe <logang@deltatee.com>
    Reviewed-by: Logan Gunthorpe <logang@deltatee.com>
    Signed-off-by: Dan Williams <dan.j.williams@intel.com>
    ---
    kernel/memremap.c | 53 ++++++++++++++++++++++++++++++++++++-----------
    1 file changed, 41 insertions(+), 12 deletions(-)

    diff --git a/kernel/memremap.c b/kernel/memremap.c
    index 5b3b6f23fd35..adba623a25f4 100644
    --- a/kernel/memremap.c
    +++ b/kernel/memremap.c
    @@ -132,9 +132,15 @@ static void devm_memremap_pages_release(void *data)
     		- align_start;
     
     	mem_hotplug_begin();
    -	arch_remove_memory(align_start, align_size, pgmap->altmap_valid ?
    -			&pgmap->altmap : NULL);
    -	kasan_remove_zero_shadow(__va(align_start), align_size);
    +	if (pgmap->type == MEMORY_DEVICE_PRIVATE) {
    +		pfn = align_start >> PAGE_SHIFT;
    +		__remove_pages(page_zone(pfn_to_page(pfn)), pfn,
    +				align_size >> PAGE_SHIFT, NULL);
    +	} else {
    +		arch_remove_memory(align_start, align_size,
    +				pgmap->altmap_valid ? &pgmap->altmap : NULL);
    +		kasan_remove_zero_shadow(__va(align_start), align_size);
    +	}
     	mem_hotplug_done();
     
     	untrack_pfn(NULL, PHYS_PFN(align_start), align_size);
    @@ -234,17 +240,40 @@ void *devm_memremap_pages(struct device *dev, struct dev_pagemap *pgmap,
     		goto err_pfn_remap;
     
     	mem_hotplug_begin();
    -	error = kasan_add_zero_shadow(__va(align_start), align_size);
    -	if (error) {
    -		mem_hotplug_done();
    -		goto err_kasan;
    +
    +	/*
    +	 * For device private memory we call add_pages() as we only need to
    +	 * allocate and initialize struct page for the device memory. More-
    +	 * over the device memory is un-accessible thus we do not want to
    +	 * create a linear mapping for the memory like arch_add_memory()
    +	 * would do.
    +	 *
    +	 * For all other device memory types, which are accessible by
    +	 * the CPU, we do want the linear mapping and thus use
    +	 * arch_add_memory().
    +	 */
    +	if (pgmap->type == MEMORY_DEVICE_PRIVATE) {
    +		error = add_pages(nid, align_start >> PAGE_SHIFT,
    +				align_size >> PAGE_SHIFT, NULL, false);
    +	} else {
    +		error = kasan_add_zero_shadow(__va(align_start), align_size);
    +		if (error) {
    +			mem_hotplug_done();
    +			goto err_kasan;
    +		}
    +
    +		error = arch_add_memory(nid, align_start, align_size, altmap,
    +				false);
    +	}
    +
    +	if (!error) {
    +		struct zone *zone;
    +
    +		zone = &NODE_DATA(nid)->node_zones[ZONE_DEVICE];
    +		move_pfn_range_to_zone(zone, align_start >> PAGE_SHIFT,
    +				align_size >> PAGE_SHIFT, altmap);
     	}
     
    -	error = arch_add_memory(nid, align_start, align_size, altmap, false);
    -	if (!error)
    -		move_pfn_range_to_zone(&NODE_DATA(nid)->node_zones[ZONE_DEVICE],
    -				align_start >> PAGE_SHIFT,
    -				align_size >> PAGE_SHIFT, altmap);
     	mem_hotplug_done();
     	if (error)
     		goto err_add_memory;
    --
    2.17.1