    From: Uladzislau Rezki
    Date: 2022-12-05
    Subject: Re: [PATCH v1 2/7] mm/vmalloc.c: add flags to mark vm_map_ram area
    > Through vmalloc API, a virtual kernel area is reserved for physical
    > address mapping. And vmap_area is used to track them, while vm_struct
    > is allocated to associate with the vmap_area to store more information
    > and passed out.
    >
    > However, area reserved via vm_map_ram() is an exception. It doesn't have
    > vm_struct to associate with vmap_area. And we can't recognize the
    > vmap_area with '->vm == NULL' as a vm_map_ram() area because the normal
    > freeing path will set va->vm = NULL before unmapping, please see
    > function remove_vm_area().
    >
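    To illustrate the ambiguity described above, a minimal sketch (not
    code from the patch; the helper name is made up):

	static bool is_vm_map_ram_area(struct vmap_area *va)
	{
		/*
		 * Does NOT work: a normal vmalloc area that is in the
		 * middle of being freed also has va->vm == NULL, because
		 * remove_vm_area() clears the pointer before unmapping.
		 */
		return !va->vm;
	}
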
    > Meanwhile, there are two types of vm_map_ram area. One is the whole
    > vmap_area being reserved and mapped at one time; the other is the
    > whole vmap_area with VMAP_BLOCK_SIZE size being reserved, while mapped
    > into split regions with smaller size several times via vb_alloc().
    >
    > To mark the area reserved through vm_map_ram(), add flags field into
    > struct vmap_area. Bit 0 indicates whether it's a vm_map_ram area,
    > while bit 1 indicates whether it's a vmap_block type of vm_map_ram
    > area.
    >
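    For reference, the two bits combine as follows; a sketch only, using
    the VMAP_RAM/VMAP_BLOCK/VMAP_FLAGS_MASK macros that the patch adds
    further down:

	/*
	 * Decoding va->flags:
	 *   VMAP_RAM              - whole area reserved and mapped in one go
	 *   VMAP_RAM | VMAP_BLOCK - vmap_block area, handed out piecewise
	 *                           by vb_alloc()
	 */
	if ((va->flags & VMAP_FLAGS_MASK) == (VMAP_RAM | VMAP_BLOCK))
		; /* vmap_block type of vm_map_ram area */
	else if (va->flags & VMAP_RAM)
		; /* whole-area vm_map_ram mapping */
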
    > This is a preparation for later use.
    >
    > Signed-off-by: Baoquan He <bhe@redhat.com>
    > ---
    > include/linux/vmalloc.h | 1 +
    > mm/vmalloc.c | 18 +++++++++++++++++-
    > 2 files changed, 18 insertions(+), 1 deletion(-)
    >
    > diff --git a/include/linux/vmalloc.h b/include/linux/vmalloc.h
    > index 096d48aa3437..69250efa03d1 100644
    > --- a/include/linux/vmalloc.h
    > +++ b/include/linux/vmalloc.h
    > @@ -76,6 +76,7 @@ struct vmap_area {
    > unsigned long subtree_max_size; /* in "free" tree */
    > struct vm_struct *vm; /* in "busy" tree */
    > };
    > + unsigned long flags; /* mark type of vm_map_ram area */
    > };
    >
    > /* archs that select HAVE_ARCH_HUGE_VMAP should override one or more of these */
    > diff --git a/mm/vmalloc.c b/mm/vmalloc.c
    > index 5d3fd3e6fe09..d6f376060d83 100644
    > --- a/mm/vmalloc.c
    > +++ b/mm/vmalloc.c
    > @@ -1815,6 +1815,7 @@ static void free_vmap_area_noflush(struct vmap_area *va)
    >
    > spin_lock(&vmap_area_lock);
    > unlink_va(va, &vmap_area_root);
    > + va->flags = 0;
    > spin_unlock(&vmap_area_lock);
    >
    This is not a good place to set the flags to zero. It looks like a
    corner case to me, and rather too specific to this one path.
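    One alternative, sketched purely to illustrate the point: if the
    flags were written where the vmap_area is set up, a recycled area
    could never carry a stale bit and the free path would not need to
    clear anything:

	/*
	 * Sketch only: alloc_vmap_area() already takes vmap_area_lock to
	 * insert the new area; writing the flags in that same critical
	 * section (va_flags being a hypothetical new parameter) would
	 * make the "va->flags = 0" above unnecessary.
	 */
	spin_lock(&vmap_area_lock);
	insert_vmap_area(va, &vmap_area_root, &vmap_area_list);
	va->flags = va_flags;
	spin_unlock(&vmap_area_lock);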


    > nr_lazy = atomic_long_add_return((va->va_end - va->va_start) >>
    > @@ -1887,6 +1888,10 @@ struct vmap_area *find_vmap_area(unsigned long addr)
    >
    > #define VMAP_BLOCK_SIZE (VMAP_BBMAP_BITS * PAGE_SIZE)
    >
    > +#define VMAP_RAM 0x1
    > +#define VMAP_BLOCK 0x2
    > +#define VMAP_FLAGS_MASK 0x3
    > +
    > struct vmap_block_queue {
    > spinlock_t lock;
    > struct list_head free;
    > @@ -1967,6 +1972,9 @@ static void *new_vmap_block(unsigned int order, gfp_t gfp_mask)
    > kfree(vb);
    > return ERR_CAST(va);
    > }
    > + spin_lock(&vmap_area_lock);
    > + va->flags = VMAP_RAM|VMAP_BLOCK;
    > + spin_unlock(&vmap_area_lock);
    >
    The per-cpu code was created as a fast per-cpu allocator precisely
    because of the high contention on the vmalloc lock. If possible we
    should avoid taking the vmap_area_lock here, because it is highly
    contended.
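    For example, a sketch under the same assumption as above (that
    alloc_vmap_area() grows a hypothetical va_flags argument); the extra
    lock round-trip in new_vmap_block() then disappears:

	va = alloc_vmap_area(VMAP_BLOCK_SIZE, VMAP_BLOCK_SIZE,
			     VMALLOC_START, VMALLOC_END,
			     node, gfp_mask,
			     VMAP_RAM | VMAP_BLOCK); /* hypothetical extra arg */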

    >
    > vaddr = vmap_block_vaddr(va->va_start, 0);
    > spin_lock_init(&vb->lock);
    > @@ -2229,8 +2237,12 @@ void vm_unmap_ram(const void *mem, unsigned int count)
    > return;
    > }
    >
    > - va = find_vmap_area(addr);
    > + spin_lock(&vmap_area_lock);
    > + va = __find_vmap_area((unsigned long)addr, &vmap_area_root);
    > BUG_ON(!va);
    > + if (va)
    > + va->flags &= ~VMAP_RAM;
    > + spin_unlock(&vmap_area_lock);
    > debug_check_no_locks_freed((void *)va->va_start,
    > (va->va_end - va->va_start));
    > free_unmap_vmap_area(va);
    > @@ -2269,6 +2281,10 @@ void *vm_map_ram(struct page **pages, unsigned int count, int node)
    > if (IS_ERR(va))
    > return NULL;
    >
    > + spin_lock(&vmap_area_lock);
    > + va->flags = VMAP_RAM;
    > + spin_unlock(&vmap_area_lock);
    > +
    >
    Same here.
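
    Under the same hypothetical signature, vm_map_ram() could request
    the flag at allocation time as well:

	va = alloc_vmap_area(size, PAGE_SIZE, VMALLOC_START, VMALLOC_END,
			     node, GFP_KERNEL,
			     VMAP_RAM); /* hypothetical extra arg */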

    --
    Uladzislau Rezki
