Subject: Re: [PATCH RFC] ext4: fix potential race between online resizing and write operations
    > > > > I was thinking a 2-fold approach (just thinking out loud..):
    > > > >
    > > > > If kfree_call_rcu() is called in atomic context or in any rcu reader, then
    > > > > use GFP_ATOMIC to grow an rcu_head wrapper on the atomic memory pool and
    > > > > queue that.
    > > > >
    > > I am not sure if that is acceptable, I mean what do we do when GFP_ATOMIC
    > > fails in atomic context? Or we can just consider it as out of
    > > memory; another variant is to say that headless objects can be freed
    > > from preemptible context only.
    >
    > Yes, that makes sense, and we can always put a disclaimer in the API's comments
    > saying that if this object is expected to be freed a lot, then don't use the
    > headless API, to be extra safe.
    >
    Agree.

    > BTW, the documentation says that if GFP_ATOMIC reserves are depleted,
    > the kernel can even panic sometimes, so if a GFP_ATOMIC allocation fails, then
    > there seem to be bigger problems in the system anyway. I would say let us
    > write a patch to allocate there and see what the -mm guys think.
    >
    OK. It might be that they can offer something if they do not like our
    approach. I will try to compose something and send the patch to see.
    The tree.c implementation is almost done, whereas the tiny one is on hold.

    I think we should support batching as well as the bulk interface there.
    Another way is to work around headless objects: just attach the head
    dynamically using kmalloc() and then call call_rcu(), but then it would
    not be fair headless support :)
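
    I mean something like this (only a rough sketch; the struct and
    function names below are made up for illustration):

    <snip>
    struct rcu_head_wrapper {
            struct rcu_head rhp;
            void *ptr;
    };

    static void wrapper_free_rcu(struct rcu_head *rhp)
    {
            struct rcu_head_wrapper *w =
                    container_of(rhp, struct rcu_head_wrapper, rhp);

            kfree(w->ptr);
            kfree(w);
    }

    /* Attach a head dynamically and queue the object; 0 on success. */
    static int attach_head_and_queue(void *ptr, gfp_t gfp)
    {
            struct rcu_head_wrapper *w = kmalloc(sizeof(*w), gfp);

            if (!w)
                    return -ENOMEM;

            w->ptr = ptr;
            call_rcu(&w->rhp, wrapper_free_rcu);
            return 0;
    }
    <snip>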

    What is your view?

    > > > > Otherwise, grow an rcu_head on the stack of kfree_call_rcu() and call
    > > > > synchronize_rcu() inline with it.
    > > > >
    > > What do you mean here, Joel? "grow an rcu_head on the stack"?
    >
    > By "grow on the stack", use the compiler-allocated rcu_head on the
    > kfree_rcu() caller's stack.
    >
    > I meant here to say, if we are not in atomic context, then we use regular
    > GFP_KERNEL allocation, and if that fails, then we just use the stack's
    > rcu_head and call synchronize_rcu() or even synchronize_rcu_expedited since
    > the allocation failure would mean the need for RCU to free some memory is
    > probably great.
    >
    Ah, I got it. I thought you meant something like recursion and then
    unwinding the stack back somehow :)
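
    So in the non-atomic case the failure path would be roughly this
    (a minimal sketch of my understanding; once we wait inline, no
    rcu_head is strictly needed at all):

    <snip>
    /* Inline fallback when the allocation fails in sleepable context. */
    static void free_inline(void *ptr)
    {
            /* Wait for a full grace period, then free directly. */
            synchronize_rcu();
            kfree(ptr);
    }
    <snip>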

    > > > > Use preemptible() and task_struct's rcu_read_lock_nesting to differentiate
    > > > > between the 2 cases.
    > > > >
    > > If the current context is preemptible then we can inline synchronize_rcu()
    > > together with freeing to handle such a corner case, I mean when we run
    > > out of memory.
    >
    > Ah yes, exactly what I mean.
    >
    OK.

    > > As for "task_struct's rcu_read_lock_nesting": will it be enough to just
    > > have a look at the preempt_count of the current process? If we have, for
    > > example, nested rcu_read_locks:
    > >
    > > <snip>
    > > rcu_read_lock()
    > > rcu_read_lock()
    > > rcu_read_lock()
    > > <snip>
    > >
    > > the counter would be 3.
    >
    > No, because preempt_count is not incremented during rcu_read_lock(). RCU
    > reader sections can be preempted; they just cannot go to sleep in a reader
    > section (unless the kernel is RT).
    >
    So in a CONFIG_PREEMPT kernel we can identify whether we are in atomic
    context by using rcu_preempt_depth() and in_atomic(). When it comes to
    !CONFIG_PREEMPT, we skip those checks and consider the context atomic.
    Something like:

    <snip>
    static bool is_current_in_atomic(void)
    {
    #ifdef CONFIG_PREEMPT_RCU
            /* Not in an RCU reader section and not in an atomic region? */
            if (!rcu_preempt_depth() && !in_atomic())
                    return false;
    #endif

            return true;
    }
    <snip>
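
    And then the whole headless path could branch on it, something like
    this (again only a rough sketch with a made-up function name, reusing
    the attach_head_and_queue() idea from above):

    <snip>
    void kfree_rcu_headless(void *ptr)
    {
            gfp_t gfp = is_current_in_atomic() ? GFP_ATOMIC : GFP_KERNEL;

            /* Try to attach a dynamic head and queue the object. */
            if (!attach_head_and_queue(ptr, gfp))
                    return;

            /*
             * Allocation failed. In a sleepable context we can wait
             * for a grace period inline and free directly; in atomic
             * context there is nothing left to do but treat it as OOM.
             */
            if (!is_current_in_atomic()) {
                    synchronize_rcu();
                    kfree(ptr);
            }
    }
    <snip>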

    Thanks!

    --
    Vlad Rezki
