Subject: Re: [PATCH 34/40] lib: code tagging context capture support

    On Mon 01-05-23 09:54:44, Suren Baghdasaryan wrote:
    [...]
> +static inline void add_ctx(struct codetag_ctx *ctx,
> +			   struct codetag_with_ctx *ctc)
> +{
> +	kref_init(&ctx->refcount);
> +	spin_lock(&ctc->ctx_lock);
> +	ctx->flags = CTC_FLAG_CTX_PTR;
> +	ctx->ctc = ctc;
> +	list_add_tail(&ctx->node, &ctc->ctx_head);
> +	spin_unlock(&ctc->ctx_lock);

AFAIU every single tracked allocation gets its own codetag_ctx; there is
no aggregation per allocation site or anything else. That looks like a
scalability and memory-overhead red flag to me.
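
To make that concrete: a per-site aggregate along the lines of the rough
sketch below (the struct and helper names are invented here for
illustration, they are not taken from the patch) would keep the footprint
at O(1) per call site instead of a list node plus refcount per live
allocation:

#include <linux/atomic.h>

/*
 * Rough sketch only -- hypothetical names, not part of the patch.
 * One counter pair per allocation site instead of a codetag_ctx
 * list node per live allocation.
 */
struct alloc_site_aggregate {
	struct codetag	ct;	/* existing per-site tag from this series */
	atomic64_t	bytes;	/* bytes currently allocated from this site */
	atomic64_t	calls;	/* live allocations from this site */
};

static inline void site_account_alloc(struct alloc_site_aggregate *agg,
				      size_t size)
{
	atomic64_add(size, &agg->bytes);
	atomic64_inc(&agg->calls);
}

static inline void site_account_free(struct alloc_site_aggregate *agg,
				     size_t size)
{
	atomic64_sub(size, &agg->bytes);
	atomic64_dec(&agg->calls);
}

Whether per-site aggregation is sufficient obviously depends on what the
context capture is meant to answer, but it would avoid the per-allocation
list manipulation under ctx_lock entirely.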

    > +}
    > +
> +static inline void rem_ctx(struct codetag_ctx *ctx,
> +			   void (*free_ctx)(struct kref *refcount))
> +{
> +	struct codetag_with_ctx *ctc = ctx->ctc;
> +
> +	spin_lock(&ctc->ctx_lock);

This could deadlock when the allocator is called from IRQ context:
ctx_lock is taken with a plain spin_lock() both here and in add_ctx(), so
an interrupt handler doing a tracked allocation/free on a CPU that already
holds ctx_lock would spin on it forever.
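
For reference, the usual pattern to make this safe is the irqsave lock
variants; a sketch against the quoted code (untested, and add_ctx() would
need the same treatment):

static inline void rem_ctx(struct codetag_ctx *ctx,
			   void (*free_ctx)(struct kref *refcount))
{
	struct codetag_with_ctx *ctc = ctx->ctc;
	unsigned long flags;

	/*
	 * Disable local interrupts while ctx_lock is held so that an
	 * allocation/free from IRQ context cannot spin on a lock the
	 * interrupted task already owns.
	 */
	spin_lock_irqsave(&ctc->ctx_lock, flags);
	/* ctx might have been removed while we were using it */
	if (!list_empty(&ctx->node))
		list_del_init(&ctx->node);
	spin_unlock_irqrestore(&ctc->ctx_lock, flags);
	kref_put(&ctx->refcount, free_ctx);
}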

> +	/* ctx might have been removed while we were using it */
> +	if (!list_empty(&ctx->node))
> +		list_del_init(&ctx->node);
> +	spin_unlock(&ctc->ctx_lock);
> +	kref_put(&ctx->refcount, free_ctx);
    --
    Michal Hocko
    SUSE Labs
