Subject: Re: [PATCH 1/4] perf tools: Fix struct comm_str removal crash
On Tue, Jul 17, 2018 at 10:49:40AM +0900, Namhyung Kim wrote:
> Hi Jiri,
>
> On Mon, Jul 16, 2018 at 12:29:34PM +0200, Jiri Olsa wrote:
> > On Sun, Jul 15, 2018 at 10:08:27PM +0900, Namhyung Kim wrote:
> >
> > SNIP
> >
> > > > Because thread 2 first decrements the refcnt and only after that
> > > > removes the struct comm_str from the list, thread 1 can find this
> > > > object on the list with refcnt equal to 0 and hit the assert.
> > > >
> > > > This patch fixes the thread 2 path by removing the struct comm_str
> > > > from the list FIRST and only AFTER that calling comm_str__put on it.
> > > > This way thread 1 finds only valid objects on the list.
> > >
> > > I'm not sure we can unconditionally remove the comm_str from the tree.
> > > It should be removed only if the refcount is going to zero IMHO.
> > > Otherwise we could end up having multiple comm_str entries for the
> > > same name.
> >
> > right, but it wouldn't crash ;-)
> >
> > how about the attached change, which actually deals with the refcnt
> > race? I'm running the tests now, seems ok so far
>
> I think we can keep it if the refcount is back to non-zero. What about this?
> (not tested..)
>
>
> static struct comm_str *comm_str__get(struct comm_str *cs)
> {
>         if (cs)
>                 refcount_inc_no_warn(&cs->refcnt); // should be added
>         return cs;
> }
>
> static void comm_str__put(struct comm_str *cs)
> {
>         if (cs && refcount_dec_and_test(&cs->refcnt)) {
>                 down_write(&comm_str_lock);
>                 /* might race with comm_str__findnew() */
>                 if (!refcount_read(&cs->refcnt)) {
>                         rb_erase(&cs->rb_node, &comm_str_root);
>                         zfree(&cs->str);
>                         free(cs);
>                 }
>                 up_write(&comm_str_lock);
>         }
> }

yea, it's more positive than my patch
I'm testing the attached patch, looks good so far

thanks,
jirka
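
To make the window easier to see, here is a minimal standalone sketch of the
pre-patch put path, with hypothetical obj/tree names and C11 atomics standing
in for refcount_t (a sketch only, not the actual tools/perf code):

/* sketch only -- hypothetical names, not tools/perf code */
#include <pthread.h>
#include <stdatomic.h>
#include <stdlib.h>

struct obj {
        atomic_uint refcnt;
        /* rb_node / payload elided */
};

static pthread_rwlock_t tree_lock = PTHREAD_RWLOCK_INITIALIZER;

/* thread 2: the pre-patch put path */
static void obj_put(struct obj *o)
{
        if (atomic_fetch_sub(&o->refcnt, 1) != 1)
                return;
        /*
         * Window: refcnt is already 0 here, but the object is still
         * linked in the tree until the write lock below is taken and
         * the node erased.  A lookup on thread 1 can still return
         * this object and refcount_inc() it, which WARNs on 0.
         */
        pthread_rwlock_wrlock(&tree_lock);
        /* tree_erase(o); */
        pthread_rwlock_unlock(&tree_lock);
        free(o);
}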


---
diff --git a/tools/include/linux/refcount.h b/tools/include/linux/refcount.h
index 36cb29bc57c2..11e2be6f68a0 100644
--- a/tools/include/linux/refcount.h
+++ b/tools/include/linux/refcount.h
@@ -109,6 +109,14 @@ static inline void refcount_inc(refcount_t *r)
 	REFCOUNT_WARN(!refcount_inc_not_zero(r), "refcount_t: increment on 0; use-after-free.\n");
 }
 
+/*
+ * Pure refcount increase without any check/warn.
+ */
+static inline void refcount_inc_no_warn(refcount_t *r)
+{
+	atomic_inc(&r->refs);
+}
+
 /*
  * Similar to atomic_dec_and_test(), it will WARN on underflow and fail to
  * decrement when saturated at UINT_MAX.
diff --git a/tools/perf/util/comm.c b/tools/perf/util/comm.c
index 7798a2cc8a86..a2e338cf29d7 100644
--- a/tools/perf/util/comm.c
+++ b/tools/perf/util/comm.c
@@ -21,7 +21,7 @@ static struct rw_semaphore comm_str_lock = {.lock = PTHREAD_RWLOCK_INITIALIZER,}
 static struct comm_str *comm_str__get(struct comm_str *cs)
 {
 	if (cs)
-		refcount_inc(&cs->refcnt);
+		refcount_inc_no_warn(&cs->refcnt);
 	return cs;
 }
 
@@ -29,10 +29,12 @@ static void comm_str__put(struct comm_str *cs)
 {
 	if (cs && refcount_dec_and_test(&cs->refcnt)) {
 		down_write(&comm_str_lock);
-		rb_erase(&cs->rb_node, &comm_str_root);
+		if (refcount_read(&cs->refcnt) == 0) {
+			rb_erase(&cs->rb_node, &comm_str_root);
+			zfree(&cs->str);
+			free(cs);
+		}
 		up_write(&comm_str_lock);
-		zfree(&cs->str);
-		free(cs);
 	}
 }

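For completeness, the same get/put shape as in the patch above, in isolation
(again hypothetical obj/tree names, with C11 atomics and a plain pthread
rwlock standing in for refcount_t and the rw_semaphore -- a sketch, not the
patch itself): the get side may bump a count that has already dropped to 0,
and the put side only erases and frees if the count is still 0 once it holds
the write lock.  This relies on lookups taking the same lock, so the re-check
can observe a revived object.

/* sketch only -- hypothetical names */
#include <pthread.h>
#include <stdatomic.h>
#include <stdlib.h>

struct obj {
        atomic_uint refcnt;
        /* rb_node / payload elided */
};

static pthread_rwlock_t tree_lock = PTHREAD_RWLOCK_INITIALIZER;

/* lookup side: may legitimately revive an object whose count already
 * hit 0 but which has not been unlinked yet, hence no warn-on-zero inc */
static struct obj *obj_get(struct obj *o)
{
        if (o)
                atomic_fetch_add(&o->refcnt, 1);
        return o;
}

/* release side: erase and free only if nobody revived the object
 * between the final decrement and taking the write lock */
static void obj_put(struct obj *o)
{
        if (!o || atomic_fetch_sub(&o->refcnt, 1) != 1)
                return;

        pthread_rwlock_wrlock(&tree_lock);
        if (atomic_load(&o->refcnt) == 0) {
                /* tree_erase(o); */
                free(o);
        }
        pthread_rwlock_unlock(&tree_lock);
}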