Subject: Re: BUG: KASAN: use-after-free in dec_rlimit_ucounts
On Sun, Dec 19, 2021 at 11:58:41PM -0600, Eric W. Biederman wrote:
> Qian Cai <quic_qiancai@quicinc.com> writes:
>
> > On Wed, Nov 24, 2021 at 04:49:19PM -0500, Qian Cai wrote:
> >> Hmm, I don't know if that is it, or if this platform is just lucky enough
> >> to trigger the race condition quickly, but I can't reproduce it on x86 so
> >> far. I am Cc'ing a few arm64 people to see if they can spot anything I
> >> might be missing. The original bug report is here:
> >>
> >> https://lore.kernel.org/lkml/YZV7Z+yXbsx9p3JN@fixkernel.com/
> >
> > Okay, I am finally able to reproduce this on x86_64 with the latest
> > mainline as well by enabling CONFIG_USER_NS and KASAN on top of
> > defconfig (I had not realized defconfig does not select CONFIG_USER_NS in
> > the first place). Anyway, it still took less than 5 minutes of running:
> >
> > $ trinity -C 48
>
> It took me a while to get to the point of reproducing this but I can
> confirm I see this with a 2-core VM running 5.16.0-rc4.
>
> Running trinity 2019.06 as packaged in Debian 11.

I still can't reproduce :(

> I didn't watch so I don't know if it was 5 minutes but I do know it took
> less than an hour.

--- a/kernel/ucount.c
+++ b/kernel/ucount.c
@@ -209,6 +209,7 @@ void put_ucounts(struct ucounts *ucounts)
 
 	if (atomic_dec_and_lock_irqsave(&ucounts->count, &ucounts_lock, flags)) {
 		hlist_del_init(&ucounts->node);
+		ucounts->ns = NULL;
 		spin_unlock_irqrestore(&ucounts_lock, flags);
 		kfree(ucounts);
 	}
Does the hack above increase the likelihood of the error being
triggered?
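
For context, the reader that KASAN flags walks the whole per-namespace
chain through ucounts->ns. This is dec_rlimit_ucounts() as I read the
current tree (5.16-rc), abbreviated, so it may differ slightly from your
exact 5.16.0-rc4 build:

void dec_rlimit_ucounts(struct ucounts *ucounts, enum ucount_type type, long v)
{
	struct ucounts *iter;

	/* Every step of the walk dereferences iter->ns (a user_namespace). */
	for (iter = ucounts; iter; iter = iter->ns->ucounts) {
		long dec = atomic_long_sub_return(v, &iter->ucount[type]);
		WARN_ON_ONCE(dec < 0);
	}
}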

> Now I am puzzled why there are no other reports of problems.
>
> Now to start drilling down to figure out why the user namespace was
> freed early.
> ----
>
> The failure I got looked like:
> > BUG: KASAN: use-after-free in dec_rlimit_ucounts+0x7b/0xb0
> > Read of size 8 at addr ffff88800b7dd018 by task trinity-c3/67982
> >
> > CPU: 1 PID: 67982 Comm: trinity-c3 Tainted: G O 5.16.0-rc4 #1
> > Hardware name: Xen HVM domU, BIOS 4.8.5-35.fc25 08/25/2021
> > Call Trace:
> > <TASK>
> > dump_stack_lvl+0x48/0x5e
> > print_address_description.constprop.0+0x1f/0x140
> > ? dec_rlimit_ucounts+0x7b/0xb0
> > ? dec_rlimit_ucounts+0x7b/0xb0
> > kasan_report.cold+0x7f/0xe0
> > ? _raw_spin_lock+0x7f/0x11b
> > ? dec_rlimit_ucounts+0x7b/0xb0
> > dec_rlimit_ucounts+0x7b/0xb0
> > mqueue_evict_inode+0x417/0x590
> > ? perf_trace_global_dirty_state+0x350/0x350
> > ? __x64_sys_mq_unlink+0x250/0x250
> > ? _raw_spin_lock_bh+0xe0/0xe0
> > ? _raw_spin_lock_bh+0xe0/0xe0
> > evict+0x155/0x2a0
> > __x64_sys_mq_unlink+0x1a7/0x250
> > do_syscall_64+0x3b/0x90
> > entry_SYSCALL_64_after_hwframe+0x44/0xae
> > RIP: 0033:0x7f0505ebc9b9
> > Code: 00 c3 66 2e 0f 1f 84 00 00 00 00 00 00 0f 1f 44 00 00 48 89 ....
> >
> > Allocated by task 67717
> > Freed by task 6027
> >
> > The buggy address belongs to the object at ffff88800b7dce38
> > which belongs to the cache user_namespace of size 600
> > The buggy address is located 480 bytes inside of
> > 600-byte region [ffff88800b7dce38, ffff88800b7dd090]
> > The buggy address belongs to the page:
> >
> > trinity: Detected kernel tainting. Last seed was 1891442794
>
> Eric
>
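
If I am reading the report right, the freed object is the user_namespace
itself, not the ucounts: the only user_namespace field dec_rlimit_ucounts()
touches is ->ucounts when stepping to the parent level, so the 8-byte read
480 bytes into the 600-byte user_namespace object is presumably that pointer
load. The linkage between the two structures, abbreviated from my reading of
the 5.16-era headers (include/linux/user_namespace.h), looks like:

struct ucounts {
	struct hlist_node node;
	struct user_namespace *ns;	/* the namespace this ucounts belongs to */
	kuid_t uid;
	atomic_t count;
	atomic_long_t ucount[UCOUNT_COUNTS];
};

struct user_namespace {
	...
	struct ucounts *ucounts;	/* what iter = iter->ns->ucounts reads */
	...
};

So the question of why the namespace was freed early comes down to what was
still passing a ucounts into dec_rlimit_ucounts() (here the mqueue eviction
path, per the trace) after that namespace had already gone away.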

--
Rgrds, legion
