Date: Sat, 28 Nov 2020
From: Oleksandr Natalenko
Subject: Re: scheduling while atomic in z3fold
On Sat, Nov 28, 2020 at 03:05:24PM +0100, Oleksandr Natalenko wrote:
> Hi.
>
> While running v5.10-rc5-rt11 I bumped into the following:
>
> ```
> BUG: scheduling while atomic: git/18695/0x00000002
> Preemption disabled at:
> [<ffffffffbb93fcb3>] z3fold_zpool_malloc+0x463/0x6e0
> …
> Call Trace:
> dump_stack+0x6d/0x88
> __schedule_bug.cold+0x88/0x96
> __schedule+0x69e/0x8c0
> preempt_schedule_lock+0x51/0x150
> rt_spin_lock_slowlock_locked+0x117/0x2c0
> rt_spin_lock_slowlock+0x58/0x80
> rt_spin_lock+0x2a/0x40
> z3fold_zpool_malloc+0x4c1/0x6e0
> zswap_frontswap_store+0x39c/0x980
> __frontswap_store+0x6e/0xf0
> swap_writepage+0x39/0x70
> shmem_writepage+0x31b/0x490
> pageout+0xf4/0x350
> shrink_page_list+0xa28/0xcc0
> shrink_inactive_list+0x300/0x690
> shrink_lruvec+0x59a/0x770
> shrink_node+0x2d6/0x8d0
> do_try_to_free_pages+0xda/0x530
> try_to_free_pages+0xff/0x260
> __alloc_pages_slowpath.constprop.0+0x3d5/0x1230
> __alloc_pages_nodemask+0x2f6/0x350
> allocate_slab+0x3da/0x660
> ___slab_alloc+0x4ff/0x760
> __slab_alloc.constprop.0+0x7a/0x100
> kmem_cache_alloc+0x27b/0x2c0
> __d_alloc+0x22/0x230
> d_alloc_parallel+0x67/0x5e0
> __lookup_slow+0x5c/0x150
> path_lookupat+0x2ea/0x4d0
> filename_lookup+0xbf/0x210
> vfs_statx.constprop.0+0x4d/0x110
> __do_sys_newlstat+0x3d/0x80
> do_syscall_64+0x33/0x40
> entry_SYSCALL_64_after_hwframe+0x44/0xa9
> ```
>
> Preemption seems to be disabled here:
>
> ```
> $ scripts/faddr2line mm/z3fold.o z3fold_zpool_malloc+0x463
> z3fold_zpool_malloc+0x463/0x6e0:
> add_to_unbuddied at mm/z3fold.c:645
> (inlined by) z3fold_alloc at mm/z3fold.c:1195
> (inlined by) z3fold_zpool_malloc at mm/z3fold.c:1737
> ```
>
> The call to rt_spin_lock() seems to be here:
>
> ```
> $ scripts/faddr2line mm/z3fold.o z3fold_zpool_malloc+0x4c1
> z3fold_zpool_malloc+0x4c1/0x6e0:
> add_to_unbuddied at mm/z3fold.c:649
> (inlined by) z3fold_alloc at mm/z3fold.c:1195
> (inlined by) z3fold_zpool_malloc at mm/z3fold.c:1737
> ```
>
> Or, in source code:
>
> ```
> 639 /* Add to the appropriate unbuddied list */
> 640 static inline void add_to_unbuddied(struct z3fold_pool *pool,
> 641 					struct z3fold_header *zhdr)
> 642 {
> 643 	if (zhdr->first_chunks == 0 || zhdr->last_chunks == 0 ||
> 644 			zhdr->middle_chunks == 0) {
> 645 		struct list_head *unbuddied = get_cpu_ptr(pool->unbuddied);
> 646 
> 647 		int freechunks = num_free_chunks(zhdr);
> 648 		spin_lock(&pool->lock);
> 649 		list_add(&zhdr->buddy, &unbuddied[freechunks]);
> 650 		spin_unlock(&pool->lock);
> 651 		zhdr->cpu = smp_processor_id();
> 652 		put_cpu_ptr(pool->unbuddied);
> 653 	}
> 654 }
> ```
>
> Shouldn't the list manipulation be protected with
> local_lock+this_cpu_ptr instead of get_cpu_ptr+spin_lock?
>
> Thanks.
>
> --
> Oleksandr Natalenko (post-factum)
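
To illustrate what I mean by local_lock+this_cpu_ptr, here is a rough
and completely untested sketch. The per-CPU ub_lock is hypothetical: it
would have to be added to struct z3fold_pool as a local_lock_t __percpu
pointer, allocated and initialized alongside pool->unbuddied:

```
/*
 * Untested sketch, not the actual z3fold code. Assumes a hypothetical
 * per-CPU lock added to struct z3fold_pool:
 *
 *	local_lock_t __percpu *ub_lock;
 */
static inline void add_to_unbuddied(struct z3fold_pool *pool,
				    struct z3fold_header *zhdr)
{
	if (zhdr->first_chunks == 0 || zhdr->last_chunks == 0 ||
			zhdr->middle_chunks == 0) {
		struct list_head *unbuddied;
		int freechunks = num_free_chunks(zhdr);

		/*
		 * On !PREEMPT_RT this disables preemption just like
		 * get_cpu_ptr(); on PREEMPT_RT it takes a per-CPU
		 * sleeping lock instead, so the rt_spin_lock() hiding
		 * behind spin_lock() below is no longer taken in an
		 * atomic context.
		 */
		local_lock(pool->ub_lock);
		unbuddied = this_cpu_ptr(pool->unbuddied);

		spin_lock(&pool->lock);
		list_add(&zhdr->buddy, &unbuddied[freechunks]);
		spin_unlock(&pool->lock);
		zhdr->cpu = smp_processor_id();

		local_unlock(pool->ub_lock);
	}
}
```

The per-CPU unbuddied array would still be accessed by one CPU at a
time, but on PREEMPT_RT nothing would sleep with preemption disabled
any more.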

Forgot to Cc linux-rt-users@, sorry.

--
Oleksandr Natalenko (post-factum)
