Subject: [syzbot] possible deadlock in __ntfs_clear_inode
Hello,

syzbot found the following issue on:

HEAD commit: 4312098baf37 Merge tag 'spi-fix-v6.1-rc6' of git://git.ker..
git tree: upstream
console output: https://syzkaller.appspot.com/x/log.txt?x=16498e3d880000
kernel config: https://syzkaller.appspot.com/x/.config?x=8d01b6e3197974dd
dashboard link: https://syzkaller.appspot.com/bug?extid=5ebb8d0e9b8c47867596
compiler: Debian clang version 13.0.1-++20220126092033+75e33f71c2da-1~exp1~20220126212112.63, GNU ld (GNU Binutils for Debian) 2.35.2

Unfortunately, I don't have any reproducer for this issue yet.

Downloadable assets:
disk image: https://storage.googleapis.com/syzbot-assets/4b7073d20a37/disk-4312098b.raw.xz
vmlinux: https://storage.googleapis.com/syzbot-assets/36a0367a5593/vmlinux-4312098b.xz
kernel image: https://storage.googleapis.com/syzbot-assets/265bedb3086b/bzImage-4312098b.xz

IMPORTANT: if you fix the issue, please add the following tag to the commit:
Reported-by: syzbot+5ebb8d0e9b8c47867596@syzkaller.appspotmail.com

======================================================
WARNING: possible circular locking dependency detected
6.1.0-rc6-syzkaller-00012-g4312098baf37 #0 Not tainted
------------------------------------------------------
kswapd0/110 is trying to acquire lock:
ffff888087920100 (&rl->lock){++++}-{3:3}, at: __ntfs_clear_inode+0x32/0x1f0 fs/ntfs/inode.c:2189

but task is already holding lock:
ffffffff8d1ff180 (fs_reclaim){+.+.}-{0:0}, at: arch_static_branch arch/x86/include/asm/jump_label.h:27 [inline]
ffffffff8d1ff180 (fs_reclaim){+.+.}-{0:0}, at: freezing include/linux/freezer.h:36 [inline]
ffffffff8d1ff180 (fs_reclaim){+.+.}-{0:0}, at: try_to_freeze include/linux/freezer.h:54 [inline]
ffffffff8d1ff180 (fs_reclaim){+.+.}-{0:0}, at: balance_pgdat+0x109c/0x1c50 mm/vmscan.c:7098

which lock already depends on the new lock.


the existing dependency chain (in reverse order) is:

-> #2 (fs_reclaim){+.+.}-{0:0}:
lock_acquire+0x182/0x3c0 kernel/locking/lockdep.c:5668
__fs_reclaim_acquire mm/page_alloc.c:4679 [inline]
fs_reclaim_acquire+0x82/0x120 mm/page_alloc.c:4693
might_alloc include/linux/sched/mm.h:271 [inline]
prepare_alloc_pages+0x145/0x5a0 mm/page_alloc.c:5325
__alloc_pages+0x161/0x560 mm/page_alloc.c:5544
folio_alloc+0x1a/0x50 mm/mempolicy.c:2295
filemap_alloc_folio+0x7e/0x1c0 mm/filemap.c:971
do_read_cache_folio+0x28a/0x790 mm/filemap.c:3498
do_read_cache_page mm/filemap.c:3576 [inline]
read_cache_page+0x56/0x270 mm/filemap.c:3585
read_mapping_page include/linux/pagemap.h:756 [inline]
ntfs_map_page fs/ntfs/aops.h:75 [inline]
map_mft_record_page fs/ntfs/mft.c:73 [inline]
map_mft_record+0x1dc/0x610 fs/ntfs/mft.c:156
ntfs_read_locked_inode+0x194/0x47c0 fs/ntfs/inode.c:550
ntfs_iget+0x10f/0x190 fs/ntfs/inode.c:177
ntfs_lookup+0x268/0xdb0 fs/ntfs/namei.c:117
__lookup_slow+0x266/0x3a0 fs/namei.c:1685
lookup_slow+0x53/0x70 fs/namei.c:1702
walk_component+0x2e1/0x410 fs/namei.c:1993
lookup_last fs/namei.c:2450 [inline]
path_lookupat+0x17d/0x450 fs/namei.c:2474
filename_lookup+0x274/0x650 fs/namei.c:2503
user_path_at_empty+0x40/0x1a0 fs/namei.c:2876
user_path_at include/linux/namei.h:57 [inline]
do_sys_truncate+0x94/0x180 fs/open.c:132
do_syscall_x64 arch/x86/entry/common.c:50 [inline]
do_syscall_64+0x3d/0xb0 arch/x86/entry/common.c:80
entry_SYSCALL_64_after_hwframe+0x63/0xcd

-> #1 (&ni->mrec_lock){+.+.}-{3:3}:
lock_acquire+0x182/0x3c0 kernel/locking/lockdep.c:5668
__mutex_lock_common+0x1bd/0x26e0 kernel/locking/mutex.c:603
__mutex_lock kernel/locking/mutex.c:747 [inline]
mutex_lock_nested+0x17/0x20 kernel/locking/mutex.c:799
map_mft_record+0x46/0x610 fs/ntfs/mft.c:154
ntfs_truncate+0x24e/0x2720 fs/ntfs/inode.c:2383
ntfs_truncate_vfs fs/ntfs/inode.c:2862 [inline]
ntfs_setattr+0x2b9/0x3a0 fs/ntfs/inode.c:2914
notify_change+0xe38/0x10f0 fs/attr.c:420
do_truncate+0x1fb/0x2e0 fs/open.c:65
vfs_truncate+0x2af/0x380 fs/open.c:111
do_sys_truncate+0xcb/0x180 fs/open.c:134
do_syscall_x64 arch/x86/entry/common.c:50 [inline]
do_syscall_64+0x3d/0xb0 arch/x86/entry/common.c:80
entry_SYSCALL_64_after_hwframe+0x63/0xcd

-> #0 (&rl->lock){++++}-{3:3}:
check_prev_add kernel/locking/lockdep.c:3097 [inline]
check_prevs_add kernel/locking/lockdep.c:3216 [inline]
validate_chain+0x1898/0x6ae0 kernel/locking/lockdep.c:3831
__lock_acquire+0x1292/0x1f60 kernel/locking/lockdep.c:5055
lock_acquire+0x182/0x3c0 kernel/locking/lockdep.c:5668
down_write+0x9c/0x270 kernel/locking/rwsem.c:1562
__ntfs_clear_inode+0x32/0x1f0 fs/ntfs/inode.c:2189
ntfs_evict_big_inode+0x2b6/0x470 fs/ntfs/inode.c:2278
evict+0x2a4/0x620 fs/inode.c:664
dispose_list fs/inode.c:697 [inline]
prune_icache_sb+0x268/0x320 fs/inode.c:896
super_cache_scan+0x362/0x470 fs/super.c:106
do_shrink_slab+0x4e1/0xa00 mm/vmscan.c:842
shrink_slab_memcg+0x2ec/0x630 mm/vmscan.c:911
shrink_slab+0xbe/0x340 mm/vmscan.c:990
shrink_node_memcgs+0x3c3/0x770 mm/vmscan.c:6076
shrink_node+0x299/0x1050 mm/vmscan.c:6105
kswapd_shrink_node mm/vmscan.c:6894 [inline]
balance_pgdat+0xec2/0x1c50 mm/vmscan.c:7084
kswapd+0x2d5/0x590 mm/vmscan.c:7344
kthread+0x266/0x300 kernel/kthread.c:376
ret_from_fork+0x1f/0x30 arch/x86/entry/entry_64.S:306

other info that might help us debug this:

Chain exists of:
&rl->lock --> &ni->mrec_lock --> fs_reclaim

Possible unsafe locking scenario:

       CPU0                    CPU1
       ----                    ----
  lock(fs_reclaim);
                               lock(&ni->mrec_lock);
                               lock(fs_reclaim);
  lock(&rl->lock);

*** DEADLOCK ***

3 locks held by kswapd0/110:
#0: ffffffff8d1ff180 (fs_reclaim){+.+.}-{0:0}, at: arch_static_branch arch/x86/include/asm/jump_label.h:27 [inline]
#0: ffffffff8d1ff180 (fs_reclaim){+.+.}-{0:0}, at: freezing include/linux/freezer.h:36 [inline]
#0: ffffffff8d1ff180 (fs_reclaim){+.+.}-{0:0}, at: try_to_freeze include/linux/freezer.h:54 [inline]
#0: ffffffff8d1ff180 (fs_reclaim){+.+.}-{0:0}, at: balance_pgdat+0x109c/0x1c50 mm/vmscan.c:7098
#1: ffffffff8d1d6030 (shrinker_rwsem){++++}-{3:3}, at: shrink_slab_memcg+0xd9/0x630 mm/vmscan.c:884
#2: ffff8880289160e0 (&type->s_umount_key#71){++++}-{3:3}, at: trylock_super fs/super.c:415 [inline]
#2: ffff8880289160e0 (&type->s_umount_key#71){++++}-{3:3}, at: super_cache_scan+0x6a/0x470 fs/super.c:79

stack backtrace:
CPU: 0 PID: 110 Comm: kswapd0 Not tainted 6.1.0-rc6-syzkaller-00012-g4312098baf37 #0
Hardware name: Google Google Compute Engine/Google Compute Engine, BIOS Google 10/26/2022
Call Trace:
<TASK>
__dump_stack lib/dump_stack.c:88 [inline]
dump_stack_lvl+0x1b1/0x28e lib/dump_stack.c:106
check_noncircular+0x2cc/0x390 kernel/locking/lockdep.c:2177
check_prev_add kernel/locking/lockdep.c:3097 [inline]
check_prevs_add kernel/locking/lockdep.c:3216 [inline]
validate_chain+0x1898/0x6ae0 kernel/locking/lockdep.c:3831
__lock_acquire+0x1292/0x1f60 kernel/locking/lockdep.c:5055
lock_acquire+0x182/0x3c0 kernel/locking/lockdep.c:5668
down_write+0x9c/0x270 kernel/locking/rwsem.c:1562
__ntfs_clear_inode+0x32/0x1f0 fs/ntfs/inode.c:2189
ntfs_evict_big_inode+0x2b6/0x470 fs/ntfs/inode.c:2278
evict+0x2a4/0x620 fs/inode.c:664
dispose_list fs/inode.c:697 [inline]
prune_icache_sb+0x268/0x320 fs/inode.c:896
super_cache_scan+0x362/0x470 fs/super.c:106
do_shrink_slab+0x4e1/0xa00 mm/vmscan.c:842
shrink_slab_memcg+0x2ec/0x630 mm/vmscan.c:911
shrink_slab+0xbe/0x340 mm/vmscan.c:990
shrink_node_memcgs+0x3c3/0x770 mm/vmscan.c:6076
shrink_node+0x299/0x1050 mm/vmscan.c:6105
kswapd_shrink_node mm/vmscan.c:6894 [inline]
balance_pgdat+0xec2/0x1c50 mm/vmscan.c:7084
kswapd+0x2d5/0x590 mm/vmscan.c:7344
kthread+0x266/0x300 kernel/kthread.c:376
ret_from_fork+0x1f/0x30 arch/x86/entry/entry_64.S:306
</TASK>
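
To spell the cycle out: trace #2 above records fs_reclaim being entered
while &ni->mrec_lock was held (map_mft_record() performs a __GFP_FS
page-cache allocation), trace #1 records &ni->mrec_lock being taken
while &rl->lock was already held on the truncate path, and the main
trace shows kswapd, already inside fs_reclaim, taking &rl->lock in
__ntfs_clear_inode() during inode eviction. Below is a minimal
user-space sketch of the same three-lock inversion, with hypothetical
pthread mutexes standing in for the kernel locks; it only illustrates
the reported cycle and is not kernel code:

/*
 * Hypothetical user-space analogue of the cycle lockdep reports above.
 * Three pthread mutexes stand in for the kernel's fs_reclaim,
 * &ni->mrec_lock and &rl->lock. Build with: cc -pthread deadlock.c
 */
#include <pthread.h>
#include <stdio.h>

static pthread_mutex_t fs_reclaim = PTHREAD_MUTEX_INITIALIZER;
static pthread_mutex_t mrec_lock  = PTHREAD_MUTEX_INITIALIZER;
static pthread_mutex_t rl_lock    = PTHREAD_MUTEX_INITIALIZER;

/* Mirrors the truncate path: &rl->lock -> &ni->mrec_lock -> fs_reclaim. */
static void *truncate_path(void *unused)
{
	pthread_mutex_lock(&rl_lock);     /* runlist lock held...           */
	pthread_mutex_lock(&mrec_lock);   /* ...then the MFT record lock... */
	pthread_mutex_lock(&fs_reclaim);  /* ...then a __GFP_FS allocation  */
	pthread_mutex_unlock(&fs_reclaim);
	pthread_mutex_unlock(&mrec_lock);
	pthread_mutex_unlock(&rl_lock);
	return NULL;
}

/* Mirrors kswapd: fs_reclaim -> &rl->lock (via __ntfs_clear_inode). */
static void *reclaim_path(void *unused)
{
	pthread_mutex_lock(&fs_reclaim);  /* reclaim in progress...         */
	pthread_mutex_lock(&rl_lock);     /* ...evicts the inode            */
	pthread_mutex_unlock(&rl_lock);
	pthread_mutex_unlock(&fs_reclaim);
	return NULL;
}

int main(void)
{
	pthread_t t1, t2;

	pthread_create(&t1, NULL, truncate_path, NULL);
	pthread_create(&t2, NULL, reclaim_path, NULL);
	pthread_join(t1, NULL);
	pthread_join(t2, NULL);
	printf("done (can hang if the two orderings interleave)\n");
	return 0;
}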


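One possible direction for a fix, offered only as an untested sketch:
the &ni->mrec_lock -> fs_reclaim edge comes from map_mft_record_page()
/ ntfs_map_page() doing a __GFP_FS page-cache read while the MFT record
lock is held (trace #2). Doing that read inside a NOFS scope masks
__GFP_FS off the allocation, so reclaim entered from it cannot re-enter
the filesystem and lockdep no longer records that edge, which breaks
the cycle. memalloc_nofs_save()/memalloc_nofs_restore() and
read_mapping_page() are the real kernel APIs; the wrapper name below is
hypothetical:

#include <linux/pagemap.h>
#include <linux/sched/mm.h>

/*
 * Hypothetical helper: read a metadata page with __GFP_FS masked off so
 * the allocation cannot recurse into filesystem reclaim while
 * &ni->mrec_lock is held. Untested sketch, not a submitted patch.
 */
static struct page *ntfs_map_page_nofs(struct address_space *mapping,
				       unsigned long index)
{
	unsigned int nofs_flags = memalloc_nofs_save();
	struct page *page = read_mapping_page(mapping, index, NULL);

	memalloc_nofs_restore(nofs_flags);
	return page;
}

An alternative might be to avoid sleeping on &rl->lock in the eviction
path, though that would need care around concurrent truncation.
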
---
This report is generated by a bot. It may contain errors.
See https://goo.gl/tpsmEJ for more information about syzbot.
syzbot engineers can be reached at syzkaller@googlegroups.com.

syzbot will keep track of this issue. See:
https://goo.gl/tpsmEJ#status for how to communicate with syzbot.
