Subject: Re: shmem_recalc_inode: unable to handle kernel NULL pointer dereference
On Sun, 31 Mar 2019, Hugh Dickins wrote:
> On Sun, 31 Mar 2019, Alex Xu (Hello71) wrote:
> > Excerpts from Vineeth Pillai's message of March 25, 2019 6:08 pm:
> > > On Sun, Mar 24, 2019 at 11:30 AM Alex Xu (Hello71) <alex_y_xu@yahoo.ca> wrote:
> > >>
> > >> I get this BUG in 5.1-rc1 sometimes when powering off the machine. I
> > >> suspect my setup erroneously executes two swapoff+cryptsetup close
> > >> operations simultaneously, so a race condition is triggered.
> > >>
> > >> I am using a single swap on a plain dm-crypt device on a MBR partition
> > >> on a SATA drive.
> > >>
> > >> I think the problem is probably related to
> > >> b56a2d8af9147a4efe4011b60d93779c0461ca97, so CCing the related people.
> > >>
> > > Could you please provide more information on this - stack trace, dmesg etc?
> > > Is it easily reproducible? If yes, please detail the steps so that I
> > > can try it in-house.
> > >
> > > Thanks,
> > > Vineeth
> > >
> >
> > Some info from the BUG entry (I didn't bother to type it all,
> > low-quality image available upon request):
> >
> > BUG: unable to handle kernel NULL pointer dereference at 0000000000000000
> > #PF error: [normal kernel read fault]
> > PGD 0 P4D 0
> > Oops: 0000 [#1] SMP
> > CPU: 0 Comm: swapoff Not tainted 5.1.0-rc1+ #2
> > RIP: 0010:shmem_recalc_inode+0x41/0x90
> >
> > Call Trace:
> > ? shmem_undo_range
> > ? rb_erase_cached
> > ? set_next_entity
> > ? __inode_wait_for_writeback
> > ? shmem_truncate_range
> > ? shmem_evict_inode
> > ? evict
> > ? shmem_unuse
> > ? try_to_unuse
> > ? swapcache_free_entries
> > ? _cond_resched
> > ? __se_sys_swapoff
> > ? do_syscall_64
> > ? entry_SYSCALL_64_after_hwframe
> >
> > As I said, it only occurs occasionally on shutdown. I think it is a safe
> > guess that it can only occur when the swap is not empty, but possibly
> > other conditions are necessary, so I will test further.
>
> Thanks for the update, Alex. I'm looking into a couple of bugs with the
> 5.1-rc swapoff, but this one doesn't look like anything I know so far.
> shmem_recalc_inode() is a surprising place to crash: it's as if the
> igrab() in shmem_unuse() were not working.
>
> Yes, please do send Vineeth and me (or the lists) your low-quality image,
> in case we can extract any more info from it; and also please send the
> disassembly of your kernel's shmem_recalc_inode(), so we can be sure of
> exactly what it's crashing on (though I expect that will leave me as
> puzzled as before).
>
> If you want to experiment with one of my fixes, not yet written up and
> posted, just try changing SWAP_UNUSE_MAX_TRIES in mm/swapfile.c from
> 3 to INT_MAX: I don't see how that issue could manifest as crashing in
> shmem_recalc_inode(), but I may just be too stupid to see it.

Thanks for the image and disassembly you sent: they showed that the
	ffffffff81117351:	48 83 3f 00	cmpq   $0x0,(%rdi)
you are crashing on is the "if (sbinfo->max_blocks)" in the inlined
shmem_inode_unacct_blocks(): inode->i_sb->s_fs_info is NULL, which is
something that shmem_put_super() does when the filesystem is unmounted.
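
For reference, here is an abbreviated paraphrase of the helpers involved,
from a 5.1-era mm/shmem.c (not verbatim, check your own tree): SHMEM_SB()
simply returns sb->s_fs_info, so once shmem_put_super() has freed sbinfo
and cleared the pointer, the max_blocks test reads through NULL:

	/* abbreviated paraphrase of mm/shmem.c, not verbatim */
	static inline struct shmem_sb_info *SHMEM_SB(struct super_block *sb)
	{
		return sb->s_fs_info;
	}

	static inline void shmem_inode_unacct_blocks(struct inode *inode, long pages)
	{
		struct shmem_inode_info *info = SHMEM_I(inode);
		struct shmem_sb_info *sbinfo = SHMEM_SB(inode->i_sb);

		if (sbinfo->max_blocks)		/* <- your cmpq $0x0,(%rdi) */
			percpu_counter_sub(&sbinfo->used_blocks, pages);
		shmem_unacct_blocks(info->flags, pages);
	}

	static void shmem_put_super(struct super_block *sb)
	{
		struct shmem_sb_info *sbinfo = SHMEM_SB(sb);

		percpu_counter_destroy(&sbinfo->used_blocks);
		mpol_put(sbinfo->mpol);
		kfree(sbinfo);
		sb->s_fs_info = NULL;		/* the NULL being dereferenced */
	}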

Eight-year-old memories stirred: I knew, when looking at Vineeth's patch,
that I ought to look back through the history of mm/shmem.c, to check
some points that Konstantin Khlebnikov had made years ago, which
surprised me then and were in danger of surprising us again with this
rework. But I failed to do so: thank you, Alex, for reporting this bug
and pointing us back there.

igrab() protects from eviction but does not protect from unmounting.
I bet that is what you are hitting, though I've not even read through
2.6.39's 778dd893ae785 ("tmpfs: fix race between umount and swapoff")
again yet, and not begun to think of the fix for it this time around;
but wanted to let you know that this bug is now (probably) identified.
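
To spell out the interleaving I suspect (just a sketch matching your call
trace; the umount side is my guess, not yet verified):

	swapoff: shmem_unuse()                    tmpfs umount
	----------------------                    ------------
	igrab(inode)   /* pins the inode */
	                                          generic_shutdown_super()
	                                            shmem_put_super(sb)
	                                              kfree(sbinfo);
	                                              sb->s_fs_info = NULL;
	...
	final iput(inode)
	  evict -> shmem_evict_inode
	    shmem_truncate_range -> shmem_undo_range
	      shmem_recalc_inode
	        shmem_inode_unacct_blocks()
	          reads inode->i_sb->s_fs_info == NULL -> oops

The inode reference keeps the inode itself alive, but nothing there stops
the superblock's private info from being torn down underneath it.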

Hugh
