Subject: Re: [PATCH 0/2] mm: Two small fixes for recent syzbot reports
On Tue, Apr 14, 2020 at 12:06 AM Qian Cai <cai@lca.pw> wrote:
> > On Apr 9, 2020, at 7:29 PM, Stephen Rothwell <sfr@canb.auug.org.au> wrote:
> > Hi Linus,
> >
> > On Thu, 9 Apr 2020 09:32:32 -0700 Linus Torvalds <torvalds@linux-foundation.org> wrote:
> >>
> >> On Thu, Apr 9, 2020 at 5:55 AM Dmitry Vyukov <dvyukov@google.com> wrote:
> >>>
> >>> linux-next is boot-broken for more than a month and bugs are piling
> >>> onto bugs, I've seen at least 3 different ones.
> >>> syzbot can't get any working linux-next build for testing for a very
> >>> long time now.
> >>
> >> Ouch.
> >>
> >> Ok, that's not good. It means that linux-next has basically only done
> >> build-testing this whole cycle.
> >
> > Well, there are other CI's beyond syzbot .. Does syzbot only build/test
> > a single kernel arch/config?
> >
> >> Stephen, Dmitry - is there some way linux-next could possibly kick out
> >> trees more aggressively if syzbot can't even boot?

Hello all,

Sorry for the corona/holiday delays. I will try to answer/comment on
everything in this thread in this email.

Action item: we need to CC linux-next@ on linux-next build/boot
failures. I will work on this.
We have functionality to CC given emails on _all_ bugs on a given
tree, but we don't have a way to do that for build/boot bugs only. I
will try to add this soon.
Stephen, do you want to be CCed as well? Or just linux-next@?

> So old bugs generally should be aged out

This actually happens now.
Bugs without reproducers are auto-closed 60-120 days after the last
occurrence (the exact window depends on past frequency); for
linux-next the range is 40-60 days.
Bugs with reproducers are not auto-closed, but they are fix-bisected
and cause-bisected. Both bisections are only ~66% accurate, but they
still frequently provide a useful signal. Bugs with reproducers are
also generally easier to handle.

Another important distinction from Bugzilla is that the syzbot
dashboard has up-to-date "Last crash time" information. Click on the
"Last" column here:
https://syzkaller.appspot.com/upstream
If stale reports are the concern, it's very easy to start by ignoring
everything that last happened months ago.

So it's not as perfect as it would be with a dedicated human team
attached, but I would say it's now in reasonable shape, with ~400 open
bugs that happened within the last month.
And we now have data confirming that "old" does not mean "irrelevant".
Our current leader:
BUG: please report to dccp@vger.kernel.org => prev = 0, last = 0 at
net/dccp/ccids/lib/packet_history.c:LINE/tfrc_rx_hist_sample_rtt()
https://syzkaller.appspot.com/bug?id=0881c535c265ca965edc49c0ac3d0a9850d26eb1
was first triggered 964 days ago and has pretty much been there ever
since.

> It would be nice if there was some way we could triage Syzkaller
> bugs into different buckets.

Yes, I am wary of stepping onto the slippery slope of implementing a
full-fledged bug tracking system, but I do think syzbot will gather
more bug-tracker features over time, and tags will happen. We still
have https://github.com/google/syzkaller/issues/608 open; it's mainly
a question of allocating resources for the implementation and figuring
out the actual tag hierarchy.
For login and credentials, I guess we will go with just "whoever can
send email is root", because we already do this anyway (closing a bug
is more critical than changing a tag) :)

Re panic_on_warn.
We don't have a dedicated engineer to sheriff and give manual
consideration and judgement to each case. And as Qian noted, in such
circumstances it's reasonable not to trust anything after a warning.
Some notorious examples: a LOCKDEP warning disables LOCKDEP, so if we
boot in that state with eyes closed and then try to fuzz, or (worse)
test a patch for a LOCKDEP error, or bisect a LOCKDEP error, we will
immediately produce bogus testing results or a bogus bisection
culprit.
Or a hung-task warning may re-appear later during testing and confuse
the results again.
Or, if we ignore a KASAN warning, we boot with potentially corrupted
memory, with who-knows-what consequences.
A "normal" WARNING may be benign (a misuse of WARN), or maybe not;
that's impossible to figure out automatically. And in the end, if we
ignore it, who will notice and fix it, and when?
We got this far with this black-and-white criterion for kernel bugs,
and I think it has had positive effects in a number of areas. Going
forward, I think it's better to extend panic_on_warn to more testing
systems. Then non-fatal bugs will be no different from fatal bugs
during boot, which we need to handle in a reasonable timeframe anyway.
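For concreteness, a minimal sketch of what enabling this could look
like for a test system (how exactly it is wired up is of course up to
each CI; both knobs are standard kernel interfaces):

    # on the kernel command line at boot:
    panic_on_warn=1

    # or at runtime, before the test run starts:
    sysctl -w kernel.panic_on_warn=1

With either of these, the first WARNING becomes a panic, so the
boot/test run fails right there instead of continuing in a degraded
state.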
Which gets me to the next "interesting" point.

> Well, there are other CI's beyond syzbot.
> On the other hand, this makes me worry who is testing on linux-next every day.

How do these use-after-frees and locking bugs get past the
unit-testing systems (which syzbot is not) and remain unnoticed for so
long?...
syzbot uses the dumbest VMs (GCE), so everything it triggers during
boot should be triggerable pretty much everywhere.
This seems to be an action point for the testing systems: "boot to
ssh" is not the best criterion. Again, if there is a LOCKDEP error, we
are not catching any more LOCKDEP errors during subsequent testing. If
there is a use-after-free, that's a serious error on its own, and
KASAN produces only one report by default as well. And as far as I
understand, lots of kernel testing systems don't even enable KASAN,
which is very wrong.
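As a rough illustration (not a prescription; the exact option set is
up to each CI), a boot-test kernel config that can catch these classes
of bugs could include at least:

    CONFIG_KASAN=y              # use-after-free/out-of-bounds detection
    CONFIG_PROVE_LOCKING=y      # LOCKDEP
    CONFIG_DEBUG_ATOMIC_SLEEP=y # sleeping in atomic context
    CONFIG_DETECT_HUNG_TASK=y   # hung task detection

combined with panic_on_warn=1 as above, so that the first report fails
the run rather than getting lost in the console log.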
I talked to +Dan Rue about this a few days ago. Hopefully LKFT will
start catching these as part of its unit testing, which should help
with syzbot testing as well.
