From: Pasha Tatashin <pasha.tatashin@soleen.com>
Date: Wed, 26 Jan 2022
Subject: Re: [PATCH v3 1/9] mm: add overflow and underflow checks for page->_refcount

On Wed, Jan 26, 2022 at 2:45 PM Matthew Wilcox <willy@infradead.org> wrote:
>
> On Wed, Jan 26, 2022 at 02:22:26PM -0500, Pasha Tatashin wrote:
> > On Wed, Jan 26, 2022 at 1:59 PM Matthew Wilcox <willy@infradead.org> wrote:
> > >
> > > On Wed, Jan 26, 2022 at 06:34:21PM +0000, Pasha Tatashin wrote:
> > > > The problems with page->_refcount are hard to debug, because usually
> > > > when they are detected, the damage has occurred a long time ago. Yet,
> > > > the problems with invalid page refcount may be catastrophic and lead to
> > > > memory corruptions.
> > > >
> > > > Reduce the scope of when the _refcount problems manifest themselves by
> > > > adding checks for underflows and overflows into functions that modify
> > > > _refcount.
> > >
> > > If you're chasing a bug like this, presumably you turn on page
> > > tracepoints. So could we reduce the cost of this by putting the
> > > VM_BUG_ON_PAGE parts into __page_ref_mod() et al? Yes, we'd need to
> > > change the arguments to those functions to pass in old & new, but that
> > > should be a cheap change compared to embedding the VM_BUG_ON_PAGE.
> >
> > This is not only about chasing a bug. This is also about preventing
> > the memory corruption and information leaks that ref_count bugs can
> > cause.
> > Several months ago a memory corruption bug was discovered by accident:
> > an engineer studying a process core from a production system noticed
> > that some memory did not look like it belonged to the original
> > process. We tried to reproduce that bug manually but failed. However,
> > later analysis by our team showed that the problem occurred due to a
> > ref_count bug in Linux, and the bug itself was root-caused and fixed
> > (mentioned in the cover letter). This work would have prevented
> > similar ref_count bugs from leading to memory corruption.
>
> But the VM_BUG_ON_PAGE tells us next to nothing useful. To take
> your first example [1] as the kind of thing you say this is going to
> help fix:
>
> 1. Page p is allocated by thread a (refcount 1)
> 2. Thread b gets mistaken pointer to p

Thread b gets a mistaken pointer to p because of a bug in the
kernel. Different types of bugs can lead to such scenarios, and it is
probably not feasible to prevent all of them. However, one such
scenario is that we lose control of ref_count, and the page is then
incorrectly remapped or even copied (perhaps migrated) into another
address space.

While studying the logs of the machine on which the double mapping
occurred, we noticed that ref_count had underflowed. This was the
smoking gun for the problem, and that is why we concentrated our
search for the root cause of the memory leak on places where
ref_count can be incorrectly modified.

This patch series ensures that once ref_count for some reason
becomes negative, we panic immediately, as there is a possibility
that a leak can occur.
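
For illustration, here is a minimal sketch of the kind of check the
series adds to the _refcount modifiers in include/linux/page_ref.h
(simplified: the exact check conditions differ slightly in the
patches, and the tracepoint hooks are omitted):

static inline void page_ref_add(struct page *page, int nr)
{
        int old_val = atomic_fetch_add(nr, &page->_refcount);
        int new_val = old_val + nr;

        /* Overflow: adding a positive nr must not wrap the count. */
        VM_BUG_ON_PAGE((unsigned int)new_val < (unsigned int)old_val, page);
}

static inline void page_ref_sub(struct page *page, int nr)
{
        int old_val = atomic_fetch_sub(nr, &page->_refcount);
        int new_val = old_val - nr;

        /* Underflow: subtracting nr must not take the count below 0. */
        VM_BUG_ON_PAGE((unsigned int)new_val > (unsigned int)old_val, page);
}

Because atomic_fetch_add()/atomic_fetch_sub() return the old value,
both the old and new counts are available at the point of the check,
so the VM_BUG_ON_PAGE() fires on the exact transition that goes bad.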

The second benefit of this series is that it makes the history of
ref_count changes contiguous: with this series we never reset the
value to 0; instead we only operate using offsets and add/sub
operations. This helps with tracing the history of ref_count via
tracepoints.
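
As a sketch of the second point: a frozen page has _refcount == 0, so
an operation such as page_ref_unfreeze() (which in mainline resets the
value with atomic_set_release()) can instead be expressed as a delta.
This is simplified and ignores the release ordering of the original:

static inline void page_ref_unfreeze(struct page *page, int count)
{
        /*
         * Restore the count with an add rather than a set, so the
         * ref_count history remains a pure stream of add/sub deltas
         * that tracepoints can replay.
         */
        int old_val = atomic_fetch_add(count, &page->_refcount);

        VM_BUG_ON_PAGE(count <= 0, page);
        /* The page must actually have been frozen. */
        VM_BUG_ON_PAGE(old_val != 0, page);
}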

> 3. Thread b calls put_page(), __put_page(), page goes to memory
> allocator.
> 4. Thread c calls alloc_page(), also gets page p (refcount 1 again).
> 5. Thread a calls put_page(), __put_page()
> 6. Thread c calls put_page() and gets a VM_BUG_ON_PAGE.
>
> How do we find thread b's involvement? I don't think we can even see
> thread a's involvement in all of this! All we know is a backtrace
> pointing to thread c, who is a completely innocent bystander. I think
> you have to enable page tracepoints to have any shot at finding thread
> b's involvement.

You are right, we cannot see thread b's involvement; we only get a
panic closer to the damage, and hopefully before the leak occurs.
Again, this is just one of the mitigation techniques. Another one is
the page table check [2].

[2] https://lore.kernel.org/all/20211221154650.1047963-1-pasha.tatashin@soleen.com
>
> [1] https://lore.kernel.org/stable/20211122171825.1582436-1-gthelen@google.com/
