Subject: Re: [PATCH v3 1/9] mm: add overflow and underflow checks for page->_refcount
> > diff --git a/include/linux/page_ref.h b/include/linux/page_ref.h
> > index 2e677e6ad09f..fe4864f7f69c 100644
> > --- a/include/linux/page_ref.h
> > +++ b/include/linux/page_ref.h
> > @@ -117,7 +117,10 @@ static inline void init_page_count(struct page *page)
> >
> > static inline void page_ref_add(struct page *page, int nr)
> > {
> > -	atomic_add(nr, &page->_refcount);
> > +	int old_val = atomic_fetch_add(nr, &page->_refcount);
> > +	int new_val = old_val + nr;
> > +
> > +	VM_BUG_ON_PAGE((unsigned int)new_val < (unsigned int)old_val, page);
>
> This seems somewhat weird, as it will trigger not just on overflow, but also
> if nr is negative, which I think is valid usage even though the function
> has 'add' in its name, because 'nr' is signed?

I have not found any place in the mainline kernel where nr is negative in
page_ref_add(). I think that by adding this assert we ensure that when 'add'
shows up in a backtrace the refcount actually increased, and when
page_ref_sub() shows up it actually decreased. It is strange to have both
functions and yet allow them to do the opposite. We can also change the type
of 'nr' to unsigned.
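
To illustrate what the unsigned comparison actually catches, here is a
minimal userspace sketch (my own example, not the kernel code): a plain
int stands in for page->_refcount, atomic_fetch_add() is modelled
non-atomically, and VM_BUG_ON_PAGE() is replaced by a printf:

#include <stdio.h>

static int refcount;

static void ref_add(int nr)
{
	int old_val = refcount;		/* what atomic_fetch_add() would return */
	int new_val = old_val + nr;

	refcount = new_val;
	if ((unsigned int)new_val < (unsigned int)old_val)
		printf("check fires: old=%d nr=%d new=%d\n", old_val, nr, new_val);
}

int main(void)
{
	refcount = 5;
	ref_add(-1);	/* fires: 4 < 5 as unsigned, so a negative nr is caught */

	refcount = -1;	/* counter reads as 0xffffffff when viewed as unsigned */
	ref_add(2);	/* fires: wraps to 1, and 1 < 0xffffffff as unsigned */

	refcount = 5;
	ref_add(3);	/* silent: normal increment, 8 >= 5 as unsigned */

	return 0;
}

So the check fires both when nr is negative and when the counter wraps
past UINT_MAX, while a normal increment stays silent.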

Pasha
