Subject: Re: [BUG REPORT] ZSWAP: theoretical race condition issues
On Wed, Sep 25, 2013 at 5:33 PM, Weijie Yang <weijie.yang.kh@gmail.com> wrote:
> On Wed, Sep 25, 2013 at 4:31 PM, Bob Liu <lliubbo@gmail.com> wrote:
>> On Wed, Sep 25, 2013 at 4:09 PM, Weijie Yang <weijie.yang.kh@gmail.com> wrote:
>>> I think I found a new issue; to keep this mail thread complete, I am
>>> replying to this mail.
>>>
>>> It is also a concurrency issue, which occurs when a duplicate store and
>>> a reclaim run concurrently.
>>>
>>> zswap entry x with offset A is already stored in the zswap backend.
>>> Consider the following scenario:
>>>
>>> thread 0: reclaims entry x (takes a refcount, but has not yet called
>>> zswap_get_swap_cache_page())
>>>
>>> thread 1: stores a new page with the same offset A, allocating a new
>>> zswap entry y. The store finishes, shrink_page_list() calls
>>> __remove_mapping(), and now the page is no longer in the swap cache.
>>>
>>
>> But I don't think the swap layer will call zswap with the same offset A.
>
> 1. store a page with offset A in zswap
> 2. some time later, a page fault occurs and the page data is loaded from
> zswap. But note that zswap entry x is still in zswap, because
> frontswap_tmem_exclusive_gets_enabled is not set.

Sorry, I didn't notice that zswap_frontswap_load() doesn't call rb_erase().

> This page has PageSwapCache(page) set and page_private(page) = entry.val.
> 3. the page data is changed and the page becomes dirty
> 4. some time later again, this page is swapped out to the same offset A
>
> So a duplicate store happens.
>
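For reference, the duplicate-store path in zswap_frontswap_store() conceptually
looks like the sketch below. This is a rough reconstruction, not a quote of
mm/zswap.c; the helper names and signatures (zswap_rb_insert(),
zswap_rb_erase(), zswap_entry_put() returning the remaining refcount,
zswap_free_entry()) are assumptions:

	spin_lock(&tree->lock);
	do {
		ret = zswap_rb_insert(&tree->rbroot, entry, &dupentry);
		if (ret == -EEXIST) {
			/* offset A already has entry x; drop it from the tree */
			zswap_rb_erase(&tree->rbroot, dupentry);
			/*
			 * Thread 0 (reclaim) may still hold a reference on
			 * dupentry, so only free it on the last put.
			 */
			if (zswap_entry_put(dupentry) == 0)
				zswap_free_entry(tree, dupentry);
		}
	} while (ret == -EEXIST);
	spin_unlock(&tree->lock);

The point is that the old entry x drops out of the rbtree while thread 0 still
holds a reference to it, which is what makes the interleaving above possible.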

Then I think we should erase the entry from the rbtree in zswap_frontswap_load().
After the page has been decompressed and loaded from zswap, keeping the
compressed data in zswap is meaningless.
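A minimal sketch of what that could look like at the end of
zswap_frontswap_load(), once the data has been decompressed into the page
(untested; the tree lock, zswap_rb_erase(), zswap_entry_put() and
zswap_free_entry() usage follows the same assumed scheme as the sketch above):

	spin_lock(&tree->lock);
	/* no later lookup or duplicate store can find this entry any more */
	zswap_rb_erase(&tree->rbroot, entry);
	/*
	 * Drop the rbtree's reference; if this load or a concurrent reclaim
	 * still holds a reference, the entry stays alive and is freed by
	 * whoever drops the last one.
	 */
	if (zswap_entry_put(entry) == 0)
		zswap_free_entry(tree, entry);
	spin_unlock(&tree->lock);

With the entry gone from the tree, a later swap-out of the same offset A is an
ordinary store rather than a duplicate one, which closes the window described
above.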

--
Regards,
--Bob

