Subject: Re: [PATCH v2] mm/zswap: change zswap to writethrough cache
Date: 2013-12-11

Hi Dan & Seth,

On Wed, Nov 27, 2013 at 9:28 AM, Dan Streetman <ddstreet@ieee.org> wrote:
> On Mon, Nov 25, 2013 at 1:00 PM, Seth Jennings <sjennings@variantweb.net> wrote:
>> On Fri, Nov 22, 2013 at 11:29:16AM -0600, Seth Jennings wrote:
>>> On Wed, Nov 20, 2013 at 02:49:33PM -0500, Dan Streetman wrote:
>>> > Currently, zswap is a writeback cache; stored pages are not sent
>>> > to the swap disk, and when zswap wants to evict old pages it must
>>> > first write them back to the swap cache/disk itself. This avoids
>>> > swap-out disk I/O up front, but only defers that disk I/O to the
>>> > writeback case (for pages that are evicted), adds the overhead of
>>> > uncompressing the evicted pages, and requires an additional free
>>> > page (to hold the uncompressed page) at a time of likely high
>>> > memory pressure. Additionally, being writeback adds complexity to
>>> > zswap, which has to perform the writeback on page eviction.
>>> >
>>> > This changes zswap to a writethrough cache by enabling
>>> > frontswap_writethrough() before registering, so that any
>>> > successful page store will also be written to the swap disk. All
>>> > the writeback code is removed since it is no longer needed, and
>>> > the only operation during a page eviction is now to remove the
>>> > entry from the tree and free it.
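
For reference, the change described above boils down to flipping
frontswap into writethrough mode before zswap registers its ops. A
minimal sketch of what that init-time hunk presumably looks like,
based on the description above rather than the actual patch, with the
surrounding function taken from mm/zswap.c circa 3.12:

	/* mm/zswap.c -- illustrative sketch, not the actual patch */
	static int __init init_zswap(void)
	{
		if (!zswap_enabled)
			return 0;

		/* ... pool, compressor and per-cpu setup elided ... */

		/*
		 * New per the description above: ask frontswap to let
		 * every successful store also fall through to the swap
		 * device, i.e. writethrough behavior.
		 */
		frontswap_writethrough(true);

		frontswap_register_ops(&zswap_frontswap_ops);
		return 0;
	}
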
>>>
>>> I like it. It gets rid of a lot of nasty writeback code in zswap.
>>>
>>> I'll have to test before I ack, hopefully by the end of the day.
>>>
>>> Yes, this will increase writes to the swap device over the delayed
>>> writeback approach. I think it is a good thing though. I think it
>>> makes the difference between zswap and zram, both in operation and in
>>> application, more apparent. Zram is the better choice for embedded where
>>> write wear is a concern, and zswap is the better choice if you need
>>> more flexibility to dynamically manage the compressed pool.
>>
>> One thing I realized while doing my testing was that making zswap
>> writethrough also impacts synchronous reclaim. Zswap, as it is now,
>> makes the swapcache page clean during swap_writepage(), which allows
>> shrink_page_list() to immediately reclaim it. Making zswap writethrough
>> eliminates this advantage and swapcache pages must be scanned again
>> before they can be reclaimed, as is the case with normal swapping.
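
To make the point above concrete, swap_writepage() in this era
(mm/page_io.c, roughly as of 3.12/3.13; quoted from memory, so treat
it as a sketch) looks like this. When frontswap_store() succeeds,
writeback is started and ended immediately without issuing a bio, so
the page comes back clean and shrink_page_list() can reclaim it right
away. In writethrough mode frontswap reports the store as failed (so
the caller still writes to disk), meaning the code falls through to
__swap_writepage() and the page goes through a normal writeback cycle:

	int swap_writepage(struct page *page, struct writeback_control *wbc)
	{
		int ret = 0;

		if (try_to_free_swap(page)) {
			unlock_page(page);
			goto out;
		}
		if (frontswap_store(page) == 0) {
			/*
			 * Stored in zswap: mark writeback started and
			 * finished without touching the disk, leaving
			 * the page clean. Not taken in writethrough
			 * mode, since the store is reported as failed.
			 */
			set_page_writeback(page);
			unlock_page(page);
			end_page_writeback(page);
			goto out;
		}
		ret = __swap_writepage(page, wbc, end_swap_bio_write);
	out:
		return ret;
	}
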
>
> Yep, I thought about that as well, and it is true, but only while
> zswap is not full. With writeback, once zswap fills up, page stores
> will frequently have to reclaim pages by writing compressed pages to
> disk. With writethrough, the zbud reclaim should be quick, as it only
> has to evict the pages, not write them to disk. So I think writeback
> should speed up swap_writepage() (compared to the no-zswap case)
> while zswap is not full, but (theoretically) slow it down (compared
> to the no-zswap case) once zswap is full. Writethrough should slow
> down swap_writepage() slightly (by the time it takes to compress and
> store the page) but consistently, by almost the same amount whether
> zswap is full or not. Theoretically :-) Definitely something to
> think about and test for.
>
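
For context on why the writethrough evict should be cheap: the page
is already on the swap device, so the zbud eviction callback only has
to drop zswap's compressed copy. Below is a rough guess at what that
callback could reduce to, reusing the existing mm/zswap.c helpers
(zswap_rb_search(), zswap_rb_erase(), zswap_entry_put()); the name
zswap_evict_entry is made up here, and this is a sketch of the shape
of the change, not the actual patch:

	/*
	 * Illustrative only: the page is already on disk, so eviction
	 * just removes the entry from the per-type tree and drops its
	 * reference, which frees the compressed data in zbud. No
	 * decompression, no spare page, no I/O.
	 */
	static int zswap_evict_entry(struct zbud_pool *pool, unsigned long handle)
	{
		struct zswap_header *zhdr = zbud_map(pool, handle);
		swp_entry_t swpentry = zhdr->swpentry;
		struct zswap_tree *tree = zswap_trees[swp_type(swpentry)];
		struct zswap_entry *entry;

		zbud_unmap(pool, handle);

		spin_lock(&tree->lock);
		entry = zswap_rb_search(&tree->rbroot, swp_offset(swpentry));
		if (entry) {
			zswap_rb_erase(&tree->rbroot, entry);
			zswap_entry_put(tree, entry);
		}
		spin_unlock(&tree->lock);

		return entry ? 0 : -ENOENT;
	}
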

Have you gotten any further benchmark results?

--
Thanks,
--Bob

