Subject: Re: [PATCH 2/5] zsmalloc: Consolidate zs_pool's migrate_lock and size_class's locks
On (22/11/03 11:18), Johannes Weiner wrote:
> > > I'm not in love with this, to be honest. One big pool lock instead
> > > of 255 per-class locks doesn't look attractive: one big pool lock
> > > is going to be hammered quite a lot when zram is used, e.g. as a
> > > regular block device with a file system under heavy parallel reads/writes.
>
> TBH the size class always struck me as an odd scope at which to split
> the lock. Lock contention depends on how variable the compression ratio
> of the hottest incoming data is, which is unpredictable from a user POV.
>
> My understanding is that the primary use case for zram is swapping, and
> the pool lock has the same granularity as the swap locking.

That's what we thought too, until a couple of merge windows ago we
learned (the hard way) that SUSE uses zram as a normal block device
with a real file system on it. And they use it often enough to have
immediately spotted the regression we landed.
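
For reference, a minimal sketch of such a setup (device size, compression
algorithm, and mount point are illustrative, not what SUSE actually runs):

    # zram as a regular block device with a file system on it
    modprobe zram num_devices=1
    echo zstd > /sys/block/zram0/comp_algorithm   # pick a compressor
    echo 8G > /sys/block/zram0/disksize           # uncompressed capacity
    mkfs.ext4 /dev/zram0
    mount /dev/zram0 /mnt/zram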

> Do you have a particular one in mind? (I'm thinking journaled ones are
> not of much interest, since their IO tends to be fairly serialized.)
>
> btrfs?

Probably some parallel fio workloads? Sequential and random reads/writes
from numerous workers.
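
Something along these lines, maybe (the job parameters are made up; the
point is just to have many workers hitting the same pool concurrently):

    # sequential writes, then mixed random reads/writes, 16 workers each
    fio --name=seq-write --directory=/mnt/zram --rw=write --bs=128k \
        --size=512m --numjobs=16 --ioengine=libaio --direct=1 --group_reporting
    fio --name=rand-rw --directory=/mnt/zram --rw=randrw --bs=4k \
        --size=512m --numjobs=16 --ioengine=libaio --direct=1 --group_reporting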

I personally sometimes use zram when I want to compile something and
care only about the final package; I don't need the .o files for
recompilation, just the package itself.
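
E.g. something like this (the paths and the package target are
illustrative), with the zram mount from above as the build directory:

    # keep all intermediate objects on the compressed RAM disk;
    # only the resulting package matters
    make O=/mnt/zram/build -j"$(nproc)" bindeb-pkg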
