Subject: Re: [PATCH 1/2] zram: fix crashes due to use of cpu hotplug multistate
On Mon, Apr 05, 2021 at 07:00:23PM +0000, Luis Chamberlain wrote:
> On Mon, Apr 05, 2021 at 10:07:24AM -0700, Minchan Kim wrote:
> > On Thu, Apr 01, 2021 at 11:59:25PM +0000, Luis Chamberlain wrote:
> > > And come to think of it, the last patch I had sent with a new
> > > DECLARE_RWSEM(zram_unload) also has this same issue, making most
> > > sysfs attributes rather fragile.
> >
> > Thanks for looking into it. I agree the single zram_index_rwlock is
> > not the right approach to fix it. However, I still hope we can find a
> > more generic solution to fix them at once, since I see it as a zram
> > instance racing problem.
>
> They are 3 separate different problems. Related, but different.

What are the 3 different problems? I am asking since I remember only
two: one for CPU multistate and the other for sysfs during rmmod.
Knowing the one I am missing would help me understand why you say they
are all different problems (even though they are somewhat related).

> At this point I think it would be difficult to resolve all 3
> with one solution without creating side issues, but hey,
> I'm curious if you find a solution that does not create other
> issues.

Yeah, let me try something with the kobject approach and see how far I
can go, but first let me understand what I was missing from the
problems you mentioned above.
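
To make the direction concrete, here is roughly what I have in mind.
It is only an untested sketch: the kobj_type and attribute-group
wiring are elided, and zram_kobj_release() is just a name I made up
for the release hook that the elided kobj_type would point at.

#include <linux/kobject.h>
#include <linux/kref.h>
#include <linux/slab.h>

struct zram {
	struct kobject kobj;	/* owns the zram/sysfs lifetime */
	struct gendisk *disk;
	/* ... */
};

/* would be hooked up as the .release of the (elided) kobj_type */
static void zram_kobj_release(struct kobject *kobj)
{
	struct zram *zram = container_of(kobj, struct zram, kobj);

	/* last reference dropped: no sysfs handler can still be running */
	kfree(zram);
}

static int zram_remove(struct zram *zram)
{
	/*
	 * An extra reference means someone (e.g., a sysfs reader)
	 * still holds the object. The check is racy as written; it
	 * is only here to show the idea.
	 */
	if (kref_read(&zram->kobj.kref) > 1)
		return -EBUSY;

	kobject_put(&zram->kobj);
	return 0;
}

With something like that, zram_remove can refuse while any attribute
handler is live, which is what I meant above by module_exit failing.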

>
> > An approach I am considering is to make struct zram include a kobject
> > and then make the zram sysfs auto-populated under the kobject. So, the
> > zram/sysfs lifetime should be under the kobject. With it, the sysfs
> > race problem I mentioned above should be gone. Furthermore, zram_remove
> > should fail if one of the zram objects is still alive
> > (i.e., zram->kobject->refcount > 1) so module_exit will fail, too.
>
> If the idea then is to busy out rmmod if a sysfs attribute is being
> read, that could then mean rmmod can sometimes never complete. Hogging
> up / busying out sysfs attributes means the module cannot be removed.

It's true, but is it a big problem? There are many cases where the
system just returns an error if it's busy and relies on the admin to
retry. IMHO, rmmod should be one of them.

> Which is why I think the *try_module_get()* is much more suitable, as
> it will always fail if we're already going down.

How does try_module_get() solve the problem? I thought it was a
general problem with every sysfs knob in zram. Do you mean you will
add module_get/put to every sysfs knob in zram?
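
Just to check we are talking about the same thing, this is how I read
the suggestion, sketched on a single knob. __disksize_store() is a
hypothetical helper that would hold the current body of the function,
not something in the tree:

#include <linux/device.h>
#include <linux/module.h>

/* hypothetical helper holding today's disksize_store() body */
static ssize_t __disksize_store(struct device *dev,
				struct device_attribute *attr,
				const char *buf, size_t len);

static ssize_t disksize_store(struct device *dev,
			      struct device_attribute *attr,
			      const char *buf, size_t len)
{
	ssize_t ret;

	/* fails once the module has already started going away */
	if (!try_module_get(THIS_MODULE))
		return -ENODEV;

	ret = __disksize_store(dev, attr, buf, len);

	module_put(THIS_MODULE);
	return ret;
}

If that is the idea, it means duplicating the same get/put pair in
every knob, which is why I am asking.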

>
> > I see one of the problems is how I could make the new zram object's
> > attribute group for the zram knobs under /sys/block/zram0, since the
> > block layer already made the zram0 kobject via device_add_disk.
>
> Right.. well the sysfs attribute races uncovered here actually do
> apply to any block driver as well. And that is why I was aiming
> for something generic if possible.

It would be great, but that's not what we have atm, so I want to
proceed with a fix anyway.

>
> I am not sure if you missed the last hunks of the generic solution,
> but that would resolve the issue you noted. Here is the same approach
> but in a non-generic solution, specific to just one attribute so far
> and to zram:

So the idea is to take a refcount on the block_device's inode, and the
module_exit path also checks the inode refcount to make rmmod fail?
Hmm, it might work, but I am not sure it's the right layer, either.

>
> diff --git a/drivers/block/zram/zram_drv.c b/drivers/block/zram/zram_drv.c
> index 494695ff227e..b566916e4ad9 100644
> --- a/drivers/block/zram/zram_drv.c
> +++ b/drivers/block/zram/zram_drv.c
> @@ -1724,6 +1724,11 @@ static ssize_t disksize_store(struct device *dev,
>  
>  	mutex_lock(&zram_index_mutex);
>  
> +	if (!bdgrab(dev_to_bdev(dev))) {
> +		err = -ENODEV;
> +		goto out_nodev;
> +	}
> +
>  	if (!zram_up || zram->claim) {
>  		err = -ENODEV;
>  		goto out;
> @@ -1760,6 +1765,7 @@ static ssize_t disksize_store(struct device *dev,
>  	zram->disksize = disksize;
>  	set_capacity_and_notify(zram->disk, zram->disksize >> SECTOR_SHIFT);
>  	up_write(&zram->init_lock);
> +	bdput(dev_to_bdev(dev));
>  
>  	mutex_unlock(&zram_index_mutex);
>  
> @@ -1770,6 +1776,8 @@ static ssize_t disksize_store(struct device *dev,
>  out_unlock:
>  	up_write(&zram->init_lock);
>  out:
> +	bdput(dev_to_bdev(dev));
> +out_nodev:
>  	mutex_unlock(&zram_index_mutex);
>  	return err;
>  }
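
And to check I read the other half right: would the teardown path then
check the same refcount, roughly like the below? This is my guess, not
part of your patch; the bd_inode->i_count poking and the > 1 threshold
are only for illustration.

static int zram_remove(struct zram *zram)
{
	struct block_device *bdev = zram->disk->part0;

	/*
	 * My guess: any reference beyond whatever baseline the device
	 * holds itself means a store/show still has the bdev via
	 * bdgrab(), so teardown refuses and rmmod fails.
	 */
	if (atomic_read(&bdev->bd_inode->i_count) > 1)
		return -EBUSY;

	/* ... actual teardown ... */
	return 0;
}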
