Date: Tue, 16 Nov 2021 23:27:56 -0800
From: Minchan Kim <>
Subject: Re: [RFC PATCH] kernfs: release kernfs_mutex before the inode allocation
On Wed, Nov 17, 2021 at 07:44:44AM +0100, Greg Kroah-Hartman wrote:
> On Tue, Nov 16, 2021 at 01:36:01PM -0800, Minchan Kim wrote:
> > On Tue, Nov 16, 2021 at 08:49:46PM +0100, Greg Kroah-Hartman wrote:
> > > On Tue, Nov 16, 2021 at 11:43:17AM -0800, Minchan Kim wrote:
> > > > The kernfs implementation has big lock granularity(kernfs_rwsem) so
> > > > every kernfs-based(e.g., sysfs, cgroup, dmabuf) fs are able to compete
> > > > the lock. Thus, if one of userspace goes the sleep under holding
> > > > the lock for a long time, rest of them should wait it. A example is
> > > > the holder goes direct reclaim with the lock since it needs memory
> > > > allocation. Let's fix it at common technique that release the lock
> > > > and then allocate the memory. Fortunately, kernfs looks like have
> > > > an refcount so I hope it's fine.
> > > >
> > > > Signed-off-by: Minchan Kim <minchan@kernel.org>
> > > > ---
> > > >  fs/kernfs/dir.c             | 14 +++++++++++---
> > > >  fs/kernfs/inode.c           |  2 +-
> > > >  fs/kernfs/kernfs-internal.h |  1 +
> > > >  3 files changed, 13 insertions(+), 4 deletions(-)
> > >
> > > What workload hits this lock to cause it to be noticable?
> >
> > A app launching since it was dropping the frame since the
> > latency was too long.
>
> How does running a program interact with kernfs filesystems?  Which
> one(s)?
App launch involves dma_buf exports, which create kobjects and add them to kernfs via kernfs_add_one, taking the kernfs_rwsem with down_write.

At the same time, on another CPU, a random process was accessing sysfs; kernfs_iop_lookup was already holding the kernfs_rwsem and went into the direct reclaim path because of the alloc_inode in kernfs_get_inode.

Therefore, the app gets stuck on the lock and drops frames, so the end user sees the jank.
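To illustrate the pattern the RFC is going for (look up under the lock, drop the lock for the potentially blocking allocation, then retake it and recheck for a racing insert), here is a minimal userspace sketch using pthread rwlocks. It is only an analogue, not the kernfs code itself; the names (get_node, cache_lookup, struct node) are made up for the example.

/*
 * Userspace analogue of "release the lock, then allocate":
 * never hold the shared rwlock across an allocation that may
 * block (e.g. go into reclaim in the kernel case).
 */
#include <pthread.h>
#include <stdio.h>
#include <stdlib.h>

static pthread_rwlock_t cache_lock = PTHREAD_RWLOCK_INITIALIZER;

struct node { int id; struct node *next; };
static struct node *cache_head;

/* Walk the cache; caller must hold cache_lock (read or write). */
static struct node *cache_lookup(int id)
{
	struct node *n;

	for (n = cache_head; n; n = n->next)
		if (n->id == id)
			return n;
	return NULL;
}

struct node *get_node(int id)
{
	struct node *n, *new;

	pthread_rwlock_rdlock(&cache_lock);
	n = cache_lookup(id);
	pthread_rwlock_unlock(&cache_lock);
	if (n)
		return n;

	/*
	 * Allocate with no lock held, so a slow allocation does not
	 * stall every other user of cache_lock.
	 */
	new = malloc(sizeof(*new));
	if (!new)
		return NULL;
	new->id = id;

	pthread_rwlock_wrlock(&cache_lock);
	/* Someone may have inserted the same node while we slept. */
	n = cache_lookup(id);
	if (n) {
		free(new);
	} else {
		new->next = cache_head;
		cache_head = new;
		n = new;
	}
	pthread_rwlock_unlock(&cache_lock);
	return n;
}

int main(void)
{
	struct node *n = get_node(42);

	printf("got node %d\n", n ? n->id : -1);
	return 0;
}

The price of the pattern is the recheck after retaking the lock, which is why the refcounting in kernfs matters: the node must stay valid across the unlocked window.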
> > > There was a bunch of recent work in this area to make this much more
> > > fine-grained, and the theoritical benchmarks that people created (adding
> > > 10s of thousands of scsi disks at boot time) have gotten better.
> > >
> > > But in that work, no one could find a real benchmark or use case that
> > > anyone could even notice this type of thing.  What do you have that
> > > shows this?
> >
> > https://developer.android.com/studio/command-line/perfetto
> > https://perfetto.dev/docs/data-sources/cpu-scheduling
>
> That is links to a tool, not a test we can run ourselves.
>
> Or how about the output of that tool?
>
> > Android has perfetto tracing system and can show where processes
> > were stuck. This case was the lock since holder was in direct reclaim
> > path.
>
> Reclaim of what?  What is the interaction here with kernfs?  Normally
> this filesystem is not on any "fast paths" that I know of.
>
> More specifics would be nice :)
I hope the explanation above is enough.