Subject: Re: [PATCH v9 8/8] writeback, cgroup: release dying cgwbs by switching attached inodes
On Tue, Jun 08, 2021 at 10:34:34PM -0700, Andrew Morton wrote:
> On Wed, 9 Jun 2021 00:37:10 +0000 Dennis Zhou <dennis@kernel.org> wrote:
>
> > On Tue, Jun 08, 2021 at 05:23:34PM -0700, Roman Gushchin wrote:
> > > On Tue, Jun 08, 2021 at 05:12:37PM -0700, Andrew Morton wrote:
> > > > On Tue, 8 Jun 2021 16:02:25 -0700 Roman Gushchin <guro@fb.com> wrote:
> > > >
> > > > > Asynchronously try to release dying cgwbs by switching attached inodes
> > > > > to the nearest living ancestor wb. It helps to get rid of per-cgroup
> > > > > writeback structures themselves and of pinned memory and block cgroups,
> > > > > which are significantly larger structures (mostly due to large per-cpu
> > > > > statistics data). This prevents memory waste and helps to avoid
> > > > > different scalability problems caused by large piles of dying cgroups.
> > > > >
> > > > > Reuse the existing mechanism of inode switching used for foreign inode
> > > > > detection. To speed things up, batch up to 115 inode switches in a
> > > > > single operation (the maximum number is selected so that the resulting
> > > > > struct inode_switch_wbs_context can fit into 1024 bytes). Because
> > > > > every switching consists of two steps divided by an RCU grace period,
> > > > > it would be too slow without batching. Please note that the whole
> > > > > batch counts as a single operation (when increasing/decreasing
> > > > > isw_nr_in_flight). This allows umounting to keep working (by flushing
> > > > > the switching queue), while preventing cleanups from consuming the
> > > > > whole switching quota and effectively blocking the frn switching.
> > > > >
> > > > > A cgwb cleanup operation can fail due to different reasons (e.g. not
> > > > > enough memory, the cgwb has an in-flight/pending io, an attached inode
> > > > > in a wrong state, etc). In this case the next scheduled cleanup will
> > > > > make a new attempt. An attempt is made each time a new cgwb is offlined
> > > > > (in other words a memcg and/or a blkcg is deleted by a user). In the
> > > > > future an additional attempt scheduled by a timer can be implemented.
> > > > >
> > > > > ...
> > > > >
> > > > > +/*
> > > > > + * Maximum inodes per isw. A specific value has been chosen to make
> > > > > + * struct inode_switch_wbs_context fit into 1024 bytes kmalloc.
> > > > > + */
> > > > > +#define WB_MAX_INODES_PER_ISW 115
> > > >
> > > > Can't we do 1024/sizeof(struct inode_switch_wbs_context)?
> > >
> > > It must be something like
> > > DIV_ROUND_DOWN_ULL(1024 - sizeof(struct inode_switch_wbs_context), sizeof(struct inode *)) + 1
> >
> > Sorry to keep popping in for one-offs, but maybe this instead? I think the
> > above would result in a kzalloc() call larger than 1024 bytes.
> >
> > DIV_ROUND_DOWN_ULL(max(1024 - sizeof(struct inode_switch_wbs_context), sizeof(struct inode *)),
> > sizeof(struct inode *))
> >
> > might need max_t, not sure.
>
> Unclear to me why plain old division won't work, but whatever. Please
> figure it out? "115" is too sad to live!

You're totally right, plain division is fine here!
Please squash the following chunk into the last commit in the series.

Thank you!

--

diff --git a/fs/fs-writeback.c b/fs/fs-writeback.c
index 49b33300b1b8..545fce68e919 100644
--- a/fs/fs-writeback.c
+++ b/fs/fs-writeback.c
@@ -229,7 +229,8 @@ void wb_wait_for_completion(struct wb_completion *done)
* Maximum inodes per isw. A specific value has been chosen to make
* struct inode_switch_wbs_context fit into 1024 bytes kmalloc.
*/
-#define WB_MAX_INODES_PER_ISW 115
+#define WB_MAX_INODES_PER_ISW ((1024UL - sizeof(struct inode_switch_wbs_context)) \
+ / sizeof(struct inode *))

static atomic_t isw_nr_in_flight = ATOMIC_INIT(0);
static struct workqueue_struct *isw_wq;
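
For reference, a rough userspace sketch of the arithmetic the new macro encodes.
The 104-byte header size below is an assumption chosen only so the result matches
the old hard-coded 115 on a 64-bit build; the real sizeof(struct
inode_switch_wbs_context) depends on kernel configuration, which is exactly why
the computed form is preferable to the literal constant.

#include <stdio.h>
#include <stddef.h>

int main(void)
{
	/*
	 * Assumed size of struct inode_switch_wbs_context up to (but not
	 * including) the flexible inodes[] array. 104 bytes is illustrative
	 * only; the real value depends on the kernel config.
	 */
	size_t header = 104;
	size_t slot = sizeof(void *);	/* one struct inode pointer per batched inode */

	/*
	 * Same shape as WB_MAX_INODES_PER_ISW: fill the remainder of the
	 * 1024-byte kmalloc bucket with inode pointers.
	 */
	size_t max_inodes = (1024UL - header) / slot;

	printf("max inodes per isw = %zu\n", max_inodes);	/* 115 with the assumptions above */
	return 0;
}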