From: Chao Yu <>
Subject: RE: [f2fs-dev] [PATCH 1/2] f2fs: refactor shrink flow for extent cache
Date: Thu, 2 Jul 2015 20:37:11 +0800
Hi Jaegeuk,
> -----Original Message-----
> From: Jaegeuk Kim [mailto:jaegeuk@kernel.org]
> Sent: Wednesday, July 01, 2015 9:26 AM
> To: Chao Yu; Chao Yu
> Cc: linux-kernel@vger.kernel.org; linux-f2fs-devel@lists.sourceforge.net
> Subject: Re: [f2fs-dev] [PATCH 1/2] f2fs: refactor shrink flow for extent cache
> 
> Hi Chao,
> 
> On Tue, Jun 30, 2015 at 06:42:09PM +0800, Chao Yu wrote:
> > For now, in the extent cache, we have a global lru list which links all
> > extent nodes in the cache, and the list is protected by a global spinlock.
> > 
> > If we want to shrink the extent cache, we will:
> > 1. delete all target extent nodes from the global lru list under the spinlock;
> > 2. traverse all per-inode extent trees in the global radix tree;
> > 2.a. traverse all extent nodes in each per-inode extent tree, and try to free
> > an extent node if it is no longer in the global lru list.
> > 
> > This method is inefficient when there is a huge number of inode extent trees
> > in the global extent cache.
> > 
> > In this patch we introduce a new method for extent cache shrinking:
> > when we attach a new extent node, we record the extent tree pointer in the
> > extent node. In the shrink flow, we can find and lock the inode's extent tree
> > directly through this backward pointer, and then detach the extent node from
> > the extent tree.
> > 
> > This helps to shrink the extent cache more efficiently.
> 
> Yes, but as we discussed before, this way will consume 4 bytes per
> extent_node. Is that acceptable?
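
For reference, here is a minimal sketch of the backward-pointer idea described
in the quoted patch, in heavily simplified C. The structures and the helper
shrink_one() below are illustrative assumptions, not the actual f2fs
implementation:

/*
 * Minimal sketch of the backward-pointer shrink path (illustrative only;
 * structures are pared down from what f2fs actually uses).
 */
#include <linux/list.h>
#include <linux/rbtree.h>
#include <linux/rwsem.h>
#include <linux/slab.h>
#include <linux/spinlock.h>

struct extent_tree {
	struct rb_root root;		/* extents of one inode */
	struct rw_semaphore lock;	/* protects only this inode's tree */
};

struct extent_node {
	struct rb_node rb_node;		/* linkage in the owning tree */
	struct list_head list;		/* linkage in the global LRU */
	struct extent_tree *et;		/* NEW: backward pointer to the owner */
};

/*
 * Free one victim taken from the global LRU; the caller holds lru_lock.
 * With en->et we can go straight to the owning tree instead of walking
 * every per-inode tree found in the global radix tree.
 */
static void shrink_one(struct extent_node *en, spinlock_t *lru_lock)
{
	struct extent_tree *et = en->et;

	list_del_init(&en->list);	/* detach from the global LRU */
	spin_unlock(lru_lock);

	/*
	 * Real code must also ensure 'en' cannot be freed by another path
	 * between dropping lru_lock and taking the tree lock.
	 */
	down_write(&et->lock);
	rb_erase(&en->rb_node, &et->root);
	up_write(&et->lock);
	kfree(en);

	spin_lock(lru_lock);
}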
Yes, this method obviously increases memory overhead.
Maybe there is a better way to reduce the lock contention and the blocking time the shrinker causes. I will rethink this.
What I think now is that I should first test our new shrinker with extreme cases to see how badly it behaves.
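For a rough sense of scale: the overhead is one pointer per extent_node, i.e. the 4 bytes quoted above on a 32-bit kernel, or 8 bytes on 64-bit, so a cache holding one million extent nodes would grow by roughly 4-8 MB.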
> 
> Instead, IMO, we need to focus on how to increase its hit ratio first.
> Actually, I wrote a patch for that.
> Could you check that first?
OK
Thanks,
> Thanks,