Subject: [RFCv4 0/6] Improve ext4 handling of ENOSPC with multi-threaded use-case
Hello All,

v3 -> v4:
1. Split code cleanups and debug improvements into a separate patch series.
2. Dropped the rcu_barrier() approach since it caused some latency
in my testing of ENOSPC handling.
3. This patch series takes a different approach to improving the
multi-threaded ENOSPC handling in the ext4 mballoc code. Details below.


Background
==========
Consider a case where the disk is close to full but enough space still
remains for your multi-threaded application to run. When the application's
threads try to write in parallel (e.g. to a sparse file followed by mmap
writes, or even by fallocating multiple files), then with the current ext4
multi-block allocator code the application may get an ENOSPC error in some
cases. Examining the disk space at this time shows there is still
sufficient space remaining for the application to continue to run.

Additional info:
================
1. Our internal test team was easily able to reproduce this ENOSPC error on
an upstream kernel with a 2GB ext4 image using a 64K blocksize. They didn't
try above 2GB and reported this issue directly to the dev team. On examining
the free space when the application got ENOSPC, the free space left was more
than 50% of the filesystem size in some cases.

2. For debugging/development of these patches, I used the script at [1] to
trigger this issue quite frequently on a 64K blocksize setup with a 240MB
ext4 image.


Summary of patches and problem with current design
==================================================
There are 3 main problems which these patches try to address in order to
improve the ENOSPC handling in ext4's multi-block allocator code.

1. Patch-2: Earlier we were checking whether a group is good (i.e. whether
it has enough free blocks to serve the request) without taking the group's
lock. This could result in a race where, if another thread is discarding
the group's prealloc list, the allocating thread does not consider those
about-to-be-freed blocks, reports the group as not fit for allocation, and
thus eventually fails with an ENOSPC error. (A minimal sketch of the locked
check is shown after this list.)

2. Patch-4: The discard PA algorithm only scans the PA list to free up the
additional blocks which got added to a PA. This is done by the same thread-A
which at first couldn't allocate any blocks. But there is a window where,
once the blocks were allocated (say by some other thread-B previously), we
drop the group's lock and then check whether some of these blocks could be
added to the prealloc list of the group from which we allocated. Only after
that do we retake the lock and add these additional blocks allocated by
thread-B to the PA list. If thread-A scans the PA list within this window,
there is a possibility that it won't find any blocks on the PA list and
hence may return an ENOSPC error.
Hence this patch adds those additional blocks to the PA list right after
the blocks are marked as used, with the same group's spinlock held. (A
sketch of this is shown after this list.)

3. Patch-3: Introduces a per-cpu discard_pa_seq counter which is incremented
whenever blocks are allocated/freed or when the discard of any group's PA
list has started. With this we can know when to stop the retry logic and
return an ENOSPC error if there is actually no free space left.
There is an optimization in the block allocation fast path with this
approach: before starting the block allocation, we only sample the percpu
seq count on the local cpu. Only when the allocation fails and discard
couldn't free up any blocks from any of the groups' PA lists do we sample
the percpu seq counter summed over all possible cpus to check whether we
need to retry. (A sketch of this counter is shown after this list.)
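
The following is a minimal sketch of the Patch-2 idea, not the actual
mballoc code; the helper name mb_group_has_space() is made up for
illustration. The point is simply that the "is this group good enough"
check happens under the group lock:

/*
 * Illustrative sketch only: decide whether a group has enough free
 * blocks for this request while holding the group lock, so a racing
 * discard of the group's prealloc list cannot invalidate the check.
 */
static bool mb_group_has_space(struct ext4_allocation_context *ac,
                               ext4_group_t group)
{
        struct ext4_group_info *grp = ext4_get_group_info(ac->ac_sb, group);
        bool fit;

        ext4_lock_group(ac->ac_sb, group);
        fit = grp->bb_free >= ac->ac_g_ex.fe_len;
        ext4_unlock_group(ac->ac_sb, group);

        return fit;
}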
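
Similarly, a rough sketch of the Patch-4 idea (the wrapper and the
mb_add_surplus_to_pa_locked() helper are hypothetical names, not the actual
patch): mark the best-found extent as used and link the surplus blocks into
the group's PA list without dropping the group lock in between:

/*
 * Illustrative sketch only: consuming the allocated extent and
 * publishing the leftover blocks as a preallocation happen under the
 * same group lock, so a concurrent PA discard cannot observe the
 * window in between.
 */
static void mb_use_best_found_and_preallocate(struct ext4_allocation_context *ac,
                                              struct ext4_buddy *e4b,
                                              struct ext4_prealloc_space *pa)
{
        struct super_block *sb = ac->ac_sb;
        ext4_group_t group = ac->ac_b_ex.fe_group;

        /*
         * 'pa' must be allocated before taking the lock; nothing inside
         * the locked region is allowed to sleep.
         */
        ext4_lock_group(sb, group);
        mb_mark_used(e4b, &ac->ac_b_ex);        /* consume the allocated blocks */
        mb_add_surplus_to_pa_locked(ac, pa);    /* hypothetical helper: link the
                                                   leftover blocks into the PA list */
        ext4_unlock_group(sb, group);
}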
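
Finally, a simplified sketch of the percpu sequence counter from Patch-3
(helper names are made up for illustration): the allocation fast path reads
only the local cpu's counter, and the sum over all possible cpus is paid
only on the slow retry path:

/*
 * Illustrative sketch only: bumped on block alloc/free and when the
 * discard of any group's PA list starts.
 */
static DEFINE_PER_CPU(u64, discard_pa_seq);

/* Fast path: sample only the local cpu's counter before allocating. */
static inline u64 mb_discard_pa_seq_sample(void)
{
        return this_cpu_read(discard_pa_seq);
}

/* Slow path: sum over all possible cpus, used only after both the
 * allocation and the PA discard have failed.
 */
static u64 mb_discard_pa_seq_sum(void)
{
        u64 seq = 0;
        int cpu;

        for_each_possible_cpu(cpu)
                seq += per_cpu(discard_pa_seq, cpu);
        return seq;
}

/* Retry the allocation only if some cpu freed or discarded blocks
 * since we sampled our local counter.
 */
static bool mb_should_retry(u64 seq_sampled)
{
        return mb_discard_pa_seq_sum() > seq_sampled;
}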


Testing:
========
Ran fstests ("-g auto") with the default bs of 4K and with bs == PAGESIZE.
No new failures were reported with this patch series in this testing.

NOTE:
1. This patch series is based on top of the mballoc code cleanup patch
series posted at [2].
2. Patch-2 & Patch-3 are intentionally kept separate so reviewers can focus
on what each patch is trying to address.

References:
===========
[v3]: https://patchwork.ozlabs.org/project/linux-ext4/cover/cover.1588313626.git.riteshh@linux.ibm.com/
[1]: https://github.com/riteshharjani/LinuxStudy/blob/master/tools/test_mballoc.sh
[2]: https://patchwork.ozlabs.org/project/linux-ext4/cover/cover.1589086800.git.riteshh@linux.ibm.com/


Ritesh Harjani (6):
ext4: mballoc: Refactor ext4_mb_good_group()
ext4: mballoc: Use ext4_lock_group() around calculations involving bb_free
ext4: mballoc: Optimize ext4_mb_good_group_nolock further if grp needs init
ext4: mballoc: Add blocks to PA list under same spinlock after allocating blocks
ext4: mballoc: Refactor ext4_mb_discard_preallocations()
ext4: mballoc: Introduce pcpu seqcnt for freeing PA to improve ENOSPC handling

fs/ext4/mballoc.c | 249 +++++++++++++++++++++++++++++++++-------------
1 file changed, 179 insertions(+), 70 deletions(-)

--
2.21.0
