Subject: Re: [f2fs-dev] [PATCH] f2fs: change maximum zstd compression buffer size
On 2020-5-4 22:30, Jaegeuk Kim wrote:
> From: Daeho Jeong <daehojeong@google.com>
>
> The current zstd compression buffer size is the cluster size minus one
> page and the header size. Because of this, zstd compression always
> appears to succeed even when the compressed data does not actually fit
> into the buffer, and reading the cluster back eventually returns an
> I/O error due to the corrupted compressed data.

What's the root cause of this issue? I didn't get it.

>
> Signed-off-by: Daeho Jeong <daehojeong@google.com>
> ---
> fs/f2fs/compress.c | 4 ++--
> 1 file changed, 2 insertions(+), 2 deletions(-)
>
> diff --git a/fs/f2fs/compress.c b/fs/f2fs/compress.c
> index 4c7eaeee52336..a9fa8049b295f 100644
> --- a/fs/f2fs/compress.c
> +++ b/fs/f2fs/compress.c
> @@ -313,7 +313,7 @@ static int zstd_init_compress_ctx(struct compress_ctx *cc)
>  	cc->private = workspace;
>  	cc->private2 = stream;
>
> -	cc->clen = cc->rlen - PAGE_SIZE - COMPRESS_HEADER_SIZE;
> +	cc->clen = ZSTD_compressBound(PAGE_SIZE << cc->log_cluster_size);

On my machine, the value is 66572, which is much larger than the size of
the dst buffer. So where do we tell the zstd compressor the real size of
the dst buffer? Otherwise, if the compressed data is larger than the dst
buffer, we may overflow the dst buffer when flushing the compressed data
into it.
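If it helps, here is a minimal sketch of the kind of post-compression
bound I would expect, assuming the on-disk budget is still the cluster
size minus one page and the header. f2fs_compressed_fits() is a
hypothetical helper for illustration, not code from this patch:

	/*
	 * Hypothetical helper: bound the final compressed size against
	 * the on-disk cluster budget (cluster size minus one page and
	 * the compress header), so an oversized result can be rejected
	 * instead of overflowing the destination buffer.
	 */
	static bool f2fs_compressed_fits(struct compress_ctx *cc, size_t clen)
	{
		size_t max_len = PAGE_SIZE * (cc->cluster_size - 1) -
						COMPRESS_HEADER_SIZE;

		return clen <= max_len;
	}

With something like this, zstd_compress_pages() could return -EAGAIN
after ZSTD_endStream() whenever the check fails, falling back to
writing the cluster uncompressed.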

>  	return 0;
>  }
>
> @@ -330,7 +330,7 @@ static int zstd_compress_pages(struct compress_ctx *cc)
>  	ZSTD_inBuffer inbuf;
>  	ZSTD_outBuffer outbuf;
>  	int src_size = cc->rlen;
> -	int dst_size = src_size - PAGE_SIZE - COMPRESS_HEADER_SIZE;
> +	int dst_size = cc->clen;
>  	int ret;
>
>  	inbuf.pos = 0;
>
