From: Daeho Jeong <daehojeong@google.com>
Date: Mon, 4 May 2020
Subject: [PATCH] f2fs: change maximum zstd compression buffer size

The current zstd compression buffer is sized to the cluster size minus
one page and the compress header size. With that size, zstd compression
reports success even when the compressed data fails to fit into the
buffer, and reading the cluster back later returns an I/O error because
the stored compressed data is corrupted.

Size the buffer with ZSTD_compressBound() instead, which returns the
worst-case compressed size for the given input size, so the compressed
output can never be truncated.

Signed-off-by: Daeho Jeong <daehojeong@google.com>
---
fs/f2fs/compress.c | 4 ++--
1 file changed, 2 insertions(+), 2 deletions(-)
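
Note: for reference, below is a minimal userspace sketch of the sizing
change, written against the userspace libzstd API rather than the
kernel wrappers; the 4 KiB page size, the 4-page cluster, and the
8-byte stand-in for COMPRESS_HEADER_SIZE are illustrative assumptions,
not values taken from f2fs.

/* build with: cc sketch.c -lzstd */
#include <stdio.h>
#include <stdlib.h>
#include <zstd.h>

int main(void)
{
	const size_t page_size = 4096;		/* assumed page size */
	const size_t rlen = page_size << 2;	/* assumed 4-page cluster */
	size_t old_clen, new_clen, n, i;
	unsigned char *src, *dst;

	/* Worst case for the compressor: incompressible random input. */
	src = malloc(rlen);
	if (!src)
		return 1;
	for (i = 0; i < rlen; i++)
		src[i] = rand() & 0xff;

	/*
	 * Old sizing: the destination is smaller than the source, so
	 * incompressible input cannot fit.
	 */
	old_clen = rlen - page_size - 8; /* 8 stands in for COMPRESS_HEADER_SIZE */

	/*
	 * New sizing: ZSTD_compressBound() is the documented worst-case
	 * compressed size for an rlen-byte input, so output into a
	 * buffer of this size can never be truncated.
	 */
	new_clen = ZSTD_compressBound(rlen);

	dst = malloc(new_clen);
	if (!dst)
		return 1;
	n = ZSTD_compress(dst, new_clen, src, rlen, 1);

	printf("rlen=%zu old_clen=%zu bound=%zu compressed=%zu error=%d\n",
	       rlen, old_clen, new_clen, n, (int)ZSTD_isError(n));

	free(src);
	free(dst);
	return 0;
}

With the bound-sized buffer the compress call always succeeds; with the
old size, single-shot ZSTD_compress() would return an error, while the
kernel's streaming path could report success with data left unflushed,
which is the corruption described above.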

diff --git a/fs/f2fs/compress.c b/fs/f2fs/compress.c
index 4c7eaeee52336..a9fa8049b295f 100644
--- a/fs/f2fs/compress.c
+++ b/fs/f2fs/compress.c
@@ -313,7 +313,7 @@ static int zstd_init_compress_ctx(struct compress_ctx *cc)
 	cc->private = workspace;
 	cc->private2 = stream;
 
-	cc->clen = cc->rlen - PAGE_SIZE - COMPRESS_HEADER_SIZE;
+	cc->clen = ZSTD_compressBound(PAGE_SIZE << cc->log_cluster_size);
 	return 0;
 }
 
@@ -330,7 +330,7 @@ static int zstd_compress_pages(struct compress_ctx *cc)
 	ZSTD_inBuffer inbuf;
 	ZSTD_outBuffer outbuf;
 	int src_size = cc->rlen;
-	int dst_size = src_size - PAGE_SIZE - COMPRESS_HEADER_SIZE;
+	int dst_size = cc->clen;
 	int ret;
 
 	inbuf.pos = 0;
--
2.26.2.526.g744177e7f7-goog