From: OGAWA Hirofumi <hirofumi@mail.parknet.co.jp>
Subject: Re: [PATCH] fs: fat: add check for dir size in fat_calc_dir_size
Date: Tue, 30 Jun 2020 20:08:12 +0900
Anupam Aggarwal <anupam.al@samsung.com> writes:
> The maximum directory size on a FAT filesystem is FAT_MAX_DIR_SIZE
> (2097152 bytes). It is possible that, due to corruption, the directory
> size calculated in fat_calc_dir_size() is greater than FAT_MAX_DIR_SIZE,
> i.e. it can be in GBs, so directory traversal can take a long time.
> For example, when the command "ls -lR" is executed on a corrupted
> FAT-formatted USB drive, fat_search_long() will look up a filename from
> position 0 to the end of the corrupted directory size; multiple such
> lookups add up to a long directory traversal.
>
> Add a sanity check for the directory size in fat_calc_dir_size(), and
> return an EIO error, which will prevent lookups in the corrupted
> directory.
>
> Signed-off-by: Anupam Aggarwal <anupam.al@samsung.com>
> Signed-off-by: Amit Sahrawat <a.sahrawat@samsung.com>
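For context on where the 2097152 figure comes from: the FAT spec caps a
directory at 65536 entries of 32 bytes each, which is what the kernel's
constant encodes. A sketch of that arithmetic (as I recall, the real
macro in fs/fat/fat.h is spelled via MSDOS_DIR_BITS, so treat the names
below as approximations, not verbatim kernel code):

	/* Sketch: the spec limit of 65536 directory slots, at 32 bytes
	 * per on-disk entry, gives the 2 MiB bound the patch checks. */
	#define FAT_MAX_DIR_ENTRIES	65536
	#define FAT_MAX_DIR_SIZE	(FAT_MAX_DIR_ENTRIES * 32)	/* 2097152 */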
There are many implementations that don't follow the spec strictly. And
when I tested in the past, Windows also allowed reading a directory
beyond that limit. I can't recall, though, whether I saw that on a real
volume or just in a test case.
So unless there is a strong reason to apply the limit, I don't think it
is good to enforce it. (BTW, the current code should already detect the
corruption that would otherwise cause an infinite loop.)
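To illustrate the existing detection: the cluster-chain walk already
refuses to follow a chain that takes more steps than the volume has
clusters, since such a chain must contain a cycle. A standalone toy
sketch of that guard (not the kernel code itself; the real walk lives
in fat_get_cluster() in fs/fat/cache.c and carries more state):

	#include <stdio.h>

	#define EOC       (-1)	/* end-of-chain marker in this toy table */
	#define NCLUSTERS 8	/* total clusters on the toy "volume" */

	/* Toy FAT: fat[i] is the cluster following cluster i.
	 * 2 -> 3 -> 4 -> 2 is a deliberate cycle (corruption). */
	static int fat[NCLUSTERS] = { EOC, EOC, 3, 4, 2, EOC, EOC, EOC };

	/* Returns the chain length, or -1 if a cycle is detected. */
	static int walk_chain(int start)
	{
		int steps = 0, cur = start;

		while (cur != EOC) {
			if (steps > NCLUSTERS)	/* longer than the disk: corrupt */
				return -1;
			steps++;
			cur = fat[cur];
		}
		return steps;
	}

	int main(void)
	{
		printf("chain from 5: %d\n", walk_chain(5));	/* 1: one cluster */
		printf("chain from 2: %d\n", walk_chain(2));	/* -1: loop caught */
		return 0;
	}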
Thanks.
> ---
>  fs/fat/inode.c | 7 +++++++
>  1 file changed, 7 insertions(+)
>
> diff --git a/fs/fat/inode.c b/fs/fat/inode.c
> index a0cf99d..9b2e81e 100644
> --- a/fs/fat/inode.c
> +++ b/fs/fat/inode.c
> @@ -490,6 +490,13 @@ static int fat_calc_dir_size(struct inode *inode)
>  		return ret;
>  	inode->i_size = (fclus + 1) << sbi->cluster_bits;
>
> +	if (i_size_read(inode) > FAT_MAX_DIR_SIZE) {
> +		fat_fs_error(inode->i_sb,
> +			     "%s corrupted directory (invalid size %lld)\n",
> +			     __func__, i_size_read(inode));
> +		return -EIO;
> +	}
> +
>  	return 0;
>  }
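One more note on the error path the patch picks: fat_fs_error() honors
the errors= mount option, so flagging corruption here can mean anything
from a log line to a read-only remount or a panic, depending on how the
volume was mounted. A standalone toy sketch of that dispatch (the real
logic is in fs/fat/misc.c; the enum names below are illustrative, not
verbatim kernel identifiers):

	#include <stdio.h>

	enum fat_errors { ERRORS_CONT, ERRORS_RO, ERRORS_PANIC };

	/* Toy dispatch mirroring errors=: continue just logs,
	 * remount-ro flips the fs read-only, panic stops the machine. */
	static void fs_error(enum fat_errors policy, int *rdonly, const char *msg)
	{
		fprintf(stderr, "FAT-fs error: %s\n", msg);
		if (policy == ERRORS_PANIC)
			fprintf(stderr, "(would panic here)\n");
		else if (policy == ERRORS_RO && !*rdonly) {
			*rdonly = 1;
			fprintf(stderr, "remounting read-only\n");
		}
	}

	int main(void)
	{
		int rdonly = 0;
		fs_error(ERRORS_RO, &rdonly, "corrupted directory (invalid size)");
		return 0;
	}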
-- 
OGAWA Hirofumi <hirofumi@mail.parknet.co.jp>