Subject: Re: [f2fs-dev] [PATCH v3] f2fs: change the current atomic write way
On 2022/5/24 2:03, Jaegeuk Kim wrote:
> On 05/22, Chao Yu wrote:
>> On 2022/4/29 2:18, Daeho Jeong wrote:
>>> + *old_addr = dn.data_blkaddr;
>>> + f2fs_truncate_data_blocks_range(&dn, 1);
>>> + dec_valid_block_count(sbi, F2FS_I(inode)->cow_inode, count);
>>> + inc_valid_block_count(sbi, inode, &count);
>>> + f2fs_replace_block(sbi, &dn, dn.data_blkaddr, new_addr,
>>> + ni.version, true, false);
>>
>> My concern is: if cow_inode's data was persisted into the previous checkpoint,
>> and f2fs_replace_block() then updates the SSA from cow_inode to inode,
>
> SSA for the original file is intact, so we'll see the original file's block
> addresses and SSA, if we flush cow_inode's SSA after committing the atomic writes?
> It'd be good to flush any SSA for cow_inode, since we'll truncate
> cow_inode after a power cut via the orphan recovery?

I think it's safe for the recovery flow, but before that, fsck will report an
inconsistent status while checking the orphan atomic_write inode.
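
For context on that fsck report: each SSA entry records which node owns a data
block. A minimal sketch, modeled on struct f2fs_summary (endianness annotations
elided), with a hypothetical block_owned_by() helper just to illustrate the
ownership mismatch fsck would flag:

	#include <linux/types.h>

	/* Sketch of the per-block summary kept in the SSA */
	struct summary_sketch {
		u32 nid;		/* node id that owns this data block */
		u8 version;		/* node version */
		u16 ofs_in_node;	/* block offset within the owning node */
	};

	/*
	 * A block written under cow_inode in the last checkpoint, whose
	 * summary is later rewritten to the original inode by
	 * f2fs_replace_block(), fails a cross-check like this until the
	 * next checkpoint (or orphan recovery) cleans it up.
	 */
	static bool block_owned_by(const struct summary_sketch *sum, u32 nid)
	{
		return sum->nid == nid;
	}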

Thanks,

>
>> won't it leave the last valid checkpoint in an inconsistent state? Or am I
>> missing something?
>>
>>> - f2fs_submit_merged_write_cond(sbi, inode, NULL, 0, DATA);
>>> + new = f2fs_kmem_cache_alloc(revoke_entry_slab, GFP_NOFS,
>>> + true, NULL);
>>> + if (!new) {
>>> + f2fs_put_dnode(&dn);
>>> + ret = -ENOMEM;
>>> + goto out;
>>
>> There is no need to handle failure of f2fs_kmem_cache_alloc() here,
>> since the nofail parameter is true.
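
For reference, with nofail == true the allocation cannot return NULL, since the
helper falls back to a __GFP_NOFAIL retry. A rough paraphrase of that nofail
path (a sketch, not a verbatim copy of the in-tree helper):

	#include <linux/slab.h>

	/* Paraphrased sketch of the nofail branch in f2fs_kmem_cache_alloc() */
	static void *kmem_cache_alloc_nofail_sketch(struct kmem_cache *cachep,
						    gfp_t flags)
	{
		void *entry;

		entry = kmem_cache_alloc(cachep, flags);
		if (!entry)
			/* __GFP_NOFAIL makes the allocator retry until it succeeds */
			entry = kmem_cache_alloc(cachep, flags | __GFP_NOFAIL);
		return entry;
	}

So the call site can drop the ENOMEM path and keep only:

	new = f2fs_kmem_cache_alloc(revoke_entry_slab, GFP_NOFS, true, NULL);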
>>
>> Thanks,
>>
