    Subject: Re: [Ocfs2-devel] [PATCH v3] ocfs2: call journal flush to mark journal as empty after journal recovery when mount
    From: Changwei
    Date: 2019-12-21


    On 12/20/19 8:52 PM, Jiangyiwen wrote:
    > On 2019/12/20 17:33, Likai wrote:
    >> On 2019/12/20 17:14, piaojun wrote:
    >>> On 2019/12/20 9:11, Joseph Qi wrote:
    >>>> On 19/12/19 22:15, piaojun wrote:
    >>>>> On 2019/12/17 10:01, Kai Li wrote:
    >>>>>> If the journal is dirty at mount time, it will be replayed, but
    >>>>>> the jbd2 sb log tail cannot be updated to mark a new start,
    >>>>>> because journal->j_flags has already been set to JBD2_ABORT
    >>>>>> earlier in journal_init_common. When a new transaction is
    >>>>>> committed, it will be recorded in block 1 first (journal->j_tail
    >>>>>> is set to 1 in journal_reset). If an emergency restart
    >>>>>> unfortunately happens again before the journal super block is
    >>>>>> updated, the newly recorded trans will not be replayed on the
    >>>>>> next mount.
    >>>>>>
    >>>>>> The following steps describe this procedure in detail.
    >>>>>> 1. mount and touch some files
    >>>>>> 2. these transactions are committed to the journal area but not
    >>>>>> checkpointed
    >>>>>> 3. emergency restart
    >>>>>> 4. mount again and its journals are replayed
    >>>>>> 5. the journal super block's s_start is 1, but its s_seq is not
    >>>>>> updated
    >>>>>> 6. touch a new file and its trans is committed but not checkpointed
    >
    > Hi,
    >
    > I wonder: at this step, does jbd2_journal_commit_transaction() return?
    > My understanding is that only once jbd2_journal_commit_transaction()
    > returns is the transaction committed completely, and s_seq can be
    > updated in jbd2_journal_commit_transaction()->jbd2_update_log_tail(),
    > so I don't see how this scenario can happen.
    >
    > If emergency restart happens before jbd2_journal_commit_transaction()
    > returns, I think this transaction shouldn't be a valid transaction.
    >
    > Can you explain where the specific function is executed?
    >
    > Thanks,
    > Yiwen.

    The trick is that the jbd2 kernel thread doesn't even get a chance to
    update the tail. Assume a timepoint that sits between *mount
    completion* and the *first iteration* of the jbd2 thread's commit
    work.
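
    To make that window concrete, here is a minimal sketch (illustrative
    only, not the actual kjournald2() source in fs/jbd2/journal.c; the
    function name and loop body are simplified):

    #include <linux/jbd2.h>
    #include <linux/kthread.h>

    /* Sketch of the jbd2 commit thread: nothing pushes the on-disk sb
     * tail forward until its first commit pass runs. */
    static int jbd2_thread_sketch(void *arg)
    {
            journal_t *journal = arg;

            /* Mount has already completed at this point; the replayed
             * journal is live, but the journal sb still holds the stale
             * s_sequence/s_start. An emergency restart in this window,
             * before the first pass below reaches jbd2_update_log_tail(),
             * leaves the stale values behind. */
            while (!kthread_should_stop())
                    jbd2_journal_commit_transaction(journal);

            return 0;
    }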

    -Changwei

    >>>>>> 7. emergency restart again
    >>>>>> 8. mount again; the journal is dirty, but the trans committed in
    >>>>>> step 6 will not be replayed
    >>>>>>
    >>>>>> This exception happens easily when the LUN is used by only one
    >>>>>> node. If it is used by multiple nodes, another node will replay
    >>>>>> its journal, and its journal super block will be updated after
    >>>>>> recovery, just as this patch does:
    >>>>>>
    >>>>>> ocfs2_recover_node->ocfs2_replay_journal.
    >>>>>>
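    For comparison, the recovery path named above already flushes the
    replayed journal. A condensed sketch of that sequence in
    ocfs2_replay_journal() (fs/ocfs2/journal.c), with error handling and
    surrounding setup elided:

            /* replay the dirty journal of the crashed node */
            status = jbd2_journal_load(journal);
            if (status < 0)
                    goto done;

            /* flush the replayed journal, which rewrites the journal
             * super block with a fresh s_sequence/s_start */
            jbd2_journal_lock_updates(journal);
            status = jbd2_journal_flush(journal);
            jbd2_journal_unlock_updates(journal);
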
    >>>>>> The following jbd2 journal can be generated by touching a new
    >>>>>> file after the journal is replayed. Seq 15 is the first valid
    >>>>>> commit, but the first seq recorded in the journal super block
    >>>>>> is 13.
    >>>>>> logdump:
    >>>>>> Block 0: Journal Superblock
    >>>>>> Seq: 0   Type: 4 (JBD2_SUPERBLOCK_V2)
    >>>>>> Blocksize: 4096   Total Blocks: 32768   First Block: 1
    >>>>>> First Commit ID: 13   Start Log Blknum: 1
    >>>>>> Error: 0
    >>>>>> Feature Compat: 0
    >>>>>> Feature Incompat: 2 block64
    >>>>>> Feature RO compat: 0
    >>>>>> Journal UUID: 4ED3822C54294467A4F8E87D2BA4BC36
    >>>>>> FS Share Cnt: 1   Dynamic Superblk Blknum: 0
    >>>>>> Per Txn Block Limit    Journal: 0    Data: 0
    >>>>>>
    >>>>>> Block 1: Journal Commit Block
    >>>>>> Seq: 14   Type: 2 (JBD2_COMMIT_BLOCK)
    >>>>>>
    >>>>>> Block 2: Journal Descriptor
    >>>>>> Seq: 15   Type: 1 (JBD2_DESCRIPTOR_BLOCK)
    >>>>>> No. Blocknum        Flags
    >>>>>>   0. 587             none
    >>>>>> UUID: 00000000000000000000000000000000
    >>>>>>   1. 8257792         JBD2_FLAG_SAME_UUID
    >>>>>>   2. 619             JBD2_FLAG_SAME_UUID
    >>>>>>   3. 24772864        JBD2_FLAG_SAME_UUID
    >>>>>>   4. 8257802         JBD2_FLAG_SAME_UUID
    >>>>>>   5. 513             JBD2_FLAG_SAME_UUID JBD2_FLAG_LAST_TAG
    >>>>>> ...
    >>>>>> Block 7: Inode
    >>>>>> Inode: 8257802   Mode: 0640   Generation: 57157641 (0x3682809)
    >>>>>> FS Generation: 2839773110 (0xa9437fb6)
    >>>>>> CRC32: 00000000   ECC: 0000
    >>>>>> Type: Regular   Attr: 0x0   Flags: Valid
    >>>>>> Dynamic Features: (0x1) InlineData
    >>>>>> User: 0 (root)   Group: 0 (root)   Size: 7
    >>>>>> Links: 1   Clusters: 0
    >>>>>> ctime: 0x5de5d870 0x11104c61 -- Tue Dec  3 11:37:20.286280801 2019
    >>>>>> atime: 0x5de5d870 0x113181a1 -- Tue Dec  3 11:37:20.288457121 2019
    >>>>>> mtime: 0x5de5d870 0x11104c61 -- Tue Dec  3 11:37:20.286280801 2019
    >>>>>> dtime: 0x0 -- Thu Jan  1 08:00:00 1970
    >>>>>> ...
    >>>>>> Block 9: Journal Commit Block
    >>>>>> Seq: 15   Type: 2 (JBD2_COMMIT_BLOCK)
    >>>>>>
    >>>>>> The following is the journal recovery log when recovering the
    >>>>>> above jbd2 journal on the next mount.
    >>>>>> syslog:
    >>>>>> [ 2265.648622] ocfs2: File system on device (252,1) was not
    >>>>>> unmounted cleanly, recovering it.
    >>>>>> [ 2265.649695] fs/jbd2/recovery.c:(do_one_pass, 449): Starting
    >>>>>> recovery pass 0
    >>>>>> [ 2265.650407] fs/jbd2/recovery.c:(do_one_pass, 449): Starting
    >>>>>> recovery pass 1
    >>>>>> [ 2265.650409] fs/jbd2/recovery.c:(do_one_pass, 449): Starting
    >>>>>> recovery pass 2
    >>>>>> [ 2265.650410] fs/jbd2/recovery.c:(jbd2_journal_recover, 278):
    >>>>>> JBD2: recovery, exit status 0, recovered transactions 13 to 13
    >>>>>>
    >>>>>> Because the first commit seq 13 recorded in the journal super
    >>>>>> block is not consistent with the value recorded in block 1
    >>>>>> (seq 14), journal recovery terminates before seq 15 even though
    >>>>>> it is an unbroken commit; inode 8257802 is a new file and will
    >>>>>> be lost.
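
    To make the termination point concrete, here is a condensed,
    illustrative sketch of the scan logic in do_one_pass()
    (fs/jbd2/recovery.c). The variable names are simplified and this is
    not the verbatim kernel code:

            unsigned int  next_commit_ID = sb_sequence; /* 13, stale  */
            unsigned long next_log_block = sb_start;    /* 1          */

            while (1) {
                    /* read the journal header at next_log_block and
                     * compare its on-disk sequence number: */
                    if (blk_sequence != next_commit_ID)
                            break; /* block 1 carries seq 14, so the
                                    * scan stops here and the unbroken
                                    * seq-15 commit is never replayed */

                    /* otherwise replay the transaction and advance */
                    next_commit_ID++;
            }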
    >>>>>>
    >>>>>> Signed-off-by: Kai Li <li.kai4@h3c.com>
    >>>>>> ---
    >>>>>>   fs/ocfs2/journal.c | 8 ++++++++
    >>>>>>   1 file changed, 8 insertions(+)
    >>>>>>
    >>>>>> diff --git a/fs/ocfs2/journal.c b/fs/ocfs2/journal.c
    >>>>>> index 1afe57f425a0..68ba354cf361 100644
    >>>>>> --- a/fs/ocfs2/journal.c
    >>>>>> +++ b/fs/ocfs2/journal.c
    >>>>>> @@ -1066,6 +1066,14 @@ int ocfs2_journal_load(struct ocfs2_journal *journal, int local, int replayed)
    >>>>>>       ocfs2_clear_journal_error(osb->sb, journal->j_journal, osb->slot_num);
    >>>>>> +    if (replayed) {
    >>>>>> +        jbd2_journal_lock_updates(journal->j_journal);
    >>>>>> +        status = jbd2_journal_flush(journal->j_journal);
    >>>>> What if jbd2_journal_flush fails? The 's_sequence' and 's_start'
    >>>>> won't be reset, and I wonder if the problem still remains.
    >>>>>
    >>>> Yes, but we don't want this to fail the mount process; instead we
    >>>> just log an error so the system administrator knows the result.
    >>>>
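    Putting the pieces together, a sketch of how the whole hunk might
    read with the error merely logged, as suggested above (assuming the
    standard jbd2_journal_unlock_updates() pairing and ocfs2's usual
    mlog_errno() reporting; the tail of the hunk is not quoted in this
    thread):

            if (replayed) {
                    jbd2_journal_lock_updates(journal->j_journal);
                    status = jbd2_journal_flush(journal->j_journal);
                    jbd2_journal_unlock_updates(journal->j_journal);
                    if (status < 0)
                            mlog_errno(status);
            }
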
    >>> Thanks for your reply. I have another question about this issue:
    >>> IMO the second trans is not complete, as the jbd2 sb has not been
    >>> updated, so we do not need to replay it when mounting again.
    >>>
    >>> Jun
    >>>
    >>>
    >> I don't think so. The problem is that the jbd2 sb should be updated
    >> to mark a new start after mount, regardless of whether the trans
    >> committed later is complete or not.
    >>
    >> In fact, the trans is complete too, as the commit log describes.
    >>
    >> Thanks
    >>
    >>
