Subject: Re: Possible race between direct IO and JBD?
On Fri, 25 Apr 2008 16:38:23 -0700 Mingming Cao <cmm@us.ibm.com> wrote:

> Hi,
>
> While looking at a bug where direct IO returns EIO, I found, after
> reading the code, a window in which try_to_free_buffers() called from
> the direct IO path can race with JBD, which holds a reference to the
> data buffers until journal_commit_transaction() ensures those buffers
> have reached the disk.
>
> A little more detail: to prepare for direct IO, generic_file_direct_IO()
> calls invalidate_inode_pages2_range() to invalidate the pages in the
> cache before performing direct IO. invalidate_inode_pages2_range()
> tries to free the buffers via try_to_free_buffers(), but sometimes it
> can't, because the buffers may still be on some transaction's
> t_sync_datalist or t_locked_list, waiting for
> journal_commit_transaction() to process them.
>
> Currently direct IO simply returns EIO when try_to_free_buffers() finds
> the buffers busy, as it has no clue that JBD is referencing them.
>
> Is this a known issue and expected behavior? Any thoughts?
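
For reference, the path being described is roughly the following (a
condensed sketch from memory of the mm/truncate.c code of that era, not
the literal source):

	generic_file_direct_IO()
	  invalidate_inode_pages2_range()
	    invalidate_complete_page2()
	      try_to_release_page()
	        ext3_releasepage() -> journal_try_to_free_buffers()
	          try_to_free_buffers()  <- fails while the buffers sit on
	                                    the committing transaction's
	                                    t_sync_datalist / t_locked_list

	static int
	invalidate_complete_page2(struct address_space *mapping,
				  struct page *page)
	{
		if (page->mapping != mapping)
			return 0;

		/* jbd still holds a reference on the data buffers, so
		 * ->releasepage() refuses and we report failure ... */
		if (PagePrivate(page) && !try_to_release_page(page, GFP_KERNEL))
			return 0;

		/* ... otherwise the page is removed from the page cache */
		return 1;
	}

invalidate_inode_pages2_range() then turns that failure into the -EIO
the caller sees.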

Something like that might be possible, although people used to test
buffered-vs-direct fairly heavily.

generic_file_direct_IO() will run
filemap_write_and_wait()->filemap_fdatawrite() under i_mutex, and this
should run commits, write back dirty pages, etc.
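
The relevant ordering, as I remember mm/filemap.c around that time (so
take the details with a grain of salt), is:

	retval = filemap_write_and_wait(mapping);  /* flush + wait, under i_mutex */
	if (retval)
		goto out;

	if (rw == WRITE && mapping->nrpages) {
		/* ordered-mode data should already be on disk by now, so
		 * try_to_free_buffers() is expected to succeed */
		retval = invalidate_inode_pages2_range(mapping,
					offset >> PAGE_CACHE_SHIFT, end);
		if (retval)
			goto out;	/* this is where the -EIO would surface */
	}

	retval = mapping->a_ops->direct_IO(rw, iocb, iov, offset, nr_segs);

So if try_to_free_buffers() still finds busy buffers at that point,
something is keeping them on the transaction lists after the
write-and-wait has completed.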

There might remain races though, perhaps with page faults.
