[Gluster-devel] [bug?][read-ahead] when the cache is invalidated, should all caches on the same inode_t be destroyed?
Anand Avati
avati at zresearch.com
Wed Dec 19 05:47:17 UTC 2007
>
>
> Currently the cache is bound to the fd_t, and when a write comes in, only
> the cache on that fd_t is marked dirty. I think this scheme has problems
> in the real world.
This problem exists with fds on two client machines as well (where flushing
on the inode does not help). Applications concerned about such fine-grained
concurrent access should hold record locks around their operations, during
which data is always freshly fetched.
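
For illustration, the reader side could look something like the sketch below
(minimal and untested; locked_read is a hypothetical helper, and error
handling is trimmed):

  #include <sys/types.h>
  #include <fcntl.h>
  #include <unistd.h>

  /* Take a POSIX record lock over the region before reading, so the
   * read happens only after any writer's lock is released and the
   * data is fetched fresh (caches are bypassed while locks are held). */
  static ssize_t
  locked_read (int fd, char *buf, size_t len, off_t off)
  {
          struct flock lk = {
                  .l_type   = F_RDLCK,
                  .l_whence = SEEK_SET,
                  .l_start  = off,
                  .l_len    = len,
          };

          /* block until the writer's record lock is released */
          if (fcntl (fd, F_SETLKW, &lk) == -1)
                  return -1;

          ssize_t ret = pread (fd, buf, len, off);

          /* release our record lock */
          lk.l_type = F_UNLCK;
          fcntl (fd, F_SETLK, &lk);

          return ret;
  }

The writer does the mirror image with F_WRLCK around its writes, and only
notifies the reader after unlocking.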
> Consider two processes A and B that open the same file: A writes data to
> the file, and B reads from that file after A notifies B that the data is
> ready. In this case, A and B have different fd_ts, and A's write
> operation will not mark B's cache dirty, so B reads stale data
> from the cache. What do you think? Or did I make a mistake in this
> reasoning?
>
> Some operations in read-ahead are affected by this issue, including (at
> least) ra_writev, ra_truncate, and ra_ftruncate.
>
> write-behind may be affected by the same issue, but I am not sure
> about this. At least wb_truncate, wb_stat, and wb_utimens have a bug:
> they should trigger wb_sync for all the fd_ts on the inode, not only
> the first fd_t.
The same reasoning applies to these cases as well.
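
(To be concrete about what "for all the fd_ts" would mean, here is a rough
sketch; the member names (inode->fd_list, fd->inode_list) and the wb_sync()
call shape are assumptions for illustration, not the actual source:)

  /* Hypothetical: flush pending write-behind data for every fd
   * open on the inode, not just the fd the call arrived on. */
  static void
  wb_sync_all_fds (call_frame_t *frame, inode_t *inode)
  {
          fd_t *fd = NULL;

          LOCK (&inode->lock);
          {
                  list_for_each_entry (fd, &inode->fd_list, inode_list) {
                          /* flush this fd's pending writes */
                          wb_sync (frame, fd);
                  }
          }
          UNLOCK (&inode->lock);
  }

Even so, this would only fix coherence among fds on a single client; fds on
other client machines still need record locks, as above.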
avati
--
If I traveled to the end of the rainbow
As Dame Fortune did intend,
Murphy would be there to tell me
The pot's at the other end.