[Gluster-devel] Wrong behavior on fsync of md-cache ?

Raghavendra Gowdappa rgowdapp at redhat.com
Mon Nov 24 17:53:32 UTC 2014



----- Original Message -----
> From: "Xavier Hernandez" <xhernandez at datalab.es>
> To: "Gluster Devel" <gluster-devel at gluster.org>, "Raghavendra Gowdappa" <rgowdapp at redhat.com>
> Cc: "Emmanuel Dreyfus" <manu at netbsd.org>
> Sent: Monday, November 24, 2014 11:05:57 PM
> Subject: Wrong behavior on fsync of md-cache ?
> 
> Hi,
> 
> I have an issue in ec caused by what seems to be incorrect behavior in
> md-cache, at least on NetBSD (on Linux this doesn't seem to happen).
> 
> The problem happens when multiple writes are sent in parallel and one of
> them fails with an error. After the error, an fsync is issued before
> all pending writes have completed. The problem is that this fsync request
> is not propagated through the xlator stack: md-cache answers it
> immediately with the same error code returned by the last write, without
> waiting for the pending writes to finish.

Are you sure that fsync is short-circuited in md-cache? Looking at mdc_fsync, I can see that fsync is wound down the xlator stack unconditionally, and write-behind flushes all pending writes before fsync is wound down. Are you sure the fsync is being sent by the kernel to glusterfs at all? Maybe, because of stale stat information, the kernel never issues the fsync. You can load a debug/trace xlator just above io-stats and check whether you receive the fsync call (you can also dump the fuse-to-glusterfs traffic using --dump-fuse-path=<logfile>, but it is a binary file and you need a parser to interpret that binary data).
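
For reference, a minimal sketch of what loading debug/trace just above io-stats could look like in a client volfile (the testvol-* volume names here are only placeholders, not taken from any particular deployment). debug/trace logs every fop and its return value as it passes through, so if no fsync entry appears in the client log, the call is being dropped above glusterfs rather than inside md-cache:

  # hypothetical client-side volfile fragment; volume names are illustrative
  volume testvol-io-stats
      type debug/io-stats
      subvolumes testvol-md-cache
  end-volume

  volume testvol-trace
      type debug/trace
      subvolumes testvol-io-stats
  end-volume

  # whatever sat on top of io-stats before (normally the mount/fuse layer)
  # should now take testvol-trace as its subvolume instead of testvol-io-stats

After editing the volfile, remount the client so the new graph is loaded; the per-fop trace entries then show up in the regular client log file.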

> 
> As I see it, if there are pending writes when an fsync is received, an
> xlator should wait until all these writes have completed.
> 
> With performance.stat-prefetch set to off, the problem disappears (ec
> receives the fsync and waits for all pending writes to finish before returning).
> 
> Is this a bug or am I missing something?
> 
> Thanks,
> 
> Xavi
> 

