[Gluster-users] "file changed as we read it" in gluster 3.7.4

Krutika Dhananjay kdhananj at redhat.com
Tue Sep 22 14:56:39 UTC 2015


----- Original Message -----

> From: hmlth at t-hamel.fr
> To: "Krutika Dhananjay" <kdhananj at redhat.com>
> Cc: gluster-users at gluster.org
> Sent: Tuesday, September 22, 2015 6:09:52 PM
> Subject: Re: [Gluster-users] "file changed as we read it" in gluster 3.7.4

> On 2015-09-22 02:59, Krutika Dhananjay wrote:
> > -------------------------
> >
> >> FROM: hmlth at t-hamel.fr
> >> Thank you, this solved the issue (after a umount/mount). The question
> >> now is: what's the catch? Why is this not the default?
> >>
> >> https://partner-bugzilla.redhat.com/show_bug.cgi?id=1203122
> >>
> >> The above link makes me think that there is a problem with
> >> "readdirp"
> >> performances but I'm not sure if the impact is serious or not.
> >
> > That's right. Enabling the option can slow down readdirp operations,
> > which is why it is disabled by default.

> Is there a list of options where this tradeoff is being made? Disabling
> consistency for performance is not what I was expecting by default.

The rule consistent-metadata enforces is that attributes (in readdirp) are always fetched from the _same_ brick in the replica set, for as long as that brick holds a good copy. 
This means that _even_ if all replicas contain good copies of the file/directory, the replicate module will still fetch its attributes from that file/directory's default read child. 

In fact, the precise reason consistent-metadata was introduced was to fix the very tar bug you are talking about. Because of clock skew between the servers, the replicas of a file will not carry the same {c,m,a}times. 
Enabling consistent-metadata therefore makes sure that the attributes (specifically the ctime, which tar uses to decide whether a file changed while it was being archived) are consistently fetched from the same brick in the replica set. 
Apart from that, there is no other issue consistent-metadata solves: the attributes would continue to be served from a good copy of the file in readdirp even with consistent-metadata off. 
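
If you want to see the skew for yourself, you could stat one of the flagged directories directly on each brick's backend path and compare the ctimes. The paths below are taken from the 'volume info' output further down in this mail and assume the kernel tree sits at the top of each brick, so adjust them as needed: 

n1# stat -c '%n ctime=%Z' /glusterfs/n1-2/brick/linux-3.16.7-ckt11/net 
n2# stat -c '%n ctime=%Z' /glusterfs/n2-2/brick/linux-3.16.7-ckt11/net 
n3# stat -c '%n ctime=%Z' /glusterfs/n3-2/brick/linux-3.16.7-ckt11/net 

With even a small clock skew between n1, n2 and n3 the three values will typically differ, and tar notices the difference whenever two successive lookups happen to be answered by different bricks. 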

The other option you have is to pass the command-line option --warning=no-file-changed to tar to suppress these messages. 
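For example, with the same tree from your first mail (writing the archive explicitly to stdout): 

# tar --warning=no-file-changed -cf - linux-3.16.7-ckt11/ > /dev/null 

Note that this only silences the message; as far as I remember GNU tar may still exit with a non-zero status when it detects the change, so keep that in mind if you script around it. 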

-Krutika 

> Regards

> Thomas HAMEL

> >>
> >> On 2015-09-21 16:14, Krutika Dhananjay wrote:
> >>> Could you set 'cluster.consistent-metadata' to 'on' and try the test
> >>> again?
> >>>
> >>> #gluster volume set <VOL> cluster.consistent-metadata on
> >>>
> >>> -Krutika
> >>>
> >>> -------------------------
> >>>
> >>>> FROM: hmlth at t-hamel.fr
> >>>> TO: gluster-users at gluster.org
> >>>> SENT: Monday, September 21, 2015 7:10:59 PM
> >>>> SUBJECT: [Gluster-users] "file changed as we read it" in gluster
> >>>> 3.7.4
> >>>>
> >>>> Hello,
> >>>>
> >>>> I'm evaluating gluster on Debian, I installed the version 3.7.4 and I
> >>>> see this kind of error messages when I run tar:
> >>>>
> >>>> # tar c linux-3.16.7-ckt11/ > /dev/null
> >>>> tar: linux-3.16.7-ckt11/sound/soc: file changed as we read it
> >>>> tar: linux-3.16.7-ckt11/net: file changed as we read it
> >>>> tar: linux-3.16.7-ckt11/Documentation/devicetree/bindings: file changed as we read it
> >>>> tar: linux-3.16.7-ckt11/Documentation: file changed as we read it
> >>>> tar: linux-3.16.7-ckt11/tools/perf: file changed as we read it
> >>>> tar: linux-3.16.7-ckt11/include/uapi/linux: file changed as we read it
> >>>> tar: linux-3.16.7-ckt11/arch/powerpc: file changed as we read it
> >>>> tar: linux-3.16.7-ckt11/arch/blackfin: file changed as we read it
> >>>> tar: linux-3.16.7-ckt11/arch/arm/boot/dts: file changed as we read it
> >>>> tar: linux-3.16.7-ckt11/arch/arm: file changed as we read it
> >>>> tar: linux-3.16.7-ckt11/drivers/media: file changed as we read it
> >>>> tar: linux-3.16.7-ckt11/drivers/staging: file changed as we read it
> >>>> #
> >>>>
> >>>> I saw this problem was discussed here earlier but I was under the
> >>>> impression it was resolved on the 3.5 series. Is the fix in the 3.7
> >>>> branch?
> >>>>
> >>>> My volume configuration:
> >>>>
> >>>> # gluster volume info glustervol1
> >>>>
> >>>> Volume Name: glustervol1
> >>>> Type: Replicate
> >>>> Volume ID: 71ce34f2-28da-4674-91c9-b19a2b791aef
> >>>> Status: Started
> >>>> Number of Bricks: 1 x 3 = 3
> >>>> Transport-type: tcp
> >>>> Bricks:
> >>>> Brick1: n1:/glusterfs/n1-2/brick
> >>>> Brick2: n2:/glusterfs/n2-2/brick
> >>>> Brick3: n3:/glusterfs/n3-2/brick
> >>>> Options Reconfigured:
> >>>> performance.readdir-ahead: on
> >>>> cluster.server-quorum-ratio: 51
> >>>>
> >>>> Regards
> >>>>
> >>>> Thomas HAMEL
> >>>> _______________________________________________
> >>>> Gluster-users mailing list
> >>>> Gluster-users at gluster.org
> >>>> http://www.gluster.org/mailman/listinfo/gluster-users

