[Gluster-users] "file changed as we read it" message during tar file creation on GlusterFS

Ravishankar N ravishankar at redhat.com
Tue Jan 2 12:59:37 UTC 2018



On 01/02/2018 06:22 PM, Mauro Tridici wrote:
>
> Hi Ravi,
>
> thank you very much for your support and explanation.
> If I understand correctly, the ctime xlator feature is not present in the
> current gluster package but it will be in a future release, right?

That is right, Mauro.
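Once the feature is merged, I would expect it to surface as an ordinary
volume option that you can simply turn on for tier2. The lines below are
only a sketch of how I imagine it will look; the option name is not final
yet, so treat it as hypothetical:

# hypothetical usage once the ctime feature ships; the option name may
# change before the release
gluster volume set tier2 ctime on
gluster volume get tier2 ctime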
-Ravi
>
> Thank you again,
> Mauro
>
>> On 02 Jan 2018, at 12:53, Ravishankar N
>> <ravishankar at redhat.com> wrote:
>>
>> I think it is safe to ignore it. The problem exists due to minor
>> differences in the file timestamps on the backend bricks of the same
>> subvolume (for a given file); during the course of the tar run, the
>> timestamp can be served from different bricks, causing tar to complain.
>> The ctime xlator [1] feature, once ready, should fix this issue by
>> storing the timestamps as xattrs on the bricks, i.e. all bricks will
>> have the same value.
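>>
>> If you want to see the skew for yourself, here is a rough sketch: take
>> one of the files tar complained about and stat it directly on the
>> bricks of the subvolume that holds it. The host names and brick paths
>> below are copied from the volume info you posted; the file path is just
>> one of the flagged files and may of course live on a different
>> subvolume on your setup:
>>
>> for brick in s01-stg:/gluster/mnt1/brick s02-stg:/gluster/mnt1/brick \
>>              s03-stg:/gluster/mnt1/brick s01-stg:/gluster/mnt2/brick \
>>              s02-stg:/gluster/mnt2/brick s03-stg:/gluster/mnt2/brick
>> do
>>     host=${brick%%:*}
>>     path=${brick#*:}/year1990/lffd1990050706p.nc.gz
>>     # print mtime and ctime (seconds since the epoch) as recorded on this brick
>>     ssh "$host" "stat -c '%n mtime=%Y ctime=%Z' '$path'" 2>/dev/null
>> done
>>
>> If the values differ between bricks for the same file, a stat served by
>> one brick and a later stat served by another will disagree, which is
>> exactly what makes tar complain. If the messages themselves are a
>> nuisance, recent GNU tar (1.23 or later, if I remember correctly) can
>> hide just this particular warning with --warning=no-file-changed.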
>>
>> Hope this helps.
>> Ravi
>>
>> [1] https://github.com/gluster/glusterfs/issues/208
>>
>>
>>
>> On 01/02/2018 04:13 PM, Mauro Tridici wrote:
>>> Hi All,
>>>
>>> Any news about this issue?
>>> Can I ignore this kind of error message or do I have to do something
>>> to correct it?
>>>
>>> Thank you in advance and sorry for my insistence.
>>> Regards,
>>> Mauro
>>>
>>>> On 29 Dec 2017, at 11:45, Mauro Tridici
>>>> <mauro.tridici at cmcc.it> wrote:
>>>>
>>>>
>>>> Hi Nithya,
>>>>
>>>> thank you very much for your support and sorry for the late reply.
>>>> Below you can find the output of the “gluster volume info tier2”
>>>> command and the gluster software stack version:
>>>>
>>>> gluster volume info
>>>>
>>>> Volume Name: tier2
>>>> Type: Distributed-Disperse
>>>> Volume ID: a28d88c5-3295-4e35-98d4-210b3af9358c
>>>> Status: Started
>>>> Snapshot Count: 0
>>>> Number of Bricks: 6 x (4 + 2) = 36
>>>> Transport-type: tcp
>>>> Bricks:
>>>> Brick1: s01-stg:/gluster/mnt1/brick
>>>> Brick2: s02-stg:/gluster/mnt1/brick
>>>> Brick3: s03-stg:/gluster/mnt1/brick
>>>> Brick4: s01-stg:/gluster/mnt2/brick
>>>> Brick5: s02-stg:/gluster/mnt2/brick
>>>> Brick6: s03-stg:/gluster/mnt2/brick
>>>> Brick7: s01-stg:/gluster/mnt3/brick
>>>> Brick8: s02-stg:/gluster/mnt3/brick
>>>> Brick9: s03-stg:/gluster/mnt3/brick
>>>> Brick10: s01-stg:/gluster/mnt4/brick
>>>> Brick11: s02-stg:/gluster/mnt4/brick
>>>> Brick12: s03-stg:/gluster/mnt4/brick
>>>> Brick13: s01-stg:/gluster/mnt5/brick
>>>> Brick14: s02-stg:/gluster/mnt5/brick
>>>> Brick15: s03-stg:/gluster/mnt5/brick
>>>> Brick16: s01-stg:/gluster/mnt6/brick
>>>> Brick17: s02-stg:/gluster/mnt6/brick
>>>> Brick18: s03-stg:/gluster/mnt6/brick
>>>> Brick19: s01-stg:/gluster/mnt7/brick
>>>> Brick20: s02-stg:/gluster/mnt7/brick
>>>> Brick21: s03-stg:/gluster/mnt7/brick
>>>> Brick22: s01-stg:/gluster/mnt8/brick
>>>> Brick23: s02-stg:/gluster/mnt8/brick
>>>> Brick24: s03-stg:/gluster/mnt8/brick
>>>> Brick25: s01-stg:/gluster/mnt9/brick
>>>> Brick26: s02-stg:/gluster/mnt9/brick
>>>> Brick27: s03-stg:/gluster/mnt9/brick
>>>> Brick28: s01-stg:/gluster/mnt10/brick
>>>> Brick29: s02-stg:/gluster/mnt10/brick
>>>> Brick30: s03-stg:/gluster/mnt10/brick
>>>> Brick31: s01-stg:/gluster/mnt11/brick
>>>> Brick32: s02-stg:/gluster/mnt11/brick
>>>> Brick33: s03-stg:/gluster/mnt11/brick
>>>> Brick34: s01-stg:/gluster/mnt12/brick
>>>> Brick35: s02-stg:/gluster/mnt12/brick
>>>> Brick36: s03-stg:/gluster/mnt12/brick
>>>> Options Reconfigured:
>>>> features.scrub: Active
>>>> features.bitrot: on
>>>> features.inode-quota: on
>>>> features.quota: on
>>>> performance.client-io-threads: on
>>>> cluster.min-free-disk: 10
>>>> cluster.quorum-type: auto
>>>> transport.address-family: inet
>>>> nfs.disable: on
>>>> server.event-threads: 4
>>>> client.event-threads: 4
>>>> cluster.lookup-optimize: on
>>>> performance.readdir-ahead: on
>>>> performance.parallel-readdir: off
>>>> cluster.readdir-optimize: on
>>>> features.cache-invalidation: on
>>>> features.cache-invalidation-timeout: 600
>>>> performance.stat-prefetch: on
>>>> performance.cache-invalidation: on
>>>> performance.md-cache-timeout: 600
>>>> network.inode-lru-limit: 50000
>>>> performance.io-cache: off
>>>> disperse.cpu-extensions: auto
>>>> performance.io-thread-count: 16
>>>> features.quota-deem-statfs: on
>>>> features.default-soft-limit: 90
>>>> cluster.server-quorum-type: server
>>>> cluster.brick-multiplex: on
>>>> cluster.server-quorum-ratio: 51%
>>>>
>>>> [root at s01 ~]# rpm -qa|grep gluster
>>>> python2-gluster-3.10.5-1.el7.x86_64
>>>> glusterfs-geo-replication-3.10.5-1.el7.x86_64
>>>> centos-release-gluster310-1.0-1.el7.centos.noarch
>>>> glusterfs-server-3.10.5-1.el7.x86_64
>>>> glusterfs-libs-3.10.5-1.el7.x86_64
>>>> glusterfs-api-3.10.5-1.el7.x86_64
>>>> vdsm-gluster-4.19.31-1.el7.centos.noarch
>>>> glusterfs-3.10.5-1.el7.x86_64
>>>> gluster-nagios-common-1.1.0-0.el7.centos.noarch
>>>> glusterfs-cli-3.10.5-1.el7.x86_64
>>>> glusterfs-client-xlators-3.10.5-1.el7.x86_64
>>>> gluster-nagios-addons-1.1.0-0.el7.centos.x86_64
>>>> glusterfs-fuse-3.10.5-1.el7.x86_64
>>>> libvirt-daemon-driver-storage-gluster-3.2.0-14.el7_4.3.x86_64
>>>>
>>>> Many thanks,
>>>> Mauro
>>>>
>>>>> On 29 Dec 2017, at 04:59, Nithya Balachandran
>>>>> <nbalacha at redhat.com> wrote:
>>>>>
>>>>> Hi Mauro,
>>>>>
>>>>> What version of Gluster are you running and what is your volume 
>>>>> configuration?
>>>>>
>>>>> IIRC, this was seen because of mismatches in the ctime returned to
>>>>> the client. I don't think there were issues with the files, but I
>>>>> will leave it to Ravi and Raghavendra to comment.
>>>>>
>>>>>
>>>>> Regards,
>>>>> Nithya
>>>>>
>>>>>
>>>>> On 29 December 2017 at 04:10, Mauro Tridici
>>>>> <mauro.tridici at cmcc.it> wrote:
>>>>>
>>>>>
>>>>>     Hi All,
>>>>>
>>>>>     has anyone had the same experience?
>>>>>     Could you provide me with some information about this error?
>>>>>     It happens only on the GlusterFS file system.
>>>>>
>>>>>     Thank you,
>>>>>     Mauro
>>>>>
>>>>>>     On 20 Dec 2017, at 16:57, Mauro Tridici
>>>>>>     <mauro.tridici at cmcc.it> wrote:
>>>>>>
>>>>>>
>>>>>>     Dear Users,
>>>>>>
>>>>>>     I’m experiencing a random problem (a “file changed as we read
>>>>>>     it” error) during tar file creation on a distributed-dispersed
>>>>>>     Gluster file system.
>>>>>>
>>>>>>     The tar files seem to be created correctly, but I can see a
>>>>>>     lot of messages similar to the following ones:
>>>>>>
>>>>>>     tar: ./year1990/lffd1990050706p.nc.gz: file changed as we read it
>>>>>>     tar: ./year1990/lffd1990052106p.nc.gz: file changed as we read it
>>>>>>     tar: ./year1990/lffd1990052412p.nc.gz: file changed as we read it
>>>>>>     tar: ./year1990/lffd1990091018.nc.gz: file changed as we read it
>>>>>>     tar: ./year1990/lffd1990092300p.nc.gz: file changed as we read it
>>>>>>     tar: ./year1990/lffd1990092706p.nc.gz: file changed as we read it
>>>>>>     tar: ./year1990/lffd1990100312p.nc.gz: file changed as we read it
>>>>>>     tar: ./year1990/lffd1990100412.nc.gz: file changed as we read it
>>>>>>     tar: ./year1991/lffd1991012106.nc.gz: file changed as we read it
>>>>>>     tar: ./year1991/lffd1991010918.nc.gz: file changed as we read it
>>>>>>     tar: ./year1991/lffd1991011400.nc.gz: file changed as we read it
>>>>>>
>>>>>>     I’m executing the tar command on a server running CentOS 6.2:
>>>>>>     it is a gluster native client.
>>>>>>
>>>>>>     You can find below some basic info about the gluster client:
>>>>>>
>>>>>>     [root at athena]# rpm -qa|grep gluster
>>>>>>     glusterfs-3.10.5-1.el6.x86_64
>>>>>>     centos-release-gluster310-1.0-1.el6.centos.noarch
>>>>>>     glusterfs-client-xlators-3.10.5-1.el6.x86_64
>>>>>>     glusterfs-fuse-3.10.5-1.el6.x86_64
>>>>>>     glusterfs-libs-3.10.5-1.el6.x86_64
>>>>>>
>>>>>>     Can I consider these warnings a false positive, or will the
>>>>>>     created tar files suffer from inconsistency?
>>>>>>     Is it a tar command problem or a gluster problem?
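>>>>>>
>>>>>>     For what it’s worth, this is the check I was planning to run to
>>>>>>     convince myself that the archive contents are intact (just a
>>>>>>     sketch: the archive path and the source directory below are
>>>>>>     placeholders, and the member name is one of the flagged files):
>>>>>>
>>>>>>     # extract one flagged member into a scratch directory ...
>>>>>>     mkdir -p /tmp/tarcheck && cd /tmp/tarcheck
>>>>>>     tar -xf /path/to/archive.tar ./year1990/lffd1990050706p.nc.gz
>>>>>>     # ... and compare it byte for byte with the original file
>>>>>>     cmp ./year1990/lffd1990050706p.nc.gz \
>>>>>>         /path/to/source/year1990/lffd1990050706p.nc.gz \
>>>>>>         && echo "contents match"
>>>>>>
>>>>>>     If the contents match, I suppose I can treat the warning as
>>>>>>     harmless, but I would like confirmation.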
>>>>>>
>>>>>>     Could someone help me to resolve this issue?
>>>>>>
>>>>>>     Thank you very much,
>>>>>>     Mauro
>>>>>
>>>>>
>>>>>
>>>>>
>>>>>
>>>>>     _______________________________________________
>>>>>     Gluster-users mailing list
>>>>>     Gluster-users at gluster.org
>>>>>     http://lists.gluster.org/mailman/listinfo/gluster-users
>>>>>
>>>>>
>>>>
>>>>
>>>
>>>
>>>
>>> _______________________________________________
>>> Gluster-users mailing list
>>> Gluster-users at gluster.org
>>> http://lists.gluster.org/mailman/listinfo/gluster-users
>>
>
>
> -------------------------
> Mauro Tridici
>
> Fondazione CMCC
> CMCC Supercomputing Center
> c/o Complesso Ecotekne - Università del Salento -
> Strada Prov.le Lecce - Monteroni sn
> 73100 Lecce  IT
> http://www.cmcc.it
>
> mobile: (+39) 327 5630841
> email: mauro.tridici at cmcc.it
>
