[Gluster-users] Input/output error when trying to access a file on client

Joe Julian joe at julianfamily.org
Wed Mar 11 14:52:11 UTC 2015


http://joejulian.name/blog/glusterfs-split-brain-recovery-made-easy/
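
In short: each copy's trusted.afr.md1-client-N xattrs are pending-operation
counters against the other replica, and in your getfattr output both copies
accuse each other (non-zero first counter on each side), which is exactly a
data split-brain. A minimal sketch of one manual fix, assuming the brick path
from your mail and that you have already decided which copy is the good one
(verify before touching anything):

  # Run on the server whose brick holds the copy you want to DISCARD.
  BRICK=/data/glusterfs/md1/brick1
  FILE=kvm/hail/hail_home.qcow2

  # Clear the discarded copy's accusation against the surviving replica,
  # so the other brick becomes the undisputed source for self-heal.
  # In your output the second brick's non-zero counter is
  # trusted.afr.md1-client-2, so if that is the copy you discard:
  setfattr -n trusted.afr.md1-client-2 -v 0x000000000000000000000000 "$BRICK/$FILE"

  # Then trigger a heal and check the result (the /mnt/md1 mount point is
  # just a placeholder for wherever your FUSE client mounts the volume):
  gluster volume heal md1
  stat /mnt/md1/kvm/hail/hail_home.qcow2
  getfattr -d -m trusted.afr -e hex "$BRICK/$FILE"

The alternative is to delete the bad copy (plus its gfid hard link under
.glusterfs) from the discarded brick and let self-heal recreate it from the
good replica. Either way, pick the surviving copy yourself first.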

On March 11, 2015 4:24:09 AM PDT, Alessandro Ipe <Alessandro.Ipe at meteo.be> wrote:
>Well, it is even worse. Now doing an "ls -R" on the volume results in a
>lot of
>
>[2015-03-11 11:18:31.957505] E
>[afr-self-heal-common.c:233:afr_sh_print_split_brain_log]
>0-md1-replicate-2: Unable to self-heal contents of '/library' (possible
>split-brain). Please delete the file from all but the preferred
>subvolume.- Pending matrix:  [ [ 0 2 ] [ 1 0 ] ]
>[2015-03-11 11:18:31.957692] E
>[afr-self-heal-common.c:2868:afr_log_self_heal_completion_status]
>0-md1-replicate-2:  metadata self heal  failed,   on /library
>
>I am desperate...
>
>
>A.
>
>
>On Wednesday 11 March 2015 12:05:33 you wrote:
>> Hi,
>> 
>> 
>> When trying to access a file on a gluster client (through FUSE), I get
>> an "Input/output error" message.
>> 
>> Getting the extended attributes of the file gives me, for the first brick:
>> # file: data/glusterfs/md1/brick1/kvm/hail/hail_home.qcow2
>> trusted.afr.md1-client-2=0sAAAAAAAAAAAAAAAA
>> trusted.afr.md1-client-3=0sAAABdAAAAAAAAAAA
>> trusted.gfid=0sOCFPGCdrQ9uyq2yTTPCKqQ==
>> 
>> while for the second (replica) brick:
>> # file: data/glusterfs/md1/brick1/kvm/hail/hail_home.qcow2
>> trusted.afr.md1-client-2=0sAAABJAAAAAAAAAAA
>> trusted.afr.md1-client-3=0sAAAAAAAAAAAAAAAA
>> trusted.gfid=0sOCFPGCdrQ9uyq2yTTPCKqQ==
>> 
>> It seems that I have a split-brain. How can I solve this issue by
>> resetting the attributes, please?
>> 
>> 
>> Thanks,
>> 
>> 
>> Alessandro.
>> 
>> ==================
>> gluster volume info md1
>> 
>> Volume Name: md1
>> Type: Distributed-Replicate
>> Volume ID: 6da4b915-1def-4df4-a41c-2f3300ebf16b
>> Status: Started
>> Number of Bricks: 3 x 2 = 6
>> Transport-type: tcp
>> Bricks:
>> Brick1: tsunami1:/data/glusterfs/md1/brick1
>> Brick2: tsunami2:/data/glusterfs/md1/brick1
>> Brick3: tsunami3:/data/glusterfs/md1/brick1
>> Brick4: tsunami4:/data/glusterfs/md1/brick1
>> Brick5: tsunami5:/data/glusterfs/md1/brick1
>> Brick6: tsunami6:/data/glusterfs/md1/brick1
>> Options Reconfigured:
>> server.allow-insecure: on
>> cluster.read-hash-mode: 2
>> features.quota: off
>> performance.write-behind: on
>> performance.write-behind-window-size: 4MB
>> performance.flush-behind: off
>> performance.io-thread-count: 64
>> performance.cache-size: 512MB
>> nfs.disable: on
>> cluster.lookup-unhashed: off
>
>_______________________________________________
>Gluster-users mailing list
>Gluster-users at gluster.org
>http://www.gluster.org/mailman/listinfo/gluster-users

-- 
Sent from my Android device with K-9 Mail. Please excuse my brevity.
