[Gluster-users] Input/output error when trying to access a file on client
Krutika Dhananjay
kdhananj at redhat.com
Wed Mar 11 12:02:56 UTC 2015
Hi,
Have you gone through https://github.com/gluster/glusterfs/blob/master/doc/debugging/split-brain.md ?
If not, could you go through that once and try the steps given there? Do let us know if something is not clear in the doc.
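In the meantime, here is a rough sketch of the manual procedure that document
walks through, using your md1 volume and the brick path from your mail as the
example (please double-check each step against the doc before deleting or
modifying anything on a brick; which copy is the "good" one is your call):

  # 1. List the files gluster currently flags as split-brain
  gluster volume heal md1 info split-brain

  # 2. On each brick (not on the fuse mount), inspect the AFR changelog xattrs
  getfattr -d -m . -e hex /data/glusterfs/md1/brick1/kvm/hail/hail_home.qcow2

  # 3. Pick the copy you want to keep. On the brick holding the bad copy,
  #    either remove that copy together with its gfid hard link under
  #    <brick>/.glusterfs/<first-2-hex-chars>/<next-2-hex-chars>/<full-gfid>,
  #    or zero out the trusted.afr.* entry that is non-zero on the bad copy,
  #    e.g. (the client index depends on which brick you decide to keep):
  setfattr -n trusted.afr.md1-client-2 -v 0x000000000000000000000000 \
      /data/glusterfs/md1/brick1/kvm/hail/hail_home.qcow2

  # 4. Trigger a heal and verify the file is no longer listed
  gluster volume heal md1
  gluster volume heal md1 info split-brain

All of the getfattr/setfattr/rm steps are meant to be run directly on the
brick directories on the servers, never through the client mount.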
-Krutika
----- Original Message -----
> From: "Alessandro Ipe" <Alessandro.Ipe at meteo.be>
> To: gluster-users at gluster.org
> Sent: Wednesday, March 11, 2015 4:54:09 PM
> Subject: Re: [Gluster-users] Input/output error when trying to access a file
> on client
> Well, it is even worse. Now, doing an "ls -R" on the volume results in a
> lot of
> [2015-03-11 11:18:31.957505] E
> [afr-self-heal-common.c:233:afr_sh_print_split_brain_log] 0-md1-replicate-2:
> Unable to self-heal contents of '/library' (possible split-brain). Please
> delete the file from all but the preferred subvolume.- Pending matrix: [ [ 0
> 2 ] [ 1 0 ] ]
> [2015-03-11 11:18:31.957692] E
> [afr-self-heal-common.c:2868:afr_log_self_heal_completion_status]
> 0-md1-replicate-2: metadata self heal failed, on /library
> I am desperate...
> A.
> On Wednesday 11 March 2015 12:05:33 you wrote:
> > Hi,
> >
> >
> > When trying to access a file on a gluster client (through fuse), I get an
> > "Input/output error" message.
> >
> > Getting the extended attributes of the file gives me, for the first brick:
> > # file: data/glusterfs/md1/brick1/kvm/hail/hail_home.qcow2
> > trusted.afr.md1-client-2=0sAAAAAAAAAAAAAAAA
> > trusted.afr.md1-client-3=0sAAABdAAAAAAAAAAA
> > trusted.gfid=0sOCFPGCdrQ9uyq2yTTPCKqQ==
> >
> > while for the second (replica) brick:
> > # file: data/glusterfs/md1/brick1/kvm/hail/hail_home.qcow2
> > trusted.afr.md1-client-2=0sAAABJAAAAAAAAAAA
> > trusted.afr.md1-client-3=0sAAAAAAAAAAAAAAAA
> > trusted.gfid=0sOCFPGCdrQ9uyq2yTTPCKqQ==
> >
> > It seems that I have a split-brain. How can I solve this issue by
> > resetting the attributes, please?
> >
> >
> > Thanks,
> >
> >
> > Alessandro.
> >
> > ==================
> > gluster volume info md1
> >
> > Volume Name: md1
> > Type: Distributed-Replicate
> > Volume ID: 6da4b915-1def-4df4-a41c-2f3300ebf16b
> > Status: Started
> > Number of Bricks: 3 x 2 = 6
> > Transport-type: tcp
> > Bricks:
> > Brick1: tsunami1:/data/glusterfs/md1/brick1
> > Brick2: tsunami2:/data/glusterfs/md1/brick1
> > Brick3: tsunami3:/data/glusterfs/md1/brick1
> > Brick4: tsunami4:/data/glusterfs/md1/brick1
> > Brick5: tsunami5:/data/glusterfs/md1/brick1
> > Brick6: tsunami6:/data/glusterfs/md1/brick1
> > Options Reconfigured:
> > server.allow-insecure: on
> > cluster.read-hash-mode: 2
> > features.quota: off
> > performance.write-behind: on
> > performance.write-behind-window-size: 4MB
> > performance.flush-behind: off
> > performance.io-thread-count: 64
> > performance.cache-size: 512MB
> > nfs.disable: on
> > cluster.lookup-unhashed: off
> _______________________________________________
> Gluster-users mailing list
> Gluster-users at gluster.org
> http://www.gluster.org/mailman/listinfo/gluster-users