[Gluster-users] folder not being healed

Krutika Dhananjay kdhananj at redhat.com
Mon Jan 4 11:44:29 UTC 2016


Hi, 

Could you share the output of 
# getfattr -d -m . -e hex <abs-path-to-media/ga/live/a> 

from both the bricks? 
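(For reference, a concrete form of that command, assuming the brick root shown in your `gluster volume status` output and the entry path reported by `heal info`:)

```shell
# Run as root on each brick server (web01.rsdc and web02.rsdc).
# The path joins the brick root (/srv/share/glusterfs) with the entry
# reported by "gluster volume heal share info" (/media/ga/live/a).
getfattr -d -m . -e hex /srv/share/glusterfs/media/ga/live/a
```

The trusted.afr.* extended attributes in the output show the pending-change counters each replica holds against the other, which is what we need to see why the entry is stuck.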

-Krutika 
----- Original Message -----

> From: "Andreas Tsaridas" <andreas.tsaridas at gmail.com>
> To: gluster-users at gluster.org
> Sent: Monday, January 4, 2016 5:10:58 PM
> Subject: [Gluster-users] folder not being healed

> Hello,

> I have a two-node replicated cluster running GlusterFS 3.6.3 on RedHat 6.6.
> The problem is that a specific folder is repeatedly picked up for healing
> but never actually gets healed. This has been going on for 2 weeks now.

> -----

> # gluster volume status
> Status of volume: share
> Gluster process                           Port   Online  Pid
> ------------------------------------------------------------------------------
> Brick 172.16.4.1:/srv/share/glusterfs     49152  Y       10416
> Brick 172.16.4.2:/srv/share/glusterfs     49152  Y       19907
> NFS Server on localhost                   2049   Y       22664
> Self-heal Daemon on localhost             N/A    Y       22676
> NFS Server on 172.16.4.2                  2049   Y       19923
> Self-heal Daemon on 172.16.4.2            N/A    Y       19937

> Task Status of Volume share
> ------------------------------------------------------------------------------
> There are no active volume tasks

> ------

> # gluster volume info

> Volume Name: share
> Type: Replicate
> Volume ID: 17224664-645c-48b7-bc3a-b8fc84c6ab30
> Status: Started
> Number of Bricks: 1 x 2 = 2
> Transport-type: tcp
> Bricks:
> Brick1: 172.16.4.1:/srv/share/glusterfs
> Brick2: 172.16.4.2:/srv/share/glusterfs
> Options Reconfigured:
> cluster.background-self-heal-count: 20
> cluster.heal-timeout: 2
> performance.normal-prio-threads: 64
> performance.high-prio-threads: 64
> performance.least-prio-threads: 64
> performance.low-prio-threads: 64
> performance.flush-behind: off
> performance.io-thread-count: 64

> ------

> # gluster volume heal share info
> Brick web01.rsdc:/srv/share/glusterfs/
> /media/ga/live/a - Possibly undergoing heal

> Number of entries: 1

> Brick web02.rsdc:/srv/share/glusterfs/
> Number of entries: 0

> -------

> # gluster volume heal share info split-brain
> Gathering list of split brain entries on volume share has been successful

> Brick 172.16.4.1:/srv/share/glusterfs
> Number of entries: 0

> Brick 172.16.4.2:/srv/share/glusterfs
> Number of entries: 0

> -------

> ==> /var/log/glusterfs/glustershd.log <==
> [2016-01-04 11:35:33.004831] I
> [afr-self-heal-entry.c:554:afr_selfheal_entry_do] 0-share-replicate-0:
> performing entry selfheal on b13199a1-464c-4491-8464-444b3f7eeee3
> [2016-01-04 11:36:07.449192] W [client-rpc-fops.c:2772:client3_3_lookup_cbk]
> 0-share-client-1: remote operation failed: No data available. Path: (null)
> (00000000-0000-0000-0000-000000000000)
> [2016-01-04 11:36:07.449706] W [client-rpc-fops.c:240:client3_3_mknod_cbk]
> 0-share-client-1: remote operation failed: File exists. Path: (null)

> Could you please advise?

> Kind regards,

> AT

> _______________________________________________
> Gluster-users mailing list
> Gluster-users at gluster.org
> http://www.gluster.org/mailman/listinfo/gluster-users