[Gluster-users] Self-Heal Daemon/Volume Replication 3.4 Beta 2 (and 3) issues

Pranith Kumar Karampuri pkarampu at redhat.com
Tue Jun 25 15:01:08 UTC 2013


Ryan,
      The self-heal daemon logs to the glustershd.log* files; glustershd stands for Gluster Self-Heal Daemon. Could you send the self-heal daemon log files from all the machines in the cluster?
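For example (the hostnames below are placeholders, not the actual peers of this cluster), the logs could be gathered with something like:

```shell
# Sketch only: print the copy commands that would collect glustershd.log*
# from each node into a per-host folder. Hostnames are placeholders;
# substitute the real peers and remove `echo` to actually run the copies.
for host in node1 node2 node3 node4; do
    mkdir -p "logs-$host"
    echo scp "$host:/var/log/glusterfs/glustershd.log*" "logs-$host/"
done
```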

Pranith.
----- Original Message -----
> From: "Ryan Aydelott" <ryade at mcs.anl.gov>
> To: gluster-users at gluster.org
> Sent: Friday, June 21, 2013 9:50:16 PM
> Subject: [Gluster-users] Self-Heal Daemon/Volume Replication 3.4 Beta 2 (and 3) issues
> 
> This issue appears to have affected another user, and I'm having a similar
> problem:
> 
> http://comments.gmane.org/gmane.comp.file-systems.gluster.user/11666
> 
> I lost the RAID array on a brick the other day. Upon rebuilding the array and
> attempting to replicate from its mate (replica 2), I receive the following
> message:
> 
> gluster> volume heal whisper full
> Self-heal daemon is not running. Check self-heal daemon log file.
> 
> Checking the logs, I see that no self-heal log file exists:
> http://pastie.org/8066714
> 
> Gluster volume status shows: http://pastie.org/8066724
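A quick sanity check, independent of what the CLI reports, is to test the daemon's pid file directly (the path below matches the default shown in the process info further down; adjust if your build differs):

```shell
# Check whether the self-heal daemon is alive via its pid file.
# This path is the default written by glusterd on this install.
PIDFILE=/var/lib/glusterd/glustershd/run/glustershd.pid
if [ -f "$PIDFILE" ] && kill -0 "$(cat "$PIDFILE")" 2>/dev/null; then
    echo "glustershd running (pid $(cat "$PIDFILE"))"
else
    echo "glustershd not running"
fi
```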
> 
> The logs are filled with:
> 
> glustershd.log:2820: option background-self-heal-count 0
> glustershd.log:2826: option iam-self-heal-daemon yes
> glustershd.log:2827: option self-heal-daemon on
> glustershd.log:2828: option entry-self-heal on
> glustershd.log:2829: option data-self-heal on
> glustershd.log:2830: option metadata-self-heal on
> 
> a snip of glustershd-server.vol shows:
> 
> volume whisper-replicate-0
> type cluster/replicate
> option iam-self-heal-daemon yes
> option self-heal-daemon on
> option entry-self-heal on
> option data-self-heal on
> option metadata-self-heal on
> option background-self-heal-count 0
> subvolumes whisper-client-0 whisper-client-1
> end-volume
> 
> volume whisper-replicate-1
> type cluster/replicate
> option iam-self-heal-daemon yes
> option self-heal-daemon on
> option entry-self-heal on
> option data-self-heal on
> option metadata-self-heal on
> option background-self-heal-count 0
> subvolumes whisper-client-2 whisper-client-3
> end-volume
> 
> ….
> 
> I'm currently running:
> 
> glusterfs 3.4.0beta3 built on Jun 14 2013 16:40:17
> Repository revision: git://git.gluster.com/glusterfs.git
> Copyright (c) 2006-2011 Gluster Inc. < http://www.gluster.com >
> GlusterFS comes with ABSOLUTELY NO WARRANTY.
> You may redistribute copies of GlusterFS under the terms of the GNU General
> Public License.
> 
> Process info: root 27689 0.1 1.3 1425584 657344 ? Ssl 09:48 0:01
> /usr/sbin/glusterfs -s localhost --volfile-id gluster/glustershd -p
> /var/lib/glusterd/glustershd/run/glustershd.pid -l
> /var/log/glusterfs/glustershd.log -S
> /var/run/9f827e9bc20e176d0499e26897d68d71.socket --xlator-option
> *replicate*.node-uuid=a50dffd6-de39-42b5-a83c-e36c79f6e5c3
> 
> This was installed on Ubuntu Precise using the semiosis PPA.
> 
> _______________________________________________
> Gluster-users mailing list
> Gluster-users at gluster.org
> http://supercolony.gluster.org/mailman/listinfo/gluster-users


