[Gluster-users] When self-healing is triggered?

Flavio Pessoa nvezes at live.com
Tue May 29 08:40:31 UTC 2012

Hi,

When is self-healing triggered? As you can see below, it has been triggered, yet I checked the logs and there was no disconnection from the FTP servers, so I can't understand why it happened. Client-7 comes online, so could the replicas differ because of a corrupted file? Or was the FTP server for some reason unable to write to one of the replicated storages (client-6 and client-7)?

[2012-05-22 17:06:06.133382] I [client-handshake.c:863:client_setvolume_cbk] 0-client-7: Connected to, attached to remote volume 'brick1'.
[2012-05-22 17:06:06.133410] I [afr-common.c:2552:afr_notify] 0-replicate-3: Subvolume 'client-7' came back up; going online.
[2012-05-22 17:06:06.138600] I [fuse-bridge.c:3316:fuse_graph_setup] 0-fuse: switched graph to 0
[2012-05-22 17:06:06.138805] I [fuse-bridge.c:2897:fuse_init] 0-glusterfs-fuse: FUSE inited with protocol versions: glusterfs 7.13 kernel 7.16
[2012-05-22 17:06:06.139359] I [afr-common.c:819:afr_fresh_lookup_cbk] 0-replicate-0: added root inode
[2012-05-22 17:06:06.140799] I [afr-common.c:819:afr_fresh_lookup_cbk] 0-replicate-1: added root inode
[2012-05-22 17:06:06.140841] I [afr-common.c:819:afr_fresh_lookup_cbk] 0-replicate-2: added root inode
[2012-05-22 17:06:06.141267] I [afr-common.c:819:afr_fresh_lookup_cbk] 0-replicate-3: added root inode
[2012-05-22 17:06:06.151597] I [client-handshake.c:863:client_setvolume_cbk] 0-client-6: Connected to, attached to remote volume 'brick1'.
[2012-05-22 17:06:06.151715] I [client-handshake.c:863:client_setvolume_cbk] 0-client-4: Connected to, attached to remote volume 'brick0'.
[2012-05-22 17:21:30.895793] I [afr-common.c:716:afr_lookup_done] 0-replicate-3: background entry self-heal triggered. path: /02720-store0/2012-05-21
[2012-05-22 17:21:30.914237] I [afr-self-heal-common.c:1527:afr_self_heal_completion_cbk] 0-replicate-3: background entry self-heal completed on /02720-store0/2012-05-21

Regards
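(For context, not from the original message: in GlusterFS replicate (AFR), a background entry self-heal like the one logged above is triggered during a lookup when the AFR pending-changelog extended attributes on the directory are non-zero on one replica, i.e. one brick recorded operations the other brick missed while it was unreachable. A way to check this directly on a brick, assuming a hypothetical brick mount point of /export/brick1; substitute your actual brick path and volume's trusted.afr.* attribute names:)

```shell
# Dump the AFR changelog xattrs for the directory that was healed.
# Non-zero trusted.afr.<volume>-client-N counters mean that replica
# has pending entry/data/metadata operations to heal on its peer,
# which is what causes self-heal to fire on the next lookup.
getfattr -d -m 'trusted.afr' -e hex \
    /export/brick1/02720-store0/2012-05-21
```

If the counters are all zero on both bricks after the heal completes, the replicas are back in sync even though no client-side disconnect was logged (the divergence can come from a brief brick-side outage rather than an FTP-server disconnect).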
