[Gluster-users] self heal failed, on /

max.degraaf at kpn.com
Fri Feb 24 06:17:41 UTC 2017


The server behind this specific mount is running 3.7.11. The client is running version 3.4.2.

There is more to it. This client also mounts volumes from another server, and that server is running 3.4.2 as well. What's your advice: update that other server to 3.7.11 (or higher) first, or start with the client update?
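For reference, this is how we check the installed version on each node; as far as I know these commands exist in both release lines:

    gluster --version     (on the servers)
    glusterfs --version   (on the clients)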

From: Mohammed Rafi K C [mailto:rkavunga at redhat.com]
Sent: Friday, February 24, 2017 07:02
To: Graaf, Max de; gluster-users at gluster.org
Subject: Re: [Gluster-users] self heal failed, on /

On 02/23/2017 12:18 PM, max.degraaf at kpn.com wrote:
Hi,

We have a 4-node glusterfs setup that appears to be running without any problems. We can't find any issues with replication or anything else.

We also have 4 machines running the glusterfs client. On all 4 machines we see the following messages in the logs at random moments:

[2017-02-23 00:04:33.168778] I [afr-self-heal-common.c:2869:afr_log_self_heal_completion_status] 0-aab-replicate-0:  metadata self heal  is successfully completed,   metadata self heal from source aab-client-0 to aab-client-1,  aab-client-2,  aab-client-3,  metadata - Pending matrix:  [ [ 0 0 0 0 ] [ 0 0 0 0 ] [ 0 0 0 0 ] [ 0 0 0 0 ] ], on /
[2017-02-23 00:09:34.431089] E [afr-self-heal-common.c:2869:afr_log_self_heal_completion_status] 0-aab-replicate-0:  metadata self heal  failed,   on /
[2017-02-23 00:14:34.948975] I [afr-self-heal-common.c:2869:afr_log_self_heal_completion_status] 0-aab-replicate-0:  metadata self heal  is successfully completed,   metadata self heal from source aab-client-0 to aab-client-1,  aab-client-2,  aab-client-3,  metadata - Pending matrix:  [ [ 0 0 0 0 ] [ 0 0 0 0 ] [ 0 0 0 0 ] [ 0 0 0 0 ] ], on /

The content within the glusterfs filesystems is rather static, with only minor changes. This "self heal failed" message is printed at random moments in the logs on the glusterfs clients, even at moments when nothing has changed within the glusterfs filesystem. When it is printed, it's never on multiple clients at the same time. What we also don't understand: the error indicates the self heal failed on root "/". In the root of this glusterfs mount there are only 2 folders, and no files are ever written at root level.
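If it helps, we can collect the heal state from one of the servers with something along these lines (assuming the volume is named "aab", going by the 0-aab-replicate-0 name in the log lines above):

    gluster volume heal aab info
    gluster volume heal aab info split-brain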

Any thoughts?

From the logs, it looks like an older version of gluster, probably 3.5. Please confirm your glusterfs version. That version is pretty old and may have reached End of Life. Also, this is AFR v1, whereas the latest stable versions run AFR v2.

So I would suggest you upgrade to a later version, perhaps 3.8.
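One note if you do upgrade: once all server nodes are on the new version, the cluster operating version normally has to be bumped as well, along these lines (30800 is my assumption for the 3.8 op-version; please verify the exact number against the release notes):

    gluster volume set all cluster.op-version 30800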

If you still want to stay with this version, I can give it a try. Let me know the version, volume info, and volume status. Still, I would suggest upgrading ;)
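All three can be gathered from any one of the server nodes, e.g. (again assuming the volume name "aab" from the client logs):

    gluster --version
    gluster volume info aab
    gluster volume status aab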


Regards
Rafi KC

_______________________________________________
Gluster-users mailing list
Gluster-users at gluster.org
http://lists.gluster.org/mailman/listinfo/gluster-users
