[Gluster-users] self heal failed, on /

Mohammed Rafi K C rkavunga at redhat.com
Fri Feb 24 09:05:59 UTC 2017



On 02/24/2017 01:25 PM, max.degraaf at kpn.com wrote:
>
> Any advice on the sequence of updating? Server or client first?
>
>  
>
> I assume it’s a simple in-place update where configuration is
> preserved. Right?
>

Configuration will be preserved. I don't know the exact procedure for a
rolling upgrade (other than upgrading the servers first, one after the
other). Maybe somebody else on this mailing list can give you more
details, or a quick web search will turn up some blog posts. A rough
sketch of the per-server steps is below.
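
As a hedged sketch only (not an official procedure; "aab" is the volume
name taken from your logs, and the package and service names vary per
distro), the usual server-first rolling pattern looks something like
this, run on one server node at a time:

    # upgrade ONE server node at a time
    gluster volume heal aab info       # wait until no entries are pending
    systemctl stop glusterd            # stop the management daemon
    pkill glusterfs                    # also stops brick and self-heal processes
    yum update glusterfs-server        # or apt-get; distro-specific
    systemctl start glusterd           # bricks restart and heal catches up
    gluster volume heal aab info       # confirm heal is done before the next node

Clients can then be upgraded and remounted once all servers are done.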


Regards
Rafi KC


>
> From: Mohammed Rafi K C [mailto:rkavunga at redhat.com]
> Sent: Friday, February 24, 2017 08:49
> To: Graaf, Max de; gluster-users at gluster.org
> Subject: Re: [Gluster-users] self heal failed, on /
>
> On 02/24/2017 11:47 AM, max.degraaf at kpn.com wrote:
>
>     The version on the server of this specific mount is 3.7.11. The
>     client is running version 3.4.2.
>
>
> It is always better to have everything on one version, all clients and
> all servers. In this case there is a huge gap between the versions, 3.7
> and 3.4.
>
> An additional point: the code running on 3.4 is AFR v1, while 3.7 runs
> AFR v2, meaning there is a huge difference in the replication/healing
> logic. So I recommend keeping all the gluster instances on the same
> version; a quick way to check each node is sketched below.
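>
> As a small, hedged sketch (assuming default package installs), you can
> compare the installed version on every client and server node with:
>
>     # run on each client and each server
>     glusterfs --version | head -1     # e.g. "glusterfs 3.7.11 built on ..."
>     # on servers, the cluster operating version is recorded here:
>     grep operating-version /var/lib/glusterd/glusterd.info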
>
>
> ~Rafi
>
>
> There is more to it. This client is actually mounting volumes where the
> other server is running 3.4.2 as well. What’s your advice: update that
> other server to 3.7.11 (or higher) first? Or start with the client
> update?
>
> From: Mohammed Rafi K C [mailto:rkavunga at redhat.com]
> Sent: Friday, February 24, 2017 07:02
> To: Graaf, Max de; gluster-users at gluster.org
> Subject: Re: [Gluster-users] self heal failed, on /
>
> On 02/23/2017 12:18 PM, max.degraaf at kpn.com wrote:
>
>     Hi,
>
>      
>
>     We have a 4-node glusterfs setup that seems to be running without
>     any problems. We can’t find any problems with replication or
>     anything else.
>
>      
>
>     We also have 4 machines running the glusterfs client. On all 4
>     machines we see the following error in the logs at random moments:
>
>      
>
>     [2017-02-23 00:04:33.168778] I
>     [afr-self-heal-common.c:2869:afr_log_self_heal_completion_status]
>     0-aab-replicate-0:  metadata self heal  is successfully
>     completed,   metadata self heal from source aab-client-0 to
>     aab-client-1,  aab-client-2,  aab-client-3,  metadata - Pending
>     matrix:  [ [ 0 0 0 0 ] [ 0 0 0 0 ] [ 0 0 0 0 ] [ 0 0 0 0 ] ], on /
>
>     [2017-02-23 00:09:34.431089] E
>     [afr-self-heal-common.c:2869:afr_log_self_heal_completion_status]
>     0-aab-replicate-0:  metadata self heal  failed,   on /
>
>     [2017-02-23 00:14:34.948975] I
>     [afr-self-heal-common.c:2869:afr_log_self_heal_completion_status]
>     0-aab-replicate-0:  metadata self heal  is successfully
>     completed,   metadata self heal from source aab-client-0 to
>     aab-client-1,  aab-client-2,  aab-client-3,  metadata - Pending
>     matrix:  [ [ 0 0 0 0 ] [ 0 0 0 0 ] [ 0 0 0 0 ] [ 0 0 0 0 ] ], on /
>
>      
>
>     The content within the glusterfs filesystems is rather static, with
>     only minor changes. This “self heal failed” message is printed at
>     random moments in the logs on the glusterfs clients, even at moments
>     when nothing has changed within the glusterfs filesystem. When it is
>     printed, it’s never on multiple servers at the same time. What we
>     also don’t understand: the error indicates self heal failed on root
>     “/”, yet the root of this glusterfs mount contains only 2 folders
>     and no files are ever written at the root level.
>
>      
>
>     Any thoughts?
>
>
> From the logs, it looks like an older version of gluster, probably 3.5.
> Please confirm your glusterfs version. That version is pretty old and
> may have reached End of Life. Also, this is AFR v1, whereas the latest
> stable version runs AFR v2. One way to see what is pending on “/” is
> sketched below.
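>
> As a hedged diagnostic sketch (the volume name “aab” is taken from the
> log prefix 0-aab-replicate-0, and /bricks/aab is a placeholder for your
> real brick path), you can inspect the AFR changelog xattrs on each
> brick root:
>
>     # run on each server, against the brick directory (not the client mount)
>     getfattr -d -m . -e hex /bricks/aab
>     # non-zero trusted.afr.aab-client-N values mean pending heals toward brick N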
>
> So I would suggest you upgrade to a later version, maybe 3.8.
>
> If you still want to stay on this version, I can give it a try. Let me
> know the version, volume info, and volume status (the commands below
> will gather those). Still, I would suggest upgrading ;)
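>
> To collect that information (again assuming the volume is named “aab”),
> running the following on any server node should be enough:
>
>     glusterfs --version | head -1    # exact gluster version
>     gluster volume info aab          # volume layout and options
>     gluster volume status aab        # brick and process status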
>
>
> Regards
> Rafi KC
>
> _______________________________________________
> Gluster-users mailing list
> Gluster-users at gluster.org
> http://lists.gluster.org/mailman/listinfo/gluster-users
>
