[Gluster-users] Gluster Self Heal

Bobby Jacob bobby.jacob at alshaya.com
Tue Jul 9 06:54:20 UTC 2013


OK, so is there any workaround? I have redeployed GlusterFS 3.3.1 and kept it really simple.

Type: Replicate
Volume ID: 3e002989-6c9f-4f83-9bd5-c8a3442d8721
Status: Started
Number of Bricks: 1 x 2 = 2
Transport-type: tcp
Bricks:
Brick1: KWTTESTGSNODE002:/mnt/cloudbrick
Brick2: ZAJILTESTGSNODE001:/mnt/cloudbrick
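
For reference, on a 3.3.x replicate volume like this, the pending heal state can be inspected and a full self-heal crawl triggered from the CLI once both bricks are reachable again. A minimal sketch, assuming the volume is named "cloudvol" (the Volume Name line is not shown above, so substitute the real name):

    # list entries pending heal, and any in split-brain
    gluster volume heal cloudvol info
    gluster volume heal cloudvol info split-brain

    # trigger a full self-heal crawl of both bricks
    gluster volume heal cloudvol full

    # confirm both bricks and the self-heal daemon show as online
    gluster volume status cloudvol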


Thanks & Regards,
Bobby Jacob

-----Original Message-----
From: gluster-users-bounces at gluster.org [mailto:gluster-users-bounces at gluster.org] On Behalf Of Toby Corkindale
Sent: Tuesday, July 09, 2013 9:50 AM
To: gluster-users at gluster.org
Subject: Re: [Gluster-users] Gluster Self Heal

On 09/07/13 15:38, Bobby Jacob wrote:
> Hi,
>
> I have a 2-node gluster with 3 TB storage.
>
> 1) I believe the “glusterfsd” process is responsible for the self-healing 
> between the 2 nodes.
>
> 2) Due to a network error, the replication stopped, but the application 
> was still accessing the data from node1. When I manually try to start 
> the “glusterfsd” service, it does not start.
>
> Please advise on how I can maintain the integrity of the data so that 
> we have all the data in both locations.

There were some bugs in the self-heal daemon (SHD) present in 3.3.0 and 3.3.1. 
Our systems see the SHD crash out with segfaults quite often, and it does not recover.
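
A workaround that can help while the daemon is in that state (a sketch, again assuming a volume named "cloudvol" and the default log location) is to check whether glustershd is still alive and, if not, respawn it with "start ... force", which only restarts processes that are missing:

    # check whether the self-heal daemon is running on this node
    gluster volume status cloudvol
    ps aux | grep glustershd

    # the daemon's log is normally /var/log/glusterfs/glustershd.log

    # respawn any missing brick/self-heal processes without
    # touching the ones that are already running
    gluster volume start cloudvol force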

I reported this bug a long time ago, and it was fixed in trunk relatively quickly -- however, version 3.3.2 has still not been released, despite the fix having been made six months ago.

I find this quite disappointing.

T
_______________________________________________
Gluster-users mailing list
Gluster-users at gluster.org
http://supercolony.gluster.org/mailman/listinfo/gluster-users

