[Gluster-users] How to re-sync

Chad ccolumbu at hotmail.com
Wed Mar 10 17:28:29 UTC 2010


I guess I just see healing differently than the development team does.
I hope you will consider adding an option that allows switching between the current automatic self-healing and the manual self-healing mode I outline below.

How I see "manual self-healing" working:
I look at healing much the way mdadm looks at disk drives in a RAID 5 array.

1. When a server goes down it should be "flagged as faulty" and notifications should be sent out: to all the clients, telling them not to use the 
downed server (I hope this removes the small hang delay for clients 2-N), and via e-mail to the sysadmin.
2. The down server is then taken "out of service" but the rest of the array continues in "degraded mode".
3. When the down server comes back up and starts glusterfsd, it remains "faulty" and no client can use it.
4. A sysadmin comes in and "fixes" the problem that took the server down in the first place.
5. A sysadmin changes the "faulty flag" to a "resync flag" ("resync flag" tells the clients to write to the machine, but not read from it while it recovers).
6. A sysadmin then runs a re-sync (an "ls -alR" crawl over the mount; see the example after this list).
7. Once the re-sync completes a sysadmin runs a "re-add" command removing the "faulty flag" and the clients can begin using the server again.
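
As a rough sketch of what step 6 might look like with today's tools (the
mount point below is just an example), the sysadmin would force a full
self-heal crawl from a client:

  # walk the entire mount so every file gets stat'ed and self-healed
  ls -alR /mnt/gluster > /dev/null
  # or, for very large trees, an equivalent find-based crawl
  find /mnt/gluster -noleaf -print0 | xargs -0 stat > /dev/null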

I feel that this method removes the chance that a server goes down, gets out of sync, recovers on its own (or through automated tools), and starts 
providing services with some old data.
If the server goes down in the middle of the night and nagios trips a reboot, the server comes back up with no sysadmin logged in to run the 
"ls -alR" that would re-sync it.

^C



Tejas N. Bhise wrote:
> Ed, Chad, Stephen,
> 
> We believe we have fixed all (known) problems with self-heal 
> in the latest releases, and hence we would be very interested in 
> getting diagnostics if you can reproduce the problem or see it 
> again frequently. 
> 
> Please collect the logs by running the client and servers with 
> log level TRACE and then reproducing the problem.
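> 
> For example (the volfile paths, log file names and mount point below are 
> only illustrative), the client and server processes can be started with 
> something like:
> 
> # glusterfs --volfile=/etc/glusterfs/glusterfs.vol --log-level=TRACE \
>       --log-file=/var/log/glusterfs/client-trace.log /mnt/gluster
> # glusterfsd --volfile=/etc/glusterfs/glusterfsd.vol --log-level=TRACE \
>       --log-file=/var/log/glusterfs/server-trace.log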
> 
> Also collect the backend extended attributes of the file on 
> both servers before self-heal was triggered. This command can be 
> used to get that info:
> 
> # getfattr -d -m '.*' -e hex <filename>
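> 
> On a replicated setup the attributes of interest are usually the 
> trusted.afr.* changelog keys. Run the command against the file's path on 
> each server's backend export rather than on the client mount; for 
> example (the backend path below is only illustrative):
> 
> # getfattr -d -m '.*' -e hex /data/export/some/dir/file.txt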
> 
> Thank you for your help in debugging any new problem that shows up.
> Feel free to ask if you have any queries about this.
> 
> Regards,
> Tejas.
> 
> 
> 
> 
> ----- Original Message -----
> From: "Ed W" <lists at wildgooses.com>
> To: "Gluster Users" <gluster-users at gluster.org>
> Sent: Monday, March 8, 2010 5:22:40 PM GMT +05:30 Chennai, Kolkata, Mumbai, New Delhi
> Subject: Re: [Gluster-users] How to re-sync
> 
> On 07/03/2010 16:02, Chad wrote:
>> Is there a gluster developer out there working on this problem 
>> specifically?
>> Could we add some kind of "sync done" command that has to be run 
>> manually and, until it is, the failed node is not used?
>> The bottom line for me is that I would much rather run on a 
>> performance-degraded array until a sysadmin intervenes than lose any 
>> data.
> 
> I'm only in evaluation mode at the moment, but resolving split brain is 
> something that terrifies me, and I have been giving some thought to how 
> it needs to be done with various solutions.
> 
> In the case of gluster it really does seem very important to figure out 
> a reliable way to know when the system is fully synced again after an 
> outage.  For example, a not-unrealistic situation if you were doing a 
> bunch of upgrades would be:
> 
> - Turn off server 1 (S1) and upgrade it; server 2 (S2) deviates from S1
> - Turn on server 1 and expect it to sync all the changes made while it 
> was down - the key expectation here is that S1 only receives changes 
> from S2 and never pushes its own
> - Some event marks the sync as complete so that we can turn off S2 and 
> upgrade it (a rough command-level sketch follows)
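> 
> As a rough command-level sketch of that sequence (server names, the init 
> script and the mount point are only examples, not a tested procedure):
> 
>   # on S1: stop glusterfsd before the upgrade
>   /etc/init.d/glusterfsd stop
>   # ... perform the upgrade on S1, then bring it back ...
>   /etc/init.d/glusterfsd start
>   # on a client: crawl the whole mount so AFR self-heals S1 from S2
>   ls -alR /mnt/gluster > /dev/null
>   # only once the crawl completes cleanly, repeat the same steps for S2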
> 
> The problem if you don't do the sync is that when you turn off S2, S1 
> doesn't know about the changes made while it was down and serves up 
> incomplete information.  Split brain can occur where a file is changed 
> on both servers while they couldn't talk to each other, and then changes 
> must be lost...
> 
> I suppose a really cool translator could be written to track changes 
> made to an AFR group while one member is missing, and then the 
> out-of-sync file list would be resupplied once that member comes back, 
> in order to speed up replication... Kind of a lot of work for a small 
> improvement, but it could be interesting to create...
> 
> Perhaps some dev has some other suggestions on a "procedure" to follow 
> to avoid split brain in the situation that we need to turn off all 
> servers one by one in an AFR group?
> 
> Thanks
> 
> Ed W
> 
> 
> _______________________________________________
> Gluster-users mailing list
> Gluster-users at gluster.org
> http://gluster.org/cgi-bin/mailman/listinfo/gluster-users


