[Gluster-users] answer Re: AFR recovery question

Krishna Srinivas krishna at zresearch.com
Sat Oct 11 04:19:27 UTC 2008


Keith,

It was difficult to follow your mail. What is your setup? I understand
that when the destinations of symlinks don't exist it causes problems
during self-heal, is that right? Do you have server-side AFR or
client-side?

Krishna

On Fri, Oct 10, 2008 at 4:16 PM, Keith Freedman <freedman at freeformit.com> wrote:
> Just a quick status update for anyone who cares.
>
> Once the find seemed to work properly for a few directories, I
> remounted both servers with the proper AFR config and they seem to be
> working just fine: auto-healing serverB as appropriate, and server1
> getting updated when a file on serverB is updated.
>
> :)
>
>
> At 03:22 AM 10/10/2008, Keith Freedman wrote:
>>No one answered me, so I'll just report my findings:
>>
>>Server1 and serverB, both full of data. Gluster 1.4pre5, Fedora Core 9.
>>The disk on serverB crashed; lost everything. Re-installed and
>>copied over my AFR config from server1, changing IP addresses as appropriate.
>>
>>I'm using
>>    find /home/XXXX -type f -print0 | xargs -0 head -c1 > /dev/null
>>to auto-heal them, and it seems to be going just fine.
>>
>>Once that's finished, I'll re-add serverB to server1's AFR config, and
>>I presume it'll be fine.
>>
>>Anyway, it was a minor irritation, and overall the auto-healing, once
>>going, has been a lifesaver.
>>
>>Keith
>>
>>At 06:21 AM 10/9/2008, Keith Freedman wrote:
>> >I have 2 servers that AFR each other.
>> >
>> >One of them suffered a drive failure and is being rebuilt.
>> >
>> >The question is: what will happen if I just mount the empty drive
>> >back as the AFR node?
>> >
>> >Will it just start grabbing the data from the other server (which is
>> >exactly what I want), or
>> >will it start deleting the data from the other server (which would be
>> >terribly bad)?
>> >
>> >Another thought was, on the currently working server with the good
>> >data, to disable the remote AFR node in its config (so it's only using
>> >AFR on itself), then leave the other machine's config as is and turn it on.
>> >This way I can be sure that the node with the data won't go nuts and
>> >start deleting, but updates to it will still get replicated to the
>> >other machine.
>> >
>> >This particular set is running 1.4pre5, if that changes the answer.
>> >
>> >Thanks,
>> >Keith
>> >
>> >
>> >_______________________________________________
>> >Gluster-users mailing list
>> >Gluster-users at gluster.org
>> >http://zresearch.com/cgi-bin/mailman/listinfo/gluster-users
>>
>>
>
>
>
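Keith's idea of "disabling the remote AFR node" on the good server amounts to editing the AFR translator's subvolume list in the volfile. A rough sketch in the legacy 1.x volfile format follows; the volume names, hostname, and export path here are hypothetical, not Keith's actual config:

```
# Client-side AFR over one local brick and one remote brick (sketch).
volume local
  type storage/posix
  option directory /data/export     # hypothetical export path
end-volume

volume remote
  type protocol/client
  option transport-type tcp
  option remote-host serverB        # the rebuilt peer
  option remote-subvolume brick
end-volume

volume afr
  type cluster/afr
  # To run "AFR on itself only", as Keith describes, shorten this
  # line to:  subvolumes local
  subvolumes local remote
end-volume
```

The point of the one-sided variant is safety: with only the local subvolume listed, nothing the empty peer does can cause deletions on the good copy, while the peer's own config still replicates writes back.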

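The auto-heal trick in the quoted mails (read one byte of every file through the mount so AFR inspects each replica and copies missing files to the rebuilt node) can be sketched as a small script. The function name, demo directory, and file name below are stand-ins for illustration, not Keith's actual paths:

```shell
# Trigger AFR self-heal by reading the first byte of every regular
# file under a directory. Reads must go through the glusterfs mount,
# where AFR compares replicas and heals the one that is missing data.
heal_tree() {
    # -print0 / -0 keep filenames with spaces or newlines intact;
    # head -c1 reads a single byte, enough to trigger the heal check.
    find "$1" -type f -print0 | xargs -0 head -c1 > /dev/null
}

# Demonstrated on a throwaway tree; in practice point it at the AFR
# mountpoint (the /home/XXXX of the original mail).
demo=$(mktemp -d)
printf 'hello' > "$demo/file with spaces.txt"
result=$(heal_tree "$demo" && echo ok)
rm -rf "$demo"
echo "$result"
```

Note that `-type f` skips symlinks, which is relevant to Krishna's question: dangling symlinks never get read by this pipeline, so they are not healed by it.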



More information about the Gluster-users mailing list