[Gluster-users] Rsync in place of heal after brick failure
Tom Fite
tomfite at gmail.com
Mon Apr 8 13:01:08 UTC 2019
Thanks for the idea, Poornima. Testing shows that xfsdump and xfsrestore are
much faster than rsync, since they handle small files much better. I don't
have extra space to store the dumps, but I was able to pipe the dump
straight into the restore over ssh. For anyone else who's interested:
On the source machine, run:

xfsdump -J - /dev/mapper/[vg]-[brick] | ssh root@[destination fqdn] "xfsrestore -J - [/path/to/brick]"
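
For reference, here's a fleshed-out sketch of that pipeline. The device
path, hostname, and brick path below are hypothetical, -l 0 requests a full
(level 0) dump, and pv is an optional extra for a throughput readout:

  # Full dump to stdout (-J skips the dump inventory update), piped
  # through pv for progress stats, then restored into the empty brick
  # directory on the destination host.
  xfsdump -l 0 -J - /dev/mapper/vg0-brick1 \
    | pv \
    | ssh root@server2.example.com "xfsrestore -J - /data/brick1/gv0"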
-Tom
On Mon, Apr 1, 2019 at 9:56 PM Poornima Gurusiddaiah <pgurusid at redhat.com>
wrote:
> You could also try xfsdump and xfsrestore if your brick filesystem is xfs
> and the destination disk can be attached locally? This will be much faster.
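>
> For instance, with a spare disk mounted locally (paths here are
> hypothetical), the dump can go to a file and be restored from it:
>
>   # Full dump of the brick filesystem to a file on the spare disk
>   xfsdump -l 0 -J -f /mnt/spare/brick1.xfsdump /dev/mapper/vg0-brick1
>   # Restore the dump into the new, empty brick directory
>   xfsrestore -J -f /mnt/spare/brick1.xfsdump /data/brick1/gv0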
>
> Regards,
> Poornima
>
> On Tue, Apr 2, 2019, 12:05 AM Tom Fite <tomfite at gmail.com> wrote:
>
>> Hi all,
>>
>> I have a very large (65 TB) brick in a replica 2 volume that needs to be
>> re-copied from scratch. A heal would take a very long time and degrade
>> performance on the volume, so I investigated using rsync to do the brunt
>> of the work.
>>
>> The command:
>>
>> rsync -av -H -X --numeric-ids --progress \
>>     server1:/data/brick1/gv0 /data/brick1/
>>
>> Running with -H ensures that the hard links in .glusterfs are preserved,
>> and -X preserves all of Gluster's extended attributes.
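>>
>> As a quick spot check after the copy (the file path here is
>> hypothetical), the trusted.* attributes reported on the source and
>> destination bricks should match:
>>
>>   # Dump all extended attributes in hex; run on both bricks and compare
>>   getfattr -d -m . -e hex /data/brick1/gv0/some/file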
>>
>> I've tested this in my test environment as follows (a shell sketch of
>> the steps follows the list):
>>
>> 1. Stop glusterd and kill procs
>> 2. Move brick volume to backup dir
>> 3. Run rsync
>> 4. Start glusterd
>> 5. Observe gluster status
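>>
>> As a rough shell sketch of those steps (volume name, brick paths, and
>> source host are hypothetical; the kill step assumes any leftover brick
>> and self-heal processes are named glusterfsd/glusterfs):
>>
>>   # 1. Stop the management daemon and any remaining gluster processes
>>   systemctl stop glusterd
>>   pkill glusterfsd
>>   pkill glusterfs
>>   # 2. Move the old brick contents aside
>>   mv /data/brick1/gv0 /data/brick1/gv0.bak
>>   # 3. Copy the brick from the healthy replica, preserving hard links
>>   #    (-H) and extended attributes (-X)
>>   rsync -av -H -X --numeric-ids --progress server1:/data/brick1/gv0 /data/brick1/
>>   # 4. Start the management daemon again
>>   systemctl start glusterd
>>   # 5. Observe gluster status (see the check further down)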
>>
>> All appears to be working correctly. Gluster status reports all bricks
>> online, all data is accessible in the volume, and I don't see any errors in
>> the logs.
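>>
>> For the record, that check amounts to something like the following
>> (volume name hypothetical):
>>
>>   # Every brick should show as online
>>   gluster volume status gv0
>>   # Should report zero (or quickly shrinking) entries left to heal
>>   gluster volume heal gv0 info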
>>
>> Anybody else have experience trying this?
>>
>> Thanks
>> -Tom