<div dir="ltr"><div dir="ltr">Thanks for the idea, Poornima. Testing shows that xfsdump and xfsrestore is much faster than rsync since it handles small files much better. I don't have extra space to store the dumps but I was able to figure out how to pipe the xfsdump and restore via ssh. For anyone else that's interested:<div><br></div><div>On source machine, run:</div><div><br></div><div>xfsdump -J - /dev/mapper/[vg]-[brick] | ssh root@[destination fqdn] xfsrestore -J - [/path/to/brick]<br></div><div><br></div><div>-Tom</div></div></div><br><div class="gmail_quote"><div dir="ltr" class="gmail_attr">On Mon, Apr 1, 2019 at 9:56 PM Poornima Gurusiddaiah <<a href="mailto:pgurusid@redhat.com">pgurusid@redhat.com</a>> wrote:<br></div><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex"><div dir="auto">You could also try xfsdump and xfsrestore if you brick filesystem is xfs and the destination disk can be attached locally? This will be much faster.<div dir="auto"><br></div><div dir="auto">Regards,</div><div dir="auto">Poornima</div></div><br><div class="gmail_quote"><div dir="ltr" class="gmail_attr">On Tue, Apr 2, 2019, 12:05 AM Tom Fite <<a href="mailto:tomfite@gmail.com" rel="noreferrer" target="_blank">tomfite@gmail.com</a>> wrote:<br></div><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex"><div dir="ltr"><div dir="ltr">Hi all,<div><br></div><div>I have a very large (65 TB) brick in a replica 2 volume that needs to be re-copied from scratch. A heal will take a very long time with performance degradation on the volume so I investigated using rsync to do the brunt of the work.</div><div><br></div><div>The command:</div><div><br></div><div>rsync -av -H -X --numeric-ids --progress server1:/data/brick1/gv0 /data/brick1/<br></div><div><br></div><div>Running with -H assures that the hard links in .glusterfs are preserved, and -X preserves all of gluster's extended attributes.</div><div><br></div><div>I've tested this on my test environment as follows:</div><div><br></div><div>1. Stop glusterd and kill procs</div><div>2. Move brick volume to backup dir</div><div>3. Run rsync</div><div>4. Start glusterd</div><div>5. Observe gluster status</div><div><br></div><div>All appears to be working correctly. Gluster status reports all bricks online, all data is accessible in the volume, and I don't see any errors in the logs.</div><div><br></div><div>Anybody else have experience trying this?</div><div><br></div><div>Thanks</div><div>-Tom</div></div></div>