<html><body><div style="font-family: times new roman, new york, times, serif; font-size: 12pt; color: #000000"><div><br></div><div><br></div><hr id="zwchr"><div style="color:#000;font-weight:normal;font-style:normal;text-decoration:none;font-family:Helvetica,Arial,sans-serif;font-size:12pt;" data-mce-style="color: #000; font-weight: normal; font-style: normal; text-decoration: none; font-family: Helvetica,Arial,sans-serif; font-size: 12pt;"><b>From: </b>"Poornima Gurusiddaiah" <pgurusid@redhat.com><br><b>To: </b>"Tom Fite" <tomfite@gmail.com><br><b>Cc: </b>"Gluster-users" <gluster-users@gluster.org><br><b>Sent: </b>Tuesday, April 9, 2019 9:53:02 AM<br><b>Subject: </b>Re: [Gluster-users] Rsync in place of heal after brick failure<br><div><br></div><div dir="ltr"><div dir="ltr"><div dir="auto"><div><br><div><br></div><div class="gmail_quote"><div dir="ltr" class="gmail_attr">On Mon, Apr 8, 2019, 6:31 PM Tom Fite <<a href="mailto:tomfite@gmail.com" target="_blank" data-mce-href="mailto:tomfite@gmail.com">tomfite@gmail.com</a>> wrote:<br></div><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex" data-mce-style="margin: 0px 0px 0px 0.8ex; border-left: 1px solid #cccccc; padding-left: 1ex;"><div dir="ltr"><div dir="ltr">Thanks for the idea, Poornima. Testing shows that xfsdump and xfsrestore are much faster than rsync since they handle small files much better. I don't have extra space to store the dumps, but I was able to figure out how to pipe the xfsdump and restore via ssh. For anyone else who's interested:<div><br></div><div>On the source machine, run:</div><div><br></div><div>xfsdump -J - /dev/mapper/[vg]-[brick] | ssh root@[destination fqdn] xfsrestore -J - [/path/to/brick]<br></div></div></div></blockquote></div></div><div dir="auto"><br></div><div dir="auto">That's great. Is it possible for you to write a short summary on this in your blog or in the Gluster/blogs [1]? 
The summary would be very helpful for other users as well. It would also be great if you could include details on the approaches you explored and the time each would take for the 65 TB of data. Thanks in advance.</div><div dir="auto"><br></div><div dir="auto">We will also see how we could incorporate this into replace brick/offline migration.</div><div dir="auto"><br></div><div dir="auto">[1] <a href="https://gluster.github.io/devblog/write-for-gluster" target="_blank" data-mce-href="https://gluster.github.io/devblog/write-for-gluster">https://gluster.github.io/devblog/write-for-gluster</a><br data-mce-bogus="1"></div><div dir="auto"><br></div><div dir="auto">Thanks,</div><div dir="auto">Poornima</div><div dir="auto"><br></div><div dir="auto"><div class="gmail_quote"><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex" data-mce-style="margin: 0px 0px 0px 0.8ex; border-left: 1px solid #cccccc; padding-left: 1ex;"><div dir="ltr"><div dir="ltr"><div><br></div><div><br></div><div>-Tom</div></div></div><br><div class="gmail_quote"><div dir="ltr" class="gmail_attr">On Mon, Apr 1, 2019 at 9:56 PM Poornima Gurusiddaiah <<a href="mailto:pgurusid@redhat.com" rel="noreferrer" target="_blank" data-mce-href="mailto:pgurusid@redhat.com">pgurusid@redhat.com</a>> wrote:<br></div><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex" data-mce-style="margin: 0px 0px 0px 0.8ex; border-left: 1px solid #cccccc; padding-left: 1ex;"><div dir="auto">You could also try xfsdump and xfsrestore if your brick filesystem is xfs and the destination disk can be attached locally. 
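<div dir="auto"><br></div><div dir="auto">For example, with the new disk mounted locally, the sequence could look something like this (a rough sketch only; the bracketed device and paths are placeholders, and glusterd is assumed to be managed by systemd, so adjust the service commands to your distribution):</div><div dir="auto"><br></div><div dir="auto"># stop gluster so the brick is quiescent during the copy</div><div dir="auto">systemctl stop glusterd && pkill glusterfsd</div><div dir="auto"># stream the dump straight into xfsrestore; -J skips the dump inventory</div><div dir="auto">xfsdump -J - /dev/mapper/[vg]-[brick] | xfsrestore -J - [/path/to/new/brick]</div><div dir="auto"># restart gluster and confirm the brick comes back online</div><div dir="auto">systemctl start glusterd && gluster volume status</div><div dir="auto"><br></div>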
This will be much faster.<div dir="auto"><br></div><div dir="auto">Regards,</div><div dir="auto">Poornima</div></div><br><div class="gmail_quote"><div dir="ltr" class="gmail_attr">On Tue, Apr 2, 2019, 12:05 AM Tom Fite <<a href="mailto:tomfite@gmail.com" rel="noreferrer noreferrer" target="_blank" data-mce-href="mailto:tomfite@gmail.com">tomfite@gmail.com</a>> wrote:<br></div><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex" data-mce-style="margin: 0px 0px 0px 0.8ex; border-left: 1px solid #cccccc; padding-left: 1ex;"><div dir="ltr"><div dir="ltr">Hi all,<div><br></div><div>I have a very large (65 TB) brick in a replica 2 volume that needs to be re-copied from scratch. A heal will take a very long time with performance degradation on the volume, so I investigated using rsync to do the brunt of the work.</div><div><br></div><div>The command:</div><div><br></div><div>rsync -av -H -X --numeric-ids --progress server1:/data/brick1/gv0 /data/brick1/<br></div><div><br></div><div>Running with -H ensures that the hard links in .glusterfs are preserved, and -X preserves all of gluster's extended attributes.</div><div><br></div><div>I've tested this in my test environment as follows:</div><div><br></div><div>1. Stop glusterd and kill procs</div><div>2. Move brick volume to backup dir</div><div>3. Run rsync</div><div>4. Start glusterd</div><div>5. Observe gluster status</div></div></div></blockquote></div></blockquote></div></blockquote><div><br></div><div>Just want to add one step to quickly test this.<br></div><div>You can kill the other brick (the one you did not touch) and then try to access your volume. This will ensure that all the file operations are falling on this
<br></div><div><br></div><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex" data-mce-style="margin: 0px 0px 0px 0.8ex; border-left: 1px solid #cccccc; padding-left: 1ex;"><div class="gmail_quote"><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex" data-mce-style="margin: 0px 0px 0px 0.8ex; border-left: 1px solid #cccccc; padding-left: 1ex;"><div class="gmail_quote"><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex" data-mce-style="margin: 0px 0px 0px 0.8ex; border-left: 1px solid #cccccc; padding-left: 1ex;"><div dir="ltr"><div dir="ltr"><div><br></div><div>All appears to be working correctly. Gluster status reports all bricks online, all data is accessible in the volume, and I don't see any errors in the logs.</div><div><br></div><div>Anybody else have experience trying this?</div><div><br></div><div>Thanks</div><div>-Tom</div></div></div>_______________________________________________<br> Gluster-users mailing list<br><a href="mailto:Gluster-users@gluster.org" rel="noreferrer noreferrer noreferrer" target="_blank" data-mce-href="mailto:Gluster-users@gluster.org">Gluster-users@gluster.org</a><br><a href="https://lists.gluster.org/mailman/listinfo/gluster-users" rel="noreferrer noreferrer noreferrer noreferrer" target="_blank" data-mce-href="https://lists.gluster.org/mailman/listinfo/gluster-users">https://lists.gluster.org/mailman/listinfo/gluster-users</a><br data-mce-bogus="1"></blockquote></div></blockquote></div></blockquote></div></div></div></div></div><br>_______________________________________________<br>Gluster-users mailing list<br>Gluster-users@gluster.org<br>https://lists.gluster.org/mailman/listinfo/gluster-users</div><div><br></div></div></body></html>