[Gluster-users] Rsync in place of heal after brick failure

Strahil hunter86_bg at yahoo.com
Tue Apr 9 15:34:53 UTC 2019


Correct me if I'm wrong, but I have been left with the impression that gluster heal is a multi-process, multi-connection operation and would benefit from a bonding mode like balance-alb.

I don't have much experience with xfsdump, but it looks like a single process using a single connection, and thus only LACP could be beneficial.
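
For what it's worth, the mode a bond is actually running can be checked from the kernel (assuming the bond is named bond0):

grep "Bonding Mode" /proc/net/bonding/bond0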

Am I wrong?

Best Regards,
Strahil Nikolov

On Apr 9, 2019 07:10, Aravinda <avishwan at redhat.com> wrote:
>
> On Mon, 2019-04-08 at 09:01 -0400, Tom Fite wrote: 
> > Thanks for the idea, Poornima. Testing shows that xfsdump and 
> > xfsrestore are much faster than rsync since they handle small files 
> > much better. I don't have extra space to store the dumps, but I was 
> > able to figure out how to pipe xfsdump and xfsrestore via ssh. For 
> > anyone else who's interested: 
> > 
> > On source machine, run: 
> > 
> > xfsdump -J - /dev/mapper/[vg]-[brick] | ssh root@[destination fqdn] 
> > xfsrestore -J - [/path/to/brick] 
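> > 
> > If the link between the servers is the bottleneck, the same pipe can 
> > be compressed in flight. A sketch, untested here (gzip -1 keeps the 
> > CPU cost low): 
> > 
> > xfsdump -J - /dev/mapper/[vg]-[brick] | gzip -1 | \
> >   ssh root@[destination fqdn] 'gunzip | xfsrestore -J - [/path/to/brick]' 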
>
> Nice. Thanks for sharing 
>
> > 
> > -Tom 
> > 
> > On Mon, Apr 1, 2019 at 9:56 PM Poornima Gurusiddaiah < 
> > pgurusid at redhat.com> wrote: 
> > > You could also try xfsdump and xfsrestore if your brick filesystem 
> > > is xfs and the destination disk can be attached locally. This will 
> > > be much faster. 
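> > > 
> > > A minimal sketch of the locally attached case (device and target 
> > > path are placeholders): 
> > > 
> > > xfsdump -J - /dev/mapper/[vg]-[brick] | xfsrestore -J - /mnt/[new-brick] 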
> > > 
> > > Regards, 
> > > Poornima 
> > > 
> > > On Tue, Apr 2, 2019, 12:05 AM Tom Fite <tomfite at gmail.com> wrote: 
> > > > Hi all, 
> > > > 
> > > > I have a very large (65 TB) brick in a replica 2 volume that 
> > > > needs to be re-copied from scratch. A heal will take a very long 
> > > > time and degrade performance on the volume, so I investigated 
> > > > using rsync to do the brunt of the work. 
> > > > 
> > > > The command: 
> > > > 
> > > > rsync -av -H -X --numeric-ids --progress server1:/data/brick1/gv0 
> > > > /data/brick1/ 
> > > > 
> > > > Running with -H ensures that the hard links in .glusterfs are 
> > > > preserved, and -X preserves gluster's extended attributes (run 
> > > > rsync as root on both ends so the trusted.* namespace can be 
> > > > copied). 
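> > > > 
> > > > To spot-check that the trusted.* attributes survived the copy, 
> > > > something like this can be run as root on the destination (the 
> > > > file path is illustrative); every file on a healthy brick should 
> > > > show a trusted.gfid entry: 
> > > > 
> > > > getfattr -d -m . -e hex /data/brick1/gv0/path/to/file 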
> > > > 
> > > > I've tested this on my test environment as follows (rough 
> > > > commands are sketched after the list): 
> > > > 
> > > > 1. Stop glusterd and kill procs 
> > > > 2. Move brick volume to backup dir 
> > > > 3. Run rsync 
> > > > 4. Start glusterd 
> > > > 5. Observe gluster status 
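> > > > 
> > > > Roughly, as commands (service, volume, and path names taken from 
> > > > above; adjust as needed): 
> > > > 
> > > > systemctl stop glusterd 
> > > > pkill glusterfsd; pkill glusterfs 
> > > > mv /data/brick1/gv0 /data/brick1/gv0.bak 
> > > > rsync -av -H -X --numeric-ids --progress \ 
> > > >   server1:/data/brick1/gv0 /data/brick1/ 
> > > > systemctl start glusterd 
> > > > gluster volume status 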
> > > > 
> > > > All appears to be working correctly. Gluster status reports all 
> > > > bricks online, all data is accessible in the volume, and I don't 
> > > > see any errors in the logs. 
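> > > > 
> > > > One more check that helps here (volume name gv0 assumed from the 
> > > > brick path) is confirming nothing is still pending heal: 
> > > > 
> > > > gluster volume heal gv0 info 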
> > > > 
> > > > Anybody else have experience trying this? 
> > > > 
> > > > Thanks 
> > > > -Tom 
> > 
> -- 
> regards 
> Aravinda 
>
> _______________________________________________ 
> Gluster-users mailing list 
> Gluster-users at gluster.org 
> https://lists.gluster.org/mailman/listinfo/gluster-users 

