[Gluster-users] Rsync in place of heal after brick failure

Jim Kinney jim.kinney at gmail.com
Mon Apr 1 20:23:22 UTC 2019


Nice! I didn't use -H -X and the system had to do some cleanup.
I'll add this to my next migration process as I move 120TB to new hard
drives.
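
For anyone doing the same, one way to watch how much cleanup the self-heal
daemon still has queued after a copy like this (volume name gv0 taken from
Tom's example below):

# list entries on each brick that still need healing
gluster volume heal gv0 info
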
On Mon, 2019-04-01 at 14:27 -0400, Tom Fite wrote:
> Hi all,
> I have a very large (65 TB) brick in a replica 2 volume that needs to
> be re-copied from scratch. A heal would take a very long time and
> degrade performance on the volume, so I investigated using rsync to do
> the brunt of the work.
> 
> The command:
> 
> rsync -av -H -X --numeric-ids --progress server1:/data/brick1/gv0
> /data/brick1/
> 
> Running with -H ensures that the hard links in .glusterfs are
> preserved, and -X preserves all of Gluster's extended attributes.
> 
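> Assuming the copy finished cleanly, a quick spot check on the
> destination brick (root is needed to read the trusted.* attributes; the
> file path below is just a placeholder):
> 
> # dump Gluster's extended attributes (trusted.gfid and friends) in hex
> getfattr -m . -d -e hex /data/brick1/gv0/path/to/some/file
> # every regular file should show a link count >= 2, matching its hard
> # link under .glusterfs/
> stat -c '%h %n' /data/brick1/gv0/path/to/some/file
> 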
> I've tested this in my test environment as follows (rough commands
> after the list):
> 
> 1. Stop glusterd and kill procs
> 2. Move brick volume to backup dir
> 3. Run rsync
> 4. Start glusterd
> 5. Observe gluster status
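> 
> Roughly, as commands (systemd assumed for service control; paths and
> volume name gv0 match my setup, adjust for yours):
> 
> # 1. stop the management daemon and any remaining brick processes
> systemctl stop glusterd
> pkill glusterfsd
> # 2. move the old brick contents aside
> mv /data/brick1/gv0 /data/brick1/gv0.bak
> # 3. copy the brick from the healthy replica
> rsync -av -H -X --numeric-ids --progress server1:/data/brick1/gv0 /data/brick1/
> # 4. bring glusterd back up
> systemctl start glusterd
> # 5. confirm all bricks show as online
> gluster volume status gv0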
> 
> All appears to be working correctly. Gluster status reports all
> bricks online, all data is accessible in the volume, and I don't see
> any errors in the logs.
> 
> Anybody else have experience trying this?
> 
> Thanks
> -Tom
> 
> _______________________________________________
> Gluster-users mailing list
> Gluster-users at gluster.org
> https://lists.gluster.org/mailman/listinfo/gluster-users
-- 
James P. Kinney III

Every time you stop a school, you will have to build a jail. What you
gain at one end you lose at the other. It's like feeding a dog on his
own tail. It won't fatten the dog.
- Speech 11/23/1900 Mark Twain

http://heretothereideas.blogspot.com/
