[Gluster-users] Rsync on bricks filesystem?
Yannick Perret
yannick.perret at liris.cnrs.fr
Wed Mar 30 09:27:30 UTC 2016
Hello,
we have some replica-2 volumes and they work fine at the moment.
For some of the volumes I need to set up daily incremental backups (on
another filesystem, which doesn't need to be on glusterfs).
As 'rsync' or similar is not very efficient on glusterfs volumes, I tried
to run rsync directly between the brick filesystem and another filesystem
(both on the same storage server(s)). At the moment, running rsync through
FUSE or NFS access on the storage server is respectively ~80x / ~20x slower
than direct access to the brick (on a ~80 GB volume with many small files).
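For reference, the direct copy I am testing looks roughly like this (paths
and volume name are just examples; I exclude the .glusterfs metadata
directory since I only want the user-visible files):

  rsync -aHAX --delete \
        --exclude='/.glusterfs' \
        /data/glusterfs/myvol/brick1/ \
        /backup/myvol/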
Assuming the glusterfs volume is "clean" (no pending heals…), are there any
problems/drawbacks with doing that? Or is there another, cleaner solution?
And what is the best way to check that a volume is "sane"? Parsing the
output of 'gluster volume heal XX info' seems fragile if the output format
evolves in the future, and I can't see specific return codes for that.
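Right now the only check I have is something like the following, which is
exactly the kind of parsing that worries me (volume name is an example):

  if gluster volume heal myvol info | \
       awk '/^Number of entries:/ { n += $4 } END { exit (n > 0) }'
  then
      echo "no pending heal entries, safe to back up the brick"
  fi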
Thanks.
Regards,
--
Y.