[Gluster-users] Backup of 48126852 files / 9.1 TB data
mathieu.chateau at lotp.fr
Sun Feb 14 10:11:33 UTC 2016
On the gluster client and on the servers, did you disable atime and related mount options?
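For example, on the brick filesystem something like this in /etc/fstab (device and mount point are only placeholders for your setup):

    # hypothetical fstab entry for an XFS brick; noatime/nodiratime avoid
    # an inode write on every read, inode64 is commonly used for large bricks
    /dev/sdb1  /bricks/brick1  xfs  noatime,nodiratime,inode64  0 0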
Did you check for a network bottleneck? You are now using the network twice:
- once to read the data through the glusterfs mount,
- once to push the data to the remote backup destination.
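You can get a quick picture while the backup is running with something like this (the interface name is just an example, use whichever one carries the gluster traffic):

    # per-interface throughput, one sample per second
    sar -n DEV 1
    # or a live per-connection view
    iftop -i eth0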
I am using rsnapshot on my side, so unchanged files are hard-linked instead of
copied again; that may be faster than a true full copy.
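Roughly what that does under the hood, if you want to try it by hand with plain rsync (all paths below are just examples):

    # new/changed files are copied, unchanged files become hard links
    # pointing at the previous day's backup
    rsync -aH --link-dest=/backup/2016-02-13 /mnt/glusterfs/ /backup/2016-02-14/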
Problems also show up with folders containing a lot of small files.
Which gluster version are you using?
I also see a memory leak on the server doing the rsync (a glusterfs client
leak), but it is a known issue and a patch has been pushed for version 3.7.8.
2016-02-14 10:56 GMT+01:00 Nico Schottelius <
nico-gluster-users at schottelius.org>:
> Hello everyone,
> we have a 2 brick setup running on a raid6 with 19T storage.
> We are currently facing the problem that the backup (9.1 TB data in
> 48126852 files) is taking more than a week when being backed up by
> means of rsync (actually, ccollect).
> During backup the rsync process is continuously in D state (expected),
> but cpu load is far from 100% and disk is also only about 15-30% busy.
> (this is snapshot from right now)
> I have two questions, the second one more important:
> a) Is there a good way to identify the bottleneck?
> b) Is it "safe" to backup data directly from the underlying
> filesystem instead of going via the glusterfs mount?
> The reason I ask about (b) is that before we switched to glusterfs,
> backing up from those servers took about a day, and thus
> I suspect backing up from the xfs filesystem directly should again do the job.
> Thanks for any hints,
>  http://www.nico.schottelius.org/software/ccollect/
> Become part of modern working in Glarnerland at www.digitalglarus.ch!
> Read the news on Twitter: www.twitter.com/DigitalGlarus
> Join the discussion on Facebook: www.facebook.com/digitalglarus
> Gluster-users mailing list
> Gluster-users at gluster.org