<p dir="ltr">Hi Michael,</p>
<p dir="ltr">What does your 'gluster volume info &lt;VOL&gt;' show?</p>
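<p dir="ltr">For reference, a quick sketch of pulling that information (assuming the volume is named 'homes', as in the create command quoted below):</p>

```shell
# Show the volume layout, options, and replica configuration:
gluster volume info homes
# Per-brick status, including disk usage and inode counts:
gluster volume status homes detail
```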
<p dir="ltr">How full is your zpool? When it gets too full, ZFS performance usually drops seriously.</p>
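<p dir="ltr">A quick way to check (pool names will differ on your systems):</p>

```shell
# CAP above ~80% and a high FRAG value are typical culprits
# for a sudden ZFS write slowdown:
zpool list -o name,size,alloc,free,cap,frag,health
```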
<p dir="ltr">Try to rsync a file directly to one of the bricks, then to the other brick (don't forget to remove the files afterwards, as gluster will not know about them).</p>
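<p dir="ltr">A minimal sketch, run on each server; the brick path is a placeholder (taken from the create command quoted below), and note that gluster must never see the test file:</p>

```shell
# Write straight to the brick directory, bypassing the gluster mount.
# BRICK is a placeholder -- point it at the real brick path, e.g. /zpool-homes/homes
BRICK="${BRICK:-$(mktemp -d)}"
# dd reports throughput on completion; conv=fsync forces data to disk first:
dd if=/dev/zero of="$BRICK/speedtest.bin" bs=1M count=64 conv=fsync
rm -f "$BRICK/speedtest.bin"   # clean up: gluster does not know about this file
```

<p dir="ltr">If the direct-to-brick speed is fine on both servers but the fuse mount is slow, the bottleneck is on the gluster/network side rather than ZFS.</p>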
<p dir="ltr">What are your mount options? Usually 'noatime,nodiratime' are a good start.</p>
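<p dir="ltr">For ZFS-backed bricks the equivalent knob is the dataset's atime property rather than an fstab option; a hedged example (the dataset name is a placeholder):</p>

```shell
# Disable access-time updates on the dataset backing the brick:
zfs set atime=off zpool-homes
# Verify the property took effect:
zfs get atime zpool-homes
```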
<p dir="ltr">Are you using the ZFS provided by the Ubuntu packages or directly from the ZoL project?</p>
<p dir="ltr">Best Regards,<br>
Strahil Nikolov</p>
<div class="quote">On Nov 6, 2019 12:50, Michael Rightmire <Michael.Rightmire@KIT.edu> wrote:<br type='attribution'><blockquote class="quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex">
<div>
Hello list!<br />
<br />
I'm new to Glusterfs in general. We have chosen to use it as our
distributed file system on a new set of HA file servers. <br />
<br />
The setup is: <br />
2 SUPERMICRO SuperStorage Server 6049PE1CR36L with 24-4TB spinning disks
and NVMe for cache and slog.<br />
HBA not RAID card <br />
Ubuntu 18.04 server (on both systems)<br />
ZFS filestorage<br />
Glusterfs 5.10<br />
<br />
Step one was to install <span style="font-weight:bold">Ubuntu, ZFS, and gluster</span>. This all went without issue.<br />
We have <span style="font-weight:bold">3 identical ZFS raidz2 pools on both servers</span>.<br />
We have three <span style="font-weight:bold">glusterfs mirrored volumes</span> - one <span style="font-weight:bold">attached to each raidz</span> on each server,<br />
and <span style="font-weight:bold">mounted the gluster volumes as</span> (for example) "<span style="font-weight:bold">/glusterfs/homes -> /zpool/homes</span>". I.e.<br />
<span style="font-weight:bold">gluster volume create homes replica 2 transport tcp server1:/zpool-homes/homes server2:/zpool-homes/homes force</span><br />
(on server1) <span style="font-weight:bold">server1:/homes 44729413504 16032705152 28696708352 36% /glusterfs/homes</span><br />
<br />
<span style="text-decoration:underline;font-weight:bold">The problem is, the performance has deteriorated terribly.</span><br />
We needed to <span style="font-weight:bold">copy all of our data</span> from the old server to the new glusterfs volumes (appx. <span style="font-weight:bold">60TB</span>).<br />
We decided to do this with <span style="font-weight:bold">multiple rsync commands</span> (like 400 simultaneous rsyncs).<br />
The copy went well for the first 4 days, with an average across all rsyncs of <span style="font-weight:bold">150-200 MBytes per second.</span><br />
Then, suddenly, on the fourth day, it dropped to about <span style="font-weight:bold">50 MBytes/s</span>.<br />
Then, by the end of the day, down to <span style="font-weight:bold">~5 MBytes/s (five)</span>.<br />
I've stopped the rsyncs, and <span style="font-weight:bold">I can still copy an individual file across to the glusterfs shared directory at 100MB/s.</span><br />
But actions such as <span style="font-weight:bold">"ls -la" or "find" take forever!</span>
<div><br />
<span style="font-weight:bold">Are there obvious flaws in my setup
to correct?</span><span style="font-weight:bold"><br />
</span><span style="font-weight:bold">How can I better
troubleshoot this?</span><span style="font-weight:bold"><br />
</span><br />
Thanks!<br />
-- <br />
</div></div>
<p>Mike
</p>
</blockquote></div>