<p dir="ltr">There are options that can help a little bit with the ls/find.</p>
<p dir="ltr">Still, many devs will need to know your settings, so the volume's info is very important.</p>
<p dir="ltr">Try the 'noatime,nodiratime' (if ZFS supports them).<br>
Also, as this is a new cluster you can try to setup XFS and verify if the issue is the same.<br>
RedHat provide an XFS options' calculator but it requires aby kind of subscription (even dev subscription is enough).<br></p>
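<p dir="ltr">On ZFS, atime is a dataset property rather than a mount option, so (as a rough sketch, with 'tank/glusterbrick' as a placeholder dataset name) it would look like this:</p>
<pre>
# Disable access-time updates on the brick dataset; ZFS has no separate
# nodiratime knob, atime=off covers directory reads as well.
zfs set atime=off tank/glusterbrick
# Verify the setting took effect.
zfs get atime tank/glusterbrick
</pre>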
<p dir="ltr">P.S.: As this is a new cluster, I would recommend you to switch to gluster v6.6 as v7 is too new (for my taste).</p>
<p dir="ltr">If the issue on XFS cannot be reproduced - the issue is either in the ZFS or in the kernel tunables (sysctl).</p>
<p dir="ltr">I'm not sure what is the most suitable I/O scheduler for ZFS, so you should check that too.<br><br></p>
<p dir="ltr">Edit: What kind of workload do you expect (size and number files, read:write ratio, etc).</p>
<p dir="ltr">Best Regards,<br>
Strahil Nikolov<br>
</p>
<div class="quote">On Nov 8, 2019 10:32, Michael Rightmire <Michael.Rightmire@KIT.edu> wrote:<br type='attribution'><blockquote class="quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex">
<div>Hi Strahil, <br />
<br />
Thanks for the reply. See below. <br />
<br />
Also, as an aside, I tested by installing a single CentOS 7 machine with
the ZBOD, then installed gluster and ZFSonLinux as recommended at:<br />
<a href="https://staged-gluster-docs.readthedocs.io/en/release3.7.0beta1/Administrator%20Guide/Gluster%20On%20ZFS/"></a><a href="https://staged-gluster-docs.readthedocs.io/en/release3.7.0beta1/Administrator%20Guide/Gluster%20On%20ZFS">https://staged-gluster-docs.readthedocs.io/en/release3.7.0beta1/Administrator%20Guide/Gluster%20On%20ZFS</a>/<br />
<br />
And created a gluster volume consisting of one brick made up of a local
ZFS raidz2, copied about 4 TB of data to it, and am having the same
issue. <br />
<br />
The biggest part of the issue is with things like "ls" and "find". If I
read a single file, or write a single file, it works great. But if I run
rsync (which does a lot of listing, writing, renaming, etc.) it is slow as
garbage. I.e. a find command that finishes in 30 seconds when run
directly on the underlying ZFS directory takes about an hour through gluster. <br />
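<p dir="ltr">For reference, the comparison was essentially the following (the brick path is a placeholder; /glusterfs/homes is the FUSE mount):</p>
<pre>
# Metadata-heavy crawl directly on the underlying ZFS brick: ~30 seconds.
time find /tank/glusterbrick/homes -type f > /dev/null
# The same crawl through the gluster FUSE mount: roughly an hour.
time find /glusterfs/homes -type f > /dev/null
</pre>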
<br />
<br />
Strahil wrote on 08-Nov-19 05:39:<br />
</div>
<blockquote>
<p dir="ltr">Hi Michael,</p>
<p dir="ltr">What is your 'gluster volume info <VOL> ' showing?</p>
</blockquote>
I've been playing with the install (since it's a fresh machine) so I can't
give you verbatim output. However, it was showing two bricks, one on
each server, started, and apparently healthy. <br />
<blockquote>
<p dir="ltr">How much is your zpool full ? Usually when it gets too
full, the ZFS performance drops seriosly.</p>
</blockquote>
The zpool is only at about 30% usage. It's a new server setup.<br />
We have about 10TB of data on a 30TB volume (made up of two 30TB ZFS
raidz2 bricks, each residing on a different server, connected via a
dedicated 10Gbit Ethernet link). <br />
<blockquote>
<p dir="ltr">Try to rsync a file directly to one of the bricks, then
to the other brick (don't forget to remove the files after that, as
gluster will not know about them).</p>
</blockquote>
If I rsync manually, or scp a file directly to the zpool bricks (outside
of gluster) I get 30-100MBytes/s (depending on what I'm copying.)<br />
If I rsync THROUGH gluster (via the glusterfs mounts) I get 1 - 5MB/s<br />
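<p dir="ltr">Concretely, the two cases look like this (the test file and brick path are placeholders):</p>
<pre>
# Directly to the brick, bypassing gluster: ~30-100 MB/s.
rsync -a /srv/testdata/bigfile serverA:/tank/glusterbrick/
# Through the gluster FUSE mount: ~1-5 MB/s.
rsync -a /srv/testdata/bigfile /glusterfs/homes/
</pre>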
<blockquote>
<p dir="ltr">What are your mounting options ? Usually
'noatime,nodiratime' are a good start.</p>
</blockquote>
I'll try these. Currently using ...<br />
(mounting TO serverA) serverA:/homes /glusterfs/homes glusterfs
defaults,_netdev 0 0<br />
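<p dir="ltr">With the suggested options the fstab line would become something like the following (assuming the gluster FUSE client accepts them; not yet tested here):</p>
<pre>
# /etc/fstab - gluster mount with access-time updates disabled
serverA:/homes  /glusterfs/homes  glusterfs  defaults,_netdev,noatime,nodiratime  0  0
</pre>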
<blockquote>
<p dir="ltr">Are you using ZFS provideed by Ubuntu packagees or
directly from ZOL project ?</p>
</blockquote>
ZFS provided by Ubuntu 18 repo...<br />
libzfs2linux/bionic-updates,now 0.7.5-1ubuntu16.6 amd64
[installed,automatic]<br />
zfs-dkms/bionic-updates,bionic-updates,now 0.7.5-1ubuntu16.6 all
[installed]<br />
zfs-zed/bionic-updates,now 0.7.5-1ubuntu16.6 amd64
[installed,automatic]<br />
zfsutils-linux/bionic-updates,now 0.7.5-1ubuntu16.6 amd64 [installed]<br />
<br />
Gluster provided by "add-apt-repository ppa:gluster/glusterfs-5" ...<br />
glusterfs 5.10<br />
Repository revision: git://git.gluster.org/glusterfs.git<br />
<br />
<br />
<blockquote>
<p dir="ltr">Best Regards,<br />
Strahil Nikolov</p>
<div>On Nov 6, 2019 12:50, Michael Rightmire
<a href="mailto:Michael.Rightmire@KIT.edu"><Michael.Rightmire@KIT.edu></a> wrote:<br /><blockquote style="margin:0 0 0 0.8ex;border-left:1px #ccc solid;padding-left:1ex"><div>
Hello list!<br />
<br />
I'm new to Glusterfs in general. We have chosen to use it as our
distributed file system on a new set of HA file servers. <br />
<br />
The setup is: <br />
2 SUPERMICRO SuperStorage Server 6049PE1CR36L with 24 x 4TB spinning disks
and NVMe for cache and slog.<br />
HBA not RAID card <br />
Ubuntu 18.04 server (on both systems)<br />
ZFS filestorage<br />
Glusterfs 5.10<br />
<br />
Step one was to install <span style="font-weight:bold">Ubuntu, ZFS,
and gluster</span>. This all went without issue. <br />
We have <span style="font-weight:bold">3 ZFS raidz2</span> pools, identical <span style="font-weight:bold">on both servers</span><br />
We have three <span style="font-weight:bold">glusterfs mirrored
volumes </span>- 1 <span style="font-weight:bold">attached to each
raidz <</span></div></blockquote></div></blockquote></blockquote></div>