<p dir="ltr">Hi David,<br></p>
<p dir="ltr">It's difficult to find anything structured (but it's the same for Linux and other&nbsp; tech). I use Red Hat's doxumentation, guideds online (crosscheck the options with official documentation) and experience shared on the mailing list.</p>
<p dir="ltr">I don't see anything (iin /var/lib/gluster/groups) that will match your profile, but I think that you should try with performance.read-ahead&nbsp; and performance.readdir-ahead 'off' . I have found out a bug (didn't read&nbsp; the whole stuff) ,&nbsp; that might be interesting for you :</p>
<p dir="ltr"><a href="https://bugzilla.redhat.com/show_bug.cgi?id=1601166">https://bugzilla.redhat.com/show_bug.cgi?id=1601166</a></p>
<p dir="ltr">Also, Arbiter is very important in order to avoid split brain situations (but based on my experience , issues still can occur) and best the brick for the Arbiter to be an SSD as it needs to process the metadata as fast as possible. With v7, there&nbsp; is an option the client to have an Arbiter even in the cloud (remote arbiter) that is used only when 1 data brick is down.</p>
<p dir="ltr">Please report the issue with the cache&nbsp; - that should not be like that.</p>
<p dir="ltr">Are you using Jumbo frames&nbsp; (MTU 9000)?<br>
What is yoir brick's&nbsp; I/O scheduler&nbsp; ?</p>
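<p dir="ltr">You can check both quickly (a sketch - 'eth0' and 'sda' are just example device names for your setup):<br>
# ip link show eth0 | grep mtu<br>
# cat /sys/block/sda/queue/scheduler</p>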
<p dir="ltr">Best Regards,<br>
Strahil Nikolov<br>
</p>
<div class="quote">On Jan 7, 2020 01:34, David Cunningham &lt;dcunningham@voisonics.com&gt; wrote:<br type='attribution'><blockquote class="quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex"><div dir="ltr"><div>Hi Strahil,</div><div><br /></div><div>We may have had a heal since the GFS arbiter node wasn&#39;t accessible from the GFS clients, only from the other GFS servers. Unfortunately we haven&#39;t been able to produce the problem seen in production while testing so are unsure whether making the GFS arbiter node directly available to clients has fixed the issue.</div><div><br /></div><div>The load on GFS is mainly:</div><div>1. There are a small number of files around 5MB in size which are read often and change infrequently.</div><div>2. There are a large number of directories which are opened for reading to read the list of contents frequently.</div><div>3. There are a large number of new files around 5MB in size written frequently and read infrequently.</div><div><br /></div><div>We haven&#39;t touched the tuning options as we don&#39;t really feel qualified to tell what needs changed from the default. Do you know of any suitable guides to get started?</div><div><br /></div><div>For some reason performance.cache-size is reported as both 32MB and 128MB. Is it worth reporting even for version 5.6?</div><div><br /></div><div>Here is the &#34;gluster volume info&#34; taken on the first node. Note that the third node (the arbiter) is currently taken out of the cluster:</div><div>Volume Name: gvol0<br />Type: Replicate<br />Volume ID: fb5af69e-1c3e-4164-8b23-c1d7bec9b1b6<br />Status: Started<br />Snapshot Count: 0<br />Number of Bricks: 1 x 2 &#61; 2<br />Transport-type: tcp<br />Bricks:<br />Brick1: gfs1:/nodirectwritedata/gluster/gvol0<br />Brick2: gfs2:/nodirectwritedata/gluster/gvol0<br />Options Reconfigured:<br />diagnostics.client-log-level: INFO<br />performance.client-io-threads: off<br />nfs.disable: on<br />transport.address-family: inet<br /></div><div><br /></div><div>Thanks for your help and advice.</div><div><br /></div></div><br /><div class="elided-text"><div dir="ltr">On Sat, 28 Dec 2019 at 17:46, Strahil &lt;<a href="mailto:hunter86_bg&#64;yahoo.com">hunter86_bg&#64;yahoo.com</a>&gt; wrote:<br /></div><blockquote style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb( 204 , 204 , 204 );padding-left:1ex"><p dir="ltr">Hi David,</p>
<p dir="ltr">It seems that I have misread your quorum options, so just ignore that from my previous e-mail.</p>
<p dir="ltr">Best Regards,<br />
Strahil Nikolov</p>
<div>On Dec 27, 2019 15:38, Strahil &lt;<a href="mailto:hunter86_bg&#64;yahoo.com">hunter86_bg&#64;yahoo.com</a>&gt; wrote:<br /><blockquote style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb( 204 , 204 , 204 );padding-left:1ex"><p dir="ltr">Hi David,</p>
<p dir="ltr">Gluster supports live rolling upgrade, so there is no need to redeploy at all - but the migration notes should be checked as some features must be disabled first.<br />
Also, the gluster client should remount in order to bump the gluster op-version.</p>
<p dir="ltr">What kind of workload do you have ?<br />
I&#39;m asking as there are predefined (and recommended) settings located in /var/lib/glusterd/groups .<br />
You can check the options for each group and cross-check their meaning in the docs before activating a setting, as shown below.</p>
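<p dir="ltr">For example (a sketch - &#39;metadata-cache&#39; is one of the shipped group files; review its contents before applying):<br />
# cat /var/lib/glusterd/groups/metadata-cache<br />
# gluster volume set &lt;VOLNAME&gt; group metadata-cache</p>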
<p dir="ltr">I still have a vague feeling  that ,during that high-peak of network bandwidth, there was  a  heal  going on. Have you checked that ?</p>
<p dir="ltr">Also, sharding is very useful , when you work with large files and the heal is reduced to the size of the shard.</p>
<p dir="ltr">N.B.: Once sharding is enabled, DO NOT DISABLE it - as you will loose  your data.</p>
<p dir="ltr">Using GLUSTER v7.1 (soon on CentOS  &amp; Debian) allows using latest features  and optimizations while support from gluster Dev community is quite active.</p>
<p dir="ltr">P.S: I&#39;m wondering how &#39;performance.cache-size&#39; can both be 32 MB and 128 MB. Please double-check this (maybe I&#39;m reading it wrong on my smartphone) and if needed raise a bug on <a href="http://bugzilla.redhat.com">bugzilla.redhat.com</a> </p>
<p dir="ltr">P.S2: Please  provide  &#39;gluster volume info&#39; as &#39;cluster.quorum-type&#39; -&gt;  &#39;none&#39; is not normal for replicated volumes (arbiters are using in replica volumes)</p>
<p dir="ltr">According to the dooutput (otps://<a href="http://docs.gluster.org/en/latest/Administrator%20Guide/arbiter-volumes-and-quorum">docs.gluster.org/en/latest/Administrator%20Guide/arbiter-volumes-and-quorum</a>/) :</p>
<p dir="ltr"><i><b>Note:</b></i><i> Enabling the arbiter feature </i><b><i>automatically</i></b><i> configures</i> <i>client-quorum to &#39;auto&#39;. This setting is </i><i><b>not</b></i><i> to be changed.</i><br /></p>
<p dir="ltr">Here is my output (Hyperconverged Virtualization Cluster -&gt; oVirt):<br />
# gluster volume info engine |  grep quorum<br />
cluster.quorum-type: auto<br />
cluster.server-quorum-type: server</p>
<p dir="ltr">Changing quorum is more &#39;riskier&#39; than other options, so you need to take necessary measures.  I think , we all  know what will happen , if the cluster is out of quorum and you change the quorum settings to more stringent ones :D<br /></p>
<p dir="ltr">P.S3: If you decide to reset  your gluster volume to the defaults, you can create a new volume (same type as current one), the  get the options for that volume and put them in a file and then bulk deploy via &#39;gluster volume set &lt;Original Volume&gt;   group custom-group&#39; ,  where  the file is located on every gluster  server in the &#39;/var/lib/gluster/groups&#39; directory.<br />
Last ,  get rid of the sample volume.<br /></p>
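<p dir="ltr">Roughly (a sketch - &#39;scratchvol&#39; is a placeholder, and to my knowledge group files contain one &#39;option=value&#39; pair per line):<br />
# gluster volume create scratchvol replica 2 gfs1:/bricks/scratch gfs2:/bricks/scratch<br />
# gluster volume get scratchvol all<br />
# vi /var/lib/glusterd/groups/custom-group   # copy the desired option=value pairs here, on every server<br />
# gluster volume set gvol0 group custom-group</p>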
<p dir="ltr">Best Regards,<br />
Strahil Nikolov</p>
<div>On Dec 27, 2019 03:22, David Cunningham &lt;<a href="mailto:dcunningham&#64;voisonics.com">dcunningham&#64;voisonics.com</a>&gt; wrote:<br /><blockquote style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb( 204 , 204 , 204 );padding-left:1ex"><div dir="ltr"><div>Hi Strahil,</div><div><br /></div><div>Our volume options are as below. Thanks for the suggestion to upgrade to version 6 or 7. We could do that be simply removing the current installation and installing the new one (since it&#39;s not live right now). We might have to convince the customer that it&#39;s likely to succeed though, as at the moment I think they believe that GFS is not going to work for them.</div><div><br /></div><div>Option                                  Value                                   <br />------                                  -----                                   <br />cluster.lookup-unhashed                 on                                      <br />cluster.lookup-optimize                 on                                      <br />cluster.min-free-disk                   10%                                     <br />cluster.min-free-inodes                 5%                                      <br />cluster.rebalance-stats                 off                                     <br />cluster.subvols-per-directory           (null)                                  <br />cluster.readdir-optimize                off                                     <br />cluster.rsync-hash-regex                (null)                                  <br />cluster.extra-hash-regex                (null)                                  <br />cluster.dht-xattr-name                  trusted.glusterfs.dht                   <br />cluster.randomize-hash-range-by-gfid    off                                     <br />cluster.rebal-throttle                  normal                                  <br />cluster.lock-migration                  off                                     <br />cluster.force-migration                 off                                     <br />cluster.local-volume-name               (null)                                  <br />cluster.weighted-rebalance              on                                      <br />cluster.switch-pattern                  (null)                                  <br />cluster.entry-change-log                on                                      <br />cluster.read-subvolume                  (null)                                  <br />cluster.read-subvolume-index            -1                                      <br />cluster.read-hash-mode                  1                                       <br />cluster.background-self-heal-count      8                                       <br />cluster.metadata-self-heal              on                                      <br />cluster.data-self-heal                  on                                      <br />cluster.entry-self-heal                 on                                      <br />cluster.self-heal-daemon                on                                      <br />cluster.heal-timeout                    600                                     <br />cluster.self-heal-window-size           1                                       <br />cluster.data-change-log                 on                                      <br />cluster.metadata-change-log             on                                      <br />cluster.data-self-heal-algorithm        (null)                                  <br 
/>cluster.eager-lock                      on                                      <br />disperse.eager-lock                     on                                      <br />disperse.other-eager-lock               on                                      <br />disperse.eager-lock-timeout             1                                       <br />disperse.other-eager-lock-timeout       1                                       <br />cluster.quorum-type                     none                                    <br />cluster.quorum-count                    (null)                                  <br />cluster.choose-local                    true                                    <br />cluster.self-heal-readdir-size          1KB                                     <br />cluster.post-op-delay-secs              1                                       <br />cluster.ensure-durability               on                                      <br />cluster.consistent-metadata             no                                      <br />cluster.heal-wait-queue-length          128                                     <br />cluster.favorite-child-policy           none                                    <br />cluster.full-lock                       yes                                     <br />cluster.stripe-block-size               128KB                                   <br />cluster.stripe-coalesce                 true                                    <br />diagnostics.latency-measurement         off                                     <br />diagnostics.dump-fd-stats               off                                     <br />diagnostics.count-fop-hits              off                                     <br />diagnostics.brick-log-level             INFO                                    <br />diagnostics.client-log-level            INFO                                    <br />diagnostics.brick-sys-log-level         CRITICAL                                <br />diagnostics.client-sys-log-level        CRITICAL                                <br />diagnostics.brick-logger                (null)                                  <br />diagnostics.client-logger               (null)                                  <br />diagnostics.brick-log-format            (null)                                  <br />diagnostics.client-log-format           (null)                                  <br />diagnostics.brick-log-buf-size          5                                       <br />diagnostics.client-log-buf-size         5                                       <br />diagnostics.brick-log-flush-timeout     120                                     <br />diagnostics.client-log-flush-timeout    120                                     <br />diagnostics.stats-dump-interval         0                                       <br />diagnostics.fop-sample-interval         0                                       <br />diagnostics.stats-dump-format           json                                    <br />diagnostics.fop-sample-buf-size         65535                                   <br />diagnostics.stats-dnscache-ttl-sec      86400                                   <br />performance.cache-max-file-size         0                                       <br />performance.cache-min-file-size         0                                       <br />performance.cache-refresh-timeout       1                                       <br />performance.cache-priority                                                      <br />performance.cache-size     
             32MB                                    <br /><a href="http://performance.io">performance.io</a>-thread-count             16                                      <br />performance.high-prio-threads           16                                      <br />performance.normal-prio-threads         16                                      <br />performance.low-prio-threads            16                                      <br />performance.least-prio-threads          1                                       <br />performance.enable-least-priority       on                                      <br />performance.iot-watchdog-secs           (null)                                  <br />performance.iot-cleanup-disconnected-reqsoff                                     <br />performance.iot-pass-through            false                                   <br /><a href="http://performance.io">performance.io</a>-cache-pass-through       false                                   <br />performance.cache-size                  128MB                                   <br />performance.qr-cache-timeout            1                                       <br />performance.cache-invalidation          false                                   <br />performance.ctime-invalidation          false                                   <br />performance.flush-behind                on                                      <br />performance.nfs.flush-behind            on                                      <br />performance.write-behind-window-size    1MB                                     <br />performance.resync-failed-syncs-after-fsyncoff                                     <br />performance.nfs.write-behind-window-size1MB                                     <br />performance.strict-o-direct             off                                     <br />performance.nfs.strict-o-direct         off                                     <br />performance.strict-write-ordering       off                                     <br />performance.nfs.strict-write-ordering   off                                     <br />performance.write-behind-trickling-writeson                                      <br />performance.aggregate-size              128KB                                   <br />performance.nfs.write-behind-trickling-writeson                                      <br />performance.lazy-open                   yes                                     <br />performance.read-after-open             yes                                     <br />performance.open-behind-pass-through    false                                   <br />performance.read-ahead-page-count       4                                       <br />performance.read-ahead-pass-through     false                                   <br />performance.readdir-ahead-pass-through  false                                   <br /><a href="http://performance.md">performance.md</a>-cache-pass-through       false                                   <br /><a href="http://performance.md">performance.md</a>-cache-timeout            1                                       <br />performance.cache-swift-metadata        true                                    <br />performance.cache-samba-metadata        false                                   <br />performance.cache-capability-xattrs     true                                    <br />performance.cache-ima-xattrs            true                                    <br /><a href="http://performance.md">performance.md</a>-cache-statfs           
  off                                     <br />performance.xattr-cache-list                                                    <br /><a href="http://performance.nl">performance.nl</a>-cache-pass-through       false                                   <br />features.encryption                     off                                     <br />encryption.master-key                   (null)                                  <br />encryption.data-key-size                256                                     <br />encryption.block-size                   4096                                    <br />network.frame-timeout                   1800                                    <br />network.ping-timeout                    42                                      <br />network.tcp-window-size                 (null)                                  <br />network.remote-dio                      disable                                 <br />client.event-threads                    2                                       <br />client.tcp-user-timeout                 0                                       <br />client.keepalive-time                   20                                      <br />client.keepalive-interval               2                                       <br />client.keepalive-count                  9                                       <br />network.tcp-window-size                 (null)                                  <br />network.inode-lru-limit                 16384                                   <br />auth.allow                              *                                       <br />auth.reject                             (null)                                  <br />transport.keepalive                     1                                       <br />server.allow-insecure                   on                                      <br />server.root-squash                      off                                     <br />server.anonuid                          65534                                   <br />server.anongid                          65534                                   <br />server.statedump-path                   /var/run/gluster                        <br />server.outstanding-rpc-limit            64                                      <br />server.ssl                              (null)                                  <br />auth.ssl-allow                          *                                       <br />server.manage-gids                      off                                     <br />server.dynamic-auth                     on                                      <br />client.send-gids                        on                                      <br />server.gid-timeout                      300                                     <br />server.own-thread                       (null)                                  <br />server.event-threads                    1                                       <br />server.tcp-user-timeout                 0                                       <br />server.keepalive-time                   20                                      <br />server.keepalive-interval               2                                       <br />server.keepalive-count                  9                                       <br />transport.listen-backlog                1024                                    <br />ssl.own-cert                            (null)                                  <br />ssl.private-key                
         (null)                                  <br /><a href="http://ssl.ca">ssl.ca</a>-list                             (null)                                  <br />ssl.crl-path                            (null)                                  <br />ssl.certificate-depth                   (null)                                  <br />ssl.cipher-list                         (null)                                  <br />ssl.dh-param                            (null)                                  <br /><a href="http://ssl.ec">ssl.ec</a>-curve                            (null)                                  <br />transport.address-family                inet                                    <br />performance.write-behind                on                                      <br />performance.read-ahead                  on                                      <br />performance.readdir-ahead               on                                      <br /><a href="http://performance.io">performance.io</a>-cache                    on                                      <br />performance.quick-read                  on                                      <br />performance.open-behind                 on                                      <br /><a href="http://performance.nl">performance.nl</a>-cache                    off                                     <br />performance.stat-prefetch               on                                      <br />performance.client-io-threads           off                                     <br />performance.nfs.write-behind            on                                      <br />performance.nfs.read-ahead              off                                     <br /><a href="http://performance.nfs.io">performance.nfs.io</a>-cache                off                                     <br />performance.nfs.quick-read              off                                     <br />performance.nfs.stat-prefetch           off                                     <br /><a href="http://performance.nfs.io">performance.nfs.io</a>-threads              off                                     <br />performance.force-readdirp              true                                    <br />performance.cache-invalidation          false                                   <br />features.uss                            off                                     <br />features.snapshot-directory             .snaps                                  <br />features.show-snapshot-directory        off                                     <br />features.tag-namespaces                 off                                     <br />network.compression                     off                                     <br />network.compression.window-size         -15                                     <br />network.compression.mem-level           8                                       <br />network.compression.min-size            0                                       <br />network.compression.compression-level   -1                                      <br />network.compression.debug               false                                   <br />features.default-soft-limit             80%                                     <br />features.soft-timeout                   60                                      <br />features.hard-timeout                   5                                       <br />features.alert-time                     86400                                   <br 
/>features.quota-deem-statfs              off                                     <br />geo-replication.indexing                off                                     <br />geo-replication.indexing                off                                     <br />geo-replication.ignore-pid-check        off                                     <br />geo-replication.ignore-pid-check        off                                     <br />features.quota                          off                                     <br />features.inode-quota                    off                                     <br />features.bitrot                         disable                                 <br />debug.trace                             off                                     <br />debug.log-history                       no                                      <br />debug.log-file                          no                                      <br />debug.exclude-ops                       (null)                                  <br />debug.include-ops                       (null)                                  <br />debug.error-gen                         off                                     <br />debug.error-failure                     (null)                                  <br />debug.error-number                      (null)                                  <br />debug.random-failure                    off                                     <br />debug.error-fops                        (null)                                  <br />nfs.disable                             on                                      <br />features.read-only                      off                                     <br />features.worm                           off                                     <br />features.worm-file-level                off                                     <br />features.worm-files-deletable           on                                      <br />features.default-retention-period       120                                     <br />features.retention-mode                 relax                                   <br />features.auto-commit-period             180                                     <br />storage.linux-aio                       off                                     <br />storage.batch-fsync-mode                reverse-fsync                           <br />storage.batch-fsync-delay-usec          0                                       <br />storage.owner-uid                       -1                                      <br />storage.owner-gid                       -1                                      <br />storage.node-uuid-pathinfo              off                                     <br />storage.health-check-interval           30                                      <br />storage.build-pgfid                     off                                     <br />storage.gfid2path                       on                                      <br />storage.gfid2path-separator             :                                       <br />storage.reserve                         1                                       <br />storage.health-check-timeout            10                                      <br />storage.fips-mode-rchecksum             off                                     <br />storage.force-create-mode               0000                                    <br />storage.force-directory-mode            0000                                    <br />storage.create-mask        
             0777                                    <br />storage.create-directory-mask           0777                                    <br />storage.max-hardlinks                   100                                     <br />storage.ctime                           off                                     <br /><a href="http://storage.bd">storage.bd</a>-aio                          off                                     <br />config.gfproxyd                         off                                     <br />cluster.server-quorum-type              off                                     <br />cluster.server-quorum-ratio             0                                       <br />changelog.changelog                     off                                     <br />changelog.changelog-dir                 {{ brick.path }}/.glusterfs/changelogs  <br />changelog.encoding                      ascii                                   <br />changelog.rollover-time                 15                                      <br />changelog.fsync-interval                5                                       <br />changelog.changelog-barrier-timeout     120                                     <br />changelog.capture-del-path              off                                     <br />features.barrier                        disable                                 <br />features.barrier-timeout                120                                     <br />features.trash                          off                                     <br />features.trash-dir                      .trashcan                               <br />features.trash-eliminate-path           (null)                                  <br />features.trash-max-filesize             5MB                                     <br />features.trash-internal-op              off                                     <br />cluster.enable-shared-storage           disable                                 <br />cluster.write-freq-threshold            0                                       <br />cluster.read-freq-threshold             0                                       <br />cluster.tier-pause                      off                                     <br />cluster.tier-promote-frequency          120                                     <br />cluster.tier-demote-frequency           3600                                    <br />cluster.watermark-hi                    90                                      <br />cluster.watermark-low                   75                                      <br />cluster.tier-mode                       cache                                   <br />cluster.tier-max-promote-file-size      0                                       <br />cluster.tier-max-mb                     4000                                    <br />cluster.tier-max-files                  10000                                   <br />cluster.tier-query-limit                100                                     <br />cluster.tier-compact                    on                                      <br />cluster.tier-hot-compact-frequency      604800                                  <br />cluster.tier-cold-compact-frequency     604800                                  <br />features.ctr-enabled                    off                                     <br />features.record-counters                off                                     <br />features.ctr-record-metadata-heat       off                                     <br 
/>features.ctr_link_consistency           off                                     <br />features.ctr_lookupheal_link_timeout    300                                     <br />features.ctr_lookupheal_inode_timeout   300                                     <br />features.ctr-sql-db-cachesize           12500                                   <br />features.ctr-sql-db-wal-autocheckpoint  25000                                   <br />features.selinux                        on                                      <br />locks.trace                             off                                     <br />locks.mandatory-locking                 off                                     <br />cluster.disperse-self-heal-daemon       enable                                  <br />cluster.quorum-reads                    no                                      <br />client.bind-insecure                    (null)                                  <br />features.shard                          off                                     <br />features.shard-block-size               64MB                                    <br />features.shard-lru-limit                16384                                   <br />features.shard-deletion-rate            100                                     <br />features.scrub-throttle                 lazy                                    <br />features.scrub-freq                     biweekly                                <br />features.scrub                          false                                   <br />features.expiry-time                    120                                     <br />features.cache-invalidation             off                                     <br />features.cache-invalidation-timeout     60                                      <br />features.leases                         off                                     <br />features.lease-lock-recall-timeout      60                                      <br />disperse.background-heals               8                                       <br />disperse.heal-wait-qlength              128                                     <br />cluster.heal-timeout                    600                                     <br />dht.force-readdirp                      on                                      <br />disperse.read-policy                    gfid-hash                               <br />cluster.shd-max-threads                 1                                       <br />cluster.shd-wait-qlength                1024                                    <br />cluster.locking-scheme                  full                                    <br />cluster.granular-entry-heal             no                                      <br />features.locks-revocation-secs          0                                       <br />features.locks-revocation-clear-all     false                                   <br />features.locks-revocation-max-blocked   0                                       <br />features.locks-monkey-unlocking         false                                   <br />features.locks-notify-contention        no                                      <br />features.locks-notify-contention-delay  5                                       <br />disperse.shd-max-threads                1                                       <br />disperse.shd-wait-qlength               1024                                    <br />disperse.cpu-extensions                 auto                                    <br 
/>disperse.self-heal-window-size          1                                       <br />cluster.use-compound-fops               off                                     <br />performance.parallel-readdir            off                                     <br />performance.rda-request-size            131072                                  <br />performance.rda-low-wmark               4096                                    <br />performance.rda-high-wmark              128KB                                   <br />performance.rda-cache-limit             10MB                                    <br /><a href="http://performance.nl">performance.nl</a>-cache-positive-entry     false                                   <br /><a href="http://performance.nl">performance.nl</a>-cache-limit              10MB                                    <br /><a href="http://performance.nl">performance.nl</a>-cache-timeout            60                                      <br />cluster.brick-multiplex                 off                                     <br />cluster.max-bricks-per-process          0                                       <br />disperse.optimistic-change-log          on                                      <br />disperse.stripe-cache                   4                                       <br />cluster.halo-enabled                    False                                   <br />cluster.halo-shd-max-latency            99999                                   <br />cluster.halo-nfsd-max-latency           5                                       <br />cluster.halo-max-latency                5                                       <br />cluster.halo-max-replicas               99999                                   <br />cluster.halo-min-replicas               2                                       <br />cluster.daemon-log-level                INFO                                    <br />debug.delay-gen                         off                                     <br />delay-gen.delay-percentage              10%                                     <br />delay-gen.delay-duration                100000                                  <br />delay-gen.enable                                                                <br />disperse.parallel-writes                on                                      <br />features.sdfs                           on                                      <br />features.cloudsync                      off                                     <br />features.utime                          off                                     <br />ctime.noatime                           on                                      <br />feature.cloudsync-storetype             (null)                                  <br /></div><div><br /></div><div>Thanks again.</div><div><br /></div></div><br /><div><div dir="ltr">On Wed, 25 Dec 2019 at 05:51, Strahil &lt;<a href="mailto:hunter86_bg&#64;yahoo.com">hunter86_bg&#64;yahoo.com</a>&gt; wrote:<br /></div><blockquote style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb( 204 , 204 , 204 );padding-left:1ex"><p dir="ltr">Hi David,</p>
<p dir="ltr">On Dec 24, 2019 02:47, David Cunningham &lt;<a href="mailto:dcunningham&#64;voisonics.com">dcunningham&#64;voisonics.com</a>&gt; wrote:<br />
&gt;<br />
&gt; Hello,<br />
&gt;<br />
&gt; In testing we found that actually the GFS client having access to all 3 nodes made no difference to performance. Perhaps that&#39;s because the 3rd node that wasn&#39;t accessible from the client before was the arbiter node?<br />
It makes sense, as no data is being generated towards the arbiter.<br />
&gt; Presumably we shouldn&#39;t have an arbiter node listed under backupvolfile-server when mounting the filesystem? Since it doesn&#39;t store all the data surely it can&#39;t be used to serve the data.</p>
<p dir="ltr">I have my arbiter defined as last backup and no issues so far. At least the admin can easily identify the bricks from the mount options.</p>
<p dir="ltr">&gt; We did have direct-io-mode&#61;disable already as well, so that wasn&#39;t a factor in the performance problems.</p>
<p dir="ltr">Have you checked if the client vedsion ia not too old.<br />
Also you can check the cluster&#39;s  operation cersion:<br />
# gluster volume get all cluster.max-op-version<br />
# gluster volume get all cluster.op-version</p>
<p dir="ltr">Cluster&#39;s op version should be at max-op-version.</p>
<p dir="ltr">In my mind come 2  options:<br />
A) Upgrade to latest GLUSTER v6 or even v7 ( I know it won&#39;t be easy) and then set the op version to highest possible.<br />
# gluster volume get all cluster.max-op-version<br />
# gluster volume get all cluster.op-version</p>
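<p dir="ltr">(Once everything is on the new version, the bump itself is something like the following - a sketch, please check the exact op-version number for your release:<br />
# gluster volume set all cluster.op-version 70000 )</p>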
<p dir="ltr">B)  Deploy a NFS Ganesha server and connect the client over NFS v4.2 (and control the parallel connections from Ganesha).</p>
<p dir="ltr">Can you provide your  Gluster volume&#39;s  options?<br />
&#39;gluster volume get &lt;VOLNAME&gt;  all&#39;</p>
<p dir="ltr">&gt; Thanks again for any advice.<br />
&gt;<br />
&gt;<br />
&gt;<br />
&gt; On Mon, 23 Dec 2019 at 13:09, David Cunningham &lt;<a href="mailto:dcunningham&#64;voisonics.com">dcunningham&#64;voisonics.com</a>&gt; wrote:<br />
&gt;&gt;<br />
&gt;&gt; Hi Strahil,<br />
&gt;&gt;<br />
&gt;&gt; Thanks for that. We do have one backup server specified, but will add the second backup as well.<br />
&gt;&gt;<br />
&gt;&gt;<br />
&gt;&gt; On Sat, 21 Dec 2019 at 11:26, Strahil &lt;<a href="mailto:hunter86_bg&#64;yahoo.com">hunter86_bg&#64;yahoo.com</a>&gt; wrote:<br />
&gt;&gt;&gt;<br />
&gt;&gt;&gt; Hi David,<br />
&gt;&gt;&gt;<br />
&gt;&gt;&gt; Also consider using the mount option to specify backup servers via &#39;backupvolfile-server&#61;server2:server3&#39; (you can define more, but I don&#39;t think replica volumes greater than 3 are useful, except maybe in some special cases).<br />
&gt;&gt;&gt;<br />
&gt;&gt;&gt; In such way, when the primary is lost, your client can reach a backup one without disruption.<br />
&gt;&gt;&gt;<br />
&gt;&gt;&gt; P.S.: The client may &#39;hang&#39; if the primary server got rebooted ungracefully, as the communication must time out before FUSE addresses the next server. There is a special script for killing gluster processes in &#39;/usr/share/glusterfs/scripts&#39; which can be used to set up a systemd service that does that for you on shutdown.<br />
&gt;&gt;&gt;<br />
&gt;&gt;&gt; Best Regards,<br />
&gt;&gt;&gt; Strahil Nikolov<br />
&gt;&gt;&gt;<br />
&gt;&gt;&gt; On Dec 20, 2019 23:49, David Cunningham &lt;<a href="mailto:dcunningham&#64;voisonics.com">dcunningham&#64;voisonics.com</a>&gt; wrote:<br />
&gt;&gt;&gt;&gt;<br />
&gt;&gt;&gt;&gt; Hi Stahil,<br />
&gt;&gt;&gt;&gt;<br />
&gt;&gt;&gt;&gt; Ah, that is an important point. One of the nodes is not accessible from the client, and we assumed that it only needed to reach the GFS node that was mounted so didn&#39;t think anything of it.<br />
&gt;&gt;&gt;&gt;<br />
&gt;&gt;&gt;&gt; We will try making all nodes accessible, as well as &#34;direct-io-mode&#61;disable&#34;.<br />
&gt;&gt;&gt;&gt;<br />
&gt;&gt;&gt;&gt; Thank you.<br />
&gt;&gt;&gt;&gt;<br />
&gt;&gt;&gt;&gt;<br />
&gt;&gt;&gt;&gt; On Sat, 21 Dec 2019 at 10:29, Strahil Nikolov &lt;<a href="mailto:hunter86_bg&#64;yahoo.com">hunter86_bg&#64;yahoo.com</a>&gt; wrote:<br />
&gt;&gt;&gt;&gt;&gt;<br />
&gt;&gt;&gt;&gt;&gt; Actually I haven&#39;t clarified myself.<br />
&gt;&gt;&gt;&gt;&gt; FUSE mounts on the client side is connecting directly to all bricks consisted of the volume.<br />
&gt;&gt;&gt;&gt;&gt; If for some reason (bad routing, firewall blocked) there could be cases where the client can reach 2 out of 3 bricks and this can constantly cause healing to happen (as one of the bricks is never updated) which will degrade the performance and cause excessive network usage.<br />
&gt;&gt;&gt;&gt;&gt; As your attachment is from one of the gluster nodes, this could be the case.<br />
&gt;&gt;&gt;&gt;&gt;<br />
&gt;&gt;&gt;&gt;&gt; Best Regards,<br />
&gt;&gt;&gt;&gt;&gt; Strahil Nikolov<br />
&gt;&gt;&gt;&gt;&gt;<br />
&gt;&gt;&gt;&gt;&gt; В петък, 20 декември 2019 г., 01:49:56 ч. Гринуич&#43;2, David Cunningham &lt;<a href="mailto:dcunningham&#64;voisonics.com">dcunningham&#64;voisonics.com</a>&gt; написа:<br />
&gt;&gt;&gt;&gt;&gt;<br />
&gt;&gt;&gt;&gt;&gt;<br />
&gt;&gt;&gt;&gt;&gt; Hi Strahil,<br />
&gt;&gt;&gt;&gt;&gt;<br />
&gt;&gt;&gt;&gt;&gt; The chart attached to my original email is taken from the GFS server.<br />
&gt;&gt;&gt;&gt;&gt;<br />
&gt;&gt;&gt;&gt;&gt; I&#39;m not sure what you mean by accessing all bricks simultaneously. We&#39;ve mounted it from the client like this:<br />
&gt;&gt;&gt;&gt;&gt; gfs1:/gvol0 /mnt/glusterfs/ glusterfs defaults,direct-io-mode&#61;disable,_netdev,backupvolfile-server&#61;gfs2,fetch-attempts&#61;10 0 0<br />
&gt;&gt;&gt;&gt;&gt;<br />
&gt;&gt;&gt;&gt;&gt; Should we do something different to access all bricks simultaneously?<br />
&gt;&gt;&gt;&gt;&gt;<br />
&gt;&gt;&gt;&gt;&gt; Thanks for your help!<br />
&gt;&gt;&gt;&gt;&gt;<br />
&gt;&gt;&gt;&gt;&gt;<br />
&gt;&gt;&gt;&gt;&gt; On Fri, 20 Dec 2019 at 11:47, Strahil Nikolov &lt;<a href="mailto:hunter86_bg&#64;yahoo.com">hunter86_bg&#64;yahoo.com</a>&gt; wrote:<br />
&gt;&gt;&gt;&gt;&gt;&gt;<br />
&gt;&gt;&gt;&gt;&gt;&gt; I&#39;m not sure if you did measure the traffic from client side (tcpdump on a client machine) or from Server side.<br />
&gt;&gt;&gt;&gt;&gt;&gt;<br />
&gt;&gt;&gt;&gt;&gt;&gt; In both cases , please verify that the client accesses all bricks simultaneously, as this can cause unnecessary heals.<br />
&gt;&gt;&gt;&gt;&gt;&gt;<br />
&gt;&gt;&gt;&gt;&gt;&gt; Have you thought about upgrading to v6? There are some enhancements in v6 which could be beneficial.<br />
&gt;&gt;&gt;&gt;&gt;&gt;<br />
&gt;&gt;&gt;&gt;&gt;&gt; Yet, it is indeed strange that so much traffic is generated with FUSE.<br />
&gt;&gt;&gt;&gt;&gt;&gt;<br />
&gt;&gt;&gt;&gt;&gt;&gt; Another approach is to test with NFS-Ganesha, which supports pNFS and can natively speak with Gluster; this can bring you closer to the previous setup and also provide some extra performance.<br />
&gt;&gt;&gt;&gt;&gt;&gt;<br />
&gt;&gt;&gt;&gt;&gt;&gt;<br />
&gt;&gt;&gt;&gt;&gt;&gt; Best Regards,<br />
&gt;&gt;&gt;&gt;&gt;&gt; Strahil Nikolov<br />
&gt;&gt;&gt;&gt;&gt;&gt;<br />
&gt;&gt;&gt;&gt;&gt;&gt;<br />
&gt;&gt;&gt;&gt;&gt;&gt;<br />
&gt;&gt;<br />
&gt;&gt;<br />
&gt;&gt; -- <br />
&gt;&gt; David Cunningham, Voisonics Limited<br />
&gt;&gt;<a href="http://voisonics.com"> http://voisonics.com</a>/<br />
&gt;&gt; USA: &#43;1 213 221 1092<br />
&gt;&gt; New Zealand: &#43;64 (0)28 2558 3782<br />
&gt;<br />
&gt;<br />
&gt;<br />
&gt; -- <br />
&gt; David Cunningham, Voisonics Limited<br />
&gt;<a href="http://voisonics.com"> http://voisonics.com</a>/<br />
&gt; USA: &#43;1 213 221 1092<br />
&gt; New Zealand: &#43;64 (0)28 2558 3782</p>
<p dir="ltr">Best Regards,<br />
Strahil Nikolov</p>
</blockquote></div><br clear="all" /><br />-- <br /><div dir="ltr"><div dir="ltr"><div><div dir="ltr"><div><div dir="ltr"><div><div dir="ltr"><div><div dir="ltr"><div>David Cunningham, Voisonics Limited<br /><a href="http://voisonics.com/">http://voisonics.com/</a><br />USA: &#43;1 213 221 1092<br />New Zealand: &#43;64 (0)28 2558 3782</div></div></div></div></div></div></div></div></div></div></div>
</blockquote></div></blockquote></div></blockquote></div><br clear="all" /><br />-- <br /><div dir="ltr"><div dir="ltr"><div><div dir="ltr"><div><div dir="ltr"><div><div dir="ltr"><div><div dir="ltr"><div>David Cunningham, Voisonics Limited<br /><a href="http://voisonics.com/">http://voisonics.com/</a><br />USA: &#43;1 213 221 1092<br />New Zealand: &#43;64 (0)28 2558 3782</div></div></div></div></div></div></div></div></div></div></div>
</blockquote></div>