<html>
  <head>
    <meta http-equiv="Content-Type" content="text/html; charset=UTF-8">
  </head>
  <body>
    <p>Dear Shubhank,</p>
    <p>Small-file performance is usually slow on GlusterFS, so this is
      not unexpected. <br>
    </p>
    <p>Can you provide more details about your setup (ZFS settings,
      bonding, tuned-adm profile, etc.)?</p>
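    <p>For reference, the output of the following commands would help
      here. This is only an illustrative sketch: the pool and interface
      names (zpool1, bond0) are assumptions and need to be adjusted to
      your environment.</p>
    <pre># current tuned profile
tuned-adm active

# ZFS properties that typically matter for GlusterFS bricks (zpool1 is an example pool name)
zfs get recordsize,compression,atime,xattr,sync zpool1

# bonding status, if bonding is configured (bond0 is an example interface)
cat /proc/net/bonding/bond0
</pre>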
    <p><br>
    </p>
    <p>From a Gluster point of view, setting
      performance.write-behind-window-size to 128MB increases performance. <br>
    </p>
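    <p>As a minimal sketch, assuming you want to apply this to the
      glusterStore volume from your gluster vol info output:</p>
    <pre># increase the write-behind window for the volume
gluster volume set glusterStore performance.write-behind-window-size 128MB
</pre>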
    <p>With that knob I was able to hit the CPU limit using the smallfile
      benchmark tool (available on GitHub) and the native GlusterFS
      client.<br>
    </p>
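    <p>A rough example of such a smallfile run against a native GlusterFS
      (FUSE) mount; the mount path, thread count, file count and file
      size below are placeholders, not recommendations:</p>
    <pre># clone https://github.com/distributed-system-analysis/smallfile first,
# then run a small-file create workload against the gluster mount
python smallfile_cli.py --operation create \
    --threads 8 --files 10000 --file-size 4 \
    --top /mnt/glusterStore/smallfile-test
</pre>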
    <p><br>
    </p>
    <p>Furthermore, throughput increases if you run multiple rsync
      processes in parallel (msrsync, available on GitHub, works well
      here).</p>
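    <p>For example (a sketch only; the process count and paths are
      placeholders):</p>
    <pre># run the copy with 8 parallel rsync processes via msrsync
# (https://github.com/jbd/msrsync)
msrsync -p 8 /data/source/ /mnt/glusterStore/destination/
</pre>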
    <p><br>
    </p>
    <p>Regards,</p>
    <p>Felix<br>
    </p>
    <p><br>
    </p>
    <div class="moz-cite-prefix">On 06/03/2021 15:27, Shubhank Gaur
      wrote:<br>
    </div>
    <blockquote type="cite"
cite="mid:CAPX_D9fPFsLbuDxR8JL=kJjDNp1sg0jGw=VPesL+NpKE6hRbMA@mail.gmail.com">
      <meta http-equiv="content-type" content="text/html; charset=UTF-8">
      <div dir="ltr">Hello users,
        <div><br>
        </div>
        <div>I started using Gluster just a few weeks ago and I am
          running a Distributed-Replicated setup with arbiters (A) and
          SATA volumes (V). I have 6 data bricks and 3 arbiters in this
          setup: </div>
        <div>V+V+A | V+V+A | V+V+A  </div>
        <div><br>
        </div>
        <div>All these bricks are spread across 3 different nodes, each
          of them on a 1Gbit link. Due to hardware limitations, SSDs or a
          10Gbit network are not available.  </div>
        <div><br>
        </div>
        <div>But even then, when testing via iperf and a plain rsync of
          files between servers, I can easily reach ~700Mbps:  </div>
        <div>[ ID] Interval           Transfer     Bandwidth       Retr
           Cwnd<br>
          [  4]   0.00-1.00   sec  49.9 MBytes   419 Mbits/sec   21  
           132 KBytes<br>
          [  4]   1.00-2.00   sec  80.0 MBytes   671 Mbits/sec    0  
           214 KBytes<br>
          [  4]   2.00-3.00   sec  87.0 MBytes   730 Mbits/sec    3  
           228 KBytes<br>
          [  4]   3.00-4.00   sec  91.6 MBytes   769 Mbits/sec   15  
           215 KBytes<br>
        </div>
        <div><br>
        </div>
        <div>But when rsyncing data from the same server to another node
          with a mounted Gluster volume, I get a measly 50Mbps
          (~7MB/s).  </div>
        <div><br>
        </div>
        <div>All servers have 64GB RAM, memory usage is around 50% and
          CPU usage is below 10%.  </div>
        <div>All bricks are ZFS volumes, with no RAID setup or anything.
          Each brick is a single hard disk formatted as ZFS (JBOD setup).</div>
        <div><br>
        </div>
        <div><br>
        </div>
        <div>My Gluster Vol Info</div>
        <div><br>
        </div>
        <div>gluster vol info<br>
          <br>
          Volume Name: glusterStore<br>
          Type: Distributed-Replicate<br>
          Volume ID: c7ac8094-f379-45fc-8cfd-f2937355e03d<br>
          Status: Started<br>
          Snapshot Count: 0<br>
          Number of Bricks: 3 x (2 + 1) = 9<br>
          Transport-type: tcp<br>
          Bricks:<br>
          Brick1: 62.0.0.1:/zpool1/proxmox<br>
          Brick2: 5.0.0.1:/zpool1/proxmox<br>
          Brick3: 62.0.0.1:/home/glusterArbiter (arbiter)<br>
          Brick4: 62.0.0.1:/zpool2/proxmox<br>
          Brick5: 5.0.0.1:/zpool2/proxmox<br>
          Brick6: 62.0.0.2:/home/glusterArbiter2 (arbiter)<br>
          Brick7: 62.0.0.2:/zpool/proxmox<br>
          Brick8: 5.0.0.1:/zpool3/proxmox<br>
          Brick9: 62.0.0.2:/home/glusterArbiter (arbiter)<br>
          Options Reconfigured:<br>
          performance.readdir-ahead: enable<br>
          cluster.rsync-hash-regex: none<br>
          client.event-threads: 16<br>
          server.event-threads: 16<br>
          network.ping-timeout: 5<br>
          performance.normal-prio-threads: 64<br>
          performance.high-prio-threads: 64<br>
          performance.io-thread-count: 64<br>
          performance.cache-size: 1GB<br>
          performance.read-ahead: off<br>
          performance.io-cache: off<br>
          performance.flush-behind: off<br>
          performance.quick-read: on<br>
          network.frame-timeout: 60<br>
          storage.batch-fsync-delay-usec: 0<br>
          server.allow-insecure: on<br>
          performance.stat-prefetch: off<br>
          cluster.lookup-optimize: on<br>
          performance.write-behind: on<br>
          cluster.granular-entry-heal: on<br>
          storage.fips-mode-rchecksum: on<br>
          transport.address-family: inet<br>
          nfs.disable: on<br>
          performance.client-io-threads: off<br>
        </div>
        <div><br>
        </div>
        <div><br>
        </div>
        <div>Regards</div>
      </div>
    </blockquote>
  </body>
</html>