<!DOCTYPE html>
<html>
  <head>
    <meta http-equiv="Content-Type" content="text/html; charset=UTF-8">
  </head>
  <body>
    <font face="Futura Bk BT">Disregard my first question: you have SAS
      12 Gbps SSDs. Sorry!</font><br>
    <br>
    <div class="moz-cite-prefix">On 12/12/23 at 19:52, Ramon Selga
      wrote:<br>
    </div>
    <blockquote type="cite"
      cite="mid:b9a5aa2e-35a8-4a0c-a577-bd72e6254a43@gmail.com">
      <meta http-equiv="Content-Type" content="text/html; charset=UTF-8">
      <font face="Futura Bk BT">May I ask which kind of disks you have
        in this setup? Rotational, SAS/SATA SSD, NVMe?<br>
        <br>
        Is there a RAID controller with write-back caching?<br>
        <br>
        It seems to me your fio test on the local brick gives an unclear
        result due to some caching.<br>
        <br>
        Try something like this (consider increasing the test file size
        depending on how much cache memory you have):<br>
        <br>
        fio --size=16G --name=test --filename=/gluster/data/brick/wow
        --bs=1M --nrfiles=1 --direct=1 --sync=0 --randrepeat=0
        --rw=write --refill_buffers --end_fsync=1 --iodepth=200
        --ioengine=libaio<br>
        <br>
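        (The --direct=1 flag bypasses the page cache and --end_fsync=1
        forces the data to disk before the result is reported, so the
        number reflects the brick backend rather than RAM.)<br>
        <br>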
      </font>Also remember that a replica 3 arbiter 1 volume writes
      synchronously to two data bricks, halving the throughput of your
      network backend.<br>
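      As a rough back-of-the-envelope estimate, assuming the 10 Gbps
      link from your iperf test: 10 Gbit/s ≈ 1.25 GB/s raw, and the
      FUSE client sends every write twice (once per data brick), so a
      single client tops out around 1.25 / 2 ≈ 0.6 GB/s before protocol
      overhead.<br>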
      <br>
      Try a similar fio run on the gluster mount, but I rarely see more
      than 300 MB/s writing sequentially through a single FUSE mount,
      even with an NVMe backend. On the other hand, with 4 to 6 clients
      you can easily reach 1.5 GB/s of aggregate throughput.<br>
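      For example, the same kind of run pointed at the FUSE mount (a
      sketch assuming the /data mount point and test file from your
      original message):<br>
      <br>
      fio --size=16G --name=test --filename=/data/wow --bs=1M
      --nrfiles=1 --direct=1 --sync=0 --randrepeat=0 --rw=write
      --refill_buffers --end_fsync=1 --iodepth=200 --ioengine=libaio<br>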
      <br>
      To start, I think it is better to try with the default parameters
      for your replica volume.<br>
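      If you want to go back to the defaults, something like this should
      do it (a sketch; it clears all reconfigured options on the
      volume):<br>
      <br>
      sudo gluster volume reset data<br>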
      <br>
      Best regards!<br>
      <br>
      Ramon<br>
      <br>
       <br>
      <div class="moz-cite-prefix">On 12/12/23 at 19:10, Danny
        wrote:<br>
      </div>
      <blockquote type="cite"
cite="mid:CAHbwLg4fqdKZnJeHXpnWw05s0E288GG5BqSeUmDE9RsTpV=tFg@mail.gmail.com">
        <meta http-equiv="content-type"
          content="text/html; charset=UTF-8">
        <div dir="ltr">Sorry, I noticed that too after I posted, so I
          instantly upgraded to 10. Issue remains. <br>
        </div>
        <br>
        <div class="gmail_quote">
          <div dir="ltr" class="gmail_attr">On Tue, Dec 12, 2023 at
            1:09 PM Gilberto Ferreira <<a
              href="mailto:gilberto.nunes32@gmail.com"
              moz-do-not-send="true" class="moz-txt-link-freetext">gilberto.nunes32@gmail.com</a>>
            wrote:<br>
          </div>
          <blockquote class="gmail_quote"
style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex">
            <div dir="ltr">I strongly suggest you update to version 10
              or higher. <br>
              It comes with significant performance
              improvements.<br clear="all">
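              <div><br>
                If it helps, a sketch of the upgrade path on CentOS
                Stream 8 via the CentOS Storage SIG (assuming the
                centos-release-gluster10 release package; adjust to your
                release):<br>
                sudo dnf install centos-release-gluster10<br>
                sudo dnf upgrade 'glusterfs*'<br>
              </div>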
              <div>
                <div dir="ltr" class="gmail_signature">
                  <div>---</div>
                  <div>Gilberto Nunes Ferreira</div>
                  <div><span style="font-size:12.8px">(47) 99676-7530 -
                      Whatsapp / Telegram</span></div>
                </div>
              </div>
              <br>
            </div>
            <br>
            <div class="gmail_quote">
              <div dir="ltr" class="gmail_attr">On Tue, Dec 12, 2023 at
                1:03 PM, Danny <<a
                  href="mailto:dbray925%2Bgluster@gmail.com"
                  target="_blank" moz-do-not-send="true">dbray925+gluster@gmail.com</a>>
                wrote:<br>
              </div>
              <blockquote class="gmail_quote"
style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex">
                <div dir="ltr"> MTU is already 9000, and as you can see
                  from the IPERF results, I've got a nice, fast
                  connection between the nodes. </div>
                <br>
                <div class="gmail_quote">
                  <div dir="ltr" class="gmail_attr">On Tue, Dec 12, 2023
                    at 9:49 AM Strahil Nikolov <<a
                      href="mailto:hunter86_bg@yahoo.com"
                      target="_blank" moz-do-not-send="true"
                      class="moz-txt-link-freetext">hunter86_bg@yahoo.com</a>>
                    wrote:<br>
                  </div>
                  <blockquote class="gmail_quote"
style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex">
                    <div> Hi,
                      <div><br>
                      </div>
                      <div>Let’s try the simple things:</div>
                      <div><br>
                      </div>
                      <div>Check if you can use MTU 9000 and, if
                        possible, set it on the bond slaves and the bond
                        devices:</div>
                      <div><span> ping GLUSTER_PEER </span><span>-c 10
                          -M do -s 8972</span></div>
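                      <div><br>
                        A minimal sketch of setting it, assuming
                        NetworkManager and a bond named bond0 with
                        slaves eno1/eno2 (hypothetical names, adjust to
                        your interfaces):<br>
                        nmcli connection modify bond0 802-3-ethernet.mtu 9000<br>
                        nmcli connection modify bond-slave-eno1 802-3-ethernet.mtu 9000<br>
                        nmcli connection modify bond-slave-eno2 802-3-ethernet.mtu 9000<br>
                        nmcli connection up bond0<br>
                      </div>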
                      <div><br>
                        Then try to follow the recommendations from <a
href="https://access.redhat.com/documentation/en-us/red_hat_gluster_storage/3.5/html/administration_guide/chap-configuring_red_hat_storage_for_enhancing_performance"
                          target="_blank" moz-do-not-send="true"
                          class="moz-txt-link-freetext">https://access.redhat.com/documentation/en-us/red_hat_gluster_storage/3.5/html/administration_guide/chap-configuring_red_hat_storage_for_enhancing_performance</a> 
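                        <div><br>
                          One simple system-level knob to check
                          alongside that guide is the tuned profile (a
                          sketch assuming the stock tuned package; Red
                          Hat Gluster Storage ships its own
                          gluster-specific profiles):<br>
                          tuned-adm active<br>
                          tuned-adm profile throughput-performance<br>
                        </div>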
                        <div><br>
                        </div>
                        <div><br>
                        </div>
                        Best Regards,</div>
                      <div>Strahil Nikolov <br>
                        <br>
                        <p
style="font-size:15px;color:rgb(113,95,250);padding-top:15px;margin-top:0px">On
                          Monday, December 11, 2023, 3:32 PM, Danny <<a
                            href="mailto:dbray925%2Bgluster@gmail.com"
                            target="_blank" moz-do-not-send="true">dbray925+gluster@gmail.com</a>>
                          wrote:</p>
                        <blockquote>
                          <div
id="m_-3283052153129726045m_98609290250261231m_810164770777586547yiv1869372156">
                            <div dir="ltr">
                              <div>Hello list, I'm hoping someone can
                                let me know what setting I missed.</div>
                              <div><br>
                              </div>
                              <div>Hardware:</div>
                              <div>Dell R650 servers, Dual 24 Core Xeon
                                2.8 GHz, 1 TB RAM<br>
                              </div>
                              <div>8x SSDs, <span>Negotiated Speed</span>
                                12 Gbps</div>
                              <div>PERC H755 Controller - RAID 6 <br>
                              </div>
                              <div>Created virtual "data" disk from the
                                above 8 SSD drives, for a ~20 TB
                                /dev/sdb<br>
                              </div>
                              <div><br>
                              </div>
                              <div>OS:</div>
                              <div>CentOS Stream</div>
                              <div>kernel-4.18.0-526.el8.x86_64</div>
                              <div>glusterfs-7.9-1.el8.x86_64</div>
                              <div><br>
                              </div>
                              <div>IPERF Test between nodes:<br>
                                [ ID] Interval           Transfer    
                                Bitrate         Retr<br>
                                [  5]   0.00-10.00  sec  11.5 GBytes
                                 9.90 Gbits/sec    0             sender<br>
                                [  5]   0.00-10.04  sec  11.5 GBytes
                                 9.86 Gbits/sec                
                                 receiver<br>
                              </div>
                              <div><br>
                              </div>
                              <div>All good there. ~10 Gbps, as
                                expected.<br>
                              </div>
                              <div><br>
                              </div>
                              <div>LVM Install:</div>
                              <div>export DISK="/dev/sdb"<br>
                                sudo parted --script $DISK "mklabel gpt"<br>
                                sudo parted --script $DISK "mkpart
                                primary 0% 100%"<br>
                                sudo parted --script $DISK "set 1 lvm
                                on"</div>
                              <div>sudo pvcreate --dataalignment 128K
                                /dev/sdb1<br>
                                sudo vgcreate --physicalextentsize 128K
                                gfs_vg /dev/sdb1<br>
                                sudo lvcreate -L 16G -n gfs_pool_meta
                                gfs_vg<br>
                                sudo lvcreate -l 95%FREE -n gfs_pool
                                gfs_vg<br>
                                sudo lvconvert --chunksize 1280K
                                --thinpool gfs_vg/gfs_pool
                                --poolmetadata gfs_vg/gfs_pool_meta<br>
                                sudo lvchange --zero n gfs_vg/gfs_pool<br>
                                sudo lvcreate -V 19.5TiB --thinpool
                                gfs_vg/gfs_pool -n gfs_lv<br>
                                sudo mkfs.xfs -f -i size=512 -n
                                size=8192 -d su=128k,sw=10
                                /dev/mapper/gfs_vg-gfs_lv<br>
                                sudo vim /etc/fstab</div>
                              <div>/dev/mapper/gfs_vg-gfs_lv  
                                /gluster/data/brick   xfs      
                                rw,inode64,noatime,nouuid 0 0</div>
                              <div><br>
                              </div>
                              <div>sudo systemctl daemon-reload
                                && sudo mount -a<br>
                                fio --name=test
                                --filename=/gluster/data/brick/wow
                                --size=1G --readwrite=write<br>
                              </div>
                              <div><br>
                              </div>
                              <div>Run status group 0 (all jobs):<br>
                                  WRITE: bw=2081MiB/s (2182MB/s),
                                2081MiB/s-2081MiB/s (2182MB/s-2182MB/s),
                                io=1024MiB (1074MB), run=492-492msec<br>
                              </div>
                              <div><br>
                              </div>
                              <div>All good there. 2182MB/s =~ 17.5
                                Gbps. Nice!<br>
                              </div>
                              <div><br>
                              </div>
                              <div><br>
                              </div>
                              <div>Gluster install:</div>
                              <div>export NODE1='10.54.95.123'<br>
                                export NODE2='10.54.95.124'<br>
                                export NODE3='10.54.95.125'<br>
                                sudo gluster peer probe $NODE2<br>
                                sudo gluster peer probe $NODE3<br>
                                sudo gluster volume create data replica
                                3 arbiter 1 $NODE1:/gluster/data/brick
                                $NODE2:/gluster/data/brick
                                $NODE3:/gluster/data/brick force<br>
                                sudo gluster volume set data
                                network.ping-timeout 5<br>
                                sudo gluster volume set data
                                performance.client-io-threads on<br>
                                sudo gluster volume set data group
                                metadata-cache<br>
                                sudo gluster volume start data<br>
                                sudo gluster volume info all<br>
                              </div>
                              <div><br>
                                Volume Name: data<br>
                                Type: Replicate<br>
                                Volume ID:
                                b52b5212-82c8-4b1a-8db3-52468bc0226e<br>
                                Status: Started<br>
                                Snapshot Count: 0<br>
                                Number of Bricks: 1 x (2 + 1) = 3<br>
                                Transport-type: tcp<br>
                                Bricks:<br>
                                Brick1: 10.54.95.123:/gluster/data/brick<br>
                                Brick2: 10.54.95.124:/gluster/data/brick<br>
                                Brick3: 10.54.95.125:/gluster/data/brick
                                (arbiter)<br>
                                Options Reconfigured:<br>
                                network.inode-lru-limit: 200000<br>
                                performance.md-cache-timeout: 600<br>
                                performance.cache-invalidation: on<br>
                                performance.stat-prefetch: on<br>
                                features.cache-invalidation-timeout: 600<br>
                                features.cache-invalidation: on<br>
                                network.ping-timeout: 5<br>
                                transport.address-family: inet<br>
                                storage.fips-mode-rchecksum: on<br>
                                nfs.disable: on<br>
                                performance.client-io-threads: on</div>
                              <div><br>
                              </div>
                              <div>sudo vim /etc/fstab<br>
                              </div>
                              <div>localhost:/data             /data    
                                            glusterfs defaults,_netdev  
                                   0 0</div>
                              <div><br>
                              </div>
                              <div>sudo systemctl daemon-reload
                                && sudo mount -a</div>
                              <div>fio --name=test --filename=/data/wow
                                --size=1G --readwrite=write</div>
                              <div><br>
                              </div>
                              <div>Run status group 0 (all jobs):<br>
                                  WRITE: bw=109MiB/s (115MB/s),
                                109MiB/s-109MiB/s (115MB/s-115MB/s),
                                io=1024MiB (1074MB), run=9366-9366msec</div>
                              <div><br>
                              </div>
                              <div>Oh no, what's wrong? From 2182MB/s
                                down to only 115MB/s? What am I missing?
                                I'm not expecting the above ~17 Gbps,
                                but I'm thinking it should at least be
                                close(r) to ~10 Gbps. <br>
                              </div>
                              <div><br>
                              </div>
                              <div>Any suggestions?</div>
                            </div>
                          </div>
                        </blockquote>
                      </div>
                    </div>
                  </blockquote>
                </div>
              </blockquote>
            </div>
          </blockquote>
        </div>
        <br>
        <fieldset class="moz-mime-attachment-header"></fieldset>
        <pre class="moz-quote-pre" wrap="">________



Community Meeting Calendar:

Schedule -
Every 2nd and 4th Tuesday at 14:30 IST / 09:00 UTC
Bridge: <a class="moz-txt-link-freetext"
        href="https://meet.google.com/cpu-eiue-hvk"
        moz-do-not-send="true">https://meet.google.com/cpu-eiue-hvk</a>
Gluster-users mailing list
<a class="moz-txt-link-abbreviated moz-txt-link-freetext"
        href="mailto:Gluster-users@gluster.org" moz-do-not-send="true">Gluster-users@gluster.org</a>
<a class="moz-txt-link-freetext"
        href="https://lists.gluster.org/mailman/listinfo/gluster-users"
        moz-do-not-send="true">https://lists.gluster.org/mailman/listinfo/gluster-users</a>
</pre>
      </blockquote>
      <br>
    </blockquote>
    <br>
  </body>
</html>