On 04/13/17 23:50, Pranith Kumar Karampuri wrote:
> On Sat, Apr 8, 2017 at 10:28 AM, Ravishankar N <ravishankar@redhat.com> wrote:
>> Hi Pat,
>>
>> I'm assuming you are using the gluster native (FUSE) mount. If it
>> helps, you could try mounting the volume via gluster NFS (gNFS) and
>> see if there is an improvement in speed. FUSE mounts are slower than
>> gNFS mounts, but they give you the benefit of avoiding a single point
>> of failure (unlike with FUSE mounts, if the gluster node hosting the
>> gNFS server goes down, all mounts made through that node fail). For
>> FUSE mounts, you could try tweaking the write-behind translator
>> settings to see if that helps; see the performance.write-behind and
>> performance.write-behind-window-size options in `gluster volume set
>> help`. Of course, even with gNFS mounts you can achieve fail-over by
>> using CTDB.
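
As an aside on Ravi's suggestion, a minimal sketch of the write-behind
tuning, assuming the volume is named "gdata" (the 4MB window size is
illustrative, not a recommendation):

    # check current defaults and allowed values
    gluster volume set help | grep -A 4 write-behind

    # enable write-behind and enlarge its window (volume name assumed)
    gluster volume set gdata performance.write-behind on
    gluster volume set gdata performance.write-behind-window-size 4MB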

>
> Ravi,
>       Do you have any data that suggests fuse mounts are slower than
> gNFS servers?
>
> Pat,
>       I see that I am late to the thread, but do you happen to have
> "profile info" of the workload?

I have done actual testing. For directory ops, NFS is faster due to the
default cache settings in the kernel. For raw throughput, or ops on an
open file, fuse is faster.
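
To illustrate the caching difference: the kernel NFS client caches file
attributes and directory entries for several seconds by default, while
a gluster FUSE mount defaults to 1-second attribute and entry timeouts.
A sketch of raising those timeouts on the FUSE side (server and volume
names assumed; longer timeouts trade cache coherence for speed):

    mount -t glusterfs -o attribute-timeout=30,entry-timeout=30 \
        mseas-data2:/gdata /gdata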

I have yet to test this, but I expect that with the newer caching
features in 3.8+, even directory-op performance should be similar to
NFS, and more accurate.
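
For reference, those metadata-caching features are enabled with
settings along these lines (option names per the upstream md-cache
docs; exact availability depends on the release, and the volume name is
assumed):

    gluster volume set gdata features.cache-invalidation on
    gluster volume set gdata features.cache-invalidation-timeout 600
    gluster volume set gdata performance.stat-prefetch on
    gluster volume set gdata performance.cache-invalidation on
    gluster volume set gdata performance.md-cache-timeout 600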

> You can follow
> https://gluster.readthedocs.io/en/latest/Administrator%20Guide/Monitoring%20Workload/
> to get the information.
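
In short, the profiling workflow from that guide looks like this
(volume name assumed):

    gluster volume profile gdata start
    # ... run the slow workload ...
    gluster volume profile gdata info
    gluster volume profile gdata stop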

>> Thanks,
>> Ravi
>>
>> On 04/08/2017 12:07 AM, Pat Haley wrote:
>>> Hi,
>>>
>>> We noticed a dramatic slowness when writing to a gluster disk
>>> compared to writing to an NFS disk. Specifically, when using dd
>>> (data duplicator) to write a 4.3 GB file of zeros:
>>>
>>>   * on the NFS disk (/home): 9.5 Gb/s
>>>   * on the gluster disk (/gdata): 508 Mb/s
>>>
>>> The gluster disk is 2 bricks joined together, no replication or
>>> anything else. The hardware is (literally) the same:
>>>
>>>   * one server with 70 hard disks and a hardware RAID card
>>>   * 4 disks in a RAID-6 group (the NFS disk)
>>>   * 32 disks in a RAID-6 group (the max allowed by the card,
>>>     /mnt/brick1)
>>>   * 32 disks in another RAID-6 group (/mnt/brick2)
>>>   * 2 hot spares
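
For context, a plain two-brick distribute volume like the one described
would have been created along these lines (server and volume names
assumed from the prompt and paths above):

    gluster volume create gdata mseas-data2:/mnt/brick1 mseas-data2:/mnt/brick2
    gluster volume start gdata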
>>>
>>> Some additional information and more test results (after changing
>>> the log level):
>>>
>>> glusterfs 3.7.11 built on Apr 27 2016 14:09:22
>>> CentOS release 6.8 (Final)
>>> RAID bus controller: LSI Logic / Symbios Logic MegaRAID SAS-3 3108
>>> [Invader] (rev 02)
>>>
>>> Create the file on /gdata (gluster):
>>>
>>> [root@mseas-data2 gdata]# dd if=/dev/zero of=/gdata/zero1 bs=1M count=1000
>>> 1000+0 records in
>>> 1000+0 records out
>>> 1048576000 bytes (1.0 GB) copied, 1.91876 s, 546 MB/s
>>>
>>> Create the file on /home (ext4):
>>>
>>> [root@mseas-data2 gdata]# dd if=/dev/zero of=/home/zero1 bs=1M count=1000
>>> 1000+0 records in
>>> 1000+0 records out
>>> 1048576000 bytes (1.0 GB) copied, 0.686021 s, 1.5 GB/s -- about 3
>>> times as fast
>>>
>>> Copy from /gdata to /gdata (gluster to gluster):
>>>
>>> [root@mseas-data2 gdata]# dd if=/gdata/zero1 of=/gdata/zero2
>>> 2048000+0 records in
>>> 2048000+0 records out
>>> 1048576000 bytes (1.0 GB) copied, 101.052 s, 10.4 MB/s -- realllyyy
>>> slooowww
>>>
>>> Copy from /gdata to /gdata, 2nd time (gluster to gluster):
>>>
>>> [root@mseas-data2 gdata]# dd if=/gdata/zero1 of=/gdata/zero2
>>> 2048000+0 records in
>>> 2048000+0 records out
>>> 1048576000 bytes (1.0 GB) copied, 92.4904 s, 11.3 MB/s -- realllyyy
>>> slooowww again
>>>
>>> Copy from /home to /home (ext4 to ext4):
>>>
>>> [root@mseas-data2 gdata]# dd if=/home/zero1 of=/home/zero2
>>> 2048000+0 records in
>>> 2048000+0 records out
>>> 1048576000 bytes (1.0 GB) copied, 3.53263 s, 297 MB/s -- about 30
>>> times as fast
>>>
>>> Copy from /home to /home (ext4 to ext4), 2nd time:
>>>
>>> [root@mseas-data2 gdata]# dd if=/home/zero1 of=/home/zero3
>>> 2048000+0 records in
>>> 2048000+0 records out
>>> 1048576000 bytes (1.0 GB) copied, 4.1737 s, 251 MB/s -- about 30
>>> times as fast
>>>
>>> As a test, can we copy data directly to the xfs mountpoint
>>> (/mnt/brick1) and bypass gluster?
>>>
>>> Any help you could give us would be appreciated.
>>>
>>> Thanks
>>>
>>> --
>>> -=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-
>>> Pat Haley                          Email:  phaley@mit.edu
>>> Center for Ocean Engineering       Phone:  (617) 253-6824
>>> Dept. of Mechanical Engineering    Fax:    (617) 253-8125
>>> MIT, Room 5-213                    http://web.mit.edu/phaley/www/
>>> 77 Massachusetts Avenue
>>> Cambridge, MA  02139-4301
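
Two quick observations on Pat's numbers, offered as hedged suggestions
rather than a diagnosis. First, the record counts (2048000+0) show that
the gluster-to-gluster copy ran with dd's default 512-byte block size,
so every tiny write is a separate round trip through the FUSE client;
it would be worth re-running with larger blocks:

    # same copy test, but with 1 MiB blocks instead of 512-byte blocks
    dd if=/gdata/zero1 of=/gdata/zero2 bs=1M

Second, on writing directly to the brick: that is a reasonable way to
measure the raw throughput of the underlying RAID, but the test file
should be removed immediately, since gluster does not track files
created directly on a brick:

    # raw brick throughput, bypassing gluster (path assumed)
    dd if=/dev/zero of=/mnt/brick1/ddtest.img bs=1M count=1000 oflag=direct
    rm /mnt/brick1/ddtest.img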
      



>
> --
> Pranith


_______________________________________________
Gluster-users mailing list
Gluster-users@gluster.org
http://lists.gluster.org/mailman/listinfo/gluster-users