<html>
  <head>
    <meta content="text/html; charset=utf-8" http-equiv="Content-Type">
  </head>
  <body bgcolor="#FFFFFF" text="#000000">
    <br>
    Hi Pranith,<br>
    <br>
    I presume you are asking for some version of the profile data that
    just shows the dd test (or a repeat of the dd test).  If so, how do
    I extract just that data?<br>
    <br>
    Thanks<br>
    <br>
    Pat<br>
    <br>
    <br>
    <br>
    <div class="moz-cite-prefix">On 05/05/2017 10:58 AM, Pranith Kumar
      Karampuri wrote:<br>
    </div>
    <blockquote
cite="mid:CAOgeEnYKv+v=Dv3y0tC9WVJSfN=dH2U9QmrXqwOOvbTGZD=yWw@mail.gmail.com"
      type="cite">
      <meta http-equiv="Content-Type" content="text/html; charset=utf-8">
      <div dir="ltr">
        <div>
          <div>
            <div>hi Pat,<br>
            </div>
                  Let us concentrate on the performance numbers for
            now; we can look at the permissions issue after that.<br>
            <br>
          </div>
          As per the profile info, only 2.6% of the workload is writes.
          There are too many Lookups.<br>
          <br>
        </div>
        Would it be possible to get the data for just the dd test you
        were doing earlier?<br>
        <br>
      </div>
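      <div>One way to isolate the dd run, based on how the interval
        counters in `gluster volume profile ... info` behave (each `info`
        call resets the interval statistics), is roughly the following
        sketch; the volume name data-volume is a placeholder, so
        substitute your own:<br>
      </div>
      <pre># reset the interval counters just before the test
gluster volume profile data-volume info > /dev/null

# run the dd test
dd if=/dev/zero of=/gdata/zero1 bs=1M count=1000

# capture the stats; the "Interval" sections now cover only the dd run
gluster volume profile data-volume info > dd-profile.txt</pre>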
      <div class="gmail_extra"><br>
        <div class="gmail_quote">On Fri, May 5, 2017 at 8:14 PM, Pat
          Haley <span dir="ltr">&lt;<a moz-do-not-send="true"
              href="mailto:phaley@mit.edu" target="_blank">phaley@mit.edu</a>&gt;</span>
          wrote:<br>
          <blockquote class="gmail_quote" style="margin:0 0 0
            .8ex;border-left:1px #ccc solid;padding-left:1ex">
            <div bgcolor="#FFFFFF" text="#000000"> <br>
              Hi Pranith &amp; Ravi,<br>
              <br>
              A couple of quick questions<br>
              <br>
              We have profile turned on. Are there specific queries we
              should make that would help debug our configuration?  (The
              default profile info was previously sent in <a
                moz-do-not-send="true"
                class="m_-8630651799581052987moz-txt-link-freetext"
href="http://lists.gluster.org/pipermail/gluster-users/2017-May/030840.html"
                target="_blank">http://lists.gluster.org/<wbr>pipermail/gluster-users/2017-<wbr>May/030840.html</a>
              but I'm not sure if that is what you were looking for.)<br>
              <br>
              We also started testing serving gluster over NFS, and
              rediscovered an issue we previously reported (
              <a moz-do-not-send="true"
                class="m_-8630651799581052987moz-txt-link-freetext"
href="http://lists.gluster.org/pipermail/gluster-users/2016-September/028289.html"
                target="_blank">http://lists.gluster.org/<wbr>pipermail/gluster-users/2016-<wbr>September/028289.html</a>
              ): the NFS-mounted volume ignores group write
              permissions.  What specific information would be
              useful in debugging this?<br>
              <br>
              Thanks<span class="HOEnZb"><font color="#888888"><br>
                  <br>
                  Pat</font></span>
              <div>
                <div class="h5"><br>
                  <br>
                  <br>
                  <div class="m_-8630651799581052987moz-cite-prefix">On
                    04/14/2017 03:01 AM, Ravishankar N wrote:<br>
                  </div>
                  <blockquote type="cite">
                    <div class="m_-8630651799581052987moz-cite-prefix">On
                      04/14/2017 12:20 PM, Pranith Kumar Karampuri
                      wrote:<br>
                    </div>
                    <blockquote type="cite">
                      <div dir="ltr"><br>
                        <div class="gmail_extra"><br>
                          <div class="gmail_quote">On Sat, Apr 8, 2017
                            at 10:28 AM, Ravishankar N <span dir="ltr">&lt;<a
                                moz-do-not-send="true"
                                href="mailto:ravishankar@redhat.com"
                                target="_blank">ravishankar@redhat.com</a>&gt;</span>
                            wrote:<br>
                            <blockquote class="gmail_quote"
                              style="margin:0px 0px 0px
                              0.8ex;border-left:1px solid
                              rgb(204,204,204);padding-left:1ex">
                              <div bgcolor="#FFFFFF">
                                <div
                                  class="m_-8630651799581052987gmail-m_1278894059907384689moz-cite-prefix">Hi
                                  Pat,<br>
                                  <br>
                                  I'm assuming you are using gluster
                                  native (fuse mount). If it helps, you
                                  could try mounting it via gluster NFS
                                  (gnfs) and then see if there is an
                                  improvement in speed. Fuse mounts are
                                  slower than gnfs mounts but you get
                                  the benefit of avoiding a single point
                                  of failure. (Unlike fuse mounts, if the
                                  gluster node containing the gnfs
                                  server goes down, all mounts done
                                  using that node will fail.) For fuse
                                  mounts, you could try tweaking the
                                  write-behind xlator settings to see if
                                  it helps. See the
                                  performance.write-behind and
                                  performance.write-behind-window-size
                                  options in `gluster volume set help`.
                                  Of course, even for gnfs mounts, you
                                  can achieve fail-over by using CTDB.<br>
                                </div>
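                                <div><br>
                                  For reference, a minimal sketch of
                                  both suggestions; the server name,
                                  volume name and window size below are
                                  placeholders, so adjust them for your
                                  setup:<br>
                                </div>
                                <pre># gnfs mount from a client (NFSv3 only), assuming the volume's
# built-in NFS server is enabled (nfs.disable off)
mount -t nfs -o vers=3 server1:/data-volume /mnt/gnfs

# write-behind tuning for fuse clients; see `gluster volume set help`
# for the defaults and descriptions
gluster volume set data-volume performance.write-behind on
gluster volume set data-volume performance.write-behind-window-size 4MB</pre>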
                              </div>
                            </blockquote>
                            <div><br>
                            </div>
                            <div>Ravi,<br>
                            </div>
                            <div>      Do you have any data that
                              suggests fuse mounts are slower than gNFS
                              servers? <br>
                            </div>
                          </div>
                        </div>
                      </div>
                    </blockquote>
                    I have heard anecdotal evidence time and again on
                    the ML and IRC, which is why I wanted to compare it
                    with NFS numbers on his setup. <br>
                    <blockquote type="cite">
                      <div dir="ltr">
                        <div class="gmail_extra">
                          <div class="gmail_quote">
                            <div><br>
                            </div>
                            <div>Pat,<br>
                            </div>
                            <div>      I see that I am late to the
                              thread, but do you happen to have "profile
                              info" of the workload?<br>
                              <br>
                            </div>
                            <div>You can follow <a
                                moz-do-not-send="true"
href="https://gluster.readthedocs.io/en/latest/Administrator%20Guide/Monitoring%20Workload/"
                                target="_blank">https://gluster.readthedocs.<wbr>io/en/latest/Administrator%<wbr>20Guide/Monitoring%20Workload/</a>
                              to get the information.<br>
                            </div>
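                            <div>In short, the commands from that guide
                              are roughly as follows (the volume name
                              data-volume is a placeholder):<br>
                            </div>
                            <pre>gluster volume profile data-volume start   # enable profiling on the volume
gluster volume profile data-volume info    # print per-brick FOP stats
gluster volume profile data-volume stop    # disable when done</pre>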
                          </div>
                        </div>
                      </div>
                    </blockquote>
                    Yeah, let's see if the profile info turns up anything
                    interesting.<br>
                    -Ravi<br>
                    <blockquote type="cite">
                      <div dir="ltr">
                        <div class="gmail_extra">
                          <div class="gmail_quote">
                            <div> </div>
                            <blockquote class="gmail_quote"
                              style="margin:0px 0px 0px
                              0.8ex;border-left:1px solid
                              rgb(204,204,204);padding-left:1ex">
                              <div bgcolor="#FFFFFF">
                                <div
                                  class="m_-8630651799581052987gmail-m_1278894059907384689moz-cite-prefix">
                                  <br>
                                  Thanks,<br>
                                  Ravi
                                  <div>
                                    <div
                                      class="m_-8630651799581052987gmail-h5"><br>
                                      <br>
                                      On 04/08/2017 12:07 AM, Pat Haley
                                      wrote:<br>
                                    </div>
                                  </div>
                                </div>
                                <blockquote type="cite">
                                  <div>
                                    <div
                                      class="m_-8630651799581052987gmail-h5">
                                      <br>
                                      Hi,<br>
                                      <br>
                                      We noticed dramatic slowness
                                      when writing to a gluster disk
                                      compared to writing to an NFS
                                      disk, specifically when using dd
                                      (data duplicator) to write a 4.3
                                      GB file of zeros:<br>
                                      <ul>
                                        <li>on NFS disk (/home): 9.5
                                          Gb/s</li>
                                        <li>on gluster disk (/gdata):
                                          508 Mb/s<br>
                                        </li>
                                      </ul>
                                      The gluster disk is 2 bricks joined
                                      together, no replication or
                                      anything else. The hardware is
                                      (literally) the same:<br>
                                      <ul>
                                        <li>one server with 70 hard
                                          disks and a hardware RAID
                                          card.</li>
                                        <li>4 disks in a RAID-6 group
                                          (the NFS disk)</li>
                                        <li>32 disks in a RAID-6 group
                                          (the max allowed by the card,
                                          /mnt/brick1)</li>
                                        <li>32 disks in another RAID-6
                                          group (/mnt/brick2)</li>
                                        <li>2 hot spares<br>
                                        </li>
                                      </ul>
                                      <p>Some additional information and
                                        more test results (after
                                        changing the log level):<br>
                                      </p>
                                      <p><span>glusterfs 3.7.11 built on
                                          Apr 27 2016 14:09:22</span><br>
                                        <span>CentOS release 6.8 (Final)</span><br>
                                        RAID bus controller: LSI Logic /
                                        Symbios Logic MegaRAID SAS-3
                                        3108 [Invader] (rev 02)<br>
                                        <br>
                                        <br>
                                        <br>
                                        <b>Create the file to /gdata
                                          (gluster)</b><br>
                                        [root@mseas-data2 gdata]# dd
                                        if=/dev/zero of=/gdata/zero1
                                        bs=1M count=1000<br>
                                        1000+0 records in<br>
                                        1000+0 records out<br>
                                        1048576000 bytes (1.0 GB)
                                        copied, 1.91876 s, <b>546 MB/s</b><br>
                                        <br>
                                        <b>Create the file to /home
                                          (ext4)</b><br>
                                        [root@mseas-data2 gdata]# dd
                                        if=/dev/zero of=/home/zero1
                                        bs=1M count=1000<br>
                                        1000+0 records in<br>
                                        1000+0 records out<br>
                                        1048576000 bytes (1.0 GB)
                                        copied, 0.686021 s, <b>1.5 GB/s
                                          - </b>3 times as fast<b><br>
                                          <br>
                                          <br>
                                          Copy from /gdata to /gdata
                                          (gluster to gluster)<br>
                                        </b>[root@mseas-data2 gdata]# dd
                                        if=/gdata/zero1 of=/gdata/zero2<br>
                                        2048000+0 records in<br>
                                        2048000+0 records out<br>
                                        1048576000 bytes (1.0 GB)
                                        copied, 101.052 s, <b>10.4 MB/s</b>
                                        - realllyyy slooowww<br>
                                        <br>
                                        <br>
                                        <b>Copy from /gdata to /gdata,
                                          2nd time (gluster to
                                          gluster)</b><br>
                                        [root@mseas-data2 gdata]# dd
                                        if=/gdata/zero1 of=/gdata/zero2<br>
                                        2048000+0 records in<br>
                                        2048000+0 records out<br>
                                        1048576000 bytes (1.0 GB)
                                        copied, 92.4904 s, <b>11.3 MB/s</b>
                                        <span>- realllyyy slooowww</span>
                                        again<br>
                                        <br>
                                        <br>
                                        <br>
                                        <b>Copy from /home to /home
                                          (ext4 to ext4)</b><br>
                                        [root@mseas-data2 gdata]# dd
                                        if=/home/zero1 of=/home/zero2<br>
                                        2048000+0 records in<br>
                                        2048000+0 records out<br>
                                        1048576000 bytes (1.0 GB)
                                        copied, 3.53263 s, <b>297 MB/s
                                        </b>- 30 times as fast<br>
                                        <br>
                                        <br>
                                        <b>Copy from /home to /home
                                          (ext4 to ext4)</b><br>
                                        [root@mseas-data2 gdata]# dd
                                        if=/home/zero1 of=/home/zero3<br>
                                        2048000+0 records in<br>
                                        2048000+0 records out<br>
                                        1048576000 bytes (1.0 GB)
                                        copied, 4.1737 s, <b>251 MB/s</b>
                                        <span>- 30 times as fast<br>
                                          <br>
                                          <br>
                                          As a test, can we copy data
                                          directly to the xfs mountpoint
                                          (/mnt/brick1) and bypass
                                          gluster?<br>
                                          <br>
                                          <br>
                                          Any help you could give us
                                          would be appreciated.<br>
                                          <br>
                                        </span>Thanks<br>
                                      </p>
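                                      <p>For that test, a possible
                                        sketch (the scratch filename is
                                        a placeholder; writing into a
                                        brick behind gluster's back is
                                        normally discouraged, so the
                                        file would be removed right
                                        away):<br>
                                      </p>
                                      <pre># raw throughput of the brick filesystem, bypassing gluster
dd if=/dev/zero of=/mnt/brick1/dd-scratch.tmp bs=1M count=1000
rm -f /mnt/brick1/dd-scratch.tmp</pre>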
                                      <pre class="m_-8630651799581052987gmail-m_1278894059907384689moz-signature" cols="72">-- 

-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=<wbr>-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=<wbr>-=-=-
Pat Haley                          Email:  <a moz-do-not-send="true" class="m_-8630651799581052987gmail-m_1278894059907384689moz-txt-link-abbreviated" href="mailto:phaley@mit.edu" target="_blank">phaley@mit.edu</a>
Center for Ocean Engineering       Phone:  (617) 253-6824
Dept. of Mechanical Engineering    Fax:    (617) 253-8125
MIT, Room 5-213                    <a moz-do-not-send="true" class="m_-8630651799581052987gmail-m_1278894059907384689moz-txt-link-freetext" href="http://web.mit.edu/phaley/www/" target="_blank">http://web.mit.edu/phaley/www/</a>
77 Massachusetts Avenue
Cambridge, MA  02139-4301
</pre>
      

      

      </div></div>
    </blockquote>
    <p>

    </p>
  </div>


</blockquote></div>


-- 
<div class="m_-8630651799581052987gmail_signature"><div dir="ltr">Pranith
</div></div>
</div></div>



</blockquote><p>
</p>


</blockquote>
<pre class="m_-8630651799581052987moz-signature" cols="72">-- 

-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=<wbr>-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=<wbr>-=-=-
Pat Haley                          Email:  <a moz-do-not-send="true" class="m_-8630651799581052987moz-txt-link-abbreviated" href="mailto:phaley@mit.edu" target="_blank">phaley@mit.edu</a>
Center for Ocean Engineering       Phone:  (617) 253-6824
Dept. of Mechanical Engineering    Fax:    (617) 253-8125
MIT, Room 5-213                    <a moz-do-not-send="true" class="m_-8630651799581052987moz-txt-link-freetext" href="http://web.mit.edu/phaley/www/" target="_blank">http://web.mit.edu/phaley/www/</a>
77 Massachusetts Avenue
Cambridge, MA  02139-4301
</pre></div></div></div></blockquote></div>


-- 
<div class="gmail_signature" data-smartmail="gmail_signature"><div dir="ltr">Pranith
</div></div>
</div>



</blockquote>
<pre class="moz-signature" cols="72">-- 

-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-
Pat Haley                          Email:  <a class="moz-txt-link-abbreviated" href="mailto:phaley@mit.edu">phaley@mit.edu</a>
Center for Ocean Engineering       Phone:  (617) 253-6824
Dept. of Mechanical Engineering    Fax:    (617) 253-8125
MIT, Room 5-213                    <a class="moz-txt-link-freetext" href="http://web.mit.edu/phaley/www/">http://web.mit.edu/phaley/www/</a>
77 Massachusetts Avenue
Cambridge, MA  02139-4301
</pre></body></html>