<html>
  <head>
    <meta http-equiv="Content-Type" content="text/html;
      charset=windows-1252">
  </head>
  <body text="#000000" bgcolor="#FFFFFF">
    <div class="moz-cite-prefix">Below are the three fio commands
      used to run each benchmark test: sequential write, random 4k
      read, and random 4k write.<br>
      <br>
      <font size="-1"><tt># fio --name=writefile --size=10G
          --filesize=10G --filename=fio_file --bs=1M --nrfiles=1
          --direct=1 --sync=0 --randrepeat=0 --rw=write --refill_buffers
          --end_fsync=1 --iodepth=200 --ioengine=libaio <br>
          <br>
          # fio --time_based --name=benchmark --size=10G --runtime=30
          --filename=fio_file --ioengine=libaio --randrepeat=0
          --iodepth=128 --direct=1 --invalidate=1 --verify=0
          --verify_fatal=0 --numjobs=4 --rw=randread --blocksize=4k
          --group_reporting<br>
          <br>
          # fio --time_based --name=benchmark --size=10G --runtime=30
          --filename=fio_file --ioengine=libaio --randrepeat=0
          --iodepth=128 --direct=1 --invalidate=1 --verify=0
          --verify_fatal=0 --numjobs=4 --rw=randwrite --blocksize=4k
          --group_reporting<br>
        </tt></font><br>
      And here is a timed extraction of the kernel source, first run:<br>
      <font size="-1"><tt><br>
        </tt><tt># time tar xf linux-4.13.11.tar.xz</tt><tt><br>
        </tt><tt><br>
        </tt><tt>real    0m8.180s</tt><tt><br>
        </tt><tt>
          user    0m5.932s</tt><tt><br>
        </tt><tt>
          sys     0m2.924s</tt></font><br>
      <br>
      Second run, after deleting the first extraction:<br>
      <br>
      <tt><font size="-1"># rm -rf linux-4.13.11<br>
          # time tar xf linux-4.13.11.tar.xz<br>
          <br>
          real    0m6.454s<br>
          user    0m6.012s<br>
          sys     0m2.440s</font><br>
      </tt><br>
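A quick arithmetic check on the two timings above (values copied from the quoted `time` output; the second run is presumably faster mainly because the tarball is already in the page cache):

```shell
# Percent reduction in wall-clock time between the two runs above
# (8.180s for the first extraction vs 6.454s for the second).
awk 'BEGIN {
  first = 8.180; second = 6.454
  printf "real time reduced by %.1f%%\n", (first - second) / first * 100
}'
# prints: real time reduced by 21.1%
```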
      <br>
      On 03/11/17 at 09:33, Gandalf Corvotempesta wrote:<br>
    </div>
    <blockquote type="cite"
cite="mid:CAJH6TXggHKh9Kw3BiwfN_mcOyu0pp_DZiT0sPsQ9-_guh-Ynaw@mail.gmail.com">
      <div dir="auto">Could you please share fio command line used for
        this test?
        <div dir="auto">Additionally, can you tell me the time needed to
          extract the kernel source?</div>
      </div>
      <div class="gmail_extra"><br>
        <div class="gmail_quote">On 2 Nov 2017 at 11:24 PM, "Ramon Selga"
          &lt;<a href="mailto:ramon.selga@gmail.com"
            moz-do-not-send="true">ramon.selga@gmail.com</a>&gt;
          wrote:<br type="attribution">
          <blockquote class="gmail_quote" style="margin:0 0 0
            .8ex;border-left:1px #ccc solid;padding-left:1ex">
            <div text="#000000" bgcolor="#FFFFFF">
              <div class="m_5627811959350600145moz-cite-prefix">Hi,<br>
                <br>
                Just for your reference, we got similar values in a
                customer setup with three nodes, each with a single
                Xeon and 4x8TB HDDs, connected over a dual 10GbE
                backbone.<br>
                <br>
                We ran a simple benchmark with the fio tool on a 1TiB
                virtio virtual disk, formatted directly with XFS (no
                partitions, no LVM), inside a VM (Debian stretch, dual
                core, 4GB RAM) deployed on a gluster volume (disperse
                3, redundancy 1, distributed 2, sharding enabled).<br>
                <br>
                We ran a sequential write test (a 10GB file in 1024k
                blocks), a random read test with 4k blocks, and a
                random write test, also with 4k blocks, several times,
                with results very similar to the following:<br>
                <br>
                <font size="-1"><tt>writefile: (g=0): rw=write,
                    bs=1M-1M/1M-1M/1M-1M, ioengine=libaio, iodepth=200</tt><tt><br>
                  </tt><tt>fio-2.16</tt><tt><br>
                  </tt><tt>Starting 1 process</tt><tt><br>
                  </tt><tt><br>
                  </tt><tt>writefile: (groupid=0, jobs=1): err= 0:
                    pid=11515: Thu Nov  2 16:50:05 2017</tt><tt><br>
                  </tt><tt>  write: io=10240MB, bw=473868KB/s, iops=462,
                    runt= 22128msec</tt><tt><br>
                  </tt><tt>    slat (usec): min=20, max=98830,
                    avg=1972.11, stdev=6612.81</tt><tt><br>
                  </tt><tt>    clat (msec): min=150, max=2979,
                    avg=428.49, stdev=189.96</tt><tt><br>
                  </tt><tt>     lat (msec): min=151, max=2979,
                    avg=430.47, stdev=189.90</tt><tt><br>
                  </tt><tt>    clat percentiles (msec):</tt><tt><br>
                  </tt><tt>     |  1.00th=[  204],  5.00th=[  249],
                    10.00th=[  273], 20.00th=[  293],</tt><tt><br>
                  </tt><tt>     | 30.00th=[  306], 40.00th=[  318],
                    50.00th=[  351], 60.00th=[  502],</tt><tt><br>
                  </tt><tt>     | 70.00th=[  545], 80.00th=[  578],
                    90.00th=[  603], 95.00th=[  627],</tt><tt><br>
                  </tt><tt>     | 99.00th=[  717], 99.50th=[  775],
                    99.90th=[ 2966], 99.95th=[ 2966],</tt><tt><br>
                  </tt><tt>     | 99.99th=[ 2966]</tt><tt><br>
                  </tt><tt>    lat (msec) : 250=5.09%, 500=54.65%,
                    750=39.64%, 1000=0.31%, 2000=0.07%</tt><tt><br>
                  </tt><tt>    lat (msec) : &gt;=2000=0.24%</tt><tt><br>
                  </tt><tt>  cpu          : usr=7.81%, sys=1.48%,
                    ctx=1221, majf=0, minf=11</tt><tt><br>
                  </tt><tt>  IO depths    : 1=0.1%, 2=0.1%, 4=0.1%,
                    8=0.1%, 16=0.2%, 32=0.3%, &gt;=64=99.4%</tt><tt><br>
                  </tt><tt>     submit    : 0=0.0%, 4=100.0%, 8=0.0%,
                    16=0.0%, 32=0.0%, 64=0.0%, &gt;=64=0.0%</tt><tt><br>
                  </tt><tt>     complete  : 0=0.0%, 4=100.0%, 8=0.0%,
                    16=0.0%, 32=0.0%, 64=0.0%, &gt;=64=0.1%</tt><tt><br>
                  </tt><tt>     issued    : total=r=0/w=10240/d=0,
                    short=r=0/w=0/d=0, drop=r=0/w=0/d=0</tt><tt><br>
                  </tt><tt>     latency   : target=0, window=0,
                    percentile=100.00%, depth=200</tt><tt><br>
                  </tt><tt><br>
                  </tt><tt>Run status group 0 (all jobs):</tt><tt><br>
                  </tt><tt>  WRITE: io=10240MB, aggrb=473868KB/s,
                    minb=473868KB/s, maxb=473868KB/s, mint=22128msec,
                    maxt=22128msec</tt><tt><br>
                  </tt><tt><br>
                  </tt><tt>Disk stats (read/write):</tt><tt><br>
                  </tt><tt>  vdg: ios=0/10243, merge=0/0,
                    ticks=0/2745892, in_queue=2745884, util=99.18%</tt></font><br>
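As a sanity check on the sequential-write figures quoted above, the reported bandwidth and IOPS follow directly from the io= and runt= totals (note that fio's MB and KB here are binary units, 1 MB = 1024 KB):

```shell
# Re-derive bw and iops from io=10240MB and runt=22128msec as reported above.
awk 'BEGIN {
  io_mb = 10240; runt_s = 22.128
  printf "bw=%dKB/s iops=%d\n", io_mb * 1024 / runt_s, io_mb / runt_s
}'
# prints: bw=473868KB/s iops=462  (matching the fio summary line)
```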
                <br>
                <font size="-1"><tt>benchmark: (g=0): rw=randread,
                    bs=4K-4K/4K-4K/4K-4K, ioengine=libaio, iodepth=128</tt><tt><br>
                  </tt><tt>...</tt><tt><br>
                  </tt><tt>fio-2.16</tt><tt><br>
                  </tt><tt>Starting 4 processes</tt><tt><br>
                  </tt><tt><br>
                  </tt><tt>benchmark: (groupid=0, jobs=4): err= 0:
                    pid=11529: Thu Nov  2 16:52:40 2017</tt><tt><br>
                  </tt><tt>  read : io=1123.9MB, bw=38347KB/s,
                    iops=9586, runt= 30011msec</tt><tt><br>
                  </tt><tt>    slat (usec): min=1, max=228886,
                    avg=415.40, stdev=3975.72</tt><tt><br>
                  </tt><tt>    clat (usec): min=482, max=328648,
                    avg=52664.65, stdev=30216.00</tt><tt><br>
                  </tt><tt>     lat (msec): min=9, max=527, avg=53.08,
                    stdev=30.38</tt><tt><br>
                  </tt><tt>    clat percentiles (msec):</tt><tt><br>
                  </tt><tt>     |  1.00th=[   12],  5.00th=[   22],
                    10.00th=[   23], 20.00th=[   25],</tt><tt><br>
                  </tt><tt>     | 30.00th=[   33], 40.00th=[   38],
                    50.00th=[   47], 60.00th=[   55],</tt><tt><br>
                  </tt><tt>     | 70.00th=[   64], 80.00th=[   76],
                    90.00th=[   95], 95.00th=[  111],</tt><tt><br>
                  </tt><tt>     | 99.00th=[  151], 99.50th=[  163],
                    99.90th=[  192], 99.95th=[  196],</tt><tt><br>
                  </tt><tt>     | 99.99th=[  210]</tt><tt><br>
                  </tt><tt>    lat (usec) : 500=0.01%, 750=0.01%,
                    1000=0.01%</tt><tt><br>
                  </tt><tt>    lat (msec) : 10=0.03%, 20=3.59%,
                    50=52.41%, 100=36.01%, 250=7.96%</tt><tt><br>
                  </tt><tt>    lat (msec) : 500=0.01%</tt><tt><br>
                  </tt><tt>  cpu          : usr=0.29%, sys=1.10%,
                    ctx=10157, majf=0, minf=549</tt><tt><br>
                  </tt><tt>  IO depths    : 1=0.1%, 2=0.1%, 4=0.1%,
                    8=0.1%, 16=0.1%, 32=0.1%, &gt;=64=99.9%</tt><tt><br>
                  </tt><tt>     submit    : 0=0.0%, 4=100.0%, 8=0.0%,
                    16=0.0%, 32=0.0%, 64=0.0%, &gt;=64=0.0%</tt><tt><br>
                  </tt><tt>     complete  : 0=0.0%, 4=100.0%, 8=0.0%,
                    16=0.0%, 32=0.0%, 64=0.0%, &gt;=64=0.1%</tt><tt><br>
                  </tt><tt>     issued    : total=r=287705/w=0/d=0,
                    short=r=0/w=0/d=0, drop=r=0/w=0/d=0</tt><tt><br>
                  </tt><tt>     latency   : target=0, window=0,
                    percentile=100.00%, depth=128</tt><tt><br>
                  </tt><tt><br>
                  </tt><tt>Run status group 0 (all jobs):</tt><tt><br>
                  </tt><tt>   READ: io=1123.9MB, aggrb=38346KB/s,
                    minb=38346KB/s, maxb=38346KB/s, mint=30011msec,
                    maxt=30011msec</tt><tt><br>
                  </tt><tt><br>
                  </tt><tt>Disk stats (read/write):</tt><tt><br>
                  </tt><tt>  vdg: ios=286499/2, merge=0/0,
                    ticks=3707064/64, in_queue=3708680, util=99.83%<br>
                    <br>
                    benchmark: (g=0): rw=randwrite,
                    bs=4K-4K/4K-4K/4K-4K, ioengine=libaio, iodepth=128<br>
                    ...<br>
                    fio-2.16<br>
                    Starting 4 processes<br>
                    <br>
                    benchmark: (groupid=0, jobs=4): err= 0: pid=11545:
                    Thu Nov  2 16:55:54 2017<br>
                      write: io=422464KB, bw=14079KB/s, iops=3519, runt= 30006msec<br>
                        slat (usec): min=1, max=230620, avg=1130.75, stdev=6744.31<br>
                        clat (usec): min=643, max=540987, avg=143999.57, stdev=66693.45<br>
                         lat (msec): min=8, max=541, avg=145.13, stdev=67.01<br>
                        clat percentiles (msec):<br>
                         |  1.00th=[   34],  5.00th=[   75], 10.00th=[   87], 20.00th=[  100],<br>
                         | 30.00th=[  109], 40.00th=[  116], 50.00th=[  123], 60.00th=[  135],<br>
                         | 70.00th=[  151], 80.00th=[  182], 90.00th=[  241], 95.00th=[  289],<br>
                         | 99.00th=[  359], 99.50th=[  416], 99.90th=[  465], 99.95th=[  490],<br>
                         | 99.99th=[  529]<br>
                        lat (usec) : 750=0.01%, 1000=0.01%<br>
                        lat (msec) : 2=0.01%, 4=0.01%, 10=0.01%, 20=0.05%, 50=1.80%<br>
                        lat (msec) : 100=18.07%, 250=71.25%, 500=8.80%, 750=0.02%<br>
                      cpu          : usr=0.29%, sys=1.28%, ctx=115493, majf=0, minf=33<br>
                      IO depths    : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, &gt;=64=99.8%<br>
                         submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, &gt;=64=0.0%<br>
                         complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, &gt;=64=0.1%<br>
                         issued    : total=r=0/w=105616/d=0, short=r=0/w=0/d=0, drop=r=0/w=0/d=0<br>
                         latency   : target=0, window=0, percentile=100.00%, depth=128<br>
                    <br>
                    Run status group 0 (all jobs):<br>
                      WRITE: io=422464KB, aggrb=14079KB/s,
                    minb=14079KB/s, maxb=14079KB/s, mint=30006msec,
                    maxt=30006msec<br>
                    <br>
                    Disk stats (read/write):<br>
                      vdg: ios=0/105235, merge=0/0, ticks=0/3727048,
                    in_queue=3734796, util=99.81%<br>
                    <br>
                    <br>
                  </tt></font><font face="Futura Bk BT">Basically, we
                  got around 470 MB/s sequential write, about 9500
                  IOPS for random 4k reads, and about 3500 IOPS for
                  random 4k writes.<br>
                  <br>
                  Hope it helps!<br>
                  <br>
                </font><br>
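The summary IOPS figures can likewise be re-derived from the raw totals in the two 4k runs quoted above (287705 reads issued in 30011 msec; 422464KB written in 4KB blocks over 30006 msec):

```shell
# Re-derive the random 4k IOPS from the issued totals and runtimes above.
awk 'BEGIN {
  printf "randread:  %d IOPS\n", 287705 / 30.011           # issued reads / seconds
  printf "randwrite: %d IOPS\n", (422464 / 4) / 30.006     # 4KB blocks / seconds
}'
# prints:
# randread:  9586 IOPS
# randwrite: 3519 IOPS
```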
                On 01/11/17 at 12:03, Shyam Ranganathan wrote:<br>
              </div>
              <blockquote type="cite">On 10/31/2017 08:36 PM, Ben Turner
                wrote: <br>
                <blockquote type="cite">
                  <blockquote type="cite">* Erasure coded volumes with
                    sharding - seen as a good fit for VM disk <br>
                    storage <br>
                  </blockquote>
                  I am working on this with a customer; we have been
                  able to do 400-500 MB/sec writes!  Normally things
                  max out at ~150-250.  The trick is to use multiple
                  files: create the LVM stack and use native LVM
                  striping.  We have found that 4-6 files seem to give
                  the best perf on our setup.  I don't think we are
                  using sharding on the EC vols, just multiple files
                  and LVM striping.  Sharding may be able to avoid the
                  LVM striping, but I bet dollars to doughnuts you
                  won't see this level of perf. :)  I am working on a
                  blog post on RHHI and RHEV + RHS performance, where
                  in some cases I am able to get 2x+ the performance
                  out of VMs / VM storage.  I'd be happy to share my
                  data / findings. <br>
                  <br>
                </blockquote>
                <br>
                Ben, we would like to hear more, so please do share your
                thoughts further. There are a fair number of users in
                the community who have this use-case and may have some
                interesting questions around the proposed method. <br>
                <br>
                Shyam <br>
                _______________________________________________ <br>
                Gluster-devel mailing list <br>
                <a class="m_5627811959350600145moz-txt-link-abbreviated"
                  href="mailto:Gluster-devel@gluster.org"
                  target="_blank" moz-do-not-send="true">Gluster-devel@gluster.org</a>
                <br>
                <a class="m_5627811959350600145moz-txt-link-freetext"
                  href="http://lists.gluster.org/mailman/listinfo/gluster-devel"
                  target="_blank" moz-do-not-send="true">http://lists.gluster.org/mailman/listinfo/gluster-devel</a>
                <br>
              </blockquote>
              <br>
            </div>
            <br>
          </blockquote>
        </div>
      </div>
    </blockquote>
    <br>
  </body>
</html>