<div dir="ltr"><br><div class="gmail_extra"><br><div class="gmail_quote">On Tue, May 9, 2017 at 12:57 PM, Ingard Mevåg <span dir="ltr"><<a href="mailto:ingard@jotta.no" target="_blank">ingard@jotta.no</a>></span> wrote:<br><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex"><div dir="ltr">You're not counting wrong. We won't necessarily transfer all of these files to one volume though. It was more an example of the distribution of file sizes.<div>But as you say healing might be a problem, but then again. This is archive storage. We're after the highest possible capacity, not necessarily performance.</div><div>If you take a look at the profile output you'll see that MKDIR, CREATE and XATTROP are the operations with the highest latency and I guess that is due to the number of bricks? ( 180 )</div><div>But I thought that number wouldnt be too high to get at least a little bit higher troughput?</div></div></blockquote><div><br></div><div>MKDIR is taking a long time most likely because brick process is taking long to execute the syscall. You may have to figure out why that is the case. There are healing enhancements planned to slowly increase performance release by release. We will take a note of this one. Thanks for the inputs.<br></div><div> <br></div><div>In our labs we use "<span style="font-size:11pt;font-family:arial;color:rgb(0,0,0);background-color:transparent;font-weight:400;font-style:normal;font-variant:normal;text-decoration:none;vertical-align:baseline">strace -ff -T -p <pid-of-brick> -o <path-to-the-file-where-you-want-the-output-saved>" </span>to gather data about time it takes for doing syscalls and inspect why some syscalls are taking so much time. In most cases we find that the FS is configured wrong or something is wrong with the disk. Please note that this slows down things really bad, but it always found the reason for the problem so far. Since this is in production I would find the exact test that would slow things down, then do this strace only for the duration which recreates the problem and stop strace after collecting the data.<br><br></div><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex"><div dir="ltr"><span class="gmail-HOEnZb"><font color="#888888"><div><br></div><div>ingard</div></font></span><div><div class="gmail-h5"><div class="gmail_extra"><br><div class="gmail_quote">2017-05-08 15:19 GMT+02:00 Serkan Çoban <span dir="ltr"><<a href="mailto:cobanserkan@gmail.com" target="_blank">cobanserkan@gmail.com</a>></span>:<br><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex">There are 300M files right I am not counting wrong?<br>
> ingard
>
> 2017-05-08 15:19 GMT+02:00 Serkan Çoban <cobanserkan@gmail.com>:
>> There are 300M files, right? I am not counting wrong?
>> With that file profile I would never use EC in the first place.
>> Maybe you can pack the files into tar archives or similar before
>> migrating to gluster?
>> It will take ages to heal a drive with that file count...
>>
>> On Mon, May 8, 2017 at 3:59 PM, Ingard Mevåg <ingard@jotta.no> wrote:
>>> With attachments :)
>>>
>>> 2017-05-08 14:57 GMT+02:00 Ingard Mevåg <ingard@jotta.no>:
>>>>
>>>> Hi
>>>>
>>>> We've got 3 servers with 60 drives each, set up with an EC volume
>>>> running on gluster 3.10.0. The servers are connected via 10GbE.
>>>>
>>>> We've done the changes recommended here:
>>>> https://bugzilla.redhat.com/show_bug.cgi?id=1349953#c17 and we're able to
>>>> max out the network with the iozone tests referenced in the same ticket.
>>>>
>>>> However, for small files we are getting 3-5 MB/s with the
>>>> smallfile_cli.py tool. For instance:
>>>> python smallfile_cli.py --operation create --threads 32 --file-size 100
>>>> --files 1000 --top /tmp/dfs-archive-001/
>>>> .
>>>> .
>>>> total threads = 32
>>>> total files = 31294
>>>> total data = 2.984 GB
>>>> 97.79% of requested files processed, minimum is 90.00
>>>> 785.542908 sec elapsed time
>>>> 39.837416 files/sec
>>>> 39.837416 IOPS
>>>> 3.890373 MB/sec
>>>> .
>>>>
>>>> We're going to use these servers for archive purposes, so the files will
>>>> be moved there and accessed very little. After noticing our migration
>>>> tool performing very badly, we did some analysis of the data actually
>>>> being moved (each bucket lists the file count, total size and size range):
>>>>
>>>> Bucket 31808791 (16.27 GB) :: 0 bytes - 1.00 KB
>>>> Bucket 49448258 (122.89 GB) :: 1.00 KB - 5.00 KB
>>>> Bucket 13382242 (96.92 GB) :: 5.00 KB - 10.00 KB
>>>> Bucket 13557684 (195.15 GB) :: 10.00 KB - 20.00 KB
>>>> Bucket 22735245 (764.96 GB) :: 20.00 KB - 50.00 KB
>>>> Bucket 15101878 (1041.56 GB) :: 50.00 KB - 100.00 KB
>>>> Bucket 10734103 (1558.35 GB) :: 100.00 KB - 200.00 KB
>>>> Bucket 17695285 (5773.74 GB) :: 200.00 KB - 500.00 KB
>>>> Bucket 13632394 (10039.92 GB) :: 500.00 KB - 1.00 MB
>>>> Bucket 21815815 (32641.81 GB) :: 1.00 MB - 2.00 MB
>>>> Bucket 36940815 (117683.33 GB) :: 2.00 MB - 5.00 MB
>>>> Bucket 13580667 (91899.10 GB) :: 5.00 MB - 10.00 MB
>>>> Bucket 10945768 (232316.33 GB) :: 10.00 MB - 50.00 MB
>>>> Bucket 1723848 (542581.89 GB) :: 50.00 MB - 9223372036.85 GB
>>>>
>>>> So it turns out we've got a very large number of very small files being
>>>> written to this volume.
>>>> I've attached the volume config and two profiling runs, so if someone
>>>> wants to take a look and maybe give us some hints on which volume
>>>> settings work best for writing a lot of small files, that would be much
>>>> appreciated.
>>>>
>>>> kind regards
>>>> ingard
<a href="http://lists.gluster.org/mailman/listinfo/gluster-users" rel="noreferrer" target="_blank">http://lists.gluster.org/<wbr>mailman/listinfo/gluster-users</a><br></blockquote></div><br><br clear="all"><br>-- <br><div class="gmail_signature"><div dir="ltr">Pranith<br></div></div>