<div dir="auto">Where did you read 2k IOPS?<div dir="auto"><br></div><div dir="auto">Each disk is able to do about 75iops as I'm using SATA disk, getting even closer to 2000 it's impossible</div></div><div class="gmail_extra"><br><div class="gmail_quote">Il 13 ott 2017 9:42 AM, "Szymon Miotk" <<a href="mailto:szymon.miotk@gmail.com">szymon.miotk@gmail.com</a>> ha scritto:<br type="attribution"><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex">Depends what you need.<br>
> 2K IOPS for small-file writes is not a bad result.
> In my case I had a system that was simply poorly written: it was
> using 300-1000 IOPS for routine operations and choking on cleanup.
>
> On Thu, Oct 12, 2017 at 6:23 PM, Gandalf Corvotempesta
> <gandalf.corvotempesta@gmail.com> wrote:
>> So, even with the latest version, is Gluster still unusable with
>> small files?
>>
>> 2017-10-12 10:51 GMT+02:00 Szymon Miotk <szymon.miotk@gmail.com>:
>>> I analyzed small-file performance a few months ago, because I had
>>> huge performance problems with small-file writes on Gluster.
>>> Read performance has been improved in many ways in recent releases
>>> (md-cache, parallel-readdir, hot-tier).
>>> But write performance is more or less the same: you cannot go above
>>> 10K small-file creates per second, even with SSD or Optane drives.
>>> Even a ramdisk does not help much here, because the bottleneck is
>>> not storage performance.
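>>>
>>> (For the read side, a minimal sketch of enabling the features named
>>> above; "gv0" is a placeholder volume name and the values are
>>> illustrative, not tuned. md-cache benefits from upcall-based cache
>>> invalidation, hence the first two options:)
>>>
>>> import subprocess
>>>
>>> def volset(option, value, volume="gv0"):
>>>     # shell out to the CLI: gluster volume set <vol> <opt> <val>
>>>     subprocess.run(["gluster", "volume", "set", volume, option, value],
>>>                    check=True)
>>>
>>> volset("features.cache-invalidation", "on")     # server-side upcalls
>>> volset("performance.cache-invalidation", "on")  # let md-cache use them
>>> volset("performance.md-cache-timeout", "600")   # cache metadata longer
>>> volset("performance.parallel-readdir", "on")    # fan out readdir to bricks
>>>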
>>> Key problems I've noticed:
>>> - LOOKUPs are expensive, because there is a separate query for every
>>> depth level of the destination directory (md-cache helps here a bit,
>>> unless you are creating a lot of directories). So the deeper the
>>> directory structure, the worse - see the sketch after this list.
>>> - for every file created, Gluster creates another file in the
>>> .glusterfs directory, doubling the required IO and network latency.
>>> What's worse, XFS, the recommended filesystem, doesn't like a flat
>>> directory structure with thousands of files per directory. But that
>>> is exactly how Gluster stores its metadata in .glusterfs, so
>>> performance decreases by 40-50% after 10M files.
>>> - the complete directory structure is created on each of the bricks,
>>> so every mkdir results in IO on every brick in the volume.
>>> - hot-tier may be great for improving reads, but for small-file
>>> writes it actually kills performance even more.
>>> - the FUSE driver requires a context switch between userspace and
>>> kernel each time you create a file, so with small files the context
>>> switches also take their toll.
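>>>
>>> (To make the LOOKUP point concrete, a rough microbenchmark - the
>>> FUSE mount point /mnt/gluster is an assumption and the absolute
>>> numbers will vary:)
>>>
>>> import os
>>> import time
>>>
>>> base = "/mnt/gluster/lookup-test"
>>> for depth in (1, 4, 8):
>>>     # build a path like base/d/d/d with `depth` components
>>>     path = os.path.join(base, *["d"] * depth)
>>>     os.makedirs(path, exist_ok=True)
>>>     start = time.time()
>>>     for i in range(1000):
>>>         # every create costs one LOOKUP per path component
>>>         with open(os.path.join(path, "f%d" % i), "wb") as f:
>>>             f.write(b"x" * 1024)
>>>     rate = 1000 / (time.time() - start)
>>>     print("depth %d: %.0f creates/s" % (depth, rate))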
>>>
>>> The best results I got were:
>>> - create a big file on Gluster and mount it as XFS over a loopback
>>> device - 13.5K small-file writes/s (first sketch below). Drawback:
>>> you can use it only on one server, as XFS will crash when two
>>> servers write to it.
>>> - use libgfapi - 20K small-file writes/s (second sketch below).
>>> Drawback: no nice POSIX filesystem, and huge CPU usage on the
>>> Gluster server.
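>>>
>>> (A minimal sketch of the loopback trick as a root-run script; the
>>> paths, the 100G size and the FUSE mount point are assumptions:)
>>>
>>> import subprocess
>>>
>>> def sh(cmd):
>>>     subprocess.run(cmd, shell=True, check=True)
>>>
>>> # big backing file on the Gluster FUSE mount
>>> sh("truncate -s 100G /mnt/gluster/backing.img")
>>> # format the file as XFS (-f overwrites any stale signature)
>>> sh("mkfs.xfs -f /mnt/gluster/backing.img")
>>> sh("mkdir -p /mnt/fastxfs")
>>> # loop-mount it: small-file IO now hits local XFS, not FUSE
>>> sh("mount -o loop /mnt/gluster/backing.img /mnt/fastxfs")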
>>>
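>>> (And a sketch of the libgfapi route, assuming the libgfapi-python
>>> bindings; host "server1" and volume "gv0" are placeholders:)
>>>
>>> from gluster.gfapi import Volume
>>>
>>> vol = Volume("server1", "gv0")
>>> vol.mount()                  # talk to the volume directly, no FUSE
>>> payload = "x" * 1024         # 1 KB per file, as in the test here
>>> for i in range(10000):
>>>     with vol.fopen("small-%d" % i, "w") as f:
>>>         f.write(payload)
>>> vol.umount()
>>>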
>>> I was testing with 1KB files, so really small.
>>>
>>> Best regards,
>>> Szymon Miotk
>>>
>>> On Fri, Oct 6, 2017 at 4:43 PM, Gandalf Corvotempesta
>>> <gandalf.corvotempesta@gmail.com> wrote:
>>>> Any update about this?
>>>> I've seen some work on optimizing performance for small files; is
>>>> Gluster now "usable" for storing, for example, Maildirs or git
>>>> sources?
>>>>
>>>> At least in 3.7 (or 3.8, I don't remember exactly), extracting the
>>>> kernel sources took about 4-5 minutes.
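>>>>
>>>> (That test can be reproduced with something like the following;
>>>> the tarball name and the mount point are placeholders:)
>>>>
>>>> import subprocess
>>>> import time
>>>>
>>>> start = time.time()
>>>> subprocess.run(["tar", "xf", "linux-4.13.tar.xz",
>>>>                 "-C", "/mnt/gluster/src"], check=True)
>>>> print("extract took %.0f seconds" % (time.time() - start))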