[Gluster-users] Expected performance for WORM scenario

Andreas Ericsson andreas.ericsson at findity.com
Wed Mar 14 09:27:37 UTC 2018


I no longer have the volume lying around. The most interesting one was a
2GB volume created on a ramdisk on a single node. If I couldn't get that to
go faster than 3MB/sec for writes, I figured there was no point
investigating further.

I was using the GlusterFS FUSE client, version 3.10.7. Everything was
running on Ubuntu 16.04 servers.
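
For what it's worth, the single-node ramdisk volume was set up more or less
like this (a rough sketch from memory; the volume name, hostname and paths
are placeholders):

    #!/usr/bin/env python3
    # Rough reconstruction of the single-node ramdisk test setup.
    # "testvol", "server1" and the mount paths are placeholders.
    import os
    import subprocess

    def run(cmd):
        print("+", " ".join(cmd))
        subprocess.run(cmd, check=True)

    os.makedirs("/mnt/ramdisk", exist_ok=True)
    os.makedirs("/mnt/gluster", exist_ok=True)

    # 2GB tmpfs to back the brick
    run(["mount", "-t", "tmpfs", "-o", "size=2g", "tmpfs", "/mnt/ramdisk"])
    # Single-brick volume; "force" because the brick sits on a plain
    # directory rather than a dedicated partition.
    run(["gluster", "volume", "create", "testvol",
         "server1:/mnt/ramdisk/brick", "force"])
    run(["gluster", "volume", "start", "testvol"])
    # FUSE mount that the write test ran against
    run(["mount", "-t", "glusterfs", "server1:/testvol", "/mnt/gluster"])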

On 12 March 2018 at 15:30, Nithya Balachandran <nbalacha at redhat.com> wrote:

> Hi,
>
> Can you send us the following details:
> 1. gluster volume info
> 2. What client you are using to run this?
>
> Thanks,
> Nithya
>
> On 12 March 2018 at 18:16, Andreas Ericsson <andreas.ericsson at findity.com>
> wrote:
>
>> Heya fellas.
>>
>> I've been struggling quite a lot to get GlusterFS to perform even
>> half-decently with a write-intensive workload. Test numbers below are
>> from Gluster 3.10.7.
>>
>> We store a bunch of small files in a two-level SHA-1 hash fanout
>> directory structure. The directories themselves aren't overly full. Most
>> of the data we write to Gluster is "write once, read probably never", so
>> 99% of all operations are writes.
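>>
>> The layout is roughly this sort of thing (an illustrative sketch only;
>> the exact fanout width doesn't really matter here):
>>
>>     import hashlib
>>     import os
>>
>>     def fanout_path(root, data):
>>         # Two-level fanout keyed on the SHA-1 of the content,
>>         # e.g. <root>/ab/cd/abcd1234... for a digest starting "abcd".
>>         digest = hashlib.sha1(data).hexdigest()
>>         return os.path.join(root, digest[:2], digest[2:4], digest)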
>>
>> The network between the servers is sound. 10Gb network cards run over a
>> 10Gb switch. iperf reports 9.86 Gbit/sec, and ping reports a latency of
>> 0.1 - 0.2 ms. There is no firewall or packet inspection between the
>> servers, and the 10Gb switch is the only path between the two machines,
>> so traffic isn't going over some 2Mbit Wi-Fi link by accident.
>>
>> Our main storage has always been really slow (write speeds of roughly
>> 1.5MiB/s), but I had long attributed that to the extremely slow disks
>> backing it, so now that we're expanding I set up a new Gluster cluster
>> with state-of-the-art NVMe SSDs to boost performance. However,
>> performance only hopped up to around 2.1MiB/s. Perplexed, I then tried a
>> 3-node cluster using 2GB ramdrives, which got me up to 2.4MiB/s. As a
>> last resort I used a single node running on a ramdisk, just to 100%
>> exclude any network shenanigans, but write performance remained an
>> absolutely abysmal 3MiB/s.
>>
>> Writing straight to (the same) ramdisk gives me "normal" ramdisk speed (I
>> don't actually remember the numbers, but my test that took 2 minutes with
>> gluster completed before I had time to blink). Writing straight to the
>> backing SSD drives gives me a throughput of 96MiB/sec.
>>
>> The test itself writes 8494 files taken at random from our production
>> environment, comprising a total of 63.4MiB (so the average file size is
>> just under 8k; most are actually closer to 4k, with the occasional
>> 2-or-so MB file in there).
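>>
>> The test is essentially equivalent to this sketch (not the actual
>> script; the sample files are read into memory beforehand, and the
>> function name is made up for illustration):
>>
>>     import os
>>     import time
>>
>>     def write_test(samples, dest_root):
>>         # samples: list of (relative_path, content_bytes) taken from
>>         # production; dest_root is the Gluster, ramdisk or SSD mount.
>>         start = time.time()
>>         total = 0
>>         for relpath, data in samples:
>>             path = os.path.join(dest_root, relpath)
>>             os.makedirs(os.path.dirname(path), exist_ok=True)
>>             with open(path, "wb") as f:
>>                 f.write(data)
>>             total += len(data)
>>         elapsed = time.time() - start
>>         print("%d files, %.1f MiB in %.1fs = %.2f MiB/s" % (
>>             len(samples), total / 2**20, elapsed,
>>             total / elapsed / 2**20))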
>>
>> I have googled and read a *lot* of performance-tuning guides, but
>> 3MiB/sec on a single-node ramdisk seems far worse than anything that
>> misconfiguration of a single system could explain.
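>>
>> (For reference, the tuning those guides suggest mostly boils down to
>> volume options along these lines -- illustrative names and values only,
>> and "testvol" is a placeholder:)
>>
>>     import subprocess
>>
>>     # Commonly suggested small-file / write tuning knobs; values are
>>     # examples, not a recommendation.
>>     options = {
>>         "performance.cache-size": "256MB",
>>         "performance.io-thread-count": "32",
>>         "performance.write-behind-window-size": "4MB",
>>         "client.event-threads": "4",
>>         "server.event-threads": "4",
>>         "cluster.lookup-optimize": "on",
>>     }
>>     for key, value in options.items():
>>         subprocess.run(
>>             ["gluster", "volume", "set", "testvol", key, value],
>>             check=True)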
>>
>> With this in mind: what sort of write performance can one reasonably
>> hope to get with Gluster? Assume a 3-node cluster running on top of
>> (small) ramdisks on a fast and stable network. Is it just a bad fit for
>> our workload?
>>
>> /Andreas
>>
>> _______________________________________________
>> Gluster-users mailing list
>> Gluster-users at gluster.org
>> http://lists.gluster.org/mailman/listinfo/gluster-users
>>
>
>