Hi Pascal,

Sorry for the delay on this one, and thanks for testing the different scenarios. A few questions before others can take a look and advise you:

1. What is the `gluster volume info` output?

2. Do you see any concerning messages in the glusterfs log files?

3. Please run `gluster volume profile` while the tests are running; it gives a lot of information.

4. Since you are using glusterfs-6.0, please take a statedump of the client process (on any node) before and after the test, so we can analyze the latency information of each translator.

With this information, I hope we will be in a better position to answer the questions.
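
For reference, the commands I mean are roughly the following (I'm assuming the volume is named "storage" based on your mount path; adjust names to your setup):

# volume layout and options
gluster volume info storage

# per-brick fop statistics gathered around the test
gluster volume profile storage start
gluster volume profile storage info > profile-after-test.txt

# statedump of the fuse client: send SIGUSR1 to the glusterfs client
# process; the dumps land under /var/run/gluster by default
kill -USR1 $(pgrep -f 'glusterfs.*storage')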

On Wed, Apr 10, 2019 at 3:45 PM Pascal Suter <pascal.suter@dalco.ch> wrote:

I continued my testing with 5 clients, all attached over 100 Gbit/s Omni-Path via IP over IB. When I run the same iozone benchmark across all 5 clients, with gluster mounted using the glusterfs client, I get an aggregated write throughput of only about 400 MB/s and an aggregated read throughput of 1.5 GB/s. Each node was writing a single 200 GB file in 16 MB chunks, and the files were distributed across all three bricks on the server.
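
For anyone reproducing this: iozone's cluster mode can drive all 5 clients at once, roughly as below (hostnames are placeholders; it needs passwordless ssh and RSH=ssh exported):

# clients.txt, one line per client: <hostname> <working dir> <path to iozone>
#   node01 /mnt/gluster/storage /usr/local/bin/iozone
export RSH=ssh
./iozone -+m clients.txt -t 5 -i 0 -+n -c -C -e -I -w -+S 0 -s 200G -r 16384k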

The connection was established over Omni-Path for sure, as there is no other link between the nodes and the server.
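
To rule out the network itself, the raw IPoIB throughput can be checked with something like iperf3 ("server-ib" is a placeholder for the server's IPoIB address):

# on the server
iperf3 -s
# on a client, 4 parallel streams
iperf3 -c server-ib -P 4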

I have no clue what I'm doing wrong here. I can't believe that this is the performance people would expect to see from gluster; I guess nobody would be using it if it were this slow.
Again, when writing directly to the xfs filesystem on the bricks, I get over 6 GB/s read and write throughput using the same benchmark.
Any advice is appreciated.

Cheers

Pascal
On 04.04.19 12:03, Pascal Suter wrote:
> I just noticed I left the most important parameters out :)
>
> Here's the write command with filesize and recordsize in it as well :)
>
> ./iozone -i 0 -t 1 -F /mnt/gluster/storage/thread1 -+n -c -C -e -I -w -+S 0 -s 200G -r 16384k
>
> I also ran the benchmark without direct I/O, which resulted in even worse performance.
>
> I also tried mounting the gluster volume via nfs-ganesha, which further reduced throughput down to about 450 MB/s.
>
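> The nfs-ganesha mount was along these lines ("server" and the export path are placeholders, not my exact setup):
>
> # nfs-ganesha exports the gluster volume over NFSv4
> mount -t nfs -o vers=4.0 server:/storage /mnt/nfs
>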
> If I run the iozone benchmark with 3 threads writing to all three bricks directly (on the xfs filesystems), I get throughputs of around 6 GB/s. If I run the same benchmark through gluster, mounted locally using the fuse client and with enough threads so that each brick gets at least one file written to it, I end up seeing throughputs around 1.5 GB/s. That's a 4x decrease in performance, and it is actually the same if I run the benchmark with fewer threads so that files only get written to two of the three bricks.
>
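> For reference, the fuse-mounted run with three threads (one file per thread, so each brick ends up with at least one file) was roughly:
>
> ./iozone -i 0 -t 3 -F /mnt/gluster/storage/thread1 /mnt/gluster/storage/thread2 /mnt/gluster/storage/thread3 -+n -c -C -e -I -w -+S 0 -s 200G -r 16384k
>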
> CPU load on the server is around 25%, by the way, nicely distributed across all available cores.
>
> I can't believe that gluster is really this slow and everybody is just happily using it. Any hints on what I'm doing wrong are very welcome.
>
> I'm using gluster 6.0, by the way.
>
> Regards
>
> Pascal
>
> On 03.04.19 12:28, Pascal Suter wrote:
>> Hi all
>>
>> I am currently testing gluster on a single server. I have three bricks, each a hardware RAID6 volume with thin-provisioned LVM that was aligned to the RAID and then formatted with xfs.
>>
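>> The bricks were prepared roughly like this (device names and stripe geometry are placeholders, not my exact values):
>>
>> # align the PV to the RAID6 stripe, carve out a thin pool, format with xfs
>> pvcreate --dataalignment 1280k /dev/sdb
>> vgcreate vg_brick1 /dev/sdb
>> lvcreate -L 10T -T vg_brick1/thinpool
>> lvcreate -V 10T -T vg_brick1/thinpool -n brick1
>> mkfs.xfs -d su=256k,sw=5 /dev/vg_brick1/brick1
>>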
>> I've created a distributed volume so that entire files get distributed across my three bricks.
>>
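>> I.e. something along these lines (volume name and brick paths are placeholders):
>>
>> # all bricks sit on the same host, hence "force"
>> gluster volume create storage server1:/bricks/brick1/data server1:/bricks/brick2/data server1:/bricks/brick3/data force
>> gluster volume start storage
>>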
>> First I ran an iozone benchmark against each brick, testing the read and write performance of a single large file per brick.
>>
>> I then mounted my gluster volume locally and ran another iozone run with the same parameters, writing a single file. The file went to brick 1, which, when used directly, would write at 2.3 GB/s and read at 1.5 GB/s. Through gluster, however, I got only 800 MB/s read and 750 MB/s write throughput.
>>
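>> Mounted locally with the fuse client, roughly ("storage" stands in for my volume name):
>>
>> mount -t glusterfs localhost:/storage /mnt/gluster/storage
>>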
>> Another run with two processes each writing a file, where one file went to the first brick and the other file to the second brick (which by itself, when directly accessed, wrote at 2.8 GB/s and read at 2.7 GB/s), resulted in 1.2 GB/s of aggregated write and likewise 1.2 GB/s of aggregated read throughput.
>>
>> Is this the performance I can expect out of glusterfs, or is it worth tuning in order to get closer to the actual brick filesystem performance?
>>
>> Here are the iozone commands I use for writing and reading. Note that I am using direct I/O in order to make sure I don't get fooled by the cache :)
>>
>> ./iozone -i 0 -t 1 -F /mnt/brick${b}/thread1 -+n -c -C -e -I -w -+S 0 -s $filesize -r $recordsize > iozone-brick${b}-write.txt
>>
>> ./iozone -i 1 -t 1 -F /mnt/brick${b}/thread1 -+n -c -C -e -I -w -+S 0 -s $filesize -r $recordsize > iozone-brick${b}-read.txt
>>
>> Cheers
>>
>> Pascal
>>
_______________________________________________
Gluster-users mailing list
Gluster-users@gluster.org
https://lists.gluster.org/mailman/listinfo/gluster-users

--
Amar Tumballi (amarts)