[Gluster-users] performance - what can I expect

Amar Tumballi Suryanarayan atumball at redhat.com
Wed May 1 12:55:01 UTC 2019


Hi Pascal,

Sorry for the long delay on this one, and thanks for testing out the
different scenarios. A few questions before others can have a look and
advise you.

1. What is the output of `gluster volume info`?

2. Do you see anything concerning in the glusterfs log files?

3. Please run `gluster volume profile` while the tests are running; it
provides a lot of useful information.

4. Since you are using glusterfs-6.0, please take a statedump of the client
process (on any node) before and after the test, so we can analyze the
latency information of each translator.

With this information, we should be in a better position to answer your
questions.
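
For reference, a rough sketch of how all of this could be collected in one go
(assuming the volume is called `myvol` and default log/statedump paths; adjust
names and paths to your setup):

# 1. volume layout and options
gluster volume info myvol

# 2. scan client and brick logs for warnings and errors
grep -E "\] [EW] \[" /var/log/glusterfs/*.log /var/log/glusterfs/bricks/*.log

# 3. per-brick fop and latency statistics around the benchmark
gluster volume profile myvol start
#    ... run the iozone test ...
gluster volume profile myvol info > profile-during-test.txt
gluster volume profile myvol stop

# 4. statedump of the fuse client (run once before and once after the test);
#    SIGUSR1 makes the client dump its state, typically under /var/run/gluster
kill -USR1 $(pidof glusterfs)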


On Wed, Apr 10, 2019 at 3:45 PM Pascal Suter <pascal.suter at dalco.ch> wrote:

> I continued my testing with 5 clients, all attached over 100Gbit/s
> Omni-Path via IP over IB. When I run the same iozone benchmark across
> all 5 clients, with gluster mounted using the glusterfs client, I get
> an aggregated write throughput of only about 400MB/s and an aggregated
> read throughput of 1.5GB/s. Each node was writing a single 200GB file in
> 16MB chunks, and the files were distributed across all three bricks on
> the server.
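>
> For reference, a rough sketch of how such an aggregated run can be driven
> from a single host using iozone's distributed (-+m) mode; this assumes
> passwordless ssh to the clients, iozone installed on each of them, and a
> hypothetical client list file clients.txt:
>
> # clients.txt: one line per client -> hostname  working-directory  path-to-iozone
> # node01  /mnt/gluster/storage  /usr/bin/iozone
> export RSH=ssh
> ./iozone -+m clients.txt -t 5 -i 0 -i 1 -+n -c -C -e -I -w -+S 0 -s 200G -r 16384k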
>
> The connection definitely went over Omni-Path, as there is no
> other link between the nodes and the server.
>
> I have no clue what I'm doing wrong here. I can't believe this is the
> normal performance people would expect to see from gluster; I guess
> nobody would be using it if it were this slow.
>
> Again, when writing directly to the xfs filesystem on the bricks, I get
> over 6GB/s read and write throughput with the same benchmark.
>
> Any advice is appreciated.
>
> cheers
>
> Pascal
>
> On 04.04.19 12:03, Pascal Suter wrote:
> > I just noticed I left the most important parameters out :)
> >
> > Here's the write command with file size and record size in it as well :)
> >
> > ./iozone -i 0 -t 1 -F /mnt/gluster/storage/thread1 -+n -c -C -e -I -w
> > -+S 0 -s 200G -r 16384k
> >
> > I also ran the benchmark without direct I/O, which resulted in even
> > worse performance.
> >
> > I also tried mounting the gluster volume via NFS-Ganesha, which reduced
> > throughput further, down to about 450MB/s.
> >
> > If I run the iozone benchmark with 3 threads writing to all three
> > bricks directly (on the xfs filesystems), I get throughput of around
> > 6GB/s. If I run the same benchmark through gluster, mounted locally
> > with the fuse client and with enough threads so that each brick gets
> > at least one file written to it, I end up seeing throughput of around
> > 1.5GB/s. That's a 4x decrease in performance, and it is actually the
> > same if I run the benchmark with fewer threads, so that files only get
> > written to two out of three bricks.
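> >
> > Roughly, the two runs compare like this (a sketch reusing the flags from
> > the commands further down; the brick and gluster mount points are the
> > ones used there):
> >
> > # direct to the bricks: one iozone process per xfs filesystem
> > for b in 1 2 3; do
> >   ./iozone -i 0 -t 1 -F /mnt/brick${b}/thread1 -+n -c -C -e -I -w -+S 0 -s 200G -r 16384k &
> > done; wait
> >
> > # through gluster: a single run with 3 threads on the fuse mount
> > ./iozone -i 0 -t 3 -F /mnt/gluster/storage/thread1 /mnt/gluster/storage/thread2 /mnt/gluster/storage/thread3 -+n -c -C -e -I -w -+S 0 -s 200G -r 16384k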
> >
> > CPU load on the server is around 25%, by the way, nicely distributed
> > across all available cores.
> >
> > I can't believe that gluster is really this slow while everybody is
> > just happily using it. Any hints on what I'm doing wrong are very
> > welcome.
> >
> > I'm using gluster 6.0, by the way.
> >
> > regards
> >
> > Pascal
> >
> > On 03.04.19 12:28, Pascal Suter wrote:
> >> Hi all
> >>
> >> I am currently testing gluster on a single server. I have three
> >> bricks, each a hardware RAID6 volume with thin-provisioned LVM that
> >> was aligned to the RAID and then formatted with xfs.
> >>
> >> I've created a distributed volume so that entire files get
> >> distributed across my three bricks.
> >>
> >> First I ran an iozone benchmark against each brick, testing the read
> >> and write performance of a single large file per brick.
> >>
> >> I then mounted my gluster volume locally and ran another iozone run
> >> with the same parameters, writing a single file. The file went to
> >> brick 1 which, when used directly, writes at 2.3GB/s and reads
> >> at 1.5GB/s. However, through gluster I got only 800MB/s read and
> >> 750MB/s write throughput.
> >>
> >> Another run with two processes, each writing a file, where one file
> >> went to the first brick and the other file to the second brick (which
> >> by itself, when accessed directly, wrote at 2.8GB/s and read at
> >> 2.7GB/s), resulted in 1.2GB/s of aggregated write throughput and the
> >> same aggregated read throughput.
> >>
> >> Is this the normal performance I can expect out of glusterfs, or is it
> >> worth tuning in order to get closer to the actual brick filesystem
> >> performance?
> >>
> >> Here are the iozone commands I use for writing and reading. Note
> >> that I am using direct I/O (-I) in order to make sure I don't get
> >> fooled by the cache :)
> >>
> >> ./iozone -i 0 -t 1 -F /mnt/brick${b}/thread1 -+n -c -C -e -I -w -+S 0
> >> -s $filesize -r $recordsize > iozone-brick${b}-write.txt
> >>
> >> ./iozone -i 1 -t 1 -F /mnt/brick${b}/thread1 -+n -c -C -e -I -w -+S 0
> >> -s $filesize -r $recordsize > iozone-brick${b}-read.txt
> >>
> >> cheers
> >>
> >> Pascal
> >>

-- 
Amar Tumballi (amarts)