<html>
<head>
<meta http-equiv="Content-Type" content="text/html; charset=UTF-8">
</head>
<body text="#000000" bgcolor="#FFFFFF">
<p>Hi Amar <br>
</p>
<p>thanks for picking this thread back up. I have actually done
some more benchmarking and fiddled with the config until I
reached a performance figure I could live with. I can now squeeze
about 3GB/s out of that server, which seems to be close to what I
can get out of its network uplink (using IP over Omni-Path). The
system is now set up and in production, so I can't run any
benchmarks on it anymore, but I will get back to benchmarking in
the near future to test some storage-related hardware, and I will
try it with gluster on top again. <br>
</p>
<p>Embarrassingly, the biggest performance issue was that the
default installation of the server was running the "performance"
profile of tuned. Once I switched it to "throughput-performance",
throughput increased dramatically. <br>
</p>
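<p>In case anyone else runs into this, checking and switching the
profile only takes a moment (a quick sketch, assuming the
tuned-adm frontend that ships with tuned is installed): <br>
</p>
<p># show the currently active tuned profile<br>
tuned-adm active<br>
# list the profiles available on this system<br>
tuned-adm list<br>
# switch to the throughput-oriented profile<br>
tuned-adm profile throughput-performance<br>
</p>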
<p>The volume info now looks pretty unspectacular: <br>
</p>
<p>Volume Name: storage<br>
Type: Distribute<br>
Volume ID: c81c7e46-add5-4d88-9945-24cf7947ef8c<br>
Status: Started<br>
Snapshot Count: 0<br>
Number of Bricks: 3<br>
Transport-type: tcp<br>
Bricks:<br>
Brick1: themis01:/data/brick1/brick<br>
Brick2: themis01:/data/brick2/brick<br>
Brick3: themis01:/data/brick3/brick<br>
Options Reconfigured:<br>
transport.address-family: inet<br>
nfs.disable: on<br>
</p>
<p>Thanks for pointing out gluster volume profile, I'll have a go
with it during my next benchmarking session. So far I have been
using iostat to track brick-level I/O performance during my
benchmarks. <br>
</p>
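<p>For reference, this is roughly how I plan to use the two next
time (a sketch using the volume name from the output below; the
iostat flags are just my usual choice): <br>
</p>
<p># start collecting per-brick statistics for the volume<br>
gluster volume profile storage start<br>
# ... run the benchmark, then dump the collected stats ...<br>
gluster volume profile storage info<br>
# stop profiling when done<br>
gluster volume profile storage stop<br>
# in parallel: extended per-device stats in MB/s every 5 seconds<br>
iostat -xm 5<br>
</p>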
<p>The main question I wanted to ask was whether there is a
general rule of thumb for how much of the bare brick throughput
one can expect to be left over once gluster is added on top of
it. To give you an example: with a parallel filesystem like
Lustre or BeeGFS I usually expect to get at least about 85% of
the raw storage target throughput as aggregated bandwidth over a
multi-node test. I consider any number below that too low and
take it as a sign that I have to dig into performance tuning to
find the bottleneck. <br>
</p>
<p>I was hoping someone could give me a rule-of-thumb number for
a simple distributed gluster setup, like the 85% number I've
established for a parallel filesystem. <br>
</p>
<p>So at the moment my takeaway is: in a simple distributed
volume across 3 bricks with an aggregated bandwidth of 6GB/s, I
can expect to get about 3GB/s of aggregated bandwidth out of the
gluster mount, given there are no bottlenecks in the network. The
3GB/s was measured under ideal circumstances, meaning I primed
the storage to make sure I could run a benchmark using three
nodes, with each node running a single thread writing to a single
file and each file located on a different brick. This yielded the
maximum performance, as it was pure streaming I/O without any
overlapping writes to the bricks other than the overhead created
by gluster's own internal mechanisms. <br>
</p>
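<p>Spelled out, the efficiency number I'm working with is simply:
<br>
</p>
<p>measured aggregate / raw brick aggregate = 3GB/s / 6GB/s = 50%<br>
for comparison, my parallel-filesystem expectation would have
been 6GB/s x 0.85 = 5.1GB/s<br>
</p>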
<p>Interestingly, the performance didn't drop much when I added
nodes and threads and introduced more random-ish I/O by having
several processes write to the same brick. So I assume that what
"eats up" the 50% of performance in the end is probably Gluster
writing all these additional hidden files, which I assume are
some sort of metadata. This causes additional I/O on the disk I'm
streaming my one file to and therefore turns my streaming I/O
into a random I/O load for the RAID controller and the underlying
hard disks, which on spinning disks would have about the
performance impact I was seeing in my benchmarks. <br>
</p>
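<p>If anyone wants to look at these hidden files: gluster keeps
its gfid-based bookkeeping under a hidden .glusterfs directory on
each brick, so on my first brick for example: <br>
</p>
<p># list gluster's internal bookkeeping tree on brick 1<br>
ls -a /data/brick1/brick/.glusterfs | head<br>
</p>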
<p>I have yet to try gluster on a flash-based brick and test its
performance there. I would expect to see a better "efficiency"
than the 50% I've measured on this system, as random I/O vs.
streaming I/O should not make such a difference (or actually
almost no difference at all) on flash-based storage. But that's
me guessing for now. <br>
</p>
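<p>When I get to that, the plan is to compare sequential and
random performance on the brick itself with the same iozone flags
I used before (a sketch with a hypothetical mount point; -i 0 is
iozone's sequential write test, -i 2 its random read/write test,
and -w keeps the test file around between runs): <br>
</p>
<p># sequential write baseline on the flash brick<br>
./iozone -i 0 -t 1 -F /mnt/brick1/thread1 -+n -c -C -e -I -w -+S 0 -s 200G -r 16384k<br>
# random read/write against the same file<br>
./iozone -i 2 -t 1 -F /mnt/brick1/thread1 -+n -c -C -e -I -w -+S 0 -s 200G -r 16384k<br>
</p>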
<p>So for the moment I'm fine, but I would still be interested in
hearing ballpark "efficiency" figures from others using gluster
in a similar setup. <br>
</p>
<p>cheers</p>
<p>Pascal <br>
</p>
<div class="moz-cite-prefix">On 01.05.19 14:55, Amar Tumballi
Suryanarayan wrote:<br>
</div>
<blockquote type="cite"
cite="mid:CAHxyDdPLcfHvkPZqJ86=rj=c64+pf2smU22to5WCdrBm5xcx5w@mail.gmail.com">
<meta http-equiv="Content-Type" content="text/html; charset=UTF-8">
<div dir="ltr">Hi Pascal,
<div><br>
</div>
<div>Sorry for the long delay on this one. And thanks for
testing out the different scenarios. A few questions before
others can have a look and advise you.</div>
<div><br>
</div>
<div>1. What is the volume info output?</div>
<div><br>
</div>
<div>2. Do you see any concerning logs in glusterfs log files?</div>
<div><br>
</div>
<div>3. Please use `gluster volume profile` while running the
tests; it gives a lot of information.</div>
<div><br>
</div>
<div>4. Considering you are using glusterfs-6.0, please take a
statedump of the client process (on any node) before and after
the test, so we can analyze the latency information of each
translator.</div>
<div><br>
</div>
<div>With this information, I hope we will be in a better
position to answer the questions.</div>
<div><br>
</div>
</div>
<br>
<div class="gmail_quote">
<div dir="ltr" class="gmail_attr">On Wed, Apr 10, 2019 at 3:45
PM Pascal Suter <<a href="mailto:pascal.suter@dalco.ch"
moz-do-not-send="true">pascal.suter@dalco.ch</a>> wrote:<br>
</div>
<blockquote class="gmail_quote" style="margin:0px 0px 0px
0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex">I
continued my testing with 5 clients, all attached over 100Gbit/s <br>
Omni-Path via IP over IB. When I run the same iozone benchmark across <br>
all 5 clients, with gluster mounted using the glusterfs client, I get <br>
an aggregated write throughput of only about 400MB/s and an aggregated <br>
read throughput of 1.5GB/s. Each node was writing a single 200GB file in <br>
16MB chunks and the files were distributed across all three bricks on <br>
the server.<br>
<br>
The connection was established over Omni-Path for sure, as there is no <br>
other link between the nodes and the server.<br>
<br>
I have no clue what I'm doing wrong here. I can't believe that this is the <br>
normal performance people would expect to see from gluster. I guess <br>
nobody would be using it if it were this slow.<br>
<br>
Again, when writing directly to the xfs filesystem on the bricks, I get <br>
over 6GB/s read and write throughput using the same benchmark.<br>
<br>
Any advice is appreciated.<br>
<br>
cheers<br>
<br>
Pascal<br>
<br>
On 04.04.19 12:03, Pascal Suter wrote:<br>
> I just noticed I left the most important parameters out :)<br>
><br>
> here's the write command with filesize and recordsize in
it as well :)<br>
><br>
> ./iozone -i 0 -t 1 -F /mnt/gluster/storage/thread1 -+n -c
-C -e -I -w <br>
> -+S 0 -s 200G -r 16384k<br>
><br>
> Also, I ran the benchmark without direct_io, which resulted in even <br>
> worse performance.<br>
><br>
> I also tried to mount the gluster volume via nfs-ganesha, which further <br>
> reduced throughput down to about 450MB/s.<br>
><br>
> If I run the iozone benchmark with 3 threads writing to all three <br>
> bricks directly (on the xfs filesystem) I get throughputs of around <br>
> 6GB/s. If I run the same benchmark through gluster mounted locally <br>
> using the fuse client, with enough threads so that each brick gets <br>
> at least one file written to it, I end up seeing throughputs around <br>
> 1.5GB/s. That's a 4x decrease in performance, and it actually is the <br>
> same if I run the benchmark with fewer threads so that files only get <br>
> written to two out of three bricks.<br>
><br>
> CPU load on the server is around 25% by the way, nicely distributed <br>
> across all available cores.<br>
><br>
> I can't believe that gluster should really be so slow while everybody <br>
> is just happily using it. Any hints on what I'm doing wrong are very <br>
> welcome.<br>
><br>
> I'm using gluster 6.0 by the way.<br>
><br>
> regards<br>
><br>
> Pascal<br>
><br>
> On 03.04.19 12:28, Pascal Suter wrote:<br>
>> Hi all<br>
>><br>
>> I am currently testing gluster on a single server. I have three <br>
>> bricks, each a hardware RAID6 volume with thin-provisioned LVM that <br>
>> was aligned to the RAID and then formatted with xfs.<br>
>><br>
>> I've created a distributed volume so that entire files get <br>
>> distributed across my three bricks.<br>
>><br>
>> First I ran an iozone benchmark across each brick, testing the read <br>
>> and write performance of a single large file per brick.<br>
>><br>
>> I then mounted my gluster volume locally and ran another iozone run <br>
>> with the same parameters, writing a single file. The file went to <br>
>> brick 1 which, when used directly, would write at 2.3GB/s and read <br>
>> at 1.5GB/s. However, through gluster I got only 800MB/s read and <br>
>> 750MB/s write throughput.<br>
>><br>
>> Another run with two processes each writing a file, where one file <br>
>> went to the first brick and the other file to the second brick (which <br>
>> by itself, when directly accessed, wrote at 2.8GB/s and read at <br>
>> 2.7GB/s), resulted in 1.2GB/s of aggregated write throughput and the <br>
>> same aggregated read throughput.<br>
>><br>
>> Is this the normal performance I can expect out of glusterfs, or is <br>
>> it worth tuning in order to really get closer to the actual brick <br>
>> filesystem performance?<br>
>><br>
>> Here are the iozone commands I use for writing and reading. Note <br>
>> that I am using direct I/O in order to make sure I don't get fooled <br>
>> by the cache :)<br>
>><br>
>> ./iozone -i 0 -t 1 -F /mnt/brick${b}/thread1 -+n -c
-C -e -I -w -+S 0 <br>
>> -s $filesize -r $recordsize >
iozone-brick${b}-write.txt<br>
>><br>
>> ./iozone -i 1 -t 1 -F /mnt/brick${b}/thread1 -+n -c
-C -e -I -w -+S 0 <br>
>> -s $filesize -r $recordsize >
iozone-brick${b}-read.txt<br>
>><br>
>> cheers<br>
>><br>
>> Pascal<br>
>><br>
>> _______________________________________________<br>
>> Gluster-users mailing list<br>
>> <a href="mailto:Gluster-users@gluster.org"
target="_blank" moz-do-not-send="true">Gluster-users@gluster.org</a><br>
>> <a
href="https://lists.gluster.org/mailman/listinfo/gluster-users"
rel="noreferrer" target="_blank" moz-do-not-send="true">https://lists.gluster.org/mailman/listinfo/gluster-users</a><br>
> _______________________________________________<br>
> Gluster-users mailing list<br>
> <a href="mailto:Gluster-users@gluster.org"
target="_blank" moz-do-not-send="true">Gluster-users@gluster.org</a><br>
> <a
href="https://lists.gluster.org/mailman/listinfo/gluster-users"
rel="noreferrer" target="_blank" moz-do-not-send="true">https://lists.gluster.org/mailman/listinfo/gluster-users</a><br>
_______________________________________________<br>
Gluster-users mailing list<br>
<a href="mailto:Gluster-users@gluster.org" target="_blank"
moz-do-not-send="true">Gluster-users@gluster.org</a><br>
<a
href="https://lists.gluster.org/mailman/listinfo/gluster-users"
rel="noreferrer" target="_blank" moz-do-not-send="true">https://lists.gluster.org/mailman/listinfo/gluster-users</a><br>
<br>
<br>
</blockquote>
</div>
<br clear="all">
<div><br>
</div>
-- <br>
<div dir="ltr" class="gmail_signature">
<div dir="ltr">
<div>
<div dir="ltr">
<div>Amar Tumballi (amarts)<br>
</div>
</div>
</div>
</div>
</div>
</blockquote>
</body>
</html>