[Gluster-users] GlusterFS performance
Yavor Marinov
ymarinov at neterra.net
Thu May 23 14:16:42 UTC 2013
Here is the output of 'gluster volume profile test info' after a 1.1 GB file
was copied onto the GlusterFS volume mounted on the client:
[root@gfs1 data]# gluster volume profile test info
Brick: 93.123.32.41:/data
-------------------------
Cumulative Stats:
Block Size: 131072b+
No. of Reads: 0
No. of Writes: 8192
 %-latency   Avg-latency   Min-Latency   Max-Latency   No. of calls   Fop
 ---------   -----------   -----------   -----------   ------------   ----
      0.00       0.00 us       0.00 us       0.00 us              1   RELEASE
      0.02     114.00 us      35.00 us     208.00 us              5   LOOKUP
      0.26    9306.00 us    9306.00 us    9306.00 us              1   CREATE
      0.75   26260.00 us   26260.00 us   26260.00 us              1   FLUSH
     98.97     424.91 us     165.00 us   14056.00 us           8192   WRITE
Duration: 18352 seconds
Data Read: 0 bytes
Data Written: 1073741824 bytes
Interval 14 Stats:
Block Size: 131072b+
No. of Reads: 0
No. of Writes: 5353
 %-latency   Avg-latency   Min-Latency   Max-Latency   No. of calls   Fop
 ---------   -----------   -----------   -----------   ------------   ----
      0.00       0.00 us       0.00 us       0.00 us              1   RELEASE
      1.22   26260.00 us   26260.00 us   26260.00 us              1   FLUSH
     98.78     397.60 us     169.00 us   13423.00 us           5353   WRITE
Duration: 172 seconds
Data Read: 0 bytes
Data Written: 701628416 bytes
[root@gfs1 data]#
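
For reference, a run like the one above can be reproduced with roughly the
following sequence (a sketch only: the mount point /mnt/test and the dd
parameters are placeholders of mine, not taken from the session above; the
128 KiB block size and 8192 writes simply match the profile counters):

gluster volume profile test start      # enable profiling on the volume
mount -t glusterfs 93.123.32.41:/test /mnt/test    # FUSE-mount on the client
dd if=/dev/zero of=/mnt/test/bigfile bs=128k count=8192 conv=fsync
gluster volume profile test info       # dump the counters shown above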
---
*Yavor Marinov*
System Administrator
Neterra Ltd.
Telephone: +359 2 975 16 16
Fax: +359 2 975 34 36
Mobile: +359 888 610 048
www.neterra.net
On 05/23/2013 04:54 PM, Michael Brown wrote:
> That's only a single data point (one LOOKUP call) - this tells you
> nothing.
>
> You need to check after it's been running (and processing traffic) for
> a while.
>
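> Concretely, something like this gives you a useful sample (a sketch;
> <VOL> is a placeholder for your volume name):
>
> gluster volume profile <VOL> start   # if profiling isn't running yet
> gluster volume profile <VOL> info    # baseline; the next interval starts here
> # ... let real traffic run for a while ...
> gluster volume profile <VOL> info    # then read the "Interval N Stats" section
>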
> For example, here are the stats from one of my bricks (an SSD):
> Brick: fearless2:/export/bricks/500117310007a84c/glusterdata
> ------------------------------------------------------------
> Cumulative Stats:
> Block Size: 32b+ 64b+ 128b+
> No. of Reads: 0 0 1
> No. of Writes: 1 5634 4252
>
> Block Size: 256b+ 512b+ 2048b+
> No. of Reads: 0 1 0
> No. of Writes: 343 24 1
>
> Block Size: 4096b+ 8192b+ 16384b+
> No. of Reads: 8 7 10
> No. of Writes: 4 0 0
>
> Block Size: 32768b+ 65536b+ 131072b+
> No. of Reads: 25 165 436
> No. of Writes: 2 7 36
>
>  %-latency   Avg-latency   Min-Latency   Max-Latency   No. of calls   Fop
>  ---------   -----------   -----------   -----------   ------------   ----
>       0.00       0.00 us       0.00 us       0.00 us            120   FORGET
>       0.00       0.00 us       0.00 us       0.00 us           2940   RELEASE
>       0.00       0.00 us       0.00 us       0.00 us           4554   RELEASEDIR
>       0.00     104.00 us     104.00 us     104.00 us              1   TRUNCATE
>       0.00     125.00 us     110.00 us     140.00 us              2   XATTROP
>       0.01      12.62 us       8.00 us      25.00 us            146   ACCESS
>       0.04      81.77 us      65.00 us     112.00 us             60   SETXATTR
>       0.05      30.99 us      25.00 us      57.00 us            212   SETATTR
>       0.06      12.73 us       8.00 us      59.00 us            574   INODELK
>       0.09     188.43 us     140.00 us     244.00 us             60   CREATE
>       0.11      25.24 us      15.00 us     149.00 us            533   STATFS
>       0.12     260.72 us     206.00 us     430.00 us             60   MKDIR
>       0.26      11.34 us       5.00 us     127.00 us           2925   FLUSH
>       0.27      15.14 us       7.00 us      90.00 us           2274   ENTRYLK
>       0.36     102.52 us      81.00 us     161.00 us            442   RMDIR
>       0.63      27.61 us      17.00 us     606.00 us           2880   OPEN
>       0.76     171.58 us      91.00 us    5691.00 us            555   UNLINK
>       0.87      22.66 us       8.00 us     469.00 us           4812   READDIR
>       0.87      24.37 us      10.00 us    1302.00 us           4506   STAT
>       0.94      61.67 us      16.00 us     194.00 us           1917   GETXATTR
>       1.06      51.20 us      10.00 us     224.00 us           2600   FSTAT
>       1.14      31.46 us      18.00 us    1016.00 us           4554   OPENDIR
>       2.56      31.19 us      18.00 us    4373.00 us          10304   WRITE
>       2.58     417.28 us      15.00 us    1860.00 us            776   READ
>       3.64      17.26 us       6.00 us    4824.00 us          26507   FINODELK
>      24.03     146.42 us      49.00 us    9854.00 us          20622   FXATTROP
>      26.67     652.70 us      42.00 us   89705.00 us           5134   READDIRP
>      32.86     128.19 us       9.00 us    4617.00 us          32204   LOOKUP
>
>
> On 13-05-23 09:03 AM, Yavor Marinov wrote:
>> I've just enabled profiling on the volume, and this is the information
>> printed by 'profile info':
>>
>> [root@gfs1 ~]# gluster volume profile test info
>> Brick: 93.123.32.41:/data
>> -------------------------
>> Cumulative Stats:
>>  %-latency   Avg-latency   Min-Latency   Max-Latency   No. of calls   Fop
>>  ---------   -----------   -----------   -----------   ------------   ----
>>     100.00     148.00 us     148.00 us     148.00 us              1   LOOKUP
>>
>> Duration: 13950 seconds
>> Data Read: 0 bytes
>> Data Written: 0 bytes
>>
>> Interval 4 Stats:
>>
>> Duration: 7910 seconds
>> Data Read: 0 bytes
>> Data Written: 0 bytes
>>
>> [root@gfs1 ~]#
>>
>> Anything here that might be useful?
>>
>>
>>
>>
>> On 05/23/2013 01:10 PM, Явор Маринов wrote:
>>> I've made a mistake: we are using 30 Mbit connectivity on all of the
>>> nodes. Below is an iperf test between the node and the client:
>>>
>>> [root@gfs4 ~]# iperf -c 93.123.32.41
>>> ------------------------------------------------------------
>>> Client connecting to 93.123.32.41, TCP port 5001
>>> TCP window size: 23.2 KByte (default)
>>> ------------------------------------------------------------
>>> [  3] local 93.123.32.44 port 49838 connected with 93.123.32.41 port 5001
>>> [ ID] Interval Transfer Bandwidth
>>> [ 3] 0.0-10.1 sec 49.9 MBytes 41.5 Mbits/sec
>>> [root@gfs4 ~]#
>>>
>>> But when copying a 1 GB file onto the volume mounted on the client, the
>>> transfer speed between the client and the node is only ~500kb/s.
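>>>
>>> For a more repeatable number than a plain file copy, the same write path
>>> can be timed with dd against the mounted volume (a sketch; the mount
>>> point below is an assumed example, not the real path):
>>>
>>> dd if=/dev/zero of=/mnt/test/ddtest bs=1M count=1024 conv=fsync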
>>>
>>>
>>>
>>>
>>> On 05/23/2013 12:16 PM, Nux! wrote:
>>>> On 23.05.2013 09:41, Явор Маринов wrote:
>>>>> Thanks for your reply.
>>>>>
>>>>> No matter how many nodes (currently the volume has just a single
>>>>> node), the speed is really slow. For testing purposes, I made a volume
>>>>> with only one node, without any replication; however, the speed is
>>>>> still ~500kb/s. The cloud servers are limited to 30Gbit/s, but still
>>>>> the traffic when writing to the node is ~500kb/s.
>>>>>
>>>>> I'm using glusterfsd 3.3.1 with kernel 2.6.18-348.el5xen, and I need
>>>>> to know whether the problem is within the kernel.
>>>>
>>>> I don't think it is a problem with gluster; I never used el5 for
>>>> this, but I doubt there's an inherent problem with it either. That
>>>> speed limit looks odd to me and I think it's somewhere in your setup.
>>>> Have you done any actual speed tests in the VMs?
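>>>> For example (just an illustration; adjust the paths), comparing
>>>>
>>>> dd if=/dev/zero of=/var/tmp/ddtest bs=1M count=1024 conv=fsync
>>>>
>>>> on the VM's local disk with the same dd aimed at the gluster mount would
>>>> show whether the ceiling is in the VM's storage/network or in gluster.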
>>>>
>>>
>>>
>>>
>>
>>
>>
>
>
> --
> Michael Brown | `One of the main causes of the fall of
> Systems Consultant | the Roman Empire was that, lacking zero,
> Net Direct Inc. | they had no way to indicate successful
> ☎: +1 519 883 1172 x5106 | termination of their C programs.' - Firth