[Gluster-users] Horrible Gluster Performance
Philip
flips01 at googlemail.com
Fri Apr 13 10:10:12 UTC 2012
On 13 April 2012 at 11:58, Jerker Nyberg <jerker at update.uu.se> wrote:
> On Fri, 13 Apr 2012, Philip wrote:
>
>
>> Does anyone have an idea what could be the reason for such a bad
>> performance? 22 Disks in a RAID10 should deliver *way* more throughput.
>
> You may already have done so, but you can check the I/O utilization of
> the devices with iostat's "-x" flag, for example "iostat -x 2" for a
> report every two seconds. Check the "%util" column on the right: if it
> is closer to 100 than to 0, the disk subsystem may actually be busy.
>
> --jerker
> _______________________________________________
> Gluster-users mailing list
> Gluster-users at gluster.org
> http://gluster.org/cgi-bin/mailman/listinfo/gluster-users
>
Here is the output (Outgoing bandwidth is currently at 380 Mbps):
Device:         rrqm/s  wrqm/s    r/s    w/s   rsec/s   wsec/s avgrq-sz avgqu-sz  await  svctm  %util
sda               0.00    0.00   0.00   0.00     0.00     0.00     0.00     0.00   0.00   0.00   0.00
sdb               1.00    0.00 129.50   0.00 42624.00     0.00   329.14     1.53  10.97   3.63  47.00

avg-cpu:  %user   %nice %system %iowait  %steal   %idle
           0.06    0.00    0.53    2.82    0.00   96.60

Device:         rrqm/s  wrqm/s    r/s    w/s   rsec/s   wsec/s avgrq-sz avgqu-sz  await  svctm  %util
sda               0.00    0.50   0.00   0.50     0.00     8.00    16.00     0.00   0.00   0.00   0.00
sdb               1.00    0.00 184.00  47.50 64084.00 29412.00   403.87     1.81   8.29   2.16  50.00

avg-cpu:  %user   %nice %system %iowait  %steal   %idle
           0.06    0.00    0.59    2.50    0.00   96.85

Device:         rrqm/s  wrqm/s    r/s    w/s   rsec/s   wsec/s avgrq-sz avgqu-sz  await  svctm  %util
sda               0.00    0.00   0.00   0.00     0.00     0.00     0.00     0.00   0.00   0.00   0.00
sdb               1.00    0.00 156.50   0.00 54944.00     0.00   351.08     1.30   8.28   2.62  41.00

avg-cpu:  %user   %nice %system %iowait  %steal   %idle
           0.21    0.00    0.48    1.61    0.00   97.70
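As a rough sanity-check conversion (assuming iostat's conventional 512-byte sector unit), the sector rates above can be turned into MB/s; for the busiest sdb sample (rsec/s=64084, wsec/s=29412):

```shell
# Convert iostat sector rates (512-byte sectors) to MB/s for the
# busiest sdb interval shown above.
echo '64084 29412' | awk '{ printf "read %.1f MB/s, write %.1f MB/s\n", $1*512/1e6, $2*512/1e6 }'
# read 32.8 MB/s, write 15.1 MB/s
```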