[Gluster-users] Glusterfs performance tweaks

Punit Dambiwal hypunit@gmail.com
Thu Apr 9 01:55:38 UTC 2015


Hi Vijay,

If I run the same command directly on the brick...

[root@cpu01 1]# dd if=/dev/zero of=test bs=64k count=4k oflag=dsync
4096+0 records in
4096+0 records out
268435456 bytes (268 MB) copied, 16.8022 s, 16.0 MB/s
[root@cpu01 1]# pwd
/bricks/1
[root@cpu01 1]#
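
For comparison, a buffered run on the same brick would separate the
per-write sync penalty from the raw drive speed. A minimal sketch
(same path; file names are illustrative):

# buffered writes, with one flush of file data at the end
dd if=/dev/zero of=/bricks/1/test-buffered bs=64k count=4k conv=fdatasync
# larger blocks amortize the per-write sync cost under O_DSYNC
dd if=/dev/zero of=/bricks/1/test-1m bs=1M count=256 oflag=dsync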

On Wed, Apr 8, 2015 at 6:44 PM, Vijay Bellur <vbellur@redhat.com> wrote:

> On 04/08/2015 02:57 PM, Punit Dambiwal wrote:
>
>> Hi,
>>
>> I am getting very slow throughput from GlusterFS (dead slow... even
>> SATA is better), and I am using all SSDs in my environment.
>>
>> I have the following setup:
>> A. 4 host machines running CentOS 7 (GlusterFS 3.6.2 | Distributed
>> Replicated | replica=2)
>> B. Each server has 24 SSDs as bricks (no HW RAID | JBOD)
>> C. Each server has 2 additional SSDs for the OS
>> D. Network is 2x10G with bonding (2x E5 CPUs and 64GB RAM per server)
>>
>> Note: performance/throughput is slower than a normal 7200 RPM SATA
>> disk, even though I am using all SSDs in my environment.
>>
>> Gluster volume options:
>>
>> +++++++++++++++
>> Options Reconfigured:
>> performance.nfs.write-behind-window-size: 1024MB
>> performance.io-thread-count: 32
>> performance.cache-size: 1024MB
>> cluster.quorum-type: auto
>> cluster.server-quorum-type: server
>> diagnostics.count-fop-hits: on
>> diagnostics.latency-measurement: on
>> nfs.disable: on
>> user.cifs: enable
>> auth.allow: *
>> performance.quick-read: off
>> performance.read-ahead: off
>> performance.io-cache: off
>> performance.stat-prefetch: off
>> cluster.eager-lock: enable
>> network.remote-dio: enable
>> storage.owner-uid: 36
>> storage.owner-gid: 36
>> server.allow-insecure: on
>> network.ping-timeout: 0
>> diagnostics.brick-log-level: INFO
>> +++++++++++++++++++
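>>
>> For reference, options like these are set and inspected with the
>> standard volume CLI; a minimal sketch (the volume name "ssd-vol" is
>> an assumption):
>>
>> # set a single option on the volume
>> gluster volume set ssd-vol performance.io-thread-count 32
>> # review the currently configured options
>> gluster volume info ssd-vol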
>>
>> Tests with SATA and GlusterFS SSD:
>> ———————
>> Dell EQL (SATA disk, 7200 RPM)
>> —-
>> [root@mirror ~]# dd if=/dev/zero of=test bs=64k count=4k oflag=dsync
>> 4096+0 records in
>> 4096+0 records out
>> 268435456 bytes (268 MB) copied, 20.7763 s, 12.9 MB/s
>> [root@mirror ~]# dd if=/dev/zero of=test bs=64k count=4k oflag=dsync
>> 4096+0 records in
>> 4096+0 records out
>> 268435456 bytes (268 MB) copied, 23.5947 s, 11.4 MB/s
>>
>> GlusterFS SSD
>> [root@sv-VPN1 ~]# dd if=/dev/zero of=test bs=64k count=4k oflag=dsync
>> 4096+0 records in
>> 4096+0 records out
>> 268435456 bytes (268 MB) copied, 66.2572 s, 4.1 MB/s
>> [root@sv-VPN1 ~]# dd if=/dev/zero of=test bs=64k count=4k oflag=dsync
>> 4096+0 records in
>> 4096+0 records out
>> 268435456 bytes (268 MB) copied, 62.6922 s, 4.3 MB/s
>> ————————
>>
>> Please let me know what I should do to improve the performance of my
>> GlusterFS setup.
>>
>
>
> What is the throughput that you get when you run these commands on the
> disks directly without gluster in the picture?
>
> By running dd with oflag=dsync you are ensuring that there is no
> buffering anywhere in the stack, and that is why such low throughput
> is being observed.
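>
> For instance (commands are illustrative, not results from this setup),
> contrasting per-write sync with a single flush at the end shows how
> much of the slowdown is sync overhead rather than raw device speed:
>
> # O_DSYNC: every 64k write waits for stable storage
> dd if=/dev/zero of=test bs=64k count=4k oflag=dsync
> # buffered writes, with one fdatasync of the file data before dd exits
> dd if=/dev/zero of=test bs=64k count=4k conv=fdatasync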
>
> -Vijay
>

