[Gluster-users] Gluster not saturating 10gb network

Kaamesh Kamalaaharan kaamesh at novocraft.com
Thu Aug 4 09:23:03 UTC 2016


Hi,
Thanks for the reply. I have hardware RAID 5 storage servers with 4 TB WD
Red drives. The drives sit on a 6 Gb/s SATA interface, so it shouldn't be a
drive-speed issue. Just for testing, I ran a dd test directly into the
brick mounted on the storage server itself and got around 800 MB/s, which
is double what I get when the brick is mounted on the client. Are there
any other options or tests I can run to find the root cause of my problem?
I have exhausted most Google searches and tests.
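One caveat worth checking here: a plain dd write test without a sync flag largely measures the Linux page cache rather than the disks or the network, which can make local-brick numbers look better than they are. A minimal sketch of a cache-honest re-run (the filename below is a placeholder; on the real system it would be a path under /export/gfsmount or the brick directory):

```shell
# Illustrative write test; gfs_writetest.bin is a placeholder path.
# conv=fdatasync makes dd flush the data to stable storage before it
# reports a rate, so the page cache cannot inflate the number.
dd if=/dev/zero of=gfs_writetest.bin bs=1M count=64 conv=fdatasync
```

On the real volume a much larger count (several GB) gives a steadier figure; oflag=direct (O_DIRECT) is another option, though some FUSE mounts handle direct I/O poorly, so conv=fdatasync is the safer first test.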

Kaamesh

On Wed, Aug 3, 2016 at 10:58 PM, Leno Vo <lenovolastname at yahoo.com> wrote:

> Your 10G NIC is capable; the problem is the disk speed. Fix your disk
> speed first: use SSDs, SSHDs, or 15k SAS drives, in RAID 0 or in RAID 5/6
> with at least four drives.
>
>
> On Wednesday, August 3, 2016 2:40 AM, Kaamesh Kamalaaharan <
> kaamesh at novocraft.com> wrote:
>
>
> Hi,
> I have gluster 3.6.2 installed on my server network. Due to internal
> issues we are not allowed to upgrade the gluster version. All the clients
> are on the same version of gluster. When transferring files to/from the
> clients or between my nodes over the 10 Gb network, the transfer rate is
> capped at around 450 MB/s. Is there any way to increase the transfer
> speeds for gluster mounts?
>
> Our server setup is as follows:
>
> 2 gluster servers: gfs1 and gfs2
> volume name: gfsvolume
> 3 clients: hpc1, hpc2, hpc3
> gluster volume mounted on /export/gfsmount/
>
> The following are the average results of the tests I have run so far:
>
> 1) Bandwidth test with iperf between all machines: 9.4 Gbit/s
>
> 2) Write-speed test with dd:
>
> dd if=/dev/zero of=/export/gfsmount/testfile bs=1G count=1
>
> result = 399 MB/s
>
> 3) Read-speed test with dd:
>
> dd if=/export/gfsmount/testfile of=/dev/null bs=1G count=1
>
> result = 284 MB/s
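As with the write test above, the read figure can be skewed: the test file was just written, so much of it may be served from RAM rather than from the bricks. A minimal sketch of a cache-honest read test (the filename is a placeholder; on the real system the file would live on the gluster mount, and the cache drop requires root):

```shell
# Illustrative read test; gfs_readtest.bin is a placeholder path.
dd if=/dev/zero of=gfs_readtest.bin bs=1M count=64 conv=fdatasync  # create test data
# On the real system, drop the page cache first so the read exercises
# the brick/network path instead of RAM (run as root):
#   echo 3 > /proc/sys/vm/drop_caches
dd if=gfs_readtest.bin of=/dev/null bs=1M
```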
>
>
> My gluster volume configuration:
>
> Volume Name: gfsvolume
> Type: Replicate
> Volume ID: a29bd2fb-b1ef-4481-be10-c2f4faf4059b
> Status: Started
> Number of Bricks: 1 x 2 = 2
> Transport-type: tcp
> Bricks:
> Brick1: gfs1:/export/sda/brick
> Brick2: gfs2:/export/sda/brick
> Options Reconfigured:
> performance.quick-read: off
> network.ping-timeout: 30
> network.frame-timeout: 90
> performance.cache-max-file-size: 2MB
> cluster.server-quorum-type: none
> nfs.addr-namelookup: off
> nfs.trusted-write: off
> performance.write-behind-window-size: 4MB
> cluster.data-self-heal-algorithm: diff
> performance.cache-refresh-timeout: 60
> performance.cache-size: 1GB
> cluster.quorum-type: fixed
> auth.allow: 172.*
> cluster.quorum-count: 1
> diagnostics.latency-measurement: on
> diagnostics.count-fop-hits: on
> cluster.server-quorum-ratio: 50%
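Since diagnostics.latency-measurement and diagnostics.count-fop-hits are already enabled on this volume, Gluster's built-in profiler can show which file operations are slow. A sketch of the standard CLI sequence, using the volume name from the config above (run on one of the servers against the live cluster):

```shell
# Start profiling, run the dd workload, then inspect per-brick latency
# and FOP statistics to see where the time is going.
gluster volume profile gfsvolume start
# ... run the dd read/write tests here ...
gluster volume profile gfsvolume info
gluster volume profile gfsvolume stop
```

High latency on WRITE/FSYNC FOPs would point at the bricks; high LOOKUP/INODELK latency would point more toward replication overhead on the client side.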
>
>
> Any help would be appreciated.
>
> Thanks,
>
> Kaamesh
>
>
>
> _______________________________________________
> Gluster-users mailing list
> Gluster-users at gluster.org
> http://www.gluster.org/mailman/listinfo/gluster-users
>
>

