[Gluster-users] Extremely low performance - am I doing something wrong?

Strahil hunter86_bg at yahoo.com
Wed Jul 3 15:55:09 UTC 2019


Check the following link (4.1) for the optimal gluster volume settings.
They are quite safe.

Gluster provides a group called virt (/var/lib/glusterd/groups/virt), which can be applied via 'gluster volume set VOLNAME group virt'.
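
For example, with the storage1 volume from your mail (a minimal sketch; substitute your own volume name):

# see which options the group applies
cat /var/lib/glusterd/groups/virt
# apply the whole group at once
gluster volume set storage1 group virt
# verify the reconfigured options afterwards
gluster volume info storage1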

Then try again.
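
If it is still slow after that, you can profile the volume while the dd test runs to see where the time goes (a sketch; the profile commands are part of the gluster CLI):

gluster volume profile storage1 start
# re-run the dd test on the client, then:
gluster volume profile storage1 info
gluster volume profile storage1 stop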

Best Regards,
Strahil Nikolov

On Jul 3, 2019 11:39, Vladimir Melnik <v.melnik at tucha.ua> wrote:
>
> Dear colleagues, 
>
> I have a lab with a bunch of virtual machines (virtualized with KVM) 
> running on the same physical host. 4 of these VMs work as a GlusterFS 
> cluster, and one more VM works as a client. I'll list all the package 
> versions at the end of this message. 
>
> I created 2 volumes - one of type "Distributed-Replicate" and the 
> other of type "Distribute". The problem is that both volumes show 
> really poor performance. 
>
> Here's what I see on the client: 
> $ mount | grep gluster 
> 10.13.1.16:storage1 on /mnt/glusterfs1 type fuse.glusterfs (rw,relatime,user_id=0,group_id=0,default_permissions,allow_other,max_read=131072) 
> 10.13.1.16:storage2 on /mnt/glusterfs2 type fuse.glusterfs (rw,relatime,user_id=0,group_id=0,default_permissions,allow_other,max_read=131072) 
>
> $ for i in {1..5}; do { dd if=/dev/zero of=/mnt/glusterfs1/test.tmp bs=1M count=10 oflag=sync; rm -f /mnt/glusterfs1/test.tmp; } done 
> 10+0 records in 
> 10+0 records out 
> 10485760 bytes (10 MB) copied, 1.47936 s, 7.1 MB/s 
> 10+0 records in 
> 10+0 records out 
> 10485760 bytes (10 MB) copied, 1.62546 s, 6.5 MB/s 
> 10+0 records in 
> 10+0 records out 
> 10485760 bytes (10 MB) copied, 1.71229 s, 6.1 MB/s 
> 10+0 records in 
> 10+0 records out 
> 10485760 bytes (10 MB) copied, 1.68607 s, 6.2 MB/s 
> 10+0 records in 
> 10+0 records out 
> 10485760 bytes (10 MB) copied, 1.82204 s, 5.8 MB/s 
>
> $ for i in {1..5}; do { dd if=/dev/zero of=/mnt/glusterfs2/test.tmp bs=1M count=10 oflag=sync; rm -f /mnt/glusterfs2/test.tmp; } done 
> 10+0 records in 
> 10+0 records out 
> 10485760 bytes (10 MB) copied, 1.15739 s, 9.1 MB/s 
> 10+0 records in 
> 10+0 records out 
> 10485760 bytes (10 MB) copied, 0.978528 s, 10.7 MB/s 
> 10+0 records in 
> 10+0 records out 
> 10485760 bytes (10 MB) copied, 0.910642 s, 11.5 MB/s 
> 10+0 records in 
> 10+0 records out 
> 10485760 bytes (10 MB) copied, 0.998249 s, 10.5 MB/s 
> 10+0 records in 
> 10+0 records out 
> 10485760 bytes (10 MB) copied, 1.03377 s, 10.1 MB/s 
>
> The distributed volume performs a bit better than the 
> distributed-replicated one, but it's still poor. :-( 
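>
> (A note on the test method itself: oflag=sync makes dd issue a 
> synchronous write for every 1 MiB block, so each of the 10 writes 
> pays the full FUSE and network round-trip. A variant that flushes 
> only once at the end, e.g. the command below, would separate 
> per-write latency from raw throughput: 
>
> $ dd if=/dev/zero of=/mnt/glusterfs1/test.tmp bs=1M count=10 conv=fdatasync ) 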
>
> The disk storage itself is OK; here's what I see on each of the 4 
> GlusterFS servers: 
> for i in {1..5}; do { dd if=/dev/zero of=/mnt/storage1/test.tmp bs=1M count=10 oflag=sync; rm -f /mnt/storage1/test.tmp; } done 
> 10+0 records in 
> 10+0 records out 
> 10485760 bytes (10 MB) copied, 0.0656698 s, 160 MB/s 
> 10+0 records in 
> 10+0 records out 
> 10485760 bytes (10 MB) copied, 0.0476927 s, 220 MB/s 
> 10+0 records in 
> 10+0 records out 
> 10485760 bytes (10 MB) copied, 0.036526 s, 287 MB/s 
> 10+0 records in 
> 10+0 records out 
> 10485760 bytes (10 MB) copied, 0.0329145 s, 319 MB/s 
> 10+0 records in 
> 10+0 records out 
> 10485760 bytes (10 MB) copied, 0.0403988 s, 260 MB/s 
>
> The network between all 5 VMs is OK; they all run on the same 
> physical host. 
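>
> (For a quick sanity check of the virtual network itself, a tool like 
> iperf3 could be used - it's not part of the setup above, just a 
> common choice: 
>
> iperf3 -s            # on one of the gluster servers 
> iperf3 -c 10.13.1.16 # on the client ) 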
>
> I can't understand what I'm doing wrong. :-( 
>
> Here's the detailed info about the volumes: 
> Volume Name: storage1 
> Type: Distributed-Replicate 
> Volume ID: a42e2554-99e5-4331-bcc4-0900d002ae32 
> Status: Started 
> Snapshot Count: 0 
> Number of Bricks: 2 x (2 + 1) = 6 
> Transport-type: tcp 
> Bricks: 
> Brick1: gluster1.k8s.maitre-d.tucha.ua:/mnt/storage1/brick1 
> Brick2: gluster2.k8s.maitre-d.tucha.ua:/mnt/storage1/brick2 
> Brick3: gluster3.k8s.maitre-d.tucha.ua:/mnt/storage1/brick_arbiter (arbiter) 
> Brick4: gluster3.k8s.maitre-d.tucha.ua:/mnt/storage1/brick3 
> Brick5: gluster4.k8s.maitre-d.tucha.ua:/mnt/storage1/brick4 
> Brick6: gluster1.k8s.maitre-d.tucha.ua:/mnt/storage1/brick_arbiter (arbiter) 
> Options Reconfigured: 
> transport.address-family: inet 
> nfs.disable: on 
> performance.client-io-threads: off 
>
> Volume Name: storage2 
> Type: Distribute 
> Volume ID: df4d8096-ad03-493e-9e0e-586ce21fb067 
> Status: Started 
> Snapshot Count: 0 
> Number of Bricks: 4 
> Transport-type: tcp 
> Bricks: 
> Brick1: gluster1.k8s.maitre-d.tucha.ua:/mnt/storage2 
> Brick2: gluster2.k8s.maitre-d.tucha.ua:/mnt/storage2 
> Brick3: gluster3.k8s.maitre-d.tucha.ua:/mnt/storage2 
> Brick4: gluster4.k8s.maitre-d.tucha.ua:/mnt/storage2 
> Options Reconfigured: 
> transport.address-family: inet 
> nfs.disable: on 
>
> The OS is CentOS Linux release 7.6.1810. The packages I'm using are: 
> glusterfs-6.3-1.el7.x86_64 
> glusterfs-api-6.3-1.el7.x86_64 
> glusterfs-cli-6.3-1.el7.x86_64 
> glusterfs-client-xlators-6.3-1.el7.x86_64 
> glusterfs-fuse-6.3-1.el7.x86_64 
> glusterfs-libs-6.3-1.el7.x86_64 
> glusterfs-server-6.3-1.el7.x86_64 
> kernel-3.10.0-327.el7.x86_64 
> kernel-3.10.0-514.2.2.el7.x86_64 
> kernel-3.10.0-957.12.1.el7.x86_64 
> kernel-3.10.0-957.12.2.el7.x86_64 
> kernel-3.10.0-957.21.3.el7.x86_64 
> kernel-tools-3.10.0-957.21.3.el7.x86_64 
> kernel-tools-libs-3.10.0-957.21.3.el7.x86_64 
>
> Please be so kind as to help me understand: did I do something wrong, 
> or is this quite normal performance for GlusterFS? 
>
> Thanks in advance! 
> _______________________________________________ 
> Gluster-users mailing list 
> Gluster-users at gluster.org 
> https://lists.gluster.org/mailman/listinfo/gluster-users 

