[Gluster-users] Very slow performance on Sharded GlusterFS

Mohammed Rafi K C rkavunga at redhat.com
Thu Jul 27 14:00:53 UTC 2017


The current sharding implementation has very limited use cases, such as VM
stores, where only a single client accesses the sharded file at a time.
Krutika would be the right person to answer your questions.


Regards

Rafi KC


On 06/30/2017 04:28 PM, gencer at gencgiyen.com wrote:
>
> Hi,
>
>  
>
> I have 2 nodes with 20 bricks in total (10+10).
>
>  
>
> First test:
>
>  
>
> 2 Nodes with Distributed – Striped – Replicated (2 x 2)
>
> 10GbE Speed between nodes
>
>  
>
> “dd” performance: 400MB/s and higher
>
> Downloading a large file from the internet directly to the Gluster volume:
> 250-300MB/s
>
>  
>
> Now the same test without striping but with sharding. The results are the
> same whether I set the shard size to 4MB or 32MB. (Again, 2x replica here.)
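Sharding is toggled per volume with `gluster volume set`. A minimal sketch of how the configuration above could have been applied, assuming a hypothetical volume name `testvol` (the actual volume name is not given in the thread):

```shell
# Enable sharding on the volume (hypothetical volume name "testvol").
gluster volume set testvol features.shard on

# Set the shard size; the poster tried both 4MB and 32MB.
gluster volume set testvol features.shard-block-size 32MB

# Verify the setting took effect.
gluster volume get testvol features.shard-block-size
```

Note that changing `features.shard-block-size` only affects files created after the change; existing files keep the shard size they were written with.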
>
>  
>
> dd performance: 70MB/s
>
> Download directly to the Gluster volume: 60MB/s
>
>  
>
> Now, if we run this test twice at the same time (two dd runs or two
> downloads in parallel), each drops below 25MB/s or slower.
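For reproducing these numbers, a typical sequential-write dd test looks like the sketch below. The mount point and file name are assumptions; `oflag=direct` bypasses the client page cache so the figure reflects network and brick throughput rather than local caching:

```shell
# Hypothetical FUSE mount point /mnt/glusterfs; writes 4 GiB of zeros.
# dd prints the achieved throughput (e.g. "xxx MB/s") on completion.
dd if=/dev/zero of=/mnt/glusterfs/ddtest.bin bs=1M count=4096 oflag=direct

# Run two instances in parallel to reproduce the concurrent-writer case.
dd if=/dev/zero of=/mnt/glusterfs/ddtest1.bin bs=1M count=4096 oflag=direct &
dd if=/dev/zero of=/mnt/glusterfs/ddtest2.bin bs=1M count=4096 oflag=direct &
wait
```

Without `oflag=direct`, dd can report inflated numbers because writes land in the page cache before reaching the bricks.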
>
>  
>
> I expected sharding to be equal, or perhaps slightly slower, but these
> results are terribly slow.
>
>  
>
> I tried tuning (cache, window-size, etc.). Nothing helps.
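The "cache, window-size" tuning mentioned above most likely refers to the standard Gluster performance translator options. A sketch of the kind of settings involved, again assuming a hypothetical volume name `testvol` (the values shown are illustrative, not recommendations from the thread):

```shell
# io-cache read cache size on the client side.
gluster volume set testvol performance.cache-size 1GB

# write-behind aggregation window; this is the "window-size" knob.
gluster volume set testvol performance.write-behind-window-size 4MB

# Allow multiple I/O threads on the client and raise the server thread count.
gluster volume set testvol performance.client-io-threads on
gluster volume set testvol performance.io-thread-count 32
```

As the poster reports, these options mainly help cached and metadata-heavy workloads; they do little for a single large sequential write, which is bounded by the replication and sharding write path.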
>
>  
>
> GlusterFS 3.11 on Debian 9 is used. The kernel is also tuned. The disks
> are “xfs” and 4TB each.
>
>  
>
> Is there any tweak/tuning out there to make it fast?
>
>  
>
> Or is this expected behavior? If it is, it is unacceptable: it is so slow
> that I cannot use it in production.
>
>  
>
> The reason I use shard instead of stripe is that I would like to handle
> files that are bigger than the brick size.
>
>  
>
> Thanks,
>
> Gencer.
>
>
>
> _______________________________________________
> Gluster-users mailing list
> Gluster-users at gluster.org
> http://lists.gluster.org/mailman/listinfo/gluster-users

