[Gluster-users] Increase or performance tune READ perf for glusterfs distributed volume
Karan Sandha
ksandha at redhat.com
Wed Mar 8 10:48:00 UTC 2017
Hi Deepak,
Are you reading a small-file data set or a large-file data set? And
secondly, which protocol is the volume mounted with?
For a small-file data set:

gluster volume set <vol-name> cluster.lookup-optimize on   (default: off)
gluster volume set <vol-name> server.event-threads 4       (default: 2)
gluster volume set <vol-name> client.event-threads 4       (default: 2)

Then run a rebalance on the volume and check the performance again; we
generally see a performance bump when these parameters are turned on.
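The rebalance step mentioned above can be run as follows (a sketch of the standard CLI invocation; `myvol` stands in for your volume name):

```shell
# Redistribute existing files across all bricks after changing volume options
gluster volume rebalance myvol start

# Poll until the status column reports "completed" for every node
gluster volume rebalance myvol status
```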
Thanks & regards
Karan Sandha
On 03/08/2017 02:21 AM, Deepak Naidu wrote:
>
> Are there any tuning parameters for READ that I need to set to get maximum
> read throughput on a glusterfs distributed volume?
> Currently, I am comparing this with my local SSD disk performance.
>
> * My local SSD (/dev/sdb) can random-read 6.3 TB in 56 minutes on an XFS
> filesystem.
>
> * I have a 2-node distributed glusterfs volume. Reading the same
> workload takes around 63 minutes.
>
> * The network is IPoIB using RDMA; the InfiniBand link is 1x 100 Gb/sec (4X EDR).
>
> Any suggestion is appreciated.
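For reference, the figures quoted above imply the following throughputs (a quick sanity check on the arithmetic only, assuming 1 TB = 10^12 bytes; this awk snippet is not from the original post):

```shell
# Throughput implied by the figures above: 6.3 TB read in 56 min (local SSD)
# versus 63 min (glusterfs distributed volume)
awk 'BEGIN {
  bytes = 6.3e12                                          # 6.3 TB, decimal
  printf "local SSD : %.3f GB/s\n", bytes / (56 * 60) / 1e9
  printf "glusterfs : %.3f GB/s\n", bytes / (63 * 60) / 1e9
  printf "slowdown  : %.0f%%\n", (1 - 56.0 / 63.0) * 100
}'
```

In other words, the distributed volume is already within roughly 11% of local-SSD throughput on this workload.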
>
> --
>
> Deepak
>
>
>
> _______________________________________________
> Gluster-users mailing list
> Gluster-users at gluster.org
> http://lists.gluster.org/mailman/listinfo/gluster-users
More information about the Gluster-users mailing list