[Gluster-users] Poor performance with shard

Krutika Dhananjay kdhananj at redhat.com
Tue Sep 5 03:11:02 UTC 2017


Hi,

Speaking from the shard translator's POV, one thing you can do to improve
performance is to use preallocated images.
This eliminates the extra steps shard performs on writes - creating the
shard, writing to it, and then updating the aggregated file size - each of
which costs one network call, and each of those is further multiplied into
several more network calls once it reaches AFR (replicate). With
preallocated images, write performance with and without shard should be
roughly the same.
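As a concrete (illustrative) example, a raw image can be preallocated before first use with fallocate; qemu-img with preallocation options works as well. The path and size below are placeholders - substitute your own:

```shell
# Preallocate the full image size up front so every shard already
# exists; subsequent writes skip shard creation and size updates.
# Path and size are illustrative - use your gluster mount and real
# image size (e.g. 20G).
IMG=/tmp/vm-demo.img
fallocate -l 64M "$IMG"
ls -lh "$IMG"
```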

Also, could you enable client-io-threads and see if that improves
performance?
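If you want to try that, something like the following should do it (assuming the volume is named "data", as in your volume info below):

```shell
# Enable client-side io-threads on the volume, then confirm the setting.
gluster volume set data performance.client-io-threads on
gluster volume get data performance.client-io-threads
```

These commands need to run on a node with a live gluster cluster.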

There's also a patch, merged in 3.11.1, that our testing found to improve
performance for VM workloads -
https://review.gluster.org/#/c/17391/
You could give that version a try.

-Krutika

On Mon, Sep 4, 2017 at 7:48 PM, Roei G <ganor.roei98 at gmail.com> wrote:

> Hey everyone!
> I have deployed gluster on 3 nodes with 4 SSDs each and a 10Gb Ethernet
> connection.
>
> The storage is configured with 3 gluster volumes, every volume has 12
> bricks (4 bricks on every server, 1 per ssd in the server).
>
> With 'features.shard' off, my write speed (measured with the 'dd'
> command) is approximately 250 MB/s; with the feature on, the write
> speed is around 130 MB/s.
>
> --------- gluster version 3.8.13 --------
>
> Volume name: data
> Number of Bricks: 4 x 3 = 12
> Bricks:
> Brick1: server1:/brick/data1
> Brick2: server1:/brick/data2
> Brick3: server1:/brick/data3
> Brick4: server1:/brick/data4
> Brick5: server2:/brick/data1
> .
> .
> .
> Options Reconfigured:
> performance.strict-o-direct: off
> cluster.nufa: off
> features.shard-block-size: 512MB
> features.shard: on
> cluster.server-quorum-type: server
> cluster.quorum-type: auto
> cluster.eager-lock: enable
> network.remote-dio: on
> performance.readdir-ahead: on
>
> Any idea on how to improve my performance?
>
>
>
> _______________________________________________
> Gluster-users mailing list
> Gluster-users at gluster.org
> http://lists.gluster.org/mailman/listinfo/gluster-users
>