[Gluster-users] Slow performance of gluster volume
Krutika Dhananjay
kdhananj at redhat.com
Tue Sep 5 02:57:00 UTC 2017
I'm assuming you are using this volume to store VM images, because I see
shard enabled in the options list.
Speaking from the shard translator's POV, one thing you can do to improve
performance is to use preallocated images.
This will at least eliminate the need for shard to perform multiple steps
as part of each write - creating the shard, then writing to it, then
updating the aggregated file size - each of which requires a network call,
and these calls are further amplified into many more network calls once
they reach AFR (replicate).
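In oVirt this should just be a matter of choosing the "Preallocated"
allocation policy when you create the disk. If you ever create images by
hand, a rough sketch (the path and size below are only placeholders):

    # fully allocate a raw image up front instead of growing it on demand
    qemu-img create -f raw -o preallocation=full /path/to/vm-disk.img 100G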
Second, I'm assuming you're using the default shard block size of 4MB (you
can confirm this using `gluster volume get <VOL> shard-block-size`). In our
tests, we've found that larger shard sizes perform better. So maybe change
the shard-block-size to 64MB (`gluster volume set <VOL> shard-block-size
64MB`).
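For example, with your volume name filled in (I'm spelling out the full
option name, features.shard-block-size, in case the short form isn't
accepted on your version):

    # check the current shard block size, then raise it to 64MB
    gluster volume get vms features.shard-block-size
    gluster volume set vms features.shard-block-size 64MB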
Third, enable stat-prefetch (I see performance.stat-prefetch is currently
off on your volume). We've found that qemu sends quite a lot of [f]stat
calls which can be served from the md-cache, improving performance.
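Something along these lines, using the volume name from your info output:

    gluster volume set vms performance.stat-prefetch on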
Could you also enable client-io-threads and see if that improves
performance?
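If it helps, that should be:

    gluster volume set vms performance.client-io-threads on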
Which version of gluster are you using BTW?
-Krutika
On Tue, Sep 5, 2017 at 4:32 AM, Abi Askushi <rightkicktech at gmail.com> wrote:
> Hi all,
>
> I have a gluster volume used to host several VMs (managed through oVirt).
> The volume is a replica 3 with arbiter and the 3 servers use a 1 Gbit
> network for the storage.
>
> When testing with dd (dd if=/dev/zero of=testfile bs=1G count=1
> oflag=direct) outside the volume (e.g. writing at /root/), dd reports
> ~ 700 MB/s, which is quite decent. When testing dd on the gluster
> volume I get ~ 43 MB/s, which is way lower than the previous result.
> While running dd against the gluster volume, the network traffic did
> not exceed 450 Mbps on the network interface. I would expect to get
> close to 900 Mbps considering that there is 1 Gbit of bandwidth
> available. This results in VMs with very slow performance (especially
> on their write operations).
>
> The full details of the volume are below. Any advice on what can be
> tweaked will be highly appreciated.
>
> Volume Name: vms
> Type: Replicate
> Volume ID: 4513340d-7919-498b-bfe0-d836b5cea40b
> Status: Started
> Snapshot Count: 0
> Number of Bricks: 1 x (2 + 1) = 3
> Transport-type: tcp
> Bricks:
> Brick1: gluster0:/gluster/vms/brick
> Brick2: gluster1:/gluster/vms/brick
> Brick3: gluster2:/gluster/vms/brick (arbiter)
> Options Reconfigured:
> cluster.granular-entry-heal: enable
> performance.strict-o-direct: on
> network.ping-timeout: 30
> storage.owner-gid: 36
> storage.owner-uid: 36
> user.cifs: off
> features.shard: on
> cluster.shd-wait-qlength: 10000
> cluster.shd-max-threads: 8
> cluster.locking-scheme: granular
> cluster.data-self-heal-algorithm: full
> cluster.server-quorum-type: server
> cluster.quorum-type: auto
> cluster.eager-lock: enable
> network.remote-dio: off
> performance.low-prio-threads: 32
> performance.stat-prefetch: off
> performance.io-cache: off
> performance.read-ahead: off
> performance.quick-read: off
> transport.address-family: inet
> performance.readdir-ahead: on
> nfs.disable: on
> nfs.export-volumes: on
>
>
> Thanx,
> Alex
>
> _______________________________________________
> Gluster-users mailing list
> Gluster-users at gluster.org
> http://lists.gluster.org/mailman/listinfo/gluster-users
>