[Gluster-users] Slow performance of gluster volume

Ben Turner bturner at redhat.com
Sun Sep 10 23:31:23 UTC 2017


----- Original Message -----
> From: "Abi Askushi" <rightkicktech at gmail.com>
> To: "Krutika Dhananjay" <kdhananj at redhat.com>
> Cc: "gluster-user" <gluster-users at gluster.org>
> Sent: Tuesday, September 5, 2017 5:02:46 AM
> Subject: Re: [Gluster-users] Slow performance of gluster volume
> 
> Hi Krutika,
> 
> I already have a preallocated disk on VM.
> Now I am checking performance with dd on the hypervisors which have the
> gluster volume configured.
> 
> I also tried several values of shard-block-size and I keep getting the same
> low write performance.
> Enabling client-io-threads also did not have any effect.
> 
> The version of gluster I am using is glusterfs 3.8.12 built on May 11 2017
> 18:46:20.
> The setup is a set of 3 CentOS 7.3 servers running oVirt 4.1, using gluster
> as storage.
> 
> Below are the current settings:
> 
> 
> Volume Name: vms
> Type: Replicate
> Volume ID: 4513340d-7919-498b-bfe0-d836b5cea40b
> Status: Started
> Snapshot Count: 0
> Number of Bricks: 1 x (2 + 1) = 3
> Transport-type: tcp
> Bricks:
> Brick1: gluster0:/gluster/vms/brick
> Brick2: gluster1:/gluster/vms/brick
> Brick3: gluster2:/gluster/vms/brick (arbiter)
> Options Reconfigured:
> server.event-threads: 4
> client.event-threads: 4
> performance.client-io-threads: on
> features.shard-block-size: 512MB
> cluster.granular-entry-heal: enable
> performance.strict-o-direct: on
> network.ping-timeout: 30
> storage.owner-gid: 36
> storage.owner-uid: 36
> user.cifs: off
> features.shard: on
> cluster.shd-wait-qlength: 10000
> cluster.shd-max-threads: 8
> cluster.locking-scheme: granular
> cluster.data-self-heal-algorithm: full
> cluster.server-quorum-type: server
> cluster.quorum-type: auto
> cluster.eager-lock: enable
> network.remote-dio: off
> performance.low-prio-threads: 32
> performance.stat-prefetch: on
> performance.io-cache: off
> performance.read-ahead: off
> performance.quick-read: off
> transport.address-family: inet
> performance.readdir-ahead: on
> nfs.disable: on
> nfs.export-volumes: on
> 
> 
> I observed that when testing with dd if=/dev/zero of=testfile bs=1G count=1 I
> get 65MB/s on the vms gluster volume (and the network traffic between the
> servers reaches ~ 500Mbps), while when testing with dd if=/dev/zero
> of=testfile bs=1G count=1 oflag=direct I get a consistent 10MB/s and the
> network traffic hardly reaches 100Mbps.
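> 
> For reference, a single 1G dd write is not very close to the I/O pattern the
> VMs generate; assuming fio is available and the volume is mounted at /mnt/vms
> (adjust the path as needed), a run like the following should give a more
> representative number:
> 
>     fio --name=writetest --filename=/mnt/vms/testfile --rw=write \
>         --bs=1M --size=1G --direct=1 --ioengine=libaio --iodepth=16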

I have a replica 3 volume on which I was seeing ~65 MB/sec from my VMs; I ended up upgrading to a newer version and now I get closer to 150-180 MB/sec writes.  Since you are using an arbiter I would expect faster writes for you.  What gluster version are you running?  What OS?

-b


> 
> Any other things one can do?
> 
> On Tue, Sep 5, 2017 at 5:57 AM, Krutika Dhananjay < kdhananj at redhat.com >
> wrote:
> 
> 
> 
> I'm assuming you are using this volume to store vm images, because I see
> shard in the options list.
> 
> Speaking from the shard translator's POV, one thing you can do to improve
> performance is to use preallocated images.
> This at least eliminates the need for shard to perform multiple steps as part
> of each write - creating the shard, then writing to it, then updating the
> aggregated file size - each of which requires a network call, and these calls
> are multiplied into many more once they reach AFR (replicate).
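> 
> For example, a raw image could be fully preallocated up front (a sketch only;
> the path and size below are placeholders, and in oVirt this corresponds to
> choosing the "Preallocated" allocation policy for the disk):
> 
>     qemu-img create -f raw -o preallocation=full /mnt/vms/images/vm-disk.raw 100G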
> 
> Second, I'm assuming you're using the default shard block size of 4MB (you
> can confirm this using `gluster volume get <VOL> shard-block-size`). In our
> tests, we've found that larger shard sizes perform better. So maybe change
> the shard-block-size to 64MB (`gluster volume set <VOL> shard-block-size
> 64MB`).
> 
> Third, keep stat-prefetch enabled. We've found that qemu sends quite a lot of
> [f]stat calls which can be served from the (md)cache, improving performance,
> so enable that.
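> 
> For this volume that would be:
> 
>     gluster volume set vms performance.stat-prefetch on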
> 
> Also, could you enable client-io-threads and see if that improves
> performance?
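> 
> As a plain volume-set command, that would be:
> 
>     gluster volume set vms performance.client-io-threads on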
> 
> Which version of gluster are you using BTW?
> 
> -Krutika
> 
> 
> On Tue, Sep 5, 2017 at 4:32 AM, Abi Askushi < rightkicktech at gmail.com >
> wrote:
> 
> 
> 
> Hi all,
> 
> I have a gluster volume used to host several VMs (managed through oVirt).
> The volume is a replica 3 with arbiter and the 3 servers use 1 Gbit network
> for the storage.
> 
> When testing with dd (dd if=/dev/zero of=testfile bs=1G count=1 oflag=direct)
> out of the volume (e.g. writing at /root/) the performance of the dd is
> reported to be ~ 700MB/s, which is quite decent. When testing the dd on the
> gluster volume I get ~ 43 MB/s, which is way lower than the previous. When
> testing the gluster volume with dd, the network traffic was not exceeding
> 450 Mbps on the network interface. I would expect to reach near 900 Mbps
> considering that there is 1 Gbit of bandwidth available. This results in the
> VMs having very slow performance (especially on their write operations).
> 
> The full details of the volume are below. Any advice on what can be tweaked
> will be highly appreciated.
> 
> Volume Name: vms
> Type: Replicate
> Volume ID: 4513340d-7919-498b-bfe0-d836b5cea40b
> Status: Started
> Snapshot Count: 0
> Number of Bricks: 1 x (2 + 1) = 3
> Transport-type: tcp
> Bricks:
> Brick1: gluster0:/gluster/vms/brick
> Brick2: gluster1:/gluster/vms/brick
> Brick3: gluster2:/gluster/vms/brick (arbiter)
> Options Reconfigured:
> cluster.granular-entry-heal: enable
> performance.strict-o-direct: on
> network.ping-timeout: 30
> storage.owner-gid: 36
> storage.owner-uid: 36
> user.cifs: off
> features.shard: on
> cluster.shd-wait-qlength: 10000
> cluster.shd-max-threads: 8
> cluster.locking-scheme: granular
> cluster.data-self-heal-algorithm: full
> cluster.server-quorum-type: server
> cluster.quorum-type: auto
> cluster.eager-lock: enable
> network.remote-dio: off
> performance.low-prio-threads: 32
> performance.stat-prefetch: off
> performance.io-cache: off
> performance.read-ahead: off
> performance.quick-read: off
> transport.address-family: inet
> performance.readdir-ahead: on
> nfs.disable: on
> nfs.export-volumes: on
> 
> 
> Thanx,
> Alex
> 
> _______________________________________________
> Gluster-users mailing list
> Gluster-users at gluster.org
> http://lists.gluster.org/mailman/listinfo/gluster-users
> 

