[Gluster-users] Settings for VM hosting
kdhananj at redhat.com
Fri Apr 19 01:17:49 UTC 2019
Looks good mostly.
You can also turn on performance.stat-prefetch, and set
client.event-threads and server.event-threads to 4.
If your bricks are on SSDs, then you could also enable
performance.client-io-threads.
And if your bricks and hypervisors are on the same set of machines,
then you can turn off cluster.choose-local and see if it helps read
performance. Example commands for all of these are just below.
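Something like this should apply them all (a sketch; the volume name is
taken from your volume info below, and it's worth confirming the option
names against gluster volume set help on your version):

    gluster volume set glusterfs performance.stat-prefetch on
    gluster volume set glusterfs client.event-threads 4
    gluster volume set glusterfs server.event-threads 4
    # only if the bricks are on SSDs
    gluster volume set glusterfs performance.client-io-threads on
    # only if bricks and hypervisors share the same machines
    gluster volume set glusterfs cluster.choose-local off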
Do let us know what helped and what didn't.
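On the libgfapi access you ask about below: qemu can talk to the volume
directly over a gluster:// URL, with no FUSE mount in the path. A minimal
sketch (the image name is made up; the host and volume names are taken
from your volume info):

    # create a disk image directly on the volume
    qemu-img create -f qcow2 gluster://ips1adm.X/glusterfs/vm1.qcow2 20G
    # boot a guest from it (cache=none pairs well with network.remote-dio)
    qemu-system-x86_64 -drive file=gluster://ips1adm.X/glusterfs/vm1.qcow2,if=virtio,cache=none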
On Thu, Apr 18, 2019 at 1:05 PM <lemonnierk at ulrar.net> wrote:
> We've been using the same settings, found in an old email here, since
> v3.7 of gluster for our VM hosting volumes. They've been working fine,
> but since we've just installed a v6 for testing, I figured there might
> be new settings I should be aware of.
> So for access through libgfapi (qemu), for VM hard drives, is that
> still optimal and recommended?
> Volume Name: glusterfs
> Type: Replicate
> Volume ID: b28347ff-2c27-44e0-bc7d-c1c017df7cd1
> Status: Started
> Snapshot Count: 0
> Number of Bricks: 1 x 3 = 3
> Transport-type: tcp
> Brick1: ips1adm.X:/mnt/glusterfs/brick
> Brick2: ips2adm.X:/mnt/glusterfs/brick
> Brick3: ips3adm.X:/mnt/glusterfs/brick
> Options Reconfigured:
> performance.readdir-ahead: on
> cluster.quorum-type: auto
> cluster.server-quorum-type: server
> network.remote-dio: enable
> cluster.eager-lock: enable
> performance.quick-read: off
> performance.read-ahead: off
> performance.io-cache: off
> performance.stat-prefetch: off
> features.shard: on
> features.shard-block-size: 64MB
> cluster.data-self-heal-algorithm: full
> network.ping-timeout: 30
> diagnostics.count-fop-hits: on
> diagnostics.latency-measurement: on
> transport.address-family: inet
> nfs.disable: on
> performance.client-io-threads: off
> Thanks!