[Gluster-users] Settings for VM hosting
nl at fischer-ka.de
Thu Apr 18 13:44:28 UTC 2019
I set up the storage for my nodes (also replica 3, but a distributed
replicated volume with some more nodes) just a few weeks ago based on the
"virt group" as recommended ... and here is mine:
I only changed the data-self-heal-algorithm, because CPU is not much of
a bottleneck on my nodes, so I chose to spend CPU rather than bandwidth
(based on my understanding of the docs).
I have some more nodes, so sharding will distribute the data better
across my nodes.
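For reference, a minimal sketch of applying the virt group and then overriding the self-heal algorithm as described above. The volume name "myvol" is a placeholder, and the value "diff" is an assumption that matches the CPU-over-bandwidth trade-off mentioned (the post does not state the exact value):

```shell
# Apply the recommended "virt" option group to a volume
# ("myvol" is a placeholder volume name, not from this thread).
gluster volume set myvol group virt

# Override the self-heal algorithm afterwards. "diff" checksums
# blocks and only transfers changed ones (more CPU, less network);
# "full" copies the whole file (less CPU, more network). "diff"
# here is an assumption matching the trade-off described above.
gluster volume set myvol cluster.data-self-heal-algorithm diff

# Verify the effective setting.
gluster volume get myvol cluster.data-self-heal-algorithm
```

Settings applied via `volume set` take effect without restarting the volume, so the override can be done on a live cluster.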
On 18.04.19 at 15:13, Martin Toth wrote:
> I am curious about your setup and settings as well. I have exactly the same setup and use case.
> - Why do you use sharding on replica 3? Do you have bricks (disks) of various sizes per node?
> Wonder if someone will share settings for this setup.
>> On 18 Apr 2019, at 09:27, lemonnierk at ulrar.net wrote:
>> We've been using the same settings, found in an old email here, since
>> v3.7 of gluster for our VM hosting volumes. They've been working fine
>> but since we've just installed a v6 for testing I figured there might
>> be new settings I should be aware of.
>> So for access through libgfapi (qemu), for VM hard drives, is that
>> still optimal and recommended?
>> Volume Name: glusterfs
>> Type: Replicate
>> Volume ID: b28347ff-2c27-44e0-bc7d-c1c017df7cd1
>> Status: Started
>> Snapshot Count: 0
>> Number of Bricks: 1 x 3 = 3
>> Transport-type: tcp
>> Brick1: ips1adm.X:/mnt/glusterfs/brick
>> Brick2: ips2adm.X:/mnt/glusterfs/brick
>> Brick3: ips3adm.X:/mnt/glusterfs/brick
>> Options Reconfigured:
>> performance.readdir-ahead: on
>> cluster.quorum-type: auto
>> cluster.server-quorum-type: server
>> network.remote-dio: enable
>> cluster.eager-lock: enable
>> performance.quick-read: off
>> performance.read-ahead: off
>> performance.io-cache: off
>> performance.stat-prefetch: off
>> features.shard: on
>> features.shard-block-size: 64MB
>> cluster.data-self-heal-algorithm: full
>> network.ping-timeout: 30
>> diagnostics.count-fop-hits: on
>> diagnostics.latency-measurement: on
>> transport.address-family: inet
>> nfs.disable: on
>> performance.client-io-threads: off
>> Thanks!
>> Gluster-users mailing list
>> Gluster-users at gluster.org