[Gluster-users] corruption using gluster and iSCSI with LIO
Lindsay Mathieson
lindsay.mathieson at gmail.com
Thu Nov 17 22:29:49 UTC 2016
On 18/11/2016 8:17 AM, Olivier Lambert wrote:
> gluster volume info gv0
>
> Volume Name: gv0
> Type: Replicate
> Volume ID: 2f8658ed-0d9d-4a6f-a00b-96e9d3470b53
> Status: Started
> Snapshot Count: 0
> Number of Bricks: 1 x 2 = 2
> Transport-type: tcp
> Bricks:
> Brick1: 10.0.0.1:/bricks/brick1/gv0
> Brick2: 10.0.0.2:/bricks/brick1/gv0
> Options Reconfigured:
> nfs.disable: on
> performance.readdir-ahead: on
> transport.address-family: inet
> features.shard: on
> features.shard-block-size: 16MB
When hosting VMs it's essential to set these options:
network.remote-dio: enable
cluster.eager-lock: enable
performance.io-cache: off
performance.read-ahead: off
performance.quick-read: off
performance.stat-prefetch: on
performance.strict-write-ordering: off
cluster.server-quorum-type: server
cluster.quorum-type: auto
cluster.data-self-heal: on
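If it helps, the options above can be applied with the standard `gluster volume set` command (using your volume name gv0):

```shell
# Apply the recommended VM-hosting options to the gv0 volume.
# Run on any node in the trusted pool; settings apply cluster-wide.
gluster volume set gv0 network.remote-dio enable
gluster volume set gv0 cluster.eager-lock enable
gluster volume set gv0 performance.io-cache off
gluster volume set gv0 performance.read-ahead off
gluster volume set gv0 performance.quick-read off
gluster volume set gv0 performance.stat-prefetch on
gluster volume set gv0 performance.strict-write-ordering off
gluster volume set gv0 cluster.server-quorum-type server
gluster volume set gv0 cluster.quorum-type auto
gluster volume set gv0 cluster.data-self-heal on
```

You can confirm they took effect with `gluster volume info gv0`.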
Also, with replica 2 and quorum enabled (required), your volume will become
read-only when one node goes down, to prevent the possibility of
split-brain - you *really* want to avoid that :)
I'd recommend a replica 3 volume - that way one node can go down, but the
other two still form a quorum and the volume remains r/w.
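For a fresh setup, creating a replica 3 volume would look something like this (the third node's address is a placeholder; brick paths match your existing layout):

```shell
# Create a 3-way replicated volume across three nodes.
# 10.0.0.3 is assumed here as the address of the third node.
gluster volume create gv0 replica 3 transport tcp \
    10.0.0.1:/bricks/brick1/gv0 \
    10.0.0.2:/bricks/brick1/gv0 \
    10.0.0.3:/bricks/brick1/gv0
gluster volume start gv0
```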
If the extra disks are not possible, then an arbiter volume can be set up
- basically dummy files (metadata only) on the third node.
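An arbiter volume is created with the `arbiter` keyword - the last brick listed becomes the arbiter (again, the third node's address is an assumption):

```shell
# Replica 3 with 1 arbiter: the third brick stores only file metadata,
# so it needs very little disk space but still provides quorum.
gluster volume create gv0 replica 3 arbiter 1 \
    10.0.0.1:/bricks/brick1/gv0 \
    10.0.0.2:/bricks/brick1/gv0 \
    10.0.0.3:/bricks/arbiter/gv0
gluster volume start gv0
```

The arbiter brick can live on much smaller storage since it never holds file data.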
--
Lindsay Mathieson