[Gluster-users] gluster quorum settings
Gambit15
dougti+gluster at gmail.com
Thu Feb 9 23:14:12 UTC 2017
Hi Bap,
On 6 February 2017 at 07:27, pasawwa <pasawwa at gmail.com> wrote:
> Hello,
>
> we just created a 3 node gluster volume ( replica 3 arbiter 1 ) and got this
> "systemctl status glusterd" message:
>
> n1.test.net etc-glusterfs-glusterd.vol[1458]: [2017-02-03
> 17:56:24.691334] C [MSGID: 106003] [glusterd-server-quorum.c:341:
> glusterd_do_volume_quorum_action] 0-management: Server quorum regained
> for volume TESTp1. Starting local bricks.
>
> How can we set up the gluster quorum params to eliminate this warning, to
> avoid split brain, and to stay writeable if any single node goes down?
>
> current settings:
>
> server.event-threads: 8
> client.event-threads: 8
> performance.io-thread-count: 20
> performance.readdir-ahead: on
> performance.quick-read: off
> performance.read-ahead: off
> performance.io-cache: off
> performance.stat-prefetch: off
> cluster.eager-lock: enable
> network.remote-dio: enable
> cluster.quorum-type: auto            # we are not sure this is 100% successful
> against split brain ( e.g. when updating nodes )
> cluster.server-quorum-type: server   # it looks to be OK
> features.shard: on
> cluster.data-self-heal-algorithm: diff
> storage.owner-uid: 36
> storage.owner-gid: 36
> server.allow-insecure: on
> network.ping-timeout: 10
>
For a replica 3 setup, those default quorum configurations should allow you to
maintain writes & avoid split-brain should any single node fail.
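
If you want to double-check what's actually in effect on the volume, and your
gluster release supports "volume get", something like this should show it
(just a sketch, using the TESTp1 volume name from your log):

    # show the effective quorum settings for the volume
    gluster volume get TESTp1 cluster.quorum-type
    gluster volume get TESTp1 cluster.server-quorum-type

    # or list everything quorum-related in one go
    gluster volume get TESTp1 all | grep -i quorum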
To automate the healing process, I'd also add these to the list (example
commands below):
cluster.entry-self-heal: on
cluster.metadata-self-heal: on
cluster.data-self-heal: on
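
If they aren't set yet, applying them is just a matter of (a sketch, again
using your TESTp1 volume name; run from any node in the pool):

    gluster volume set TESTp1 cluster.entry-self-heal on
    gluster volume set TESTp1 cluster.metadata-self-heal on
    gluster volume set TESTp1 cluster.data-self-heal on

    # once a node comes back after an outage, watch the heal progress with
    gluster volume heal TESTp1 info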
>
> https://gluster.readthedocs.io/en/latest/Administrator%20Guide/arbiter-volumes-and-quorum/
>
> regards
> Bap.
>
> _______________________________________________
> Gluster-users mailing list
> Gluster-users at gluster.org
> http://lists.gluster.org/mailman/listinfo/gluster-users
>