[Gluster-users] 2 Node glusterfs quorum help
craigyk at gmail.com
Sun Feb 8 16:39:56 UTC 2015
I added a third server to the cluster to serve as a tie-breaker, and that worked. The third server does not actually contribute any bricks to any volumes; it only counts toward server quorum.
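For context: with cluster.server-quorum-type set to server and a 50% ratio, a two-node cluster that loses one peer is left with exactly 50% of peers, which does not satisfy quorum, so glusterd stops the bricks on the surviving node as well. A brickless third peer means one failure still leaves 2 of 3 peers up. A rough sketch of the commands (the hostname gfs3 is my assumption):

```shell
# Probe the tie-breaker node from one of the existing peers.
# gfs3 holds no bricks; it exists only to count toward server quorum.
gluster peer probe gfs3

# Confirm all three peers show State: Peer in Cluster (Connected).
gluster peer status

# The volume itself is unchanged; it still has only the two bricks.
gluster volume info gfsvolume
```

With three peers, losing any one leaves 2/3 of the pool alive, which is above the 50% server-quorum ratio, so the surviving brick stays up.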
> On Feb 8, 2015, at 2:50 AM, Kaamesh Kamalaaharan <kaamesh at novocraft.com> wrote:
> Hi guys. I have a 2-node replicated gluster setup with the quorum count set at 1 brick. By my understanding, this means that the volume will not go down when one brick is disconnected. This proves false, however: when one brick is disconnected (I just pulled it off the network), the remaining brick goes down as well and I lose my mount points on the server.
> Can anyone shed some light on what's wrong?
> My volume options are as follows:
> Volume Name: gfsvolume
> Type: Replicate
> Volume ID: a29bd2fb-b1ef-4481-be10-c2f4faf4059b
> Status: Started
> Number of Bricks: 1 x 2 = 2
> Transport-type: tcp
> Brick1: gfs1:/export/sda/brick
> Brick2: gfs2:/export/sda/brick
> Options Reconfigured:
> cluster.quorum-count: 1
> auth.allow: 172.*
> cluster.quorum-type: fixed
> performance.cache-size: 1914589184
> performance.cache-refresh-timeout: 60
> cluster.data-self-heal-algorithm: diff
> performance.write-behind-window-size: 4MB
> nfs.trusted-write: off
> nfs.addr-namelookup: off
> cluster.server-quorum-type: server
> performance.cache-max-file-size: 2MB
> network.frame-timeout: 90
> network.ping-timeout: 30
> performance.quick-read: off
> cluster.server-quorum-ratio: 50%
> Thank You Kindly,
> Gluster-users mailing list
> Gluster-users at gluster.org