[Gluster-users] 2 Node glusterfs quorum help

Kaamesh Kamalaaharan kaamesh at novocraft.com
Mon Feb 9 01:27:01 UTC 2015


It works! Thanks to Craig's suggestion, I set up a third server without a
brick and added it to the trusted pool. Now the volume doesn't go down when
one node is disconnected. Thanks a lot, guys!
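
In case it helps anyone else, this is roughly what I ran (the hostname gfs3
is just what I called the new brick-less server, so adjust for your setup):

    # on gfs1 (or gfs2): add the third server to the trusted pool
    gluster peer probe gfs3

    # confirm that all three peers show up as connected
    gluster peer status

With three peers in the pool, losing one node still leaves 2 out of 3 online,
which is a majority, so server quorum holds and the surviving brick stays up.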

Thank You Kindly,
Kaamesh
Bioinformatician
Novocraft Technologies Sdn Bhd
C-23A-05, 3 Two Square, Section 19, 46300 Petaling Jaya
Selangor Darul Ehsan
Malaysia
Mobile: +60176562635
Ph: +60379600541
Fax: +60379600540

On Mon, Feb 9, 2015 at 2:19 AM, <prmarino1 at gmail.com> wrote:

> Quorum only applies when you have 3 or more bricks replicating each
> other. In other words, it doesn't mean anything in a 2-node, 2-brick
> cluster, so it shouldn't be set.
>
> Put another way, based on your settings it's acting correctly, because it
> thinks that the online brick needs at least one other brick online that it
> agrees with.
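>
> If you want to go back to the defaults on the 2-brick volume, something
> like this should do it (a rough sketch; I'm assuming the volume is still
> named gfsvolume):
>
>     # clear the client-side quorum options so they fall back to the defaults
>     gluster volume reset gfsvolume cluster.quorum-type
>     gluster volume reset gfsvolume cluster.quorum-count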
>
> Sent from my BlackBerry 10 smartphone.
>   From: Kaamesh Kamalaaharan
> Sent: Sunday, February 8, 2015 05:50
> To: gluster-users at gluster.org
> Subject: [Gluster-users] 2 Node glusterfs quorum help
>
> Hi guys. I have a 2-node replicated Gluster setup with the quorum count
> set at 1 brick. My understanding is that this means the volume will not
> go down when one brick is disconnected. However, this turns out to be
> false: when one brick is disconnected (I just pulled it off the network),
> the remaining brick goes down as well and I lose my mount points on the
> server. Can anyone shed some light on what's wrong?
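>
> For reference, this is roughly how I'm checking things after pulling the
> cable (the mount point /mnt/gfsvolume is just the path I happen to use):
>
>     gluster peer status             # the pulled node shows as disconnected
>     gluster volume status gfsvolume # the surviving brick also drops offline
>
>     # remount attempt on the client
>     mount -t glusterfs gfs1:/gfsvolume /mnt/gfsvolume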
>
> My gfs config options are as follows:
>
>
> Volume Name: gfsvolume
> Type: Replicate
> Volume ID: a29bd2fb-b1ef-4481-be10-c2f4faf4059b
> Status: Started
> Number of Bricks: 1 x 2 = 2
> Transport-type: tcp
> Bricks:
> Brick1: gfs1:/export/sda/brick
> Brick2: gfs2:/export/sda/brick
> Options Reconfigured:
> cluster.quorum-count: 1
> auth.allow: 172.*
> cluster.quorum-type: fixed
> performance.cache-size: 1914589184
> performance.cache-refresh-timeout: 60
> cluster.data-self-heal-algorithm: diff
> performance.write-behind-window-size: 4MB
> nfs.trusted-write: off
> nfs.addr-namelookup: off
> cluster.server-quorum-type: server
> performance.cache-max-file-size: 2MB
> network.frame-timeout: 90
> network.ping-timeout: 30
> performance.quick-read: off
> cluster.server-quorum-ratio: 50%
>
>
> Thank You Kindly,
> Kaamesh
>
>
>
> _______________________________________________
> Gluster-users mailing list
> Gluster-users at gluster.org
> http://www.gluster.org/mailman/listinfo/gluster-users
>

