[Gluster-users] 2 Node glusterfs quorum help

ML mail mlnospam at yahoo.com
Mon Feb 9 08:23:56 UTC 2015


This seems to be a workaround; isn't there a proper way to achieve this through the volume configuration? I would not like to have to set up a third fake server just to avoid the issue.


     On Monday, February 9, 2015 2:27 AM, Kaamesh Kamalaaharan <kaamesh at novocraft.com> wrote:
   

 It works! Thanks to Craig's suggestion, I set up a third server without a brick and added it to the trusted pool. Now it doesn't go down. Thanks a lot, guys!
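A minimal sketch of that workaround, assuming glusterd is already installed and running on a hypothetical third host named gfs3, would be roughly:

    gluster peer probe gfs3    # run from gfs1 or gfs2; adds gfs3 to the trusted pool, no brick required
    gluster peer status        # each peer should now show the others as "Peer in Cluster (Connected)"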
Thank You Kindly,
Kaamesh
Bioinformatician
Novocraft Technologies Sdn Bhd
C-23A-05, 3 Two Square, Section 19, 46300 Petaling Jaya
Selangor Darul Ehsan
Malaysia
Mobile: +60176562635
Ph: +60379600541
Fax: +60379600540
On Mon, Feb 9, 2015 at 2:19 AM, <prmarino1 at gmail.com> wrote:

Quorum only applies when you have 3 or more bricks replicating each other. In other words, it doesn't mean anything in a 2-node, 2-brick cluster, so it shouldn't be set.
Based on your settings it is acting correctly, because it thinks the online brick needs a minimum of one other brick it agrees with to stay online.
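A minimal sketch of clearing those settings on the two-brick volume (the volume name and option names are taken from the configuration quoted below; running without any quorum on a 2-node setup is a deliberate trade-off against split-brain protection):

    gluster volume set gfsvolume cluster.server-quorum-type none   # stop glusterd from killing bricks when a peer is lost
    gluster volume reset gfsvolume cluster.quorum-type             # drop the client-side "fixed" quorum
    gluster volume reset gfsvolume cluster.quorum-count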
 Sent from my BlackBerry 10 smartphone.  
From: Kaamesh Kamalaaharan
Sent: Sunday, February 8, 2015 05:50
To: gluster-users at gluster.org
Subject: [Gluster-users] 2 Node glusterfs quorum help


Hi guys. I have a 2-node replicated gluster setup with the quorum count set to 1 brick. My understanding is that this means the volume will not go down when one brick is disconnected. This proves false, however: when one brick is disconnected (I just pulled it off the network), the remaining brick goes down as well and I lose my mount points on the server. Can anyone shed some light on what's wrong?
My gfs config options are as follows:

Volume Name: gfsvolume
Type: Replicate
Volume ID: a29bd2fb-b1ef-4481-be10-c2f4faf4059b
Status: Started
Number of Bricks: 1 x 2 = 2
Transport-type: tcp
Bricks:
Brick1: gfs1:/export/sda/brick
Brick2: gfs2:/export/sda/brick
Options Reconfigured:
cluster.quorum-count: 1
auth.allow: 172.*
cluster.quorum-type: fixed
performance.cache-size: 1914589184
performance.cache-refresh-timeout: 60
cluster.data-self-heal-algorithm: diff
performance.write-behind-window-size: 4MB
nfs.trusted-write: off
nfs.addr-namelookup: off
cluster.server-quorum-type: server
performance.cache-max-file-size: 2MB
network.frame-timeout: 90
network.ping-timeout: 30
performance.quick-read: off
cluster.server-quorum-ratio: 50%
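For reference, the server-side options above (cluster.server-quorum-type / cluster.server-quorum-ratio) are evaluated against peers in the trusted pool, while cluster.quorum-type / cluster.quorum-count apply to the replica's bricks on the client side. A quick way to see both views, using the volume name shown above, would be:

    gluster peer status              # peers in the trusted pool (what server quorum counts)
    gluster volume info gfsvolume    # bricks and the reconfigured options listed above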

Thank You Kindly,
Kaamesh


_______________________________________________
Gluster-users mailing list
Gluster-users at gluster.org
http://www.gluster.org/mailman/listinfo/gluster-users




   

