[Gluster-users] Glusterfs readonly Issue

Nag Pavan Chilakam nchilaka at redhat.com
Tue Nov 15 15:55:46 UTC 2016


Hi Atul,
In short: it is due to the client-side quorum behavior.
Detailed info:
I see that there are 3 nodes in the cluster, i.e. master1, master2 and compute01.
However, the volume is hosted only on master1 and master2.
Also, I see from the vol info that you have enabled both server-side and client-side quorum:
cluster.quorum-type: auto            =================> client-side quorum option
cluster.server-quorum-type: server   =================> server-side quorum option
cluster.server-quorum-ratio: 51%     =================> server-side quorum option
Given that compute01 is also part of the cluster (even though it hosts no bricks and is more of a dummy node in this case), server-side quorum is still maintained when one of the two masters goes down.
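Roughly, the server-side quorum check boils down to this arithmetic (an illustrative sketch only, not glusterd's actual code; the node names just mirror your cluster):

    # server-side quorum: fraction of connected glusterd nodes vs. cluster.server-quorum-ratio
    cluster_nodes = ["master1", "master2", "compute01"]
    connected     = ["master1", "compute01"]   # e.g. master2 is down
    ratio         = 0.51                       # cluster.server-quorum-ratio: 51%

    quorum_met = len(connected) >= ratio * len(cluster_nodes)
    print(quorum_met)   # True: 2 of 3 nodes (~66%) >= 51%, so the bricks keep running

So with either master down, the third node (compute01) keeps server-side quorum satisfied and glusterd does not take the bricks down.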

Client-side quorum is enforced on the client (I/O path), based on how many bricks of the replica set the client can reach.
When set to "auto", writes to a file are allowed only if the number of bricks that are up is >= ceil(n/2), where n is the number of bricks in that replica set. If n is even, there is a further check: when exactly n/2 bricks are up, the first brick of the replica set must be one of them; when more than n/2 bricks are up, the first brick need not be up.
So in a replica 2 (x2) volume, the first brick of the replica pair must always be up for the client to be able to write.
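A minimal sketch of that rule (the function name and structure are only illustrative, not AFR's actual implementation):

    import math

    def client_quorum_allows_writes(bricks_up, replica_count, first_brick_up):
        # quorum needs at least ceil(n/2) reachable bricks in the replica set
        needed = math.ceil(replica_count / 2)
        if bricks_up < needed:
            return False
        # extra check for even replica counts: when exactly n/2 bricks are up,
        # the first brick of the replica set must be one of them
        if replica_count % 2 == 0 and bricks_up == replica_count // 2:
            return first_brick_up
        return True

    # your replica 2 volume: brick1 on master1, brick2 on master2
    print(client_quorum_allows_writes(1, 2, first_brick_up=False))  # master1 down -> False (read-only)
    print(client_quorum_allows_writes(1, 2, first_brick_up=True))   # master2 down -> True  (writes allowed)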

Hence when you bring down node1 (master1, which hosts the first brick) the volume becomes read-only, but when you bring down node2 (master2) you can still write to the volume.


----- Original Message -----
From: "Atul Yadav" <atulyadavtech at gmail.com>
To: "Atin Mukherjee" <amukherj at redhat.com>, "gluster-users" <gluster-users at gluster.org>
Sent: Monday, 14 November, 2016 8:04:24 PM
Subject: [Gluster-users] Glusterfs readonly Issue

Dear Team, 

In the event of the failure of master1, the glusterfs home directory on master2 becomes a read-only fs.

If we manually shut down master2, there is no impact on the file system and all IO operations complete without any problem.

Can you please provide some guidance to isolate the problem?



# gluster peer status 
Number of Peers: 2 

Hostname: master1-ib.dbt.au 
Uuid: a5608d66-a3c6-450e-a239-108668083ff2 
State: Peer in Cluster (Connected) 

Hostname: compute01-ib.dbt.au 
Uuid: d2c47fc2-f673-4790-b368-d214a58c59f4 
State: Peer in Cluster (Connected) 



# gluster vol info home 

Volume Name: home 
Type: Replicate 
Volume ID: 2403ddf9-c2e0-4930-bc94-734772ef099f 
Status: Started 
Number of Bricks: 1 x 2 = 2 
Transport-type: tcp,rdma 
Bricks: 
Brick1: master1-ib.dbt.au:/glusterfs/home/brick1 
Brick2: master2-ib.dbt.au:/glusterfs/home/brick2 
Options Reconfigured: 
performance.quick-read: off 
performance.read-ahead: off 
performance.io-cache: off 
performance.stat-prefetch: off 
network.remote-dio: enable 
cluster.quorum-type: auto  
nfs.disable: on 
performance.readdir-ahead: on 
cluster.server-quorum-type: server 
config.transport: tcp,rdma 
network.ping-timeout: 10 
cluster.server-quorum-ratio: 51% 
cluster.enable-shared-storage: disable 



# gluster vol heal home info 
Brick master1-ib.dbt.au:/glusterfs/home/brick1 
Status: Connected 
Number of entries: 0 

Brick master2-ib.dbt.au:/glusterfs/home/brick2 
Status: Connected 
Number of entries: 0 


# gluster vol heal home info heal-failed 
Gathering list of heal failed entries on volume home has been unsuccessful on bricks that are down. Please check if all brick processes are running


Thank You 
Atul Yadav 

_______________________________________________
Gluster-users mailing list
Gluster-users at gluster.org
http://www.gluster.org/mailman/listinfo/gluster-users

