<div dir="ltr"><br><div class="gmail_extra"><br><div class="gmail_quote">On Wed, Apr 11, 2018 at 4:35 AM, TomK <span dir="ltr"><<a href="mailto:tomkcpr@mdevsys.com" target="_blank">tomkcpr@mdevsys.com</a>></span> wrote:<br><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex">On 4/9/2018 2:45 AM, Alex K wrote:<br>
Hey Alex,<br>
<br>
With two nodes, the setup works, but both sides go down when one node is missing. Setting the two parameters below to none solved my issue, though:<br>
<br>
cluster.quorum-type: none<br>
cluster.server-quorum-type: none<br>
<br></blockquote><div>Yes, this disables quorum and so avoids the issue. Glad that this helped. Bear in mind, though, that it is easier to run into split-brain issues with quorum disabled; that's why at least 3 nodes are recommended. Just to note, I also have a 2-node cluster that has been running without issues for a long time. <br> <br></div>
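<div>If you do add a third node later, you can convert the volume to an arbiter setup and re-enable quorum. A rough sketch, assuming a hypothetical third host nfs03 with a brick at /bricks/0/gv01 (adjust names and paths to your environment):<br>
<br>
# probe the new node and add an arbiter brick to the replica 2 volume<br>
# gluster peer probe nfs03<br>
# gluster volume add-brick gv01 replica 3 arbiter 1 nfs03:/bricks/0/gv01<br>
# then re-enable quorum<br>
# gluster volume set gv01 cluster.quorum-type auto<br>
# gluster volume set gv01 cluster.server-quorum-type server<br>
<br>
The arbiter brick holds only metadata, so it needs little disk space but still provides a third vote for quorum.<br> <br></div><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex">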
Thank you for that.<br>
<br>
Cheers,<br>
Tom<br>
<br>
<blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex"><span class="">
Hi,<br>
<br>
You need at least 3 nodes to have quorum enabled. In a 2-node setup you need to disable quorum so as to still be able to use the volume when one of the nodes goes down. For example, see the commands below.<br>
<br></span><div><div class="h5">
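A minimal sketch, using the gv01 volume from your output below (run on one of the nodes while both are up):<br>
<br>
# gluster volume set gv01 cluster.quorum-type none<br>
# gluster volume set gv01 cluster.server-quorum-type none<br>
<br>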
On Mon, Apr 9, 2018, 09:02 TomK <<a href="mailto:tomkcpr@mdevsys.com" target="_blank">tomkcpr@mdevsys.com</a>> wrote:<br>
<br>
Hey All,<br>
<br>
In a two-node glusterfs setup, with one node down, I can't use the second<br>
node to mount the volume. I understand this is expected behaviour?<br>
Is there any way to allow the secondary node to function, then replicate what<br>
changed to the first (primary) node when it's back online? Or should I just<br>
go for a third node to allow for this?<br>
<br>
Also, how safe is it to set the following to none?<br>
<br>
cluster.quorum-type: auto<br>
cluster.server-quorum-type: server<br>
<br>
<br>
[root@nfs01 /]# gluster volume start gv01<br>
volume start: gv01: failed: Quorum not met. Volume operation not<br>
allowed.<br>
[root@nfs01 /]#<br>
<br>
<br>
[root@nfs01 /]# gluster volume status<br>
Status of volume: gv01<br>
Gluster process TCP Port RDMA Port Online Pid<br>
------------------------------------------------------------------------------<br>
Brick nfs01:/bricks/0/gv01 N/A N/A N N/A<br>
Self-heal Daemon on localhost N/A N/A Y 25561<br>
<br>
Task Status of Volume gv01<br>
------------------------------------------------------------------------------<br>
There are no active volume tasks<br>
<br>
[root@nfs01 /]#<br>
<br>
<br>
[root@nfs01 /]# gluster volume info<br>
<br>
Volume Name: gv01<br>
Type: Replicate<br>
Volume ID: e5ccc75e-5192-45ac-b410-a34ebd777666<br>
Status: Started<br>
Snapshot Count: 0<br>
Number of Bricks: 1 x 2 = 2<br>
Transport-type: tcp<br>
Bricks:<br>
Brick1: nfs01:/bricks/0/gv01<br>
Brick2: nfs02:/bricks/0/gv01<br>
Options Reconfigured:<br>
transport.address-family: inet<br>
nfs.disable: on<br>
performance.client-io-threads: off<br>
nfs.trusted-sync: on<br>
performance.cache-size: 1GB<br>
performance.io-thread-count: 16<br>
performance.write-behind-window-size: 8MB<br>
performance.readdir-ahead: on<br>
client.event-threads: 8<br>
server.event-threads: 8<br>
cluster.quorum-type: auto<br>
cluster.server-quorum-type: server<br>
[root@nfs01 /]#<br>
<br>
<br>
<br>
<br>
==> n.log <==<br>
[2018-04-09 05:08:13.704156] I [MSGID: 100030] [glusterfsd.c:2556:main]<br>
0-/usr/sbin/glusterfs: Started running /usr/sbin/glusterfs version<br>
3.13.2 (args: /usr/sbin/glusterfs --process-name fuse<br>
--volfile-server=nfs01 --volfile-id=/gv01 /n)<br>
[2018-04-09 05:08:13.711255] W [MSGID: 101002]<br>
[options.c:995:xl_opt_validate] 0-glusterfs: option 'address-family' is<br>
deprecated, preferred is 'transport.address-family', continuing with<br>
correction<br>
[2018-04-09 05:08:13.728297] W [socket.c:3216:socket_connect]<br>
0-glusterfs: Error disabling sockopt IPV6_V6ONLY: "Protocol not<br>
available"<br>
[2018-04-09 05:08:13.729025] I [MSGID: 101190]<br>
[event-epoll.c:613:event_dispatch_epoll_worker] 0-epoll: Started thread<br>
with index 1<br>
[2018-04-09 05:08:13.737757] I [MSGID: 101190]<br>
[event-epoll.c:613:event_dispatch_epoll_worker] 0-epoll: Started thread<br>
with index 2<br>
[2018-04-09 05:08:13.738114] I [MSGID: 101190]<br>
[event-epoll.c:613:event_dispatch_epoll_worker] 0-epoll: Started thread<br>
with index 3<br>
[2018-04-09 05:08:13.738203] I [MSGID: 101190]<br>
[event-epoll.c:613:event_dispatch_epoll_worker] 0-epoll: Started thread<br>
with index 4<br>
[2018-04-09 05:08:13.738324] I [MSGID: 101190]<br>
[event-epoll.c:613:event_dispatch_epoll_worker] 0-epoll: Started thread<br>
with index 5<br>
[2018-04-09 05:08:13.738330] I [MSGID: 101190]<br>
[event-epoll.c:613:event_dispatch_epoll_worker] 0-epoll: Started thread<br>
with index 6<br>
[2018-04-09 05:08:13.738655] I [MSGID: 101190]<br>
[event-epoll.c:613:event_dispatch_epoll_worker] 0-epoll: Started thread<br>
with index 7<br>
[2018-04-09 05:08:13.738742] I [MSGID: 101190]<br>
[event-epoll.c:613:event_dispatch_epoll_worker] 0-epoll: Started thread<br>
with index 8<br>
[2018-04-09 05:08:13.739460] W [MSGID: 101174]<br>
[graph.c:363:_log_if_unknown_option] 0-gv01-readdir-ahead: option<br>
'parallel-readdir' is not recognized<br>
[2018-04-09 05:08:13.739787] I [MSGID: 114020] [client.c:2360:notify]<br>
0-gv01-client-0: parent translators are ready, attempting connect on<br>
transport<br>
[2018-04-09 05:08:13.747040] W [socket.c:3216:socket_connect]<br>
0-gv01-client-0: Error disabling sockopt IPV6_V6ONLY: "Protocol not<br>
available"<br>
[2018-04-09 05:08:13.747372] I [MSGID: 114020] [client.c:2360:notify]<br>
0-gv01-client-1: parent translators are ready, attempting connect on<br>
transport<br>
[2018-04-09 05:08:13.747883] E [MSGID: 114058]<br>
[client-handshake.c:1571:client_query_portmap_cbk] 0-gv01-client-0:<br>
failed to get the port number for remote subvolume. Please run 'gluster<br>
volume status' on server to see if brick process is running.<br>
[2018-04-09 05:08:13.748026] I [MSGID: 114018]<br>
[client.c:2285:client_rpc_notify] 0-gv01-client-0: disconnected from<br>
gv01-client-0. Client process will keep trying to connect to glusterd<br>
until brick's port is available<br>
[2018-04-09 05:08:13.748070] W [MSGID: 108001]<br>
[afr-common.c:5391:afr_notify] 0-gv01-replicate-0: Client-quorum is<br>
not met<br>
[2018-04-09 05:08:13.754493] W [socket.c:3216:socket_connect]<br>
0-gv01-client-1: Error disabling sockopt IPV6_V6ONLY: "Protocol not<br>
available"<br>
Final graph:<br>
+------------------------------------------------------------------------------+<br>
1: volume gv01-client-0<br>
2: type protocol/client<br>
3: option ping-timeout 42<br>
4: option remote-host nfs01<br>
5: option remote-subvolume /bricks/0/gv01<br>
6: option transport-type socket<br>
7: option transport.address-family inet<br>
8: option username 916ccf06-dc1d-467f-bc3d-f00a7449618f<br>
9: option password a44739e0-9587-411f-8e6a-9a6a4e46156c<br>
10: option event-threads 8<br>
11: option transport.tcp-user-timeout 0<br>
12: option transport.socket.keepalive-time 20<br>
13: option transport.socket.keepalive-interval 2<br>
14: option transport.socket.keepalive-count 9<br>
15: option send-gids true<br>
16: end-volume<br>
17:<br>
18: volume gv01-client-1<br>
19: type protocol/client<br>
20: option ping-timeout 42<br>
21: option remote-host nfs02<br>
22: option remote-subvolume /bricks/0/gv01<br>
23: option transport-type socket<br>
24: option transport.address-family inet<br>
25: option username 916ccf06-dc1d-467f-bc3d-f00a7449618f<br>
26: option password a44739e0-9587-411f-8e6a-9a6a4e46156c<br>
27: option event-threads 8<br>
28: option transport.tcp-user-timeout 0<br>
29: option transport.socket.keepalive-time 20<br>
30: option transport.socket.keepalive-interval 2<br>
31: option transport.socket.keepalive-count 9<br>
32: option send-gids true<br>
33: end-volume<br>
34:<br>
35: volume gv01-replicate-0<br>
36: type cluster/replicate<br>
37: option afr-pending-xattr gv01-client-0,gv01-client-1<br>
38: option quorum-type auto<br>
39: option use-compound-fops off<br>
40: subvolumes gv01-client-0 gv01-client-1<br>
41: end-volume<br>
42:<br>
43: volume gv01-dht<br>
44: type cluster/distribute<br>
45: option lock-migration off<br>
46: subvolumes gv01-replicate-0<br>
47: end-volume<br>
48:<br>
49: volume gv01-write-behind<br>
50: type performance/write-behind<br>
51: option cache-size 8MB<br>
52: subvolumes gv01-dht<br>
53: end-volume<br>
54:<br>
55: volume gv01-read-ahead<br>
56: type performance/read-ahead<br>
57: subvolumes gv01-write-behind<br>
58: end-volume<br>
59:<br>
60: volume gv01-readdir-ahead<br>
61: type performance/readdir-ahead<br>
62: option parallel-readdir off<br>
63: option rda-request-size 131072<br>
64: option rda-cache-limit 10MB<br>
65: subvolumes gv01-read-ahead<br>
66: end-volume<br>
67:<br>
68: volume gv01-io-cache<br>
69: type performance/io-cache<br>
70: option cache-size 1GB<br>
71: subvolumes gv01-readdir-ahead<br>
72: end-volume<br>
73:<br>
74: volume gv01-quick-read<br>
75: type performance/quick-read<br>
76: option cache-size 1GB<br>
77: subvolumes gv01-io-cache<br>
78: end-volume<br>
79:<br>
80: volume gv01-open-behind<br>
81: type performance/open-behind<br>
82: subvolumes gv01-quick-read<br>
83: end-volume<br>
84:<br>
85: volume gv01-md-cache<br>
86: type performance/md-cache<br>
87: subvolumes gv01-open-behind<br>
88: end-volume<br>
89:<br>
90: volume gv01<br>
91: type debug/io-stats<br>
92: option log-level INFO<br>
93: option latency-measurement off<br>
94: option count-fop-hits off<br>
95: subvolumes gv01-md-cache<br>
96: end-volume<br>
97:<br>
98: volume meta-autoload<br>
99: type meta<br>
100: subvolumes gv01<br>
101: end-volume<br>
102:<br>
+------------------------------------------------------------------------------+<br>
[2018-04-09 05:08:13.922631] E [socket.c:2374:socket_connect_finish]<br>
0-gv01-client-1: connection to <a href="http://192.168.0.119:24007" rel="noreferrer" target="_blank">192.168.0.119:24007</a> failed (No route to<br>
host); disconnecting socket<br>
[2018-04-09 05:08:13.922690] E [MSGID: 108006]<br>
[afr-common.c:5164:__afr_handle_child_down_event] 0-gv01-replicate-0:<br>
All subvolumes are down. Going offline until atleast one of them comes<br>
back up.<br>
[2018-04-09 05:08:13.926201] I [fuse-bridge.c:4205:fuse_init]<br>
0-glusterfs-fuse: FUSE inited with protocol versions: glusterfs 7.24<br>
kernel 7.22<br>
[2018-04-09 05:08:13.926245] I [fuse-bridge.c:4835:fuse_graph_sync]<br>
0-fuse: switched to graph 0<br>
[2018-04-09 05:08:13.926518] I [MSGID: 108006]<br>
[afr-common.c:5444:afr_local_init] 0-gv01-replicate-0: no subvolumes up<br>
[2018-04-09 05:08:13.926671] E [MSGID: 101046]<br>
[dht-common.c:1501:dht_lookup_dir_cbk] 0-gv01-dht: dict is null<br>
[2018-04-09 05:08:13.926762] E [fuse-bridge.c:4271:fuse_first_lookup]<br>
0-fuse: first lookup on root failed (Transport endpoint is not<br>
connected)<br>
[2018-04-09 05:08:13.927207] I [MSGID: 108006]<br>
[afr-common.c:5444:afr_local_init] 0-gv01-replicate-0: no subvolumes up<br>
[2018-04-09 05:08:13.927262] E [MSGID: 101046]<br>
[dht-common.c:1501:dht_lookup_dir_cbk] 0-gv01-dht: dict is null<br>
[2018-04-09 05:08:13.927301] W<br>
[fuse-resolve.c:132:fuse_resolve_gfid_cbk] 0-fuse:<br>
00000000-0000-0000-0000-000000000001: failed to resolve (Transport<br>
endpoint is not connected)<br>
[2018-04-09 05:08:13.927339] E [fuse-bridge.c:900:fuse_getattr_resume]<br>
0-glusterfs-fuse: 2: GETATTR 1 (00000000-0000-0000-0000-000000000001)<br>
resolution failed<br>
[2018-04-09 05:08:13.931497] I [MSGID: 108006]<br>
[afr-common.c:5444:afr_local_init] 0-gv01-replicate-0: no subvolumes up<br>
[2018-04-09 05:08:13.931558] E [MSGID: 101046]<br>
[dht-common.c:1501:dht_lookup_dir_cbk] 0-gv01-dht: dict is null<br>
[2018-04-09 05:08:13.931599] W<br>
[fuse-resolve.c:132:fuse_resolve_gfid_cbk] 0-fuse:<br>
00000000-0000-0000-0000-000000000001: failed to resolve (Transport<br>
endpoint is not connected)<br>
[2018-04-09 05:08:13.931623] E [fuse-bridge.c:900:fuse_getattr_resume]<br>
0-glusterfs-fuse: 3: GETATTR 1 (00000000-0000-0000-0000-000000000001)<br>
resolution failed<br>
[2018-04-09 05:08:13.937258] I [fuse-bridge.c:5093:fuse_thread_proc]<br>
0-fuse: initating unmount of /n<br>
[2018-04-09 05:08:13.938043] W [glusterfsd.c:1393:cleanup_and_exit]<br>
(-->/lib64/libpthread.so.0(+0x7e25) [0x7fb80b05ae25]<br>
-->/usr/sbin/glusterfs(glusterfs_sigwaiter+0xe5) [0x560b52471675]<br>
-->/usr/sbin/glusterfs(cleanup_and_exit+0x6b) [0x560b5247149b] ) 0-:<br>
received signum (15), shutting down<br>
[2018-04-09 05:08:13.938086] I [fuse-bridge.c:5855:fini] 0-fuse:<br>
Unmounting '/n'.<br>
[2018-04-09 05:08:13.938106] I [fuse-bridge.c:5860:fini] 0-fuse: Closing<br>
fuse connection to '/n'.<br>
<br>
==> glusterd.log <==<br>
[2018-04-09 05:08:15.118078] W [socket.c:3216:socket_connect]<br>
0-management: Error disabling sockopt IPV6_V6ONLY: "Protocol not<br>
available"<br>
<br>
==> glustershd.log <==<br>
[2018-04-09 05:08:15.282192] W [socket.c:3216:socket_connect]<br>
0-gv01-client-0: Error disabling sockopt IPV6_V6ONLY: "Protocol not<br>
available"<br>
[2018-04-09 05:08:15.289508] W [socket.c:3216:socket_connect]<br>
0-gv01-client-1: Error disabling sockopt IPV6_V6ONLY: "Protocol not<br>
available"<br>
<br>
<br>
<br>
<br>
<br>
<br>
<br>
--<br>
Cheers,<br>
Tom K.<br>
-------------------------------------------------------------------------------------<br>
<br>
Living on earth is expensive, but it includes a free trip around the<br>
sun.<br>
<br>
_______________________________________________<br>
Gluster-users mailing list<br></div></div>
<a href="mailto:Gluster-users@gluster.org" target="_blank">Gluster-users@gluster.org</a><br>
<a href="http://lists.gluster.org/mailman/listinfo/gluster-users" rel="noreferrer" target="_blank">http://lists.gluster.org/mailman/listinfo/gluster-users</a><br>
<br>
</blockquote><div class="HOEnZb"><div class="h5">
<br>
<br>
-- <br>
Cheers,<br>
Tom K.<br>
-------------------------------------------------------------------------------------<br>
<br>
Living on earth is expensive, but it includes a free trip around the sun.<br>
<br>
</div></div></blockquote></div><br></div></div>