<div dir="ltr"><div dir="ltr"><div>Hello Gluster Community,</div><div><br></div><div>today I got the same error messages in glusterd.log when setting volume options of a freshly created volume. See the log entry:</div><div><br></div><div>[2019-01-30 10:15:55.597268] I [run.c:242:runner_log] (-->/usr/lib64/glusterfs/5.3/xlator/mgmt/glusterd.so(+0xdad2a) [0x7f08ce71ed2a] -->/usr/lib64/glusterfs/5.3/xlator/mgmt/glusterd.so(+0xda81c) [0x7f08ce71e81c] -->/usr/lib64/libglusterfs.so.0(runner_log+0x105) [0x7f08d4bd0575] ) 0-management: Ran script: /var/lib/glusterd/hooks/1/set/post/S32gluster_enable_shared_storage.sh --volname=integration-archive1 -o cluster.lookup-optimize=on --gd-workdir=/var/lib/glusterd<br><b>[2019-01-30 10:15:55.806303] W [socket.c:719:__socket_rwv] 0-management: readv on <a href="http://10.10.12.102:24007">10.10.12.102:24007</a> failed (Input/output error)</b><br><b>[2019-01-30 10:15:55.806344] E [socket.c:246:ssl_dump_error_stack] 0-management: error:140943F2:SSL routines:ssl3_read_bytes:sslv3 alert unexpected messag</b>e<br>The message "E [MSGID: 101191] [event-epoll.c:671:event_dispatch_epoll_worker] 0-epoll: Failed to dispatch handler" repeated 51 times between [2019-01-30 10:15:51.659656] and [2019-01-30 10:15:55.635151]<br>[2019-01-30 10:15:55.806370] I [MSGID: 106004] [glusterd-handler.c:6430:__glusterd_peer_rpc_notify] 0-management: Peer <fs-lrunning-c2-n2> (<ccd0137f-07d8-4e26-a168-b77af79a36af>), in state <Peer in Cluster>, has disconnected from glusterd.<br>[2019-01-30 10:15:55.806487] W [glusterd-locks.c:795:glusterd_mgmt_v3_unlock] (-->/usr/lib64/glusterfs/5.3/xlator/mgmt/glusterd.so(+0x24349) [0x7f08ce668349] -->/usr/lib64/glusterfs/5.3/xlator/mgmt/glusterd.so(+0x2d950) [0x7f08ce671950] -->/usr/lib64/glusterfs/5.3/xlator/mgmt/glusterd.so(+0xe0239) [0x7f08ce724239] ) 0-management: Lock for vol archive1 not held<br>[2019-01-30 10:15:55.806505] W [MSGID: 106117] [glusterd-handler.c:6451:__glusterd_peer_rpc_notify] 0-management: Lock not released for archive1<br>[2019-01-30 10:15:55.806522] W [glusterd-locks.c:795:glusterd_mgmt_v3_unlock] (-->/usr/lib64/glusterfs/5.3/xlator/mgmt/glusterd.so(+0x24349) [0x7f08ce668349] -->/usr/lib64/glusterfs/5.3/xlator/mgmt/glusterd.so(+0x2d950) [0x7f08ce671950] -->/usr/lib64/glusterfs/5.3/xlator/mgmt/glusterd.so(+0xe0239) [0x7f08ce724239] ) 0-management: Lock for vol archive2 not held<br>[2019-01-30 10:15:55.806529] W [MSGID: 106117] [glusterd-handler.c:6451:__glusterd_peer_rpc_notify] 0-management: Lock not released for archive2<br>[2019-01-30 10:15:55.806543] W [glusterd-locks.c:795:glusterd_mgmt_v3_unlock] (-->/usr/lib64/glusterfs/5.3/xlator/mgmt/glusterd.so(+0x24349) [0x7f08ce668349] -->/usr/lib64/glusterfs/5.3/xlator/mgmt/glusterd.so(+0x2d950) [0x7f08ce671950] -->/usr/lib64/glusterfs/5.3/xlator/mgmt/glusterd.so(+0xe0239) [0x7f08ce724239] ) 0-management: Lock for vol gluster_shared_storage not held<br>[2019-01-30 10:15:55.806553] W [MSGID: 106117] [glusterd-handler.c:6451:__glusterd_peer_rpc_notify] 0-management: Lock not released for gluster_shared_storage<br>[2019-01-30 10:15:55.806576] W [glusterd-locks.c:806:glusterd_mgmt_v3_unlock] (-->/usr/lib64/glusterfs/5.3/xlator/mgmt/glusterd.so(+0x24349) [0x7f08ce668349] -->/usr/lib64/glusterfs/5.3/xlator/mgmt/glusterd.so(+0x2d950) [0x7f08ce671950] -->/usr/lib64/glusterfs/5.3/xlator/mgmt/glusterd.so(+0xe0074) [0x7f08ce724074] ) 0-management: Lock owner mismatch. 

Again my peer "fs-lrunning-c2-n2" is not connected, and again there is an SSL error message. @Milind Changire Any idea whether this SSL error is related to the peer disconnect problem? Or is there any problem with the port mapping in Gluster v5.x?
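
For what it's worth, the management connection and the TLS handshake towards the disconnected peer can be tested by hand. A rough sketch, assuming management encryption is enabled and the certificates live in the default locations under /etc/ssl:

# Does glusterd still consider the peer connected?
$ gluster peer status
$ gluster pool list

# Test the TLS handshake against the remote glusterd on port 24007,
# presenting this node's certificate and verifying against the shared CA
$ openssl s_client -connect fs-lrunning-c2-n2:24007 \
    -cert /etc/ssl/glusterfs.pem -key /etc/ssl/glusterfs.key \
    -CAfile /etc/ssl/glusterfs.ca

If the handshake itself fails here, that would point at the TLS layer rather than at the port mapping.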

Regards
David Spisla

On Thu, 17 Jan 2019 at 03:42, Atin Mukherjee <amukherj@redhat.com> wrote:

On Wed, Jan 16, 2019 at 9:48 PM David Spisla <spisla80@gmail.com> wrote:

Dear Gluster Community,

I created a replica 4 volume from gluster-node1 on a 4-node cluster with SSL/TLS network encryption. While setting the 'cluster.use-compound-fops' option, I got this error:

$ volume set: failed: Commit failed on gluster-node2. Please check log file for details.

Here is the glusterd.log from gluster-node1:

[2019-01-15 15:18:36.813034] I [run.c:242:runner_log] (-->/usr/lib64/glusterfs/5.2/xlator/mgmt/glusterd.so(+0xdad2a) [0x7fc24d91cd2a] -->/usr/lib64/glusterfs/5.2/xlator/mgmt/glusterd.so(+0xda81c) [0x7fc24d91c81c] -->/usr/lib64/libglusterfs.so.0(runner_log+0x105) [0x7fc253dce0b5] ) 0-management: Ran script: /var/lib/glusterd/hooks/1/set/post/S30samba-set.sh --volname=integration-archive1 -o cluster.use-compound-fops=on --gd-workdir=/var/lib/glusterd
[2019-01-15 15:18:36.821193] I [run.c:242:runner_log] (-->/usr/lib64/glusterfs/5.2/xlator/mgmt/glusterd.so(+0xdad2a) [0x7fc24d91cd2a] -->/usr/lib64/glusterfs/5.2/xlator/mgmt/glusterd.so(+0xda81c) [0x7fc24d91c81c] -->/usr/lib64/libglusterfs.so.0(runner_log+0x105) [0x7fc253dce0b5] ) 0-management: Ran script: /var/lib/glusterd/hooks/1/set/post/S32gluster_enable_shared_storage.sh --volname=integration-archive1 -o cluster.use-compound-fops=on --gd-workdir=/var/lib/glusterd
[2019-01-15 15:18:36.842383] W [socket.c:719:__socket_rwv] 0-management: readv on 10.10.12.42:24007 failed (Input/output error)
[2019-01-15 15:18:36.842415] E [socket.c:246:ssl_dump_error_stack] 0-management: error:140943F2:SSL routines:ssl3_read_bytes:sslv3 alert unexpected message
The message "E [MSGID: 101191] [event-epoll.c:671:event_dispatch_epoll_worker] 0-epoll: Failed to dispatch handler" repeated 81 times between [2019-01-15 15:18:30.735508] and [2019-01-15 15:18:36.808994]
[2019-01-15 15:18:36.842439] I [MSGID: 106004] [glusterd-handler.c:6430:__glusterd_peer_rpc_notify] 0-management: Peer <gluster-node2> (<02724bb6-cb34-4ec3-8306-c2950e0acf9b>), in state <Peer in Cluster>, has disconnected from glusterd.

The above shows there was a peer disconnect event received from gluster-node2, and this sequence might have happened while the commit operation was in flight, hence the volume set failed on gluster-node2. Regarding the SSL error, I'd request Milind to comment.
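
One way to narrow this down (a suggestion, not an official procedure) is to make sure every peer is connected again before retrying, and to check whether the option was actually committed anywhere despite the error:

# On each node: all other peers should report "Peer in Cluster (Connected)"
$ gluster peer status

# Did the failed commit leave the option set on some nodes?
$ gluster volume get integration-archive1 cluster.use-compound-fops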

[2019-01-15 15:18:36.842638] W [glusterd-locks.c:795:glusterd_mgmt_v3_unlock] (-->/usr/lib64/glusterfs/5.2/xlator/mgmt/glusterd.so(+0x24349) [0x7fc24d866349] -->/usr/lib64/glusterfs/5.2/xlator/mgmt/glusterd.so(+0x2d950) [0x7fc24d86f950] -->/usr/lib64/glusterfs/5.2/xlator/mgmt/glusterd.so(+0xe0239) [0x7fc24d922239] ) 0-management: Lock for vol archive1 not held
[2019-01-15 15:18:36.842656] W [MSGID: 106117] [glusterd-handler.c:6451:__glusterd_peer_rpc_notify] 0-management: Lock not released for archive1
[2019-01-15 15:18:36.842674] W [glusterd-locks.c:795:glusterd_mgmt_v3_unlock] (-->/usr/lib64/glusterfs/5.2/xlator/mgmt/glusterd.so(+0x24349) [0x7fc24d866349] -->/usr/lib64/glusterfs/5.2/xlator/mgmt/glusterd.so(+0x2d950) [0x7fc24d86f950] -->/usr/lib64/glusterfs/5.2/xlator/mgmt/glusterd.so(+0xe0239) [0x7fc24d922239] ) 0-management: Lock for vol archive2 not held
[2019-01-15 15:18:36.842680] W [MSGID: 106117] [glusterd-handler.c:6451:__glusterd_peer_rpc_notify] 0-management: Lock not released for archive2
[2019-01-15 15:18:36.842694] W [glusterd-locks.c:795:glusterd_mgmt_v3_unlock] (-->/usr/lib64/glusterfs/5.2/xlator/mgmt/glusterd.so(+0x24349) [0x7fc24d866349] -->/usr/lib64/glusterfs/5.2/xlator/mgmt/glusterd.so(+0x2d950) [0x7fc24d86f950] -->/usr/lib64/glusterfs/5.2/xlator/mgmt/glusterd.so(+0xe0239) [0x7fc24d922239] ) 0-management: Lock for vol gluster_shared_storage not held
[2019-01-15 15:18:36.842702] W [MSGID: 106117] [glusterd-handler.c:6451:__glusterd_peer_rpc_notify] 0-management: Lock not released for gluster_shared_storage
[2019-01-15 15:18:36.842719] W [glusterd-locks.c:806:glusterd_mgmt_v3_unlock] (-->/usr/lib64/glusterfs/5.2/xlator/mgmt/glusterd.so(+0x24349) [0x7fc24d866349] -->/usr/lib64/glusterfs/5.2/xlator/mgmt/glusterd.so(+0x2d950) [0x7fc24d86f950] -->/usr/lib64/glusterfs/5.2/xlator/mgmt/glusterd.so(+0xe0074) [0x7fc24d922074] ) 0-management: Lock owner mismatch. Lock for vol integration-archive1 held by ffdaa400-82cc-4ada-8ea7-144bf3714269
[2019-01-15 15:18:36.842727] W [MSGID: 106117] [glusterd-handler.c:6451:__glusterd_peer_rpc_notify] 0-management: Lock not released for integration-archive1
[2019-01-15 15:18:36.842970] E [rpc-clnt.c:346:saved_frames_unwind] (--> /usr/lib64/libglusterfs.so.0(_gf_log_callingfn+0x17d)[0x7fc253d7f18d] (--> /usr/lib64/libgfrpc.so.0(+0xca3d)[0x7fc253b46a3d] (--> /usr/lib64/libgfrpc.so.0(+0xcb5e)[0x7fc253b46b5e] (--> /usr/lib64/libgfrpc.so.0(rpc_clnt_connection_cleanup+0x8b)[0x7fc253b480bb] (--> /usr/lib64/libgfrpc.so.0(+0xec68)[0x7fc253b48c68] ))))) 0-management: forced unwinding frame type(glusterd mgmt) op(--(4)) called at 2019-01-15 15:18:36.802613 (xid=0x6da)
[2019-01-15 15:18:36.842994] E [MSGID: 106152] [glusterd-syncop.c:104:gd_collate_errors] 0-glusterd: Commit failed on gluster-node2. Please check log file for details.
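
As an aside, the lock-owner UUID in the "Lock owner mismatch" warning (ffdaa400-82cc-4ada-8ea7-144bf3714269 above) can be matched to a node by hand; a quick check, assuming shell access to each node:

# UUID of the local glusterd on this node
$ cat /var/lib/glusterd/glusterd.info
$ gluster system:: uuid get

# UUIDs of all other peers
$ gluster peer status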

And here is the glusterd.log from gluster-node2:

[2019-01-15 15:18:36.901788] I [run.c:242:runner_log] (-->/usr/lib64/glusterfs/5.2/xlator/mgmt/glusterd.so(+0xdad2a) [0x7f9fba02cd2a] -->/usr/lib64/glusterfs/5.2/xlator/mgmt/glusterd.so(+0xda81c) [0x7f9fba02c81c] -->/usr/lib64/libglusterfs.so.0(runner_log+0x105) [0x7f9fc04de0b5] ) 0-management: Ran script: /var/lib/glusterd/hooks/1/set/post/S32gluster_enable_shared_storage.sh --volname=integration-archive1 -o cluster.use-compound-fops=on --gd-workdir=/var/lib/glusterd
The message "E [MSGID: 101191] [event-epoll.c:671:event_dispatch_epoll_worker] 0-epoll: Failed to dispatch handler" repeated 35 times between [2019-01-15 15:18:24.832023] and [2019-01-15 15:18:47.049407]
[2019-01-15 15:18:47.049443] I [MSGID: 106163] [glusterd-handshake.c:1389:__glusterd_mgmt_hndsk_versions_ack] 0-management: using the op-version 50000
[2019-01-15 15:18:47.053439] E [MSGID: 101191] [event-epoll.c:671:event_dispatch_epoll_worker] 0-epoll: Failed to dispatch handler
[2019-01-15 15:18:47.053479] E [MSGID: 101191] [event-epoll.c:671:event_dispatch_epoll_worker] 0-epoll: Failed to dispatch handler
[2019-01-15 15:18:47.059899] I [MSGID: 106490] [glusterd-handler.c:2586:__glusterd_handle_incoming_friend_req] 0-glusterd: Received probe from uuid: ffdaa400-82cc-4ada-8ea7-144bf3714269
[2019-01-15 15:18:47.063471] I [MSGID: 106493] [glusterd-handler.c:3843:glusterd_xfer_friend_add_resp] 0-glusterd: Responded to fs-lrunning-c1-n1 (0), ret: 0, op_ret: 0
[2019-01-15 15:18:47.066148] I [MSGID: 106492] [glusterd-handler.c:2771:__glusterd_handle_friend_update] 0-glusterd: Received friend update from uuid: ffdaa400-82cc-4ada-8ea7-144bf3714269
[2019-01-15 15:18:47.067264] I [MSGID: 106502] [glusterd-handler.c:2812:__glusterd_handle_friend_update] 0-management: Received my uuid as Friend
[2019-01-15 15:18:47.078696] I [MSGID: 106493] [glusterd-rpc-ops.c:681:__glusterd_friend_update_cbk] 0-management: Received ACC from uuid: ffdaa400-82cc-4ada-8ea7-144bf3714269
[2019-01-15 15:19:05.377216] E [MSGID: 101191] [event-epoll.c:671:event_dispatch_epoll_worker] 0-epoll: Failed to dispatch handler
The message "E [MSGID: 101191] [event-epoll.c:671:event_dispatch_epoll_worker] 0-epoll: Failed to dispatch handler" repeated 3 times between [2019-01-15 15:19:05.377216] and [2019-01-15 15:19:06.124297]

Maybe there was only a temporary network interruption, but on the other hand there is an SSL error message in the log file from gluster-node1.
Any ideas?

Regards
David Spisla
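
For readers running into the same sslv3 alert: the management-path encryption involved here is normally set up with the same certificate files on every node plus the secure-access flag. A condensed sketch of the usual steps, taken from the general Gluster SSL/TLS setup rather than from this thread:

# On every node: private key, certificate, and a CA bundle that contains
# the certificates of all nodes (or a common CA)
#   /etc/ssl/glusterfs.key   /etc/ssl/glusterfs.pem   /etc/ssl/glusterfs.ca

# Enable TLS on the management path (glusterd <-> glusterd, port 24007)
$ touch /var/lib/glusterd/secure-access
$ systemctl restart glusterd

# Optionally enable TLS on the I/O path of a volume
$ gluster volume set <volname> client.ssl on
$ gluster volume set <volname> server.ssl on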

_______________________________________________
Gluster-users mailing list
Gluster-users@gluster.org
https://lists.gluster.org/mailman/listinfo/gluster-users