[Gluster-users] VolumeOpt Set fails on a freshly created volume

David Spisla spisla80 at gmail.com
Wed Jan 16 16:17:20 UTC 2019


Dear Gluster Community,

I created a replica 4 volume from gluster-node1 on a 4-node cluster with
SSL/TLS network encryption. While setting the 'cluster.use-compound-fops'
option, I got this error:

$  volume set: failed: Commit failed on gluster-node2. Please check log
file for details.
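
For reference, the volume was created and the option set roughly like this
(brick paths below are placeholders, not the exact ones used on our nodes):

$ gluster volume create integration-archive1 replica 4 \
      gluster-node1:/gluster/brick1 gluster-node2:/gluster/brick1 \
      gluster-node3:/gluster/brick1 gluster-node4:/gluster/brick1
$ gluster volume start integration-archive1
$ gluster volume set integration-archive1 cluster.use-compound-fops on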

Here is the glusterd.log from gluster-node1:

*[2019-01-15 15:18:36.813034] I [run.c:242:runner_log]
(-->/usr/lib64/glusterfs/5.2/xlator/mgmt/glusterd.so(+0xdad2a)
[0x7fc24d91cd2a]
-->/usr/lib64/glusterfs/5.2/xlator/mgmt/glusterd.so(+0xda81c)
[0x7fc24d91c81c] -->/usr/lib64/libglusterfs.so.0(runner_log+0x105)
[0x7fc253dce0b5] ) 0-management: Ran script:
/var/lib/glusterd/hooks/1/set/post/S30samba-set.sh
--volname=integration-archive1 -o cluster.use-compound-fops=on
--gd-workdir=/var/lib/glusterd*
[2019-01-15 15:18:36.821193] I [run.c:242:runner_log]
(-->/usr/lib64/glusterfs/5.2/xlator/mgmt/glusterd.so(+0xdad2a)
[0x7fc24d91cd2a]
-->/usr/lib64/glusterfs/5.2/xlator/mgmt/glusterd.so(+0xda81c)
[0x7fc24d91c81c] -->/usr/lib64/libglusterfs.so.0(runner_log+0x105)
[0x7fc253dce0b5] ) 0-management: Ran script:
/var/lib/glusterd/hooks/1/set/post/S32gluster_enable_shared_storage.sh
--volname=integration-archive1 -o cluster.use-compound-fops=on
--gd-workdir=/var/lib/glusterd
[2019-01-15 15:18:36.842383] W [socket.c:719:__socket_rwv] 0-management:
readv on 10.10.12.42:24007 failed (Input/output error)
*[2019-01-15 15:18:36.842415] E [socket.c:246:ssl_dump_error_stack]
0-management:   error:140943F2:SSL routines:ssl3_read_bytes:sslv3 alert
unexpected message*
The message "E [MSGID: 101191]
[event-epoll.c:671:event_dispatch_epoll_worker] 0-epoll: Failed to dispatch
handler" repeated 81 times between [2019-01-15 15:18:30.735508] and
[2019-01-15 15:18:36.808994]
[2019-01-15 15:18:36.842439] I [MSGID: 106004]
[glusterd-handler.c:6430:__glusterd_peer_rpc_notify] 0-management: Peer <
gluster-node2> (<02724bb6-cb34-4ec3-8306-c2950e0acf9b>), in state <Peer in
Cluster>, has disconnected from glusterd.
[2019-01-15 15:18:36.842638] W
[glusterd-locks.c:795:glusterd_mgmt_v3_unlock]
(-->/usr/lib64/glusterfs/5.2/xlator/mgmt/glusterd.so(+0x24349)
[0x7fc24d866349]
-->/usr/lib64/glusterfs/5.2/xlator/mgmt/glusterd.so(+0x2d950)
[0x7fc24d86f950]
-->/usr/lib64/glusterfs/5.2/xlator/mgmt/glusterd.so(+0xe0239)
[0x7fc24d922239] ) 0-management: Lock for vol archive1 not held
[2019-01-15 15:18:36.842656] W [MSGID: 106117]
[glusterd-handler.c:6451:__glusterd_peer_rpc_notify] 0-management: Lock not
released for archive1
[2019-01-15 15:18:36.842674] W
[glusterd-locks.c:795:glusterd_mgmt_v3_unlock]
(-->/usr/lib64/glusterfs/5.2/xlator/mgmt/glusterd.so(+0x24349)
[0x7fc24d866349]
-->/usr/lib64/glusterfs/5.2/xlator/mgmt/glusterd.so(+0x2d950)
[0x7fc24d86f950]
-->/usr/lib64/glusterfs/5.2/xlator/mgmt/glusterd.so(+0xe0239)
[0x7fc24d922239] ) 0-management: Lock for vol archive2 not held
[2019-01-15 15:18:36.842680] W [MSGID: 106117]
[glusterd-handler.c:6451:__glusterd_peer_rpc_notify] 0-management: Lock not
released for archive2
[2019-01-15 15:18:36.842694] W
[glusterd-locks.c:795:glusterd_mgmt_v3_unlock]
(-->/usr/lib64/glusterfs/5.2/xlator/mgmt/glusterd.so(+0x24349)
[0x7fc24d866349]
-->/usr/lib64/glusterfs/5.2/xlator/mgmt/glusterd.so(+0x2d950)
[0x7fc24d86f950]
-->/usr/lib64/glusterfs/5.2/xlator/mgmt/glusterd.so(+0xe0239)
[0x7fc24d922239] ) 0-management: Lock for vol gluster_shared_storage not
held
[2019-01-15 15:18:36.842702] W [MSGID: 106117]
[glusterd-handler.c:6451:__glusterd_peer_rpc_notify] 0-management: Lock not
released for gluster_shared_storage
[2019-01-15 15:18:36.842719] W
[glusterd-locks.c:806:glusterd_mgmt_v3_unlock]
(-->/usr/lib64/glusterfs/5.2/xlator/mgmt/glusterd.so(+0x24349)
[0x7fc24d866349]
-->/usr/lib64/glusterfs/5.2/xlator/mgmt/glusterd.so(+0x2d950)
[0x7fc24d86f950]
-->/usr/lib64/glusterfs/5.2/xlator/mgmt/glusterd.so(+0xe0074)
[0x7fc24d922074] ) 0-management: Lock owner mismatch. Lock for vol
integration-archive1 held by ffdaa400-82cc-4ada-8ea7-144bf3714269
[2019-01-15 15:18:36.842727] W [MSGID: 106117]
[glusterd-handler.c:6451:__glusterd_peer_rpc_notify] 0-management: Lock not
released for integration-archive1
[2019-01-15 15:18:36.842970] E [rpc-clnt.c:346:saved_frames_unwind] (-->
/usr/lib64/libglusterfs.so.0(_gf_log_callingfn+0x17d)[0x7fc253d7f18d] (-->
/usr/lib64/libgfrpc.so.0(+0xca3d)[0x7fc253b46a3d] (-->
/usr/lib64/libgfrpc.so.0(+0xcb5e)[0x7fc253b46b5e] (-->
/usr/lib64/libgfrpc.so.0(rpc_clnt_connection_cleanup+0x8b)[0x7fc253b480bb]
(--> /usr/lib64/libgfrpc.so.0(+0xec68)[0x7fc253b48c68] ))))) 0-management:
forced unwinding frame type(glusterd mgmt) op(--(4)) called at 2019-01-15
15:18:36.802613 (xid=0x6da)
[2019-01-15 15:18:36.842994] E [MSGID: 106152]
[glusterd-syncop.c:104:gd_collate_errors] 0-glusterd: Commit failed on
gluster-node2. Please check log file for details.

And here is the glusterd.log from gluster-node2:

*[2019-01-15 15:18:36.901788] I [run.c:242:runner_log]
(-->/usr/lib64/glusterfs/5.2/xlator/mgmt/glusterd.so(+0xdad2a)
[0x7f9fba02cd2a]
-->/usr/lib64/glusterfs/5.2/xlator/mgmt/glusterd.so(+0xda81c)
[0x7f9fba02c81c] -->/usr/lib64/libglusterfs.so.0(runner_log+0x105)
[0x7f9fc04de0b5] ) 0-management: Ran script:
/var/lib/glusterd/hooks/1/set/post/S32gluster_enable_shared_storage.sh
--volname=integration-archive1 -o cluster.use-compound-fops=on
--gd-workdir=/var/lib/glusterd*
The message "E [MSGID: 101191]
[event-epoll.c:671:event_dispatch_epoll_worker] 0-epoll: Failed to dispatch
handler" repeated 35 times between [2019-01-15 15:18:24.832023] and
[2019-01-15 15:18:47.049407]
[2019-01-15 15:18:47.049443] I [MSGID: 106163]
[glusterd-handshake.c:1389:__glusterd_mgmt_hndsk_versions_ack]
0-management: using the op-version 50000
[2019-01-15 15:18:47.053439] E [MSGID: 101191]
[event-epoll.c:671:event_dispatch_epoll_worker] 0-epoll: Failed to dispatch
handler
[2019-01-15 15:18:47.053479] E [MSGID: 101191]
[event-epoll.c:671:event_dispatch_epoll_worker] 0-epoll: Failed to dispatch
handler
[2019-01-15 15:18:47.059899] I [MSGID: 106490]
[glusterd-handler.c:2586:__glusterd_handle_incoming_friend_req] 0-glusterd:
Received probe from uuid: ffdaa400-82cc-4ada-8ea7-144bf3714269
[2019-01-15 15:18:47.063471] I [MSGID: 106493]
[glusterd-handler.c:3843:glusterd_xfer_friend_add_resp] 0-glusterd:
Responded to fs-lrunning-c1-n1 (0), ret: 0, op_ret: 0
[2019-01-15 15:18:47.066148] I [MSGID: 106492]
[glusterd-handler.c:2771:__glusterd_handle_friend_update] 0-glusterd:
Received friend update from uuid: ffdaa400-82cc-4ada-8ea7-144bf3714269
[2019-01-15 15:18:47.067264] I [MSGID: 106502]
[glusterd-handler.c:2812:__glusterd_handle_friend_update] 0-management:
Received my uuid as Friend
[2019-01-15 15:18:47.078696] I [MSGID: 106493]
[glusterd-rpc-ops.c:681:__glusterd_friend_update_cbk] 0-management:
Received ACC from uuid: ffdaa400-82cc-4ada-8ea7-144bf3714269
[2019-01-15 15:19:05.377216] E [MSGID: 101191]
[event-epoll.c:671:event_dispatch_epoll_worker] 0-epoll: Failed to dispatch
handler
The message "E [MSGID: 101191]
[event-epoll.c:671:event_dispatch_epoll_worker] 0-epoll: Failed to dispatch
handler" repeated 3 times between [2019-01-15 15:19:05.377216] and
[2019-01-15 15:19:06.124297]

Maybe there was only a temporary network interruption, but on the other hand
there is an SSL error message in the log file from gluster-node1.
Any ideas?
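
In the meantime, one check I can think of is to test the TLS handshake against
glusterd on gluster-node2 directly and then confirm the peer state and the
option value afterwards; a rough sketch, assuming the default GlusterFS
certificate paths under /etc/ssl:

$ openssl s_client -connect gluster-node2:24007 \
      -cert /etc/ssl/glusterfs.pem \
      -key /etc/ssl/glusterfs.key \
      -CAfile /etc/ssl/glusterfs.ca </dev/null
$ gluster peer status
$ gluster volume get integration-archive1 cluster.use-compound-fops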

Regards
David Spisla