[Gluster-users] VolumeOpt Set fails on a freshly created volume

David Spisla spisla80 at gmail.com
Wed Jan 30 11:14:54 UTC 2019


Hello Gluster Community,

today I got the same error messages in glusterd.log while setting volume
options on a freshly created volume.
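For reference, judging from the hook-script arguments in the log below, the
option was presumably set with the standard CLI, roughly like this (the exact
invocation was not captured; the volume name and option are taken from the log):

$ gluster volume set integration-archive1 cluster.lookup-optimize on

Here are the log entries: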

[2019-01-30 10:15:55.597268] I [run.c:242:runner_log]
(-->/usr/lib64/glusterfs/5.3/xlator/mgmt/glusterd.so(+0xdad2a)
[0x7f08ce71ed2a]
-->/usr/lib64/glusterfs/5.3/xlator/mgmt/glusterd.so(+0xda81c)
[0x7f08ce71e81c] -->/usr/lib64/libglusterfs.so.0(runner_log+0x105)
[0x7f08d4bd0575] ) 0-management: Ran script:
/var/lib/glusterd/hooks/1/set/post/S32gluster_enable_shared_storage.sh
--volname=integration-archive1 -o cluster.lookup-optimize=on
--gd-workdir=/var/lib/glusterd
[2019-01-30 10:15:55.806303] W [socket.c:719:__socket_rwv] 0-management:
readv on 10.10.12.102:24007 failed (Input/output error)
[2019-01-30 10:15:55.806344] E [socket.c:246:ssl_dump_error_stack]
0-management:   error:140943F2:SSL routines:ssl3_read_bytes:sslv3 alert
unexpected message
The message "E [MSGID: 101191]
[event-epoll.c:671:event_dispatch_epoll_worker] 0-epoll: Failed to dispatch
handler" repeated 51 times between [2019-01-30 10:15:51.659656] and
[2019-01-30 10:15:55.635151]
[2019-01-30 10:15:55.806370] I [MSGID: 106004]
[glusterd-handler.c:6430:__glusterd_peer_rpc_notify] 0-management: Peer
<fs-lrunning-c2-n2> (<ccd0137f-07d8-4e26-a168-b77af79a36af>), in state
<Peer in Cluster>, has disconnected from glusterd.
[2019-01-30 10:15:55.806487] W
[glusterd-locks.c:795:glusterd_mgmt_v3_unlock]
(-->/usr/lib64/glusterfs/5.3/xlator/mgmt/glusterd.so(+0x24349)
[0x7f08ce668349]
-->/usr/lib64/glusterfs/5.3/xlator/mgmt/glusterd.so(+0x2d950)
[0x7f08ce671950]
-->/usr/lib64/glusterfs/5.3/xlator/mgmt/glusterd.so(+0xe0239)
[0x7f08ce724239] ) 0-management: Lock for vol archive1 not held
[2019-01-30 10:15:55.806505] W [MSGID: 106117]
[glusterd-handler.c:6451:__glusterd_peer_rpc_notify] 0-management: Lock not
released for archive1
[2019-01-30 10:15:55.806522] W
[glusterd-locks.c:795:glusterd_mgmt_v3_unlock]
(-->/usr/lib64/glusterfs/5.3/xlator/mgmt/glusterd.so(+0x24349)
[0x7f08ce668349]
-->/usr/lib64/glusterfs/5.3/xlator/mgmt/glusterd.so(+0x2d950)
[0x7f08ce671950]
-->/usr/lib64/glusterfs/5.3/xlator/mgmt/glusterd.so(+0xe0239)
[0x7f08ce724239] ) 0-management: Lock for vol archive2 not held
[2019-01-30 10:15:55.806529] W [MSGID: 106117]
[glusterd-handler.c:6451:__glusterd_peer_rpc_notify] 0-management: Lock not
released for archive2
[2019-01-30 10:15:55.806543] W
[glusterd-locks.c:795:glusterd_mgmt_v3_unlock]
(-->/usr/lib64/glusterfs/5.3/xlator/mgmt/glusterd.so(+0x24349)
[0x7f08ce668349]
-->/usr/lib64/glusterfs/5.3/xlator/mgmt/glusterd.so(+0x2d950)
[0x7f08ce671950]
-->/usr/lib64/glusterfs/5.3/xlator/mgmt/glusterd.so(+0xe0239)
[0x7f08ce724239] ) 0-management: Lock for vol gluster_shared_storage not
held
[2019-01-30 10:15:55.806553] W [MSGID: 106117]
[glusterd-handler.c:6451:__glusterd_peer_rpc_notify] 0-management: Lock not
released for gluster_shared_storage
[2019-01-30 10:15:55.806576] W
[glusterd-locks.c:806:glusterd_mgmt_v3_unlock]
(-->/usr/lib64/glusterfs/5.3/xlator/mgmt/glusterd.so(+0x24349)
[0x7f08ce668349]
-->/usr/lib64/glusterfs/5.3/xlator/mgmt/glusterd.so(+0x2d950)
[0x7f08ce671950]
-->/usr/lib64/glusterfs/5.3/xlator/mgmt/glusterd.so(+0xe0074)
[0x7f08ce724074] ) 0-management: Lock owner mismatch. Lock for vol
integration-archive1 held by 451b6e04-5098-4a35-a312-edbb0d8328a0
[2019-01-30 10:15:55.806584] W [MSGID: 106117]
[glusterd-handler.c:6451:__glusterd_peer_rpc_notify] 0-management: Lock not
released for integration-archive1
[2019-01-30 10:15:55.806846] E [rpc-clnt.c:346:saved_frames_unwind] (-->
/usr/lib64/libglusterfs.so.0(_gf_log_callingfn+0x17d)[0x7f08d4b8122d] (-->
/usr/lib64/libgfrpc.so.0(+0xca3d)[0x7f08d4948a3d] (-->
/usr/lib64/libgfrpc.so.0(+0xcb5e)[0x7f08d4948b5e] (-->
/usr/lib64/libgfrpc.so.0(rpc_clnt_connection_cleanup+0x8b)[0x7f08d494a0bb]
(--> /usr/lib64/libgfrpc.so.0(+0xec68)[0x7f08d494ac68] ))))) 0-management:
forced unwinding frame type(glusterd mgmt v3) op(--(1)) called at
2019-01-30 10:15:55.804680 (xid=0x1ae)
[2019-01-30 10:15:55.806865] E [MSGID: 106115]
[glusterd-mgmt.c:116:gd_mgmt_v3_collate_errors] 0-management: Locking
failed on fs-lrunning-c2-n2. Please check log file for details.
[2019-01-30 10:15:55.806914] E [MSGID: 101191]
[event-epoll.c:671:event_dispatch_epoll_worker] 0-epoll: Failed to dispatch
handler
[2019-01-30 10:15:55.806898] E [MSGID: 106150]
[glusterd-syncop.c:1904:gd_sync_task_begin] 0-management: Locking Peers
Failed.
The message "E [MSGID: 101191]
[event-epoll.c:671:event_dispatch_epoll_worker] 0-epoll: Failed to dispatch
handler" repeated 4 times between [2019-01-30 10:15:55.806914] and
[2019-01-30 10:15:56.322122]
[2019-01-30 10:15:56.322287] E [MSGID: 106529]
[glusterd-volume-ops.c:1916:glusterd_op_stage_delete_volume] 0-management:
Some of the peers are down
[2019-01-30 10:15:56.322319] E [MSGID: 106301]
[glusterd-syncop.c:1308:gd_stage_op_phase] 0-management: Staging of
operation 'Volume Delete' failed on localhost : Some of the peers are down

Again my peer "fs-lrunning-c2-n2" is not connected, and again there is an SSL
error message. @Milind Changire Any idea whether this SSL error is related to
the peer disconnect problem? Or is there a problem with the port mapping in
Gluster v5.x?
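
In case it helps to narrow this down, here is roughly how I would double-check
the TLS setup and peer connectivity. This is only a sketch based on the standard
Gluster SSL/TLS setup, not taken from my actual session; paths and hostnames may
differ on your systems:

$ gluster peer status                                      # is fs-lrunning-c2-n2 still connected?
$ gluster volume status integration-archive1               # brick and port status after the failure
$ openssl x509 -in /etc/ssl/glusterfs.pem -noout -enddate  # is the node certificate still valid?
$ ls /var/lib/glusterd/secure-access                       # marker file for management-plane TLS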

Regards
David Spisla

On Thu, 17 Jan 2019 at 03:42, Atin Mukherjee <
amukherj at redhat.com> wrote:

>
>
> On Wed, Jan 16, 2019 at 9:48 PM David Spisla <spisla80 at gmail.com> wrote:
>
>> Dear Gluster Community,
>>
>> I created a replica 4 volume from gluster-node1 on a 4-node cluster with
>> SSL/TLS network encryption. While setting the 'cluster.use-compound-fops'
>> option, I got the error:
>>
>> $  volume set: failed: Commit failed on gluster-node2. Please check log
>> file for details.
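>>
>> (For clarity, the commands in question would have been roughly the
>> following; this is a sketch reconstructed from the volume name and option
>> visible in the logs, with hypothetical brick paths and node names for the
>> two remaining nodes:
>>
>> $ gluster volume create integration-archive1 replica 4 \
>>     gluster-node1:/bricks/archive1 gluster-node2:/bricks/archive1 \
>>     gluster-node3:/bricks/archive1 gluster-node4:/bricks/archive1
>> $ gluster volume set integration-archive1 cluster.use-compound-fops on
>> )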
>>
>> Here is the glusterd.log from gluster-node1:
>>
>> [2019-01-15 15:18:36.813034] I [run.c:242:runner_log]
>> (-->/usr/lib64/glusterfs/5.2/xlator/mgmt/glusterd.so(+0xdad2a)
>> [0x7fc24d91cd2a]
>> -->/usr/lib64/glusterfs/5.2/xlator/mgmt/glusterd.so(+0xda81c)
>> [0x7fc24d91c81c] -->/usr/lib64/libglusterfs.so.0(runner_log+0x105)
>> [0x7fc253dce0b5] ) 0-management: Ran script:
>> /var/lib/glusterd/hooks/1/set/post/S30samba-set.sh
>> --volname=integration-archive1 -o cluster.use-compound-fops=on
>> --gd-workdir=/var/lib/glusterd
>> [2019-01-15 15:18:36.821193] I [run.c:242:runner_log]
>> (-->/usr/lib64/glusterfs/5.2/xlator/mgmt/glusterd.so(+0xdad2a)
>> [0x7fc24d91cd2a]
>> -->/usr/lib64/glusterfs/5.2/xlator/mgmt/glusterd.so(+0xda81c)
>> [0x7fc24d91c81c] -->/usr/lib64/libglusterfs.so.0(runner_log+0x105)
>> [0x7fc253dce0b5] ) 0-management: Ran script:
>> /var/lib/glusterd/hooks/1/set/post/S32gluster_enable_shared_storage.sh
>> --volname=integration-archive1 -o cluster.use-compound-fops=on
>> --gd-workdir=/var/lib/glusterd
>> [2019-01-15 15:18:36.842383] W [socket.c:719:__socket_rwv] 0-management:
>> readv on 10.10.12.42:24007 failed (Input/output error)
>> [2019-01-15 15:18:36.842415] E [socket.c:246:ssl_dump_error_stack]
>> 0-management:   error:140943F2:SSL routines:ssl3_read_bytes:sslv3 alert
>> unexpected message
>> The message "E [MSGID: 101191]
>> [event-epoll.c:671:event_dispatch_epoll_worker] 0-epoll: Failed to dispatch
>> handler" repeated 81 times between [2019-01-15 15:18:30.735508] and
>> [2019-01-15 15:18:36.808994]
>> [2019-01-15 15:18:36.842439] I [MSGID: 106004]
>> [glusterd-handler.c:6430:__glusterd_peer_rpc_notify] 0-management: Peer <
>> gluster-node2> (<02724bb6-cb34-4ec3-8306-c2950e0acf9b>), in state <Peer
>> in Cluster>, has disconnected from glusterd.
>>
>
> The above shows that a peer disconnect event was received from
> gluster-node2; this sequence might have happened while the commit
> operation was in flight, and hence the volume set failed on gluster-node2.
> Regarding the SSL error, I'd request Milind to comment.
>
> [2019-01-15 15:18:36.842638] W
>> [glusterd-locks.c:795:glusterd_mgmt_v3_unlock]
>> (-->/usr/lib64/glusterfs/5.2/xlator/mgmt/glusterd.so(+0x24349)
>> [0x7fc24d866349]
>> -->/usr/lib64/glusterfs/5.2/xlator/mgmt/glusterd.so(+0x2d950)
>> [0x7fc24d86f950]
>> -->/usr/lib64/glusterfs/5.2/xlator/mgmt/glusterd.so(+0xe0239)
>> [0x7fc24d922239] ) 0-management: Lock for vol archive1 not held
>> [2019-01-15 15:18:36.842656] W [MSGID: 106117]
>> [glusterd-handler.c:6451:__glusterd_peer_rpc_notify] 0-management: Lock not
>> released for archive1
>> [2019-01-15 15:18:36.842674] W
>> [glusterd-locks.c:795:glusterd_mgmt_v3_unlock]
>> (-->/usr/lib64/glusterfs/5.2/xlator/mgmt/glusterd.so(+0x24349)
>> [0x7fc24d866349]
>> -->/usr/lib64/glusterfs/5.2/xlator/mgmt/glusterd.so(+0x2d950)
>> [0x7fc24d86f950]
>> -->/usr/lib64/glusterfs/5.2/xlator/mgmt/glusterd.so(+0xe0239)
>> [0x7fc24d922239] ) 0-management: Lock for vol archive2 not held
>> [2019-01-15 15:18:36.842680] W [MSGID: 106117]
>> [glusterd-handler.c:6451:__glusterd_peer_rpc_notify] 0-management: Lock not
>> released for archive2
>> [2019-01-15 15:18:36.842694] W
>> [glusterd-locks.c:795:glusterd_mgmt_v3_unlock]
>> (-->/usr/lib64/glusterfs/5.2/xlator/mgmt/glusterd.so(+0x24349)
>> [0x7fc24d866349]
>> -->/usr/lib64/glusterfs/5.2/xlator/mgmt/glusterd.so(+0x2d950)
>> [0x7fc24d86f950]
>> -->/usr/lib64/glusterfs/5.2/xlator/mgmt/glusterd.so(+0xe0239)
>> [0x7fc24d922239] ) 0-management: Lock for vol gluster_shared_storage not
>> held
>> [2019-01-15 15:18:36.842702] W [MSGID: 106117]
>> [glusterd-handler.c:6451:__glusterd_peer_rpc_notify] 0-management: Lock not
>> released for gluster_shared_storage
>> [2019-01-15 15:18:36.842719] W
>> [glusterd-locks.c:806:glusterd_mgmt_v3_unlock]
>> (-->/usr/lib64/glusterfs/5.2/xlator/mgmt/glusterd.so(+0x24349)
>> [0x7fc24d866349]
>> -->/usr/lib64/glusterfs/5.2/xlator/mgmt/glusterd.so(+0x2d950)
>> [0x7fc24d86f950]
>> -->/usr/lib64/glusterfs/5.2/xlator/mgmt/glusterd.so(+0xe0074)
>> [0x7fc24d922074] ) 0-management: Lock owner mismatch. Lock for vol
>> integration-archive1 held by ffdaa400-82cc-4ada-8ea7-144bf3714269
>> [2019-01-15 15:18:36.842727] W [MSGID: 106117]
>> [glusterd-handler.c:6451:__glusterd_peer_rpc_notify] 0-management: Lock not
>> released for integration-archive1
>> [2019-01-15 15:18:36.842970] E [rpc-clnt.c:346:saved_frames_unwind] (-->
>> /usr/lib64/libglusterfs.so.0(_gf_log_callingfn+0x17d)[0x7fc253d7f18d] (-->
>> /usr/lib64/libgfrpc.so.0(+0xca3d)[0x7fc253b46a3d] (-->
>> /usr/lib64/libgfrpc.so.0(+0xcb5e)[0x7fc253b46b5e] (-->
>> /usr/lib64/libgfrpc.so.0(rpc_clnt_connection_cleanup+0x8b)[0x7fc253b480bb]
>> (--> /usr/lib64/libgfrpc.so.0(+0xec68)[0x7fc253b48c68] ))))) 0-management:
>> forced unwinding frame type(glusterd mgmt) op(--(4)) called at 2019-01-15
>> 15:18:36.802613 (xid=0x6da)
>> [2019-01-15 15:18:36.842994] E [MSGID: 106152]
>> [glusterd-syncop.c:104:gd_collate_errors] 0-glusterd: Commit failed on
>> gluster-node2. Please check log file for details.
>>
>> And here glusterd.log from gluster-node2:
>>
>> [2019-01-15 15:18:36.901788] I [run.c:242:runner_log]
>> (-->/usr/lib64/glusterfs/5.2/xlator/mgmt/glusterd.so(+0xdad2a)
>> [0x7f9fba02cd2a]
>> -->/usr/lib64/glusterfs/5.2/xlator/mgmt/glusterd.so(+0xda81c)
>> [0x7f9fba02c81c] -->/usr/lib64/libglusterfs.so.0(runner_log+0x105)
>> [0x7f9fc04de0b5] ) 0-management: Ran script:
>> /var/lib/glusterd/hooks/1/set/post/S32gluster_enable_shared_storage.sh
>> --volname=integration-archive1 -o cluster.use-compound-fops=on
>> --gd-workdir=/var/lib/glusterd
>> The message "E [MSGID: 101191]
>> [event-epoll.c:671:event_dispatch_epoll_worker] 0-epoll: Failed to dispatch
>> handler" repeated 35 times between [2019-01-15 15:18:24.832023] and
>> [2019-01-15 15:18:47.049407]
>> [2019-01-15 15:18:47.049443] I [MSGID: 106163]
>> [glusterd-handshake.c:1389:__glusterd_mgmt_hndsk_versions_ack]
>> 0-management: using the op-version 50000
>> [2019-01-15 15:18:47.053439] E [MSGID: 101191]
>> [event-epoll.c:671:event_dispatch_epoll_worker] 0-epoll: Failed to dispatch
>> handler
>> [2019-01-15 15:18:47.053479] E [MSGID: 101191]
>> [event-epoll.c:671:event_dispatch_epoll_worker] 0-epoll: Failed to dispatch
>> handler
>> [2019-01-15 15:18:47.059899] I [MSGID: 106490]
>> [glusterd-handler.c:2586:__glusterd_handle_incoming_friend_req] 0-glusterd:
>> Received probe from uuid: ffdaa400-82cc-4ada-8ea7-144bf3714269
>> [2019-01-15 15:18:47.063471] I [MSGID: 106493]
>> [glusterd-handler.c:3843:glusterd_xfer_friend_add_resp] 0-glusterd:
>> Responded to fs-lrunning-c1-n1 (0), ret: 0, op_ret: 0
>> [2019-01-15 15:18:47.066148] I [MSGID: 106492]
>> [glusterd-handler.c:2771:__glusterd_handle_friend_update] 0-glusterd:
>> Received friend update from uuid: ffdaa400-82cc-4ada-8ea7-144bf3714269
>> [2019-01-15 15:18:47.067264] I [MSGID: 106502]
>> [glusterd-handler.c:2812:__glusterd_handle_friend_update] 0-management:
>> Received my uuid as Friend
>> [2019-01-15 15:18:47.078696] I [MSGID: 106493]
>> [glusterd-rpc-ops.c:681:__glusterd_friend_update_cbk] 0-management:
>> Received ACC from uuid: ffdaa400-82cc-4ada-8ea7-144bf3714269
>> [2019-01-15 15:19:05.377216] E [MSGID: 101191]
>> [event-epoll.c:671:event_dispatch_epoll_worker] 0-epoll: Failed to dispatch
>> handler
>> The message "E [MSGID: 101191]
>> [event-epoll.c:671:event_dispatch_epoll_worker] 0-epoll: Failed to dispatch
>> handler" repeated 3 times between [2019-01-15 15:19:05.377216] and
>> [2019-01-15 15:19:06.124297]
>>
>> Maybe there was only a temporary network interruption, but on the other
>> hand there is an SSL error message in the log file from gluster-node1.
>> Any ideas?
>>
>> Regards
>> David Spisla
>>
>> _______________________________________________
>> Gluster-users mailing list
>> Gluster-users at gluster.org
>> https://lists.gluster.org/mailman/listinfo/gluster-users
>
>

