<html><body><div style="font-family: arial, helvetica, sans-serif; font-size: 12pt; color: #000000"><div>Did you find any clues in the log files?<br></div><div><br data-mce-bogus="1"></div><div>I can try updating to 7.5 in case a recent bug has been fixed; what's your opinion?<br data-mce-bogus="1"></div><div><br></div><hr id="zwchr" data-marker="__DIVIDER__"><div data-marker="__HEADERS__"><b>From: </b>"Sanju Rakonde" <srakonde@redhat.com><br><b>To: </b>nico@furyweb.fr<br><b>Cc: </b>"gluster-users" <gluster-users@gluster.org><br><b>Sent: </b>Wednesday, April 22, 2020 08:39:42<br><b>Subject: </b>Re: [Gluster-users] never ending logging<br></div><div><br></div><div data-marker="__QUOTED_TEXT__"><div dir="ltr">Thanks for all the information.<br><div>For the pstack output, the gluster-debuginfo package has to be installed. I will look into the provided information and get back to you.</div></div><br><div class="gmail_quote"><div dir="ltr" class="gmail_attr">On Wed, Apr 22, 2020 at 11:54 AM <<a href="mailto:nico@furyweb.fr" target="_blank">nico@furyweb.fr</a>> wrote:<br></div><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex"><div><div style="font-family:arial,helvetica,sans-serif;font-size:12pt;color:rgb(0,0,0)"><div>I think all these issues are linked to the same underlying problem.<br></div><br><div>1. 
All peers were in the Connected state from every node yesterday, but node 2 is "semi-connected" now:<br></div><div style="padding-left:30px">root@glusterDevVM1:/var/log/glusterfs# gluster peer status<br>Number of Peers: 2<br><br>Hostname: glusterDevVM3<br>Uuid: 0d8a3686-9e37-4ce7-87bf-c85d1ec40974<br>State: Peer in Cluster (Connected)<br><br>Hostname: glusterDevVM2<br>Uuid: 7f6c3023-144b-4db2-9063-d90926dbdd18<br>State: Peer in Cluster (Connected)<br></div><div style="padding-left:30px">root@glusterDevVM2:~# gluster peer status<br>Number of Peers: 2<br><br>Hostname: glusterDevVM1<br>Uuid: e2263e4d-a307-45d5-9cec-e1791f7a45fb<br>State: Peer in Cluster (Disconnected)<br><br>Hostname: glusterDevVM3<br>Uuid: 0d8a3686-9e37-4ce7-87bf-c85d1ec40974<br>State: Peer in Cluster (Connected)<br>root@glusterDevVM3:~# gluster peer status<br>Number of Peers: 2<br><br>Hostname: glusterDevVM2<br>Uuid: 7f6c3023-144b-4db2-9063-d90926dbdd18<br>State: Peer in Cluster (Connected)<br><br>Hostname: glusterDevVM1<br>Uuid: e2263e4d-a307-45d5-9cec-e1791f7a45fb<br>State: Peer in Cluster (Connected)<br></div><div>2, 3, 4. A simple gluster volume status shows a different error on each node:<br></div><div style="padding-left:30px">root@glusterDevVM1:~# gluster volume status tmp<br>Locking failed on glusterDevVM2. Please check log file for details.<br>root@glusterDevVM2:~# gluster volume status tmp<br>Another transaction is in progress for tmp. 
Please try again after some time.<br>root@glusterDevVM3:~# gluster volume status tmp<br>Error : Request timed out<br></div><br><div>Logs for each node (except SSL errors):<br></div><div> root@glusterDevVM1:~# egrep -v '0-socket.management' /var/log/glusterfs/glusterd.log </div><div>[2020-04-22 05:38:32.278618] E [rpc-clnt.c:346:saved_frames_unwind] (--> /usr/lib/x86_64-linux-gnu/libglusterfs.so.0(_gf_log_callingfn+0x138)[0x7fd28d99fda8] (--> /usr/lib/x86_64-linux-gnu/libgfrpc.so.0(+0xcd97)[0x7fd28d745d97] (--> /usr/lib/x86_64-linux-gnu/libgfrpc.so.0(+0xcebe)[0x7fd28d745ebe] (--> /usr/lib/x86_64-linux-gnu/libgfrpc.so.0(rpc_clnt_connection_cleanup+0xc3)[0x7fd28d746e93] (--> /usr/lib/x86_64-linux-gnu/libgfrpc.so.0(+0xea18)[0x7fd28d747a18] ))))) 0-management: forced unwinding frame type(Peer mgmt) op(--(4)) called at 2020-04-22 05:38:32.243087 (xid=0x8d)<br>[2020-04-22 05:38:32.278638] E [MSGID: 106157] [glusterd-rpc-ops.c:665:__glusterd_friend_update_cbk] 0-management: RPC Error<br>[2020-04-22 05:38:32.278651] I [MSGID: 106493] [glusterd-rpc-ops.c:681:__glusterd_friend_update_cbk] 0-management: Received RJT from uuid: 00000000-0000-0000-0000-000000000000<br>[2020-04-22 05:38:33.256401] I [MSGID: 106498] [glusterd-svc-helper.c:747:__glusterd_send_svc_configure_req] 0-management: not connected yet<br>[2020-04-22 05:38:35.279149] I [socket.c:4347:ssl_setup_connection_params] 0-management: SSL support on the I/O path is ENABLED<br>[2020-04-22 05:38:35.279169] I [socket.c:4350:ssl_setup_connection_params] 0-management: SSL support for glusterd is ENABLED<br>[2020-04-22 05:38:35.279178] I [socket.c:4360:ssl_setup_connection_params] 0-management: using certificate depth 1<br>The message "I [MSGID: 106004] [glusterd-handler.c:6204:__glusterd_peer_rpc_notify] 0-management: Peer <glusterDevVM2> (<7f6c3023-144b-4db2-9063-d90926dbdd18>), in state <Peer in Cluster>, has disconnected from glusterd." 
repeated 3 times between [2020-04-22 05:38:25.232116] and [2020-04-22 05:38:35.667153]<br> [2020-04-22 05:38:35.667255] W [glusterd-locks.c:796:glusterd_mgmt_v3_unlock] (-->/usr/lib/x86_64-linux-gnu/glusterfs/7.3/xlator/mgmt/glusterd.so(+0x22119) [0x7fd287da7119] -->/usr/lib/x86_64-linux-gnu/glusterfs/7.3/xlator/mgmt/glusterd.so(+0x2caae) [0x7fd287db1aae] -->/usr/lib/x86_64-linux-gnu/glusterfs/7.3/xlator/mgmt/glusterd.so(+0xdf0d3) [0x7fd287e640d3] ) 0-management: Lock for vol <vol> not held<br> [2020-04-22 05:38:35.667275] W [MSGID: 106117] [glusterd-handler.c:6225:__glusterd_peer_rpc_notify] 0-management: Lock not released for <vol><br>2 last lines repeated for each volume<br></div><br><div> <div>root@glusterDevVM2:~# egrep -v '0-socket.management' /var/log/glusterfs/glusterd.log</div> </div><div>[2020-04-22 05:51:57.493574] E [rpc-clnt.c:346:saved_frames_unwind] (--> /usr/lib/x86_64-linux-gnu/libglusterfs.so.0(_gf_log_callingfn+0x138)[0x7f30411dbda8] (--> /usr/lib/x86_64-linux-gnu/libgfrpc.so.0(+0xcd97)[0x7f3040f81d97] (--> /usr/lib/x86_64-linux-gnu/libgfrpc.so.0(+0xcebe)[0x7f3040f81ebe] (--> /usr/lib/x86_64-linux-gnu/libgfrpc.so.0(rpc_clnt_connection_cleanup+0xc3)[0x7f3040f82e93] (--> /usr/lib/x86_64-linux-gnu/libgfrpc.so.0(+0xea18)[0x7f3040f83a18] ))))) 0-management: forced unwinding frame type(Gluster MGMT Handshake) op(MGMT-VERS(1)) called at 2020-04-22 05:51:57.483579 (xid=0x563)<br>[2020-04-22 05:51:57.493623] E [MSGID: 106167] [glusterd-handshake.c:2040:__glusterd_mgmt_hndsk_version_cbk] 0-management: Error through RPC layer, retry again later<br>[2020-04-22 05:52:00.501474] I [socket.c:4347:ssl_setup_connection_params] 0-management: SSL support on the I/O path is ENABLED<br>[2020-04-22 05:52:00.501542] I [socket.c:4350:ssl_setup_connection_params] 0-management: SSL support for glusterd is ENABLED<br>[2020-04-22 05:52:00.501569] I [socket.c:4360:ssl_setup_connection_params] 0-management: using certificate depth 1<br>[2020-04-22 05:52:00.983720] I [MSGID: 
106004] [glusterd-handler.c:6204:__glusterd_peer_rpc_notify] 0-management: Peer <glusterDevVM1> (<e2263e4d-a307-45d5-9cec-e1791f7a45fb>), in state <Peer in Cluster>, has disconnected from glusterd.<br>[2020-04-22 05:52:00.983886] W [glusterd-locks.c:796:glusterd_mgmt_v3_unlock] (-->/usr/lib/x86_64-linux-gnu/glusterfs/7.3/xlator/mgmt/glusterd.so(+0x22119) [0x7f303b5e3119] -->/usr/lib/x86_64-linux-gnu/glusterfs/7.3/xlator/mgmt/glusterd.so(+0x2caae) [0x7f303b5edaae] -->/usr/lib/x86_64-linux-gnu/glusterfs/7.3/xlator/mgmt/glusterd.so(+0xdf0d3) [0x7f303b6a00d3] ) 0-management: Lock for vol <vol> not held<br>[2020-04-22 05:52:00.983909] W [MSGID: 106117] [glusterd-handler.c:6225:__glusterd_peer_rpc_notify] 0-management: Lock not released for <vol><br> 2 last lines repeated for each volume<br></div><br><div>root@glusterDevVM3:~# egrep -v '0-socket.management' /var/log/glusterfs/glusterd.log<br>[2020-04-22 05:38:33.229959] I [MSGID: 106499] [glusterd-handler.c:4264:__glusterd_handle_status_volume] 0-management: Received status volume req for volume tmp<br>[2020-04-22 05:41:33.230170] I [glusterd-locks.c:729:gd_mgmt_v3_unlock_timer_cbk] 0-management: unlock timer is cancelled for volume_type tmp_vol<br>[2020-04-22 05:48:34.908289] E [rpc-clnt.c:183:call_bail] 0-management: bailing out frame type(glusterd mgmt v3), op(--(1)), xid = 0x108, unique = 918, sent = 2020-04-22 05:38:33.230268, timeout = 600 for <a href="http://10.5.1.7:24007" target="_blank">10.5.1.7:24007</a><br>[2020-04-22 05:48:34.908339] E [MSGID: 106115] [glusterd-mgmt.c:117:gd_mgmt_v3_collate_errors] 0-management: Locking failed on glusterDevVM1. 
Please check log file for details.<br>[2020-04-22 05:48:40.288539] E [rpc-clnt.c:183:call_bail] 0-management: bailing out frame type(glusterd mgmt v3), op(--(1)), xid = 0x27, unique = 917, sent = 2020-04-22 05:38:33.230258, timeout = 600 for <a href="http://10.5.1.8:24007" target="_blank">10.5.1.8:24007</a><br>[2020-04-22 05:48:40.288568] E [MSGID: 106115] [glusterd-mgmt.c:117:gd_mgmt_v3_collate_errors] 0-management: Locking failed on glusterDevVM2. Please check log file for details.<br>[2020-04-22 05:48:40.288631] E [MSGID: 106150] [glusterd-syncop.c:1918:gd_sync_task_begin] 0-management: Locking Peers Failed.<br></div><br><div>I'm not familiar with pstack; when running it on node 3 (arbiter) I only get these few lines:<br></div><div>root@glusterDevVM3:~# pstack 13700<br><br>13700: /usr/sbin/glusterd -p /var/run/glusterd.pid --log-level INFO<br>(No symbols found)<br>0x7fafd747a6cd: ????<br></div><br><div>Which Debian stretch package should I install?</div><br><div>To be more explicit, I stopped glusterd on all 3 nodes, then restarted them sequentially in this order: node1, node3 (arbiter), then node2.<br></div><div>Log files can be downloaded at <a href="https://www.dropbox.com/s/rcgcw7jrud2wkj1/glusterd-logs.tar.bz2?dl=0" target="_blank">https://www.dropbox.com/s/rcgcw7jrud2wkj1/glusterd-logs.tar.bz2?dl=0</a><br></div><br><div>Thanks for your help.<br></div><br><hr id="gmail-m_3967605818357883676zwchr"><div><b>From: </b>"Sanju Rakonde" <<a href="mailto:srakonde@redhat.com" target="_blank">srakonde@redhat.com</a>><br><b>To: </b><a href="mailto:nico@furyweb.fr" target="_blank">nico@furyweb.fr</a><br><b>Cc: </b>"gluster-users" <<a href="mailto:gluster-users@gluster.org" target="_blank">gluster-users@gluster.org</a>><br><b>Sent: </b>Wednesday, April 22, 2020 07:23:52<br><b>Subject: </b>Re: [Gluster-users] never ending logging<br></div><br><div><div dir="ltr">Hi,<br><div>This email mentions several issues. Let me ask a few questions to get the whole picture.</div><div>1. 
Are the peers in the Connected state now, or are they still in the Rejected state?</div><div>2. What led you to see the "locking failed" messages? We would like to know whether there is a reproducer, and fix the issue if there is one.</div><div>3. The "another transaction is in progress" message appears when an operation is already going on. Are you seeing it when no such transaction is running?</div><div>4. When did you hit the timeouts? Did you try looking at the pstack output of the glusterd process? If so, please share it.</div></div><br><div class="gmail_quote"><div dir="ltr" class="gmail_attr">On Tue, Apr 21, 2020 at 7:08 PM <<a href="mailto:nico@furyweb.fr" target="_blank">nico@furyweb.fr</a>> wrote:<br></div><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex">Hi all.<br>
<br>
We're running a 3-node Gluster 7.3 setup (2 + 1 arbiter). Yesterday node 2 was rejected from the cluster, and I applied the following steps to fix it: <a href="https://staged-gluster-docs.readthedocs.io/en/release3.7.0beta1/Administrator%20Guide/Resolving%20Peer%20Rejected/" rel="noreferrer" target="_blank">https://staged-gluster-docs.readthedocs.io/en/release3.7.0beta1/Administrator%20Guide/Resolving%20Peer%20Rejected/</a><br>
I also saw <a href="https://docs.gluster.org/en/latest/Troubleshooting/troubleshooting-glusterd/" rel="noreferrer" target="_blank">https://docs.gluster.org/en/latest/Troubleshooting/troubleshooting-glusterd/</a>, but that solution isn't applicable here: cluster.max-op-version doesn't exist in this release, and the op-versions are the same on all 3 nodes.<br>
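For reference, each node's active op-version is recorded in /var/lib/glusterd/glusterd.info. A minimal sketch of extracting it, with a sample file inlined so it runs anywhere (the UUID and version value here are illustrative, not taken from these nodes):

```shell
# glusterd stores its active op-version in /var/lib/glusterd/glusterd.info;
# a sample copy is written to /tmp so the snippet is self-contained.
cat > /tmp/glusterd.info.sample <<'EOF'
UUID=e2263e4d-a307-45d5-9cec-e1791f7a45fb
operating-version=70200
EOF
# Print just the op-version value:
awk -F= '$1 == "operating-version" {print $2}' /tmp/glusterd.info.sample
```

Comparing this value across the three nodes (and against `gluster volume get all cluster.op-version`) is a quick way to double-check that the op-versions really do match.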
<br>
After renewing the SSL certs and several restarts, all volumes came back online, but the glusterd log file on all 3 nodes is filled with nothing but the following 3 lines:<br>
<br>
[2020-04-21 13:05:19.478913] I [socket.c:4347:ssl_setup_connection_params] 0-socket.management: SSL support on the I/O path is ENABLED<br>
[2020-04-21 13:05:19.478972] I [socket.c:4350:ssl_setup_connection_params] 0-socket.management: SSL support for glusterd is ENABLED<br>
[2020-04-21 13:05:19.478986] I [socket.c:4360:ssl_setup_connection_params] 0-socket.management: using certificate depth 1<br>
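One way to confirm the flood really contains nothing else is to strip the timestamps and count distinct messages. A sketch over sample lines inlined below (the second line is a made-up repeat with a different timestamp, just to show the collapsing); on a node you would feed the real /var/log/glusterfs/glusterd.log instead:

```shell
# Strip the leading "[timestamp] " prefix, then count distinct messages,
# most frequent first; sample lines are inlined so this runs anywhere.
printf '%s\n' \
  '[2020-04-21 13:05:19.478913] I [socket.c:4347:ssl_setup_connection_params] 0-socket.management: SSL support on the I/O path is ENABLED' \
  '[2020-04-21 13:05:20.100001] I [socket.c:4347:ssl_setup_connection_params] 0-socket.management: SSL support on the I/O path is ENABLED' \
  '[2020-04-21 13:05:19.478986] I [socket.c:4360:ssl_setup_connection_params] 0-socket.management: using certificate depth 1' \
  | sed 's/^\[[^]]*\] //' | sort | uniq -c | sort -rn
```

If anything other than the three SSL lines shows up in the counts, that is the message worth chasing.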
<br>
Moreover, I get "Locking failed", "Another transaction is in progress" and "Error : Request timed out" errors from the gluster volume status volxxx command.<br>
All SSL certs on the clients have also been renewed and all volumes were remounted. All 3 nodes were restarted (glusterd) and rebooted, one at a time.<br>
<br>
The cluster is not in a production environment, but there are ~250 clients for ~75 volumes. I don't know how to troubleshoot and fix this problem; any ideas are welcome.<br>
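With three nodes to cross-check, the `gluster peer status` listings earlier in the thread are easier to compare once reduced to host/state pairs. A sketch, using node 2's output from this thread inlined as sample data:

```shell
# Reduce a "gluster peer status" listing to "host state" pairs;
# the sample below is node 2's output from earlier in this thread.
status='Number of Peers: 2

Hostname: glusterDevVM1
Uuid: e2263e4d-a307-45d5-9cec-e1791f7a45fb
State: Peer in Cluster (Disconnected)

Hostname: glusterDevVM3
Uuid: 0d8a3686-9e37-4ce7-87bf-c85d1ec40974
State: Peer in Cluster (Connected)'
echo "$status" | awk '/^Hostname:/ {h = $2} /^State:/ {print h, $NF}'
```

Run on each node in turn (e.g. over ssh in a loop), an asymmetry such as node 2 seeing node 1 as (Disconnected) while node 1 sees everyone (Connected) stands out on one line.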
________<br>
<br>
<br>
<br>
Community Meeting Calendar:<br>
<br>
Schedule -<br>
Every 2nd and 4th Tuesday at 14:30 IST / 09:00 UTC<br>
Bridge: <a href="https://bluejeans.com/441850968" rel="noreferrer" target="_blank">https://bluejeans.com/441850968</a><br>
<br>
Gluster-users mailing list<br>
<a href="mailto:Gluster-users@gluster.org" target="_blank">Gluster-users@gluster.org</a><br>
<a href="https://lists.gluster.org/mailman/listinfo/gluster-users" rel="noreferrer" target="_blank">https://lists.gluster.org/mailman/listinfo/gluster-users</a><br>
<br>
</blockquote></div><br clear="all"><br>-- <br><div dir="ltr"><div dir="ltr"><div>Thanks,<br></div>Sanju</div></div><br></div></div></div></blockquote></div><br clear="all"><br>-- <br><div dir="ltr" class="gmail_signature"><div dir="ltr"><div>Thanks,<br></div>Sanju</div></div><br></div></div></body></html>