<div dir="ltr"><br><div class="gmail_extra"><br><div class="gmail_quote">On Thu, Sep 14, 2017 at 12:58 AM, Ben Werthmann <span dir="ltr"><<a href="mailto:ben@apcera.com" target="_blank">ben@apcera.com</a>></span> wrote:<br><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex"><div dir="ltr"><div>I ran into something like this in 3.10.4 and filed two bugs for it:<br><br><a href="https://bugzilla.redhat.com/show_bug.cgi?id=1491059" target="_blank">https://bugzilla.redhat.com/<wbr>show_bug.cgi?id=1491059</a><br><a href="https://bugzilla.redhat.com/show_bug.cgi?id=1491060" target="_blank">https://bugzilla.redhat.com/<wbr>show_bug.cgi?id=1491060</a></div><div><br></div><div>Please see the above bugs for full detail.<br></div><div><br></div>In summary, my issue was related to glusterd's handling of pid files when it starts the self-heal daemon and bricks. The issues are:<br><br><pre class="m_5771768308393841601gmail-bz_comment_text m_5771768308393841601gmail-bz_wrap_comment_text" id="m_5771768308393841601gmail-comment_text_0">a. The brick pid file is left with a stale pid, and the brick fails to start when glusterd is started. Pid files are stored in `/var/lib/glusterd`, which persists across reboots. When glusterd is started (or restarted, or the host rebooted) and any running process matches the pid in the brick pid file, the brick fails to start.<br><br>b. The self-heal daemon (shd) pid file is left with a stale pid, and glusterd indiscriminately kills that pid when it is started. Pid files are stored in `/var/lib/glusterd`, which persists across reboots. When glusterd is started (or restarted, or the host rebooted), any running process whose pid matches the pid in the shd pid file is killed.<br><br>Due to the nature of these bugs, sometimes bricks/shd will start and sometimes they will not; restart success may be intermittent. These bugs are most likely to occur when the services were running with low pids and the host is then rebooted, since reboots tend to group pids densely in the low pid numbers.
You might also see it if you have high pid churn due to short-lived processes.<br><br>In the case of the self-heal daemon, you may also see other processes "randomly" being terminated.</pre>This results in:<br><div><br><pre class="m_5771768308393841601gmail-bz_comment_text m_5771768308393841601gmail-bz_wrap_comment_text" id="m_5771768308393841601gmail-comment_text_0">1a. brick pid file(s) remain after the brick is stopped
2a. glusterd fails to start a brick when the pid in its pid file matches any running process<br>1b. pid file /var/lib/glusterd/glustershd/<wbr>run/glustershd.pid remains after shd is stopped
2b. glusterd kills whatever process number is in the stale shd pid file<br><br>Workaround:<br><br>In our automation, when we stop all gluster processes (reboot, upgrade, etc.), we ensure all processes are stopped and then clean up the pid files with:<br>'find /var/lib/glusterd/ -name '*pid' -delete'<br></pre></div></div></blockquote><div><br></div><div>I've added comments in both bugs. The good news is that this is already fixed in 3.12.0.<br></div><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex"><div dir="ltr"><div><pre class="m_5771768308393841601gmail-bz_comment_text m_5771768308393841601gmail-bz_wrap_comment_text" id="m_5771768308393841601gmail-comment_text_0"><br></pre><pre class="m_5771768308393841601gmail-bz_comment_text m_5771768308393841601gmail-bz_wrap_comment_text" id="m_5771768308393841601gmail-comment_text_0">This is not a complete solution, but it works in our most critical situations. We may develop something more complete if the bug is not addressed promptly.<br></pre><pre class="m_5771768308393841601gmail-bz_comment_text m_5771768308393841601gmail-bz_wrap_comment_text" id="m_5771768308393841601gmail-comment_text_0"><br></pre><br></div></div><div class="gmail_extra"><br><div class="gmail_quote"><div><div class="h5">On Sat, Aug 5, 2017 at 11:54 PM, Leonid Isaev <span dir="ltr"><<a href="mailto:leonid.isaev@jila.colorado.edu" target="_blank">leonid.isaev@jila.colorado.<wbr>edu</a>></span> wrote:<br></div></div><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex"><div><div class="h5">Hi,<br>
<br>
I have a distributed volume which runs on Fedora 26 systems with<br>
glusterfs 3.11.2 from <a href="http://gluster.org" rel="noreferrer" target="_blank">gluster.org</a> repos:<br>
----------<br>
[root@taupo ~]# glusterd --version<br>
glusterfs 3.11.2<br>
<br>
gluster> volume info gv2<br>
Volume Name: gv2<br>
Type: Distribute<br>
Volume ID: 6b468f43-3857-4506-917c-7eaaae<wbr>f9b6ee<br>
Status: Started<br>
Snapshot Count: 0<br>
Number of Bricks: 6<br>
Transport-type: tcp<br>
Bricks:<br>
Brick1: kiwi:/srv/gluster/gv2/brick1/g<wbr>vol<br>
Brick2: kiwi:/srv/gluster/gv2/brick2/g<wbr>vol<br>
Brick3: taupo:/srv/gluster/gv2/brick1/<wbr>gvol<br>
Brick4: fox:/srv/gluster/gv2/brick1/gv<wbr>ol<br>
Brick5: fox:/srv/gluster/gv2/brick2/gv<wbr>ol<br>
Brick6: logan:/srv/gluster/gv2/brick1/<wbr>gvol<br>
Options Reconfigured:<br>
performance.readdir-ahead: on<br>
nfs.disable: on<br>
<br>
gluster> volume status gv2<br>
Status of volume: gv2<br>
Gluster process TCP Port RDMA Port Online Pid<br>
------------------------------<wbr>------------------------------<wbr>------------------<br>
Brick kiwi:/srv/gluster/gv2/brick1/g<wbr>vol 49152 0 Y 1128<br>
Brick kiwi:/srv/gluster/gv2/brick2/g<wbr>vol 49153 0 Y 1134<br>
Brick taupo:/srv/gluster/gv2/brick1/<wbr>gvol N/A N/A N N/A<br>
Brick fox:/srv/gluster/gv2/brick1/gv<wbr>ol 49152 0 Y 1169<br>
Brick fox:/srv/gluster/gv2/brick2/gv<wbr>ol 49153 0 Y 1175<br>
Brick logan:/srv/gluster/gv2/brick1/<wbr>gvol 49152 0 Y 1003<br>
----------<br>
<br>
The machine in question is TAUPO which has one brick that refuses to connect to<br>
the cluster. All installations were migrated from glusterfs 3.8.14 on Fedora<br>
24: I simply rsync'ed /var/lib/glusterd to new systems. On all other machines<br>
glusterd starts fine and all bricks come up. Hence I suspect a race condition<br>
somewhere. The glusterd.log file (attached) shows that the brick connects, and<br>
then suddenly disconnects from the cluster:<br>
----------<br>
[2017-08-06 03:12:38.536409] I [glusterd-utils.c:5468:gluster<wbr>d_brick_start] 0-management: discovered already-running brick /srv/gluster/gv2/brick1/gvol<br>
[2017-08-06 03:12:38.536414] I [MSGID: 106143] [glusterd-pmap.c:279:pmap_regi<wbr>stry_bind] 0-pmap: adding brick /srv/gluster/gv2/brick1/gvol on port 49153<br>
[2017-08-06 03:12:38.536427] I [rpc-clnt.c:1059:rpc_clnt_conn<wbr>ection_init] 0-management: setting frame-timeout to 600<br>
[2017-08-06 03:12:38.536500] I [rpc-clnt.c:1059:rpc_clnt_conn<wbr>ection_init] 0-snapd: setting frame-timeout to 600<br>
[2017-08-06 03:12:38.536556] I [rpc-clnt.c:1059:rpc_clnt_conn<wbr>ection_init] 0-snapd: setting frame-timeout to 600<br>
[2017-08-06 03:12:38.536616] I [MSGID: 106492] [glusterd-handler.c:2717:__glu<wbr>sterd_handle_friend_update] 0-glusterd: Received friend update from uuid: d5a487e3-4c9b-4e5a-91ff-b8d85f<wbr>d51da9<br>
[2017-08-06 03:12:38.584598] I [MSGID: 106502] [glusterd-handler.c:2762:__glu<wbr>sterd_handle_friend_update] 0-management: Received my uuid as Friend<br>
[2017-08-06 03:12:38.599340] I [socket.c:2474:socket_event_ha<wbr>ndler] 0-transport: EPOLLERR - disconnecting now<br>
[2017-08-06 03:12:38.613745] I [MSGID: 106005] [glusterd-handler.c:5846:__glu<wbr>sterd_brick_rpc_notify] 0-management: Brick taupo:/srv/gluster/gv2/brick1/<wbr>gvol has disconnected from glusterd.<br>
----------<br>
<br>
I checked that cluster.brick-multiplex is off. How can I debug this further?<br>
<br>
Thanks in advance,<br>
<span class="m_5771768308393841601HOEnZb"><font color="#888888">--<br>
Leonid Isaev<br>
</font></span><br></div></div>______________________________<wbr>_________________<br>
Gluster-users mailing list<br>
<a href="mailto:Gluster-users@gluster.org" target="_blank">Gluster-users@gluster.org</a><br>
<a href="http://lists.gluster.org/mailman/listinfo/gluster-users" rel="noreferrer" target="_blank">http://lists.gluster.org/mailm<wbr>an/listinfo/gluster-users</a><br></blockquote></div><br></div>
</blockquote></div><br></div></div>
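The `find /var/lib/glusterd/ -name '*pid' -delete` workaround quoted above removes every pid file unconditionally. A slightly narrower variant, sketched below, removes only pid files that are actually stale: the pid is no longer running, or it has been reused by a process whose name is not a gluster daemon. This is a hypothetical helper (the function names are mine, not part of gluster) and it assumes a Linux `/proc` filesystem:

```shell
# Treat a pid file as stale unless its pid belongs to a live gluster daemon.
is_stale() {
    pid=$(cat "$1" 2>/dev/null) || return 0      # unreadable file -> stale
    [ -n "$pid" ] || return 0                    # empty file -> stale
    [ -d "/proc/$pid" ] || return 0              # pid not running -> stale
    case "$(cat "/proc/$pid/comm" 2>/dev/null)" in
        glusterd|glusterfsd|glusterfs) return 1 ;;  # genuinely live daemon
        *) return 0 ;;                              # pid reused by something else
    esac
}

# Remove only the stale pid files under the given directory.
clean_stale_pidfiles() {
    find "$1" -name '*pid' | while read -r f; do
        if is_stale "$f"; then
            rm -f "$f" && echo "removed stale $f"
        fi
    done
}
```

With all gluster processes confirmed stopped, `clean_stale_pidfiles /var/lib/glusterd/` behaves the same as the `find ... -delete` one-liner, but it is safer to run while some daemons are still up.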