<div dir="ltr">Hi,<div><br></div><div>We need some more information in order to debug this:</div><div>- the version of Gluster you were running before the upgrade,</div><div>- the output of gluster volume info &lt;volname&gt;,</div><div>- the brick logs for the volume from when the operation was performed.</div><div><br></div><div>Regards,</div><div>Nithya</div><div><br></div><div><br></div></div><div class="gmail_extra"><br><div class="gmail_quote">On 2 May 2018 at 15:19, Hoggins! <span dir="ltr">&lt;<a href="mailto:fuckspam@wheres5.com" target="_blank">fuckspam@wheres5.com</a>&gt;</span> wrote:<br><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex">Hello list,<br>
<br>
I have an issue on my Gluster cluster. It is composed of two data nodes<br>
and an arbiter for all my volumes.<br>
<br>
After upgrading my bricks to Gluster 3.12.9 (Fedora 27), this is<br>
what I get:<br>
<br>
- on node 1, volumes won't start, and glusterd.log shows a lot of:<br>
[2018-05-02 09:46:06.267817] W<br>
[glusterd-locks.c:843:glusterd_mgmt_v3_unlock]<br>
(-->/usr/lib64/glusterfs/3.12.9/xlator/mgmt/glusterd.so(+0x22549)<br>
[0x7f0047ae2549]<br>
-->/usr/lib64/glusterfs/3.12.9/xlator/mgmt/glusterd.so(+0x2bdf0)<br>
[0x7f0047aebdf0]<br>
-->/usr/lib64/glusterfs/3.12.9/xlator/mgmt/glusterd.so(+0xd8371)<br>
[0x7f0047b98371] ) 0-management: Lock for vol thedude not held<br>
The message "W [MSGID: 106118]<br>
[glusterd-handler.c:6342:__glusterd_peer_rpc_notify] 0-management: Lock<br>
not released for rom" repeated 3 times between [2018-05-02<br>
09:45:57.262321] and [2018-05-02 09:46:06.267804]<br>
[2018-05-02 09:46:06.267826] W [MSGID: 106118]<br>
[glusterd-handler.c:6342:__glusterd_peer_rpc_notify] 0-management: Lock<br>
not released for thedude<br>
<br>
<br>
- on node 2, volumes are up but don't seem to heal correctly. The logs<br>
show a lot of:<br>
[2018-05-02 09:23:01.054196] I [MSGID: 108026]<br>
[afr-self-heal-entry.c:887:afr_selfheal_entry_do] 0-thedude-replicate-0:<br>
performing entry selfheal on 4dc0ae36-c365-4fc7-b44c-d717392c7bd3<br>
[2018-05-02 09:23:01.222596] E [MSGID: 114031]<br>
[client-rpc-fops.c:233:client3_3_mknod_cbk] 0-thedude-client-2: remote<br>
operation failed. Path: &lt;gfid:74ea4c57-61e5-4674-96e4-51356dd710db&gt; [No<br>
space left on device]<br>
<br>
<br>
- on the arbiter, glustershd.log shows a lot of:<br>
[2018-05-02 09:44:54.619476] I [MSGID: 108026]<br>
[afr-self-heal-entry.c:887:afr_selfheal_entry_do] 0-web-replicate-0:<br>
performing entry selfheal on 146a9a84-3db1-42ef-828e-0e4131af3667<br>
[2018-05-02 09:44:54.640276] E [MSGID: 114031]<br>
[client-rpc-fops.c:295:client3_3_mkdir_cbk] 0-web-client-2: remote<br>
operation failed. Path: &lt;gfid:47b16567-9acc-454b-b20f-9821e6f1d420&gt; [No<br>
space left on device]<br>
[2018-05-02 09:44:54.657045] I [MSGID: 108026]<br>
[afr-self-heal-entry.c:887:afr_selfheal_entry_do] 0-web-replicate-0:<br>
performing entry selfheal on 9f9122ed-2794-4ed1-91db-be0c7fe89389<br>
[2018-05-02 09:47:09.121060] W [MSGID: 101088]<br>
[common-utils.c:4166:gf_backtrace_save] 0-mailer-replicate-0: Failed to<br>
save the backtrace.<br>
<br>
<br>
Clients connecting to the cluster experience problems, such as<br>
Gluster refusing to create files.<br>
<br>
I'm lost here; where should I start?<br>
<br>
Thanks for your help!<br>
<span class="HOEnZb"><font color="#888888"><br>
Hoggins!<br>
<br>
</font></span><br>______________________________<wbr>_________________<br>
Gluster-users mailing list<br>
<a href="mailto:Gluster-users@gluster.org">Gluster-users@gluster.org</a><br>
<a href="http://lists.gluster.org/mailman/listinfo/gluster-users" rel="noreferrer" target="_blank">http://lists.gluster.org/<wbr>mailman/listinfo/gluster-users</a><br></blockquote></div><br></div>
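One more thing worth checking given the "[No space left on device]" errors in the brick logs: an arbiter brick stores only directory entries and metadata, so it can exhaust the filesystem's inodes long before it runs out of bytes, and inode exhaustion is also reported as ENOSPC. A quick sketch of the check to run on every node, including the arbiter (the brick path is an argument you would take from the gluster volume info output; it defaults to / here only so the snippet runs as-is):

```shell
# Check both free space and free inodes on a brick's filesystem.
# Pass the brick path (from "gluster volume info") as the first
# argument; "/" is only a stand-in default for illustration.
BRICK=${1:-/}
df -h "$BRICK"   # free bytes
df -i "$BRICK"   # free inodes; writes fail with ENOSPC once IFree reaches 0
```

If IFree is at or near 0 on any brick, that alone explains the failing mknod/mkdir calls in the self-heal logs.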