[Gluster-users] Cannot upgrade from 3.6.3 to 3.7.3

Andreas Mather andreas at allaboutapps.at
Sat Aug 29 06:12:39 UTC 2015


Hi!

> Did you mean the "option rpc-auth-allow-insecure on" setting?

Yes, exactly. I've also applied the "volume set server.allow-insecure on"
command, but I doubt that it helped or is even active, since I've never
restarted the volume itself, just the individual nodes.
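
For reference, this is roughly what I have in place now, with vol1 standing
in for each volume (paths as in the stock RPM install, so treat this as a
sketch rather than gospel):

  # /etc/glusterfs/glusterd.vol on every node; picked up when glusterd restarts
  option rpc-auth-allow-insecure on

  # per volume; as far as I understand, the brick processes only pick this up
  # once the volume (or the bricks) have been restarted
  gluster volume set vol1 server.allow-insecure on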


-- Andreas


On Fri, Aug 28, 2015 at 10:35 PM, Alastair Neil <ajneil.tech at gmail.com>
wrote:

> Did you mean the "option rpc-auth-allow-insecure on" setting?  I just did
> a rolling upgrade from 3.6 to 3.7 without issue; however, I had enabled
> insecure connections because I had some clients running 3.7.
>
> -Alastair
>
>
> On 27 August 2015 at 10:04, Andreas Mather <andreas at allaboutapps.at>
> wrote:
>
>> Hi Humble!
>>
>> Thanks for the reply. The docs do not mention anything related to the
>> 3.6->3.7 upgrade that applies to my case.
>>
>> In the meantime I was able to resolve the issue by following the steps in
>> the 3.7.1 release notes
>> (https://gluster.readthedocs.org/en/latest/release-notes/3.7.1/).
>>
>> Thanks,
>>
>> Andreas
>>
>>
>> On Thu, Aug 27, 2015 at 3:22 PM, Humble Devassy Chirammal <
>> humble.devassy at gmail.com> wrote:
>>
>>> Hi Andreas,
>>>
>>> > Is it even possible to perform a rolling upgrade?
>>>
>>> The GlusterFS upgrade process is documented at
>>> https://gluster.readthedocs.org/en/latest/Upgrade-Guide/README/
>>>
>>>
>>>
>>> --Humble
>>>
>>>
>>> On Thu, Aug 27, 2015 at 4:57 PM, Andreas Mather
>>> <andreas at allaboutapps.at> wrote:
>>>
>>>> Hi All!
>>>>
>>>> I wanted to do a rolling upgrade of gluster from 3.6.3 to 3.7.3, but
>>>> after the upgrade, the updated node won't connect to the other peers.
>>>>
>>>> The cluster has 4 nodes (vhost[1-4]) and 4 volumes (vol[1-4]) with 2
>>>> replicas each:
>>>> vol1: vhost1/brick1, vhost2/brick2
>>>> vol2: vhost2/brick1, vhost1/brick2
>>>> vol3: vhost3/brick1, vhost4/brick2
>>>> vol4: vhost4/brick1, vhost3/brick2
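>>>>
>>>> (Each volume is a plain 2-way replica; vol1, for example, would have been
>>>> created with something along the lines of "gluster volume create vol1
>>>> replica 2 vhost1:/path/to/brick1 vhost2:/path/to/brick2" -- the brick
>>>> paths here are only placeholders.)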
>>>>
>>>> I'm trying to start the upgrade on vhost4. After restarting glusterd, the
>>>> peer status shows all other peers as disconnected, and the log has repeated
>>>> entries like this:
>>>>
>>>> [2015-08-27 10:59:56.982254] E [MSGID: 106167]
>>>> [glusterd-handshake.c:2078:__glusterd_peer_dump_version_cbk] 0-management:
>>>> Error through RPC layer, retry again later
>>>> [2015-08-27 10:59:56.982335] E [rpc-clnt.c:362:saved_frames_unwind]
>>>> (--> /lib64/libglusterfs.so.0(_gf_log_callingfn+0x196)[0x7f1a7a9579e6] (-->
>>>> /lib64/libgfrpc.so.0(saved_frames_unwind+0x1de)[0x7f1a7a7229be] (-->
>>>> /lib64/libgfrpc.so.0(saved_frames_destroy+0xe)[0x7f1a7a722ace] (-->
>>>> /lib64/libgfrpc.so.0(rpc_clnt_connection_cleanup+0x9c)[0x7f1a7a72447c] (-->
>>>> /lib64/libgfrpc.so.0(rpc_clnt_notify+0x48)[0x7f1a7a724c38] )))))
>>>> 0-management: forced unwinding frame type(GF-DUMP) op(NULL(2)) called at
>>>> 2015-08-27 10:59:56.981550 (xid=0x2)
>>>> [2015-08-27 10:59:56.982346] W [rpc-clnt-ping.c:204:rpc_clnt_ping_cbk]
>>>> 0-management: socket disconnected
>>>> [2015-08-27 10:59:56.982359] I [MSGID: 106004]
>>>> [glusterd-handler.c:5051:__glusterd_peer_rpc_notify] 0-management: Peer
>>>> <vhost3-int> (<72e2078d-1ed9-4cdd-aad2-c86e418746d1>), in state <Peer in
>>>> Cluster>, has disconnected from glusterd.
>>>> [2015-08-27 10:59:56.982491] W
>>>> [glusterd-locks.c:677:glusterd_mgmt_v3_unlock] (-->
>>>> /lib64/libglusterfs.so.0(_gf_log_callingfn+0x196)[0x7f1a7a9579e6] (-->
>>>> /usr/lib64/glusterfs/3.7.3/xlator/mgmt/glusterd.so(glusterd_mgmt_v3_unlock+0x541)[0x7f1a6f55ee91]
>>>> (-->
>>>> /usr/lib64/glusterfs/3.7.3/xlator/mgmt/glusterd.so(__glusterd_peer_rpc_notify+0x162)[0x7f1a6f4c6972]
>>>> (-->
>>>> /usr/lib64/glusterfs/3.7.3/xlator/mgmt/glusterd.so(glusterd_big_locked_notify+0x4c)[0x7f1a6f4bc90c]
>>>> (--> /lib64/libgfrpc.so.0(rpc_clnt_notify+0x90)[0x7f1a7a724c80] )))))
>>>> 0-management: Lock for vol vol1 not held
>>>> [2015-08-27 10:59:56.982504] W [MSGID: 106118]
>>>> [glusterd-handler.c:5073:__glusterd_peer_rpc_notify] 0-management: Lock not
>>>> released for vol1
>>>> [2015-08-27 10:59:56.982608] W
>>>> [glusterd-locks.c:677:glusterd_mgmt_v3_unlock] (-->
>>>> /lib64/libglusterfs.so.0(_gf_log_callingfn+0x196)[0x7f1a7a9579e6] (-->
>>>> /usr/lib64/glusterfs/3.7.3/xlator/mgmt/glusterd.so(glusterd_mgmt_v3_unlock+0x541)[0x7f1a6f55ee91]
>>>> (-->
>>>> /usr/lib64/glusterfs/3.7.3/xlator/mgmt/glusterd.so(__glusterd_peer_rpc_notify+0x162)[0x7f1a6f4c6972]
>>>> (-->
>>>> /usr/lib64/glusterfs/3.7.3/xlator/mgmt/glusterd.so(glusterd_big_locked_notify+0x4c)[0x7f1a6f4bc90c]
>>>> (--> /lib64/libgfrpc.so.0(rpc_clnt_notify+0x90)[0x7f1a7a724c80] )))))
>>>> 0-management: Lock for vol vol2 not held
>>>> [2015-08-27 10:59:56.982618] W [MSGID: 106118]
>>>> [glusterd-handler.c:5073:__glusterd_peer_rpc_notify] 0-management: Lock not
>>>> released for vol2
>>>> [2015-08-27 10:59:56.982728] W
>>>> [glusterd-locks.c:677:glusterd_mgmt_v3_unlock] (-->
>>>> /lib64/libglusterfs.so.0(_gf_log_callingfn+0x196)[0x7f1a7a9579e6] (-->
>>>> /usr/lib64/glusterfs/3.7.3/xlator/mgmt/glusterd.so(glusterd_mgmt_v3_unlock+0x541)[0x7f1a6f55ee91]
>>>> (-->
>>>> /usr/lib64/glusterfs/3.7.3/xlator/mgmt/glusterd.so(__glusterd_peer_rpc_notify+0x162)[0x7f1a6f4c6972]
>>>> (-->
>>>> /usr/lib64/glusterfs/3.7.3/xlator/mgmt/glusterd.so(glusterd_big_locked_notify+0x4c)[0x7f1a6f4bc90c]
>>>> (--> /lib64/libgfrpc.so.0(rpc_clnt_notify+0x90)[0x7f1a7a724c80] )))))
>>>> 0-management: Lock for vol vol3 not held
>>>> [2015-08-27 10:59:56.982739] W [MSGID: 106118]
>>>> [glusterd-handler.c:5073:__glusterd_peer_rpc_notify] 0-management: Lock not
>>>> released for vol3
>>>> [2015-08-27 10:59:56.982844] W
>>>> [glusterd-locks.c:677:glusterd_mgmt_v3_unlock] (-->
>>>> /lib64/libglusterfs.so.0(_gf_log_callingfn+0x196)[0x7f1a7a9579e6] (-->
>>>> /usr/lib64/glusterfs/3.7.3/xlator/mgmt/glusterd.so(glusterd_mgmt_v3_unlock+0x541)[0x7f1a6f55ee91]
>>>> (-->
>>>> /usr/lib64/glusterfs/3.7.3/xlator/mgmt/glusterd.so(__glusterd_peer_rpc_notify+0x162)[0x7f1a6f4c6972]
>>>> (-->
>>>> /usr/lib64/glusterfs/3.7.3/xlator/mgmt/glusterd.so(glusterd_big_locked_notify+0x4c)[0x7f1a6f4bc90c]
>>>> (--> /lib64/libgfrpc.so.0(rpc_clnt_notify+0x90)[0x7f1a7a724c80] )))))
>>>> 0-management: Lock for vol vol4 not held
>>>> [2015-08-27 10:59:56.982858] W [MSGID: 106118]
>>>> [glusterd-handler.c:5073:__glusterd_peer_rpc_notify] 0-management: Lock not
>>>> released for vol4
>>>> [2015-08-27 10:59:56.982881] W [socket.c:642:__socket_rwv]
>>>> 0-management: readv on 192.168.92.2:24007 failed (Connection reset by
>>>> peer)
>>>> [2015-08-27 10:59:56.982974] E [rpc-clnt.c:362:saved_frames_unwind]
>>>> (--> /lib64/libglusterfs.so.0(_gf_log_callingfn+0x196)[0x7f1a7a9579e6] (-->
>>>> /lib64/libgfrpc.so.0(saved_frames_unwind+0x1de)[0x7f1a7a7229be] (-->
>>>> /lib64/libgfrpc.so.0(saved_frames_destroy+0xe)[0x7f1a7a722ace] (-->
>>>> /lib64/libgfrpc.so.0(rpc_clnt_connection_cleanup+0x9c)[0x7f1a7a72447c] (-->
>>>> /lib64/libgfrpc.so.0(rpc_clnt_notify+0x48)[0x7f1a7a724c38] )))))
>>>> 0-management: forced unwinding frame type(GLUSTERD-DUMP) op(DUMP(1)) called
>>>> at 2015-08-27 10:59:56.981566 (xid=0x1)
>>>>
>>>>
>>>> Any ideas? Is it even possible to perform a rolling upgrade?
>>>>
>>>> Thanks for any help!
>>>>
>>>> Andreas
>>>>
>>>
>>>
>>
>
>
> _______________________________________________
> Gluster-users mailing list
> Gluster-users at gluster.org
> http://www.gluster.org/mailman/listinfo/gluster-users
>