[Gluster-users] Error between v3.7.8 and v3.7.0

Kaushal M kshlmster at gmail.com
Tue Feb 23 07:09:49 UTC 2016


The GlusterFS network layer was changed in 3.7.3 to use unprivileged
(>1024) ports and to allow incoming connections from unprivileged
ports by default.

What this means is that clients/servers older than 3.7.3 will not
accept connections from newer clients/servers: 3.7.3 and above try to
connect using unprivileged ports, which are rejected by <=3.7.2.

You can find more information on the issue, and workarounds, at
https://www.gluster.org/pipermail/gluster-users/2015-August/023116.html
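The workaround described in the linked post is, roughly, to tell the
older (<=3.7.2) servers to accept connections from unprivileged ports.
A sketch, assuming the option names from the 3.7 documentation (verify
against your version before applying):

```shell
# On each <=3.7.2 server: let glusterd accept management connections
# from unprivileged ports. Add this line inside the "volume management"
# block of /etc/glusterfs/glusterd.vol, then restart glusterd:
#
#     option rpc-auth-allow-insecure on

# Then, per volume, let the brick processes accept clients connecting
# from unprivileged ports (volume name taken from this thread):
gluster volume set RaidVolC server.allow-insecure on
```

Upgrading all nodes to the same version (>=3.7.3) avoids the need for
these options entirely.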

~kaushal


On Sat, Feb 20, 2016 at 11:18 PM, Atin Mukherjee
<atin.mukherjee83 at gmail.com> wrote:
> I do not see any mount-related failures in the glusterd log you have pasted.
> Typically, if a mount request fails, either glusterd is down or the
> brick processes are down, and there would be an error log entry from
> mgmt_getspec().
>
> The log entries do indicate that the network is unstable. If you are
> still stuck, could you provide the mount log and the glusterd log,
> along with the output of "gluster volume info" and the exact mount
> command you used?
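>
> As a rough sketch, the state asked about above can be gathered with the
> standard gluster CLI and the usual RPM-install log locations (the log
> paths are assumptions; adjust for your setup):
>
> ```shell
> gluster peer status              # are all peers connected?
> gluster volume info RaidVolC     # volume configuration
> gluster volume status RaidVolC   # are glusterd and the brick processes up?
> # Logs to attach (typical locations on CentOS/RPM installs):
> #   /var/log/glusterfs/etc-glusterfs-glusterd.vol.log   (glusterd log)
> #   /var/log/glusterfs/mnt.log                          (mount log for /mnt)
> ```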
>
> -Atin
> Sent from one plus one
>
> On 20-Feb-2016 4:21 pm, "Ml Ml" <mliebherr99 at googlemail.com> wrote:
>>
>> Hello List,
>>
>> I am running oVirt (CentOS) on top of GlusterFS, with a 3-node
>> replica. Versions are listed below.
>>
>> It looks like I cannot get my node1 (v3.7.8) to work together with
>> the other two (v3.7.0). The error I get when I try "mount -t glusterfs
>> 10.10.3.7:/RaidVolC /mnt/" is:
>>
>> [2016-02-20 10:27:30.890701] W [socket.c:869:__socket_keepalive]
>> 0-socket: failed to set TCP_USER_TIMEOUT -1000 on socket 14, Invalid
>> argument
>> [2016-02-20 10:27:30.890728] E [socket.c:2965:socket_connect]
>> 0-management: Failed to set keep-alive: Invalid argument
>> [2016-02-20 10:27:30.891296] W [socket.c:588:__socket_rwv]
>> 0-management: readv on 10.10.3.7:24007 failed (No data available)
>> [2016-02-20 10:27:30.891671] E [rpc-clnt.c:362:saved_frames_unwind]
>> (--> /lib64/libglusterfs.so.0(_gf_log_callingfn+0x192)[0x7ff82c50bab2]
>> (--> /lib64/libgfrpc.so.0(saved_frames_unwind+0x1de)[0x7ff82c2d68de]
>> (--> /lib64/libgfrpc.so.0(saved_frames_destroy+0xe)[0x7ff82c2d69ee]
>> (-->
>> /lib64/libgfrpc.so.0(rpc_clnt_connection_cleanup+0x7a)[0x7ff82c2d837a]
>> (--> /lib64/libgfrpc.so.0(rpc_clnt_notify+0x88)[0x7ff82c2d8ba8] )))))
>> 0-management: forced unwinding frame type(GLUSTERD-DUMP) op(DUMP(1))
>> called at 2016-02-20 10:27:30.891063 (xid=0x35)
>> The message "W [MSGID: 106118]
>> [glusterd-handler.c:5149:__glusterd_peer_rpc_notify] 0-management:
>> Lock not released for RaidVolC" repeated 3 times between [2016-02-20
>> 10:27:24.873207] and [2016-02-20 10:27:27.886916]
>> [2016-02-20 10:27:30.891704] E [MSGID: 106167]
>> [glusterd-handshake.c:2074:__glusterd_peer_dump_version_cbk]
>> 0-management: Error through RPC layer, retry again later
>> [2016-02-20 10:27:30.891871] W
>> [glusterd-locks.c:681:glusterd_mgmt_v3_unlock]
>>
>> (-->/usr/lib64/glusterfs/3.7.8/xlator/mgmt/glusterd.so(glusterd_big_locked_notify+0x4c)
>> [0x7ff821062b9c]
>>
>> -->/usr/lib64/glusterfs/3.7.8/xlator/mgmt/glusterd.so(__glusterd_peer_rpc_notify+0x162)
>> [0x7ff82106ce72]
>>
>> -->/usr/lib64/glusterfs/3.7.8/xlator/mgmt/glusterd.so(glusterd_mgmt_v3_unlock+0x58a)
>> [0x7ff82110c73a] ) 0-management: Lock for vol RaidVolB not held
>> [2016-02-20 10:27:30.892001] W
>> [glusterd-locks.c:681:glusterd_mgmt_v3_unlock]
>>
>> (-->/usr/lib64/glusterfs/3.7.8/xlator/mgmt/glusterd.so(glusterd_big_locked_notify+0x4c)
>> [0x7ff821062b9c]
>>
>> -->/usr/lib64/glusterfs/3.7.8/xlator/mgmt/glusterd.so(__glusterd_peer_rpc_notify+0x162)
>> [0x7ff82106ce72]
>>
>> -->/usr/lib64/glusterfs/3.7.8/xlator/mgmt/glusterd.so(glusterd_mgmt_v3_unlock+0x58a)
>> [0x7ff82110c73a] ) 0-management: Lock for vol RaidVolC not held
>> The message "W [MSGID: 106118]
>> [glusterd-handler.c:5149:__glusterd_peer_rpc_notify] 0-management:
>> Lock not released for RaidVolB" repeated 3 times between [2016-02-20
>> 10:27:24.877923] and [2016-02-20 10:27:30.891888]
>> [2016-02-20 10:27:30.892023] W [MSGID: 106118]
>> [glusterd-handler.c:5149:__glusterd_peer_rpc_notify] 0-management:
>> Lock not released for RaidVolC
>> [2016-02-20 10:27:30.895617] W [socket.c:869:__socket_keepalive]
>> 0-socket: failed to set TCP_USER_TIMEOUT -1000 on socket 14, Invalid
>> argument
>> [2016-02-20 10:27:30.895641] E [socket.c:2965:socket_connect]
>> 0-management: Failed to set keep-alive: Invalid argument
>> [2016-02-20 10:27:30.896300] W [socket.c:588:__socket_rwv]
>> 0-management: readv on 10.10.1.6:24007 failed (No data available)
>> [2016-02-20 10:27:30.896541] E [rpc-clnt.c:362:saved_frames_unwind]
>> (--> /lib64/libglusterfs.so.0(_gf_log_callingfn+0x192)[0x7ff82c50bab2]
>> (--> /lib64/libgfrpc.so.0(saved_frames_unwind+0x1de)[0x7ff82c2d68de]
>> (--> /lib64/libgfrpc.so.0(saved_frames_destroy+0xe)[0x7ff82c2d69ee]
>> (-->
>> /lib64/libgfrpc.so.0(rpc_clnt_connection_cleanup+0x7a)[0x7ff82c2d837a]
>> (--> /lib64/libgfrpc.so.0(rpc_clnt_notify+0x88)[0x7ff82c2d8ba8] )))))
>> 0-management: forced unwinding frame type(GLUSTERD-DUMP) op(DUMP(1))
>> called at 2016-02-20 10:27:30.895995 (xid=0x35)
>> [2016-02-20 10:27:30.896703] W
>> [glusterd-locks.c:681:glusterd_mgmt_v3_unlock]
>>
>> (-->/usr/lib64/glusterfs/3.7.8/xlator/mgmt/glusterd.so(glusterd_big_locked_notify+0x4c)
>> [0x7ff821062b9c]
>>
>> -->/usr/lib64/glusterfs/3.7.8/xlator/mgmt/glusterd.so(__glusterd_peer_rpc_notify+0x162)
>> [0x7ff82106ce72]
>>
>> -->/usr/lib64/glusterfs/3.7.8/xlator/mgmt/glusterd.so(glusterd_mgmt_v3_unlock+0x58a)
>> [0x7ff82110c73a] ) 0-management: Lock for vol RaidVolB not held
>> [2016-02-20 10:27:30.896584] I [MSGID: 106004]
>> [glusterd-handler.c:5127:__glusterd_peer_rpc_notify] 0-management:
>> Peer <ovirt-node06-stgt.stuttgart.imos.net>
>> (<08884518-2db7-4429-ab2f-019d03a02b76>), in state <Peer in Cluster>,
>> has disconnected from glusterd.
>> [2016-02-20 10:27:30.896720] W [MSGID: 106118]
>> [glusterd-handler.c:5149:__glusterd_peer_rpc_notify] 0-management:
>> Lock not released for RaidVolB
>> [2016-02-20 10:27:30.896854] W
>> [glusterd-locks.c:681:glusterd_mgmt_v3_unlock]
>>
>> (-->/usr/lib64/glusterfs/3.7.8/xlator/mgmt/glusterd.so(glusterd_big_locked_notify+0x4c)
>> [0x7ff821062b9c]
>>
>> -->/usr/lib64/glusterfs/3.7.8/xlator/mgmt/glusterd.so(__glusterd_peer_rpc_notify+0x162)
>> [0x7ff82106ce72]
>>
>> -->/usr/lib64/glusterfs/3.7.8/xlator/mgmt/glusterd.so(glusterd_mgmt_v3_unlock+0x58a)
>> [0x7ff82110c73a] ) 0-management: Lock for vol RaidVolC not held
>>
>>
>> Any idea what the problem is? I had a network problem, which is
>> solved now, but now I am stuck with this.
>>
>>
>> Node1:
>> ============
>> rpm -qa |grep gluster
>> glusterfs-fuse-3.7.8-1.el7.x86_64
>> glusterfs-3.7.8-1.el7.x86_64
>> glusterfs-cli-3.7.8-1.el7.x86_64
>> glusterfs-client-xlators-3.7.8-1.el7.x86_64
>> glusterfs-rdma-3.7.8-1.el7.x86_64
>> vdsm-gluster-4.16.30-0.el7.centos.noarch
>> glusterfs-api-3.7.8-1.el7.x86_64
>> glusterfs-libs-3.7.8-1.el7.x86_64
>> glusterfs-server-3.7.8-1.el7.x86_64
>>
>>
>> Node2:
>> ==============
>>  rpm -qa |grep gluster
>> glusterfs-fuse-3.7.0-1.el7.x86_64
>> glusterfs-libs-3.7.0-1.el7.x86_64
>> glusterfs-api-3.7.0-1.el7.x86_64
>> glusterfs-cli-3.7.0-1.el7.x86_64
>> glusterfs-server-3.7.0-1.el7.x86_64
>> glusterfs-3.7.0-1.el7.x86_64
>> glusterfs-rdma-3.7.0-1.el7.x86_64
>> vdsm-gluster-4.16.14-0.el7.noarch
>> glusterfs-client-xlators-3.7.0-1.el7.x86_64
>>
>>
>> Node3:
>> =================
>> rpm -qa|grep glus
>> glusterfs-3.7.0-1.el7.x86_64
>> glusterfs-rdma-3.7.0-1.el7.x86_64
>> glusterfs-client-xlators-3.7.0-1.el7.x86_64
>> glusterfs-libs-3.7.0-1.el7.x86_64
>> glusterfs-api-3.7.0-1.el7.x86_64
>> glusterfs-cli-3.7.0-1.el7.x86_64
>> glusterfs-server-3.7.0-1.el7.x86_64
>> vdsm-gluster-4.16.14-0.el7.noarch
>> glusterfs-fuse-3.7.0-1.el7.x86_64
>>
>>
>> Thanks,
>> Mario
>> _______________________________________________
>> Gluster-users mailing list
>> Gluster-users at gluster.org
>> http://www.gluster.org/mailman/listinfo/gluster-users
>
>
