[Gluster-users] locking in 3.6.1
Atin Mukherjee
amukherj at redhat.com
Tue Nov 25 04:56:38 UTC 2014
Scott,
Can you please find and point out the first instance of the command that
failed to acquire the cluster-wide lock, along with its associated
glusterd log entries? There are a few cases related to rebalance commands
where we may end up with stale locks. Have you run a rebalance in between?
~Atin
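Atin's suggestion above can be sketched as a quick log scan: list lock-related messages in the glusterd log, oldest first, so the first transaction that failed to take the cluster-wide lock can be identified. The log path is the one quoted later in this thread; the grep pattern is a deliberately broad guess, so tighten it to match the wording in your own log.

```shell
# Hypothetical diagnostic sketch: find the earliest lock-related failure
# in the glusterd log. The path matches the log tailed in this thread;
# the pattern 'lock' is a broad assumption, not an exact message.
LOG="${LOG:-/var/log/glusterfs/etc-glusterfs-glusterd.vol.log}"
# -i: case-insensitive, -n: show line numbers so the first hit is obvious
grep -inE 'lock' "$LOG" 2>/dev/null | head -n 20 || true
```

Because glusterd logs are timestamped in order, the first matching line approximates the first command that hit the lock.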
On 11/25/2014 09:16 AM, Pranith Kumar Karampuri wrote:
> +glusterd folks.
>
> Pranith
> On 11/25/2014 02:41 AM, Scott Merrill wrote:
>> After upgrading to Gluster 3.6.1, I'm seeing a lot more (stale?) locks
>> between my replicated servers. This prevents me from executing commands
>> on either server.
>>
>> From one server:
>> root@gluster1:PRODUCTION:~> gluster volume status epa
>> Locking failed on gluster2.domain.local. Please check log file for
>> details.
>> root@gluster1:PRODUCTION:~> gluster volume status store
>> Locking failed on gluster2.domain.local. Please check log file for
>> details.
>>
>>
>> From the server that reports the lock:
>> root@gluster2:PRODUCTION:~> gluster volume status epa
>> Another transaction is in progress. Please try again after sometime.
>> root@gluster2:PRODUCTION:~> gluster volume status store
>> Another transaction is in progress. Please try again after sometime.
>>
>>
>> I dutifully consulted the logs, as instructed, first from one server:
>> root@gluster1:PRODUCTION:~> tail
>> /var/log/glusterfs/etc-glusterfs-glusterd.vol.log
>> [2014-11-24 21:06:34.402067] W [socket.c:611:__socket_rwv] 0-management:
>> readv on /var/run/3714f2b1aabf9be7087fc323824b74dd.socket failed
>> (Invalid argument)
>> [2014-11-24 21:06:37.402400] W [socket.c:611:__socket_rwv] 0-management:
>> readv on /var/run/3714f2b1aabf9be7087fc323824b74dd.socket failed
>> (Invalid argument)
>> [2014-11-24 21:06:40.402722] W [socket.c:611:__socket_rwv] 0-management:
>> readv on /var/run/3714f2b1aabf9be7087fc323824b74dd.socket failed
>> (Invalid argument)
>> [2014-11-24 21:06:43.403091] W [socket.c:611:__socket_rwv] 0-management:
>> readv on /var/run/3714f2b1aabf9be7087fc323824b74dd.socket failed
>> (Invalid argument)
>> [2014-11-24 21:06:46.403416] W [socket.c:611:__socket_rwv] 0-management:
>> readv on /var/run/3714f2b1aabf9be7087fc323824b74dd.socket failed
>> (Invalid argument)
>> [2014-11-24 21:06:49.403757] W [socket.c:611:__socket_rwv] 0-management:
>> readv on /var/run/3714f2b1aabf9be7087fc323824b74dd.socket failed
>> (Invalid argument)
>> [2014-11-24 21:06:52.404099] W [socket.c:611:__socket_rwv] 0-management:
>> readv on /var/run/3714f2b1aabf9be7087fc323824b74dd.socket failed
>> (Invalid argument)
>> [2014-11-24 21:06:55.404440] W [socket.c:611:__socket_rwv] 0-management:
>> readv on /var/run/3714f2b1aabf9be7087fc323824b74dd.socket failed
>> (Invalid argument)
>> [2014-11-24 21:06:58.404770] W [socket.c:611:__socket_rwv] 0-management:
>> readv on /var/run/3714f2b1aabf9be7087fc323824b74dd.socket failed
>> (Invalid argument)
>> The message "I [MSGID: 106006]
>> [glusterd-handler.c:4257:__glusterd_nodesvc_rpc_notify] 0-management:
>> nfs has disconnected from glusterd." repeated 39 times between
>> [2014-11-24 21:05:01.391554] and [2014-11-24 21:06:58.405175]
>>
>>
>> and then from the other:
>> root@gluster2:PRODUCTION:~> tail
>> /var/log/glusterfs/etc-glusterfs-glusterd.vol.log
>> [2014-11-24 21:07:23.532144] W [socket.c:611:__socket_rwv] 0-management:
>> readv on /var/run/9bdd01b8b5f546ce04b25ce7d68e3ace.socket failed
>> (Invalid argument)
>> [2014-11-24 21:07:26.532426] W [socket.c:611:__socket_rwv] 0-management:
>> readv on /var/run/9bdd01b8b5f546ce04b25ce7d68e3ace.socket failed
>> (Invalid argument)
>> [2014-11-24 21:07:29.532721] W [socket.c:611:__socket_rwv] 0-management:
>> readv on /var/run/9bdd01b8b5f546ce04b25ce7d68e3ace.socket failed
>> (Invalid argument)
>> [2014-11-24 21:07:32.533014] W [socket.c:611:__socket_rwv] 0-management:
>> readv on /var/run/9bdd01b8b5f546ce04b25ce7d68e3ace.socket failed
>> (Invalid argument)
>> [2014-11-24 21:07:35.533368] W [socket.c:611:__socket_rwv] 0-management:
>> readv on /var/run/9bdd01b8b5f546ce04b25ce7d68e3ace.socket failed
>> (Invalid argument)
>> [2014-11-24 21:07:38.533669] W [socket.c:611:__socket_rwv] 0-management:
>> readv on /var/run/9bdd01b8b5f546ce04b25ce7d68e3ace.socket failed
>> (Invalid argument)
>> [2014-11-24 21:07:41.533998] W [socket.c:611:__socket_rwv] 0-management:
>> readv on /var/run/9bdd01b8b5f546ce04b25ce7d68e3ace.socket failed
>> (Invalid argument)
>> [2014-11-24 21:07:44.534330] W [socket.c:611:__socket_rwv] 0-management:
>> readv on /var/run/9bdd01b8b5f546ce04b25ce7d68e3ace.socket failed
>> (Invalid argument)
>> [2014-11-24 21:07:47.534668] W [socket.c:611:__socket_rwv] 0-management:
>> readv on /var/run/9bdd01b8b5f546ce04b25ce7d68e3ace.socket failed
>> (Invalid argument)
>> [2014-11-24 21:07:50.534984] W [socket.c:611:__socket_rwv] 0-management:
>> readv on /var/run/9bdd01b8b5f546ce04b25ce7d68e3ace.socket failed
>> (Invalid argument)
>>
>> (The NFS message from the first server is interesting, since all of my
>> volumes explicitly declare "nfs.disable: true".)
>>
>>
>> I can release the lock with `service glusterd restart`, but that seems
>> sub-optimal: it's terribly manual, and eventually another lock will
>> stick, reproducing this situation.
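[Editorial aside: the manual workaround described above can be sketched as a small guarded script. This is a hypothetical host-side snippet, not part of the original thread; `service glusterd restart` is the command Scott quotes, and the guard simply makes it a no-op on hosts without Gluster installed. Restarting glusterd affects only the management daemon, so brick processes and client I/O should stay online, though this is worth verifying on your own deployment.]

```shell
# Sketch of the stale-lock workaround: restart glusterd on the node
# holding the lock (gluster2 in this thread). Guarded so the snippet
# does nothing on machines where the gluster CLI is not present.
if command -v gluster >/dev/null 2>&1; then
    service glusterd restart
    # afterwards, volume status should no longer report the stale lock
    gluster volume status
else
    echo "gluster not installed on this host; nothing to do"
fi
```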
>>
>>
>> How can I diagnose what is locking, and what can I do to remedy this?
>>
>> Thanks!
>> Scott
>>
>> _______________________________________________
>> Gluster-users mailing list
>> Gluster-users at gluster.org
>> http://supercolony.gluster.org/mailman/listinfo/gluster-users
>
>