[Gluster-users] (3.2.4-1) many "Stopping gluster nfsd running in pid: <pid>" in log

Tomoaki Sato tsato at valinux.co.jp
Fri Oct 14 09:19:43 UTC 2011


I meant 'exportfs -r', not 'exportfs -f'. Sorry.
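For anyone hitting the same problem: the fix discussed below is to make the periodic script skip the 'gluster volume set' call when the allow/reject lists have not changed, since every set restarts the gluster NFS server. A minimal sketch of such a guard (the volume name, cache path, and the GLUSTER_CMD override are illustrative assumptions, not part of the original script):

```shell
#!/bin/sh
# Sketch: only issue `gluster volume set` when the value actually changed,
# so the gluster NFS server is not restarted on every run.
# Assumptions: the cache directory path is illustrative, and GLUSTER_CMD
# is overridable so the logic can be exercised without a gluster install.

GLUSTER_CMD="${GLUSTER_CMD:-gluster}"
STATE_DIR="${STATE_DIR:-/var/tmp/gluster-acl-cache}"

set_if_changed() {
    volume=$1 key=$2 value=$3
    mkdir -p "$STATE_DIR"
    cache="$STATE_DIR/$volume.$key"
    # Skip the (nfs-restarting) set when the value is unchanged.
    if [ -f "$cache" ] && [ "$(cat "$cache")" = "$value" ]; then
        return 0
    fi
    "$GLUSTER_CMD" volume set "$volume" "$key" "$value" \
        && printf '%s' "$value" > "$cache"
}

# Example (hypothetical volume "vol0"); nfsd restarts only when a list
# actually differs from the last applied one:
# set_if_changed vol0 nfs.rpc-auth-allow "192.168.1.0/24"
# set_if_changed vol0 nfs.rpc-auth-reject "10.0.0.0/8"
```

With a guard like this the script can keep running every 3 minutes; the NFS server is only restarted when an allow/reject list really changes.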

(2011/10/14 18:17), Tomoaki Sato wrote:
> Krishna,
>
> Thank you for your comments.
> I've changed the script so that it does not repeat the 'gluster volume set' command with the same arguments.
> Do you have any plans to make gluster restart-free, like NFS's 'exportfs -f'?
>
> tomo sato
>
> (2011/10/14 18:03), Krishna Srinivas wrote:
>> Hi Tomo Sato,
>>
>> Using the 'gluster volume set' command will restart the NFS server, so
>> you should change the script so that it does not restart the NFS
>> server too often.
>>
>> You can consult the person who installed the script, as it is not
>> part of the scripts installed by gluster.
>>
>> Krishna
>>
>> On Wed, Oct 12, 2011 at 8:12 AM, Tomoaki Sato<tsato at valinux.co.jp> wrote:
>>> Hi,
>>>
>>> I found that a local shell script issues 'gluster volume set <volume>
>>> nfs.rpc-auth-allow <IP addr list>' and 'gluster volume set <volume>
>>> nfs.rpc-auth-reject <IP addr list>' every 3 minutes to push the latest
>>> allow/reject lists.
>>> Since the script does not check whether the current lists differ from the
>>> last ones, the NFS servers restart every time.
>>> Should I change the script to omit the duplicated 'gluster volume set' calls?
>>>
>>> Best,
>>> tomo sato
>>> (2011/10/11 18:13), Tomoaki Sato wrote:
>>>>
>>>> Hi,
>>>>
>>>> I have reproduced the issue on 3.2.3-1 too. The issue I reported was not
>>>> specific to 3.2.4-1; sorry, my mistake.
>>>> It seems that the 'gluster volume set <key> <value>' command kills the NFS
>>>> servers and re-invokes new ones, and that something else also periodically
>>>> kills and re-invokes the NFS servers in the same way 'gluster volume set
>>>> <key> <value>' does. Eventually 'showmount -e' fails because no NFS server
>>>> is running.
>>>> How can I suppress the restarting of the NFS servers?
>>>>
>>>> Best,
>>>> tomo sato
>>>>
>>>> (2011/10/07 16:35), Tomoaki Sato wrote:
>>>>>
>>>>> Hi,
>>>>>
>>>>> In my environment, 'showmount -e' has intermittently failed to obtain the
>>>>> exported directories since updating glusterfs from 3.2.3 to 3.2.4.
>>>>> The following messages appear repeatedly in the
>>>>> /var/log/glusterfs/etc-glusterfs-glusterd.vol.log file.
>>>>> I'm not sure whether this is related.
>>>>>
>>>>> # rpm -qa | grep glusterfs
>>>>> glusterfs-fuse-3.2.4-1
>>>>> glusterfs-core-3.2.4-1
>>>>> # tail -f /var/log/glusterfs/etc-glusterfs-glusterd.vol.log
>>>>>
>>>>> (after about 190 seconds of quiet)
>>>>>
>>>>> [2011-10-07 16:11:37.997267] I [glusterd-utils.c:243:glusterd_lock]
>>>>> 0-glusterd: Cluster lock held by 887e7be3-f9c2-4ed7-8cbb-6b622c54322e
>>>>> [2011-10-07 16:11:37.997292] I
>>>>> [glusterd-handler.c:420:glusterd_op_txn_begin] 0-glusterd: Acquired local
>>>>> lock
>>>>> [2011-10-07 16:11:38.4289] I
>>>>> [glusterd-op-sm.c:6543:glusterd_op_ac_send_stage_op] 0-glusterd: Sent op req
>>>>> to 0 peers
>>>>> [2011-10-07 16:11:38.14077] I
>>>>> [glusterd-utils.c:907:glusterd_service_stop] 0-: Stopping gluster nfsd
>>>>> running in pid: 31726
>>>>> [2011-10-07 16:11:39.18531] I
>>>>> [glusterd-utils.c:2295:glusterd_nfs_pmap_deregister] 0-: De-registered
>>>>> MOUNTV3 successfully
>>>>> [2011-10-07 16:11:39.18745] I
>>>>> [glusterd-utils.c:2300:glusterd_nfs_pmap_deregister] 0-: De-registered
>>>>> MOUNTV1 successfully
>>>>> [2011-10-07 16:11:39.18906] I
>>>>> [glusterd-utils.c:2305:glusterd_nfs_pmap_deregister] 0-: De-registered NFSV3
>>>>> successfully
>>>>> [2011-10-07 16:11:39.138996] I
>>>>> [glusterd-op-sm.c:6660:glusterd_op_ac_send_commit_op] 0-glusterd: Sent op
>>>>> req to 0 peers
>>>>> [2011-10-07 16:11:39.139142] I
>>>>> [glusterd-op-sm.c:7077:glusterd_op_txn_complete] 0-glusterd: Cleared local
>>>>> lock
>>>>> [2011-10-07 16:11:39.141469] W
>>>>> [socket.c:1494:__socket_proto_state_machine] 0-socket.management: reading
>>>>> from socket failed. Error (Transport endpoint is not connected), peer
>>>>> (127.0.0.1:1023)
>>>>> [2011-10-07 16:11:39.163659] W
>>>>> [socket.c:1494:__socket_proto_state_machine] 0-socket.management: reading
>>>>> from socket failed. Error (Transport endpoint is not connected), peer
>>>>> (192.168.1.149:1021)
>>>>> [2011-10-07 16:11:39.337859] I [glusterd-utils.c:243:glusterd_lock]
>>>>> 0-glusterd: Cluster lock held by 887e7be3-f9c2-4ed7-8cbb-6b622c54322e
>>>>> [2011-10-07 16:11:39.337898] I
>>>>> [glusterd-handler.c:420:glusterd_op_txn_begin] 0-glusterd: Acquired local
>>>>> lock
>>>>> [2011-10-07 16:11:39.345983] I
>>>>> [glusterd-op-sm.c:6543:glusterd_op_ac_send_stage_op] 0-glusterd: Sent op req
>>>>> to 0 peers
>>>>> [2011-10-07 16:11:39.356967] I
>>>>> [glusterd-utils.c:907:glusterd_service_stop] 0-: Stopping gluster nfsd
>>>>> running in pid: 31946
>>>>> [2011-10-07 16:11:40.358624] I
>>>>> [glusterd-utils.c:2295:glusterd_nfs_pmap_deregister] 0-: De-registered
>>>>> MOUNTV3 successfully
>>>>> [2011-10-07 16:11:40.358842] I
>>>>> [glusterd-utils.c:2300:glusterd_nfs_pmap_deregister] 0-: De-registered
>>>>> MOUNTV1 successfully
>>>>> [2011-10-07 16:11:40.359057] I
>>>>> [glusterd-utils.c:2305:glusterd_nfs_pmap_deregister] 0-: De-registered NFSV3
>>>>> successfully
>>>>> [2011-10-07 16:11:40.470900] I
>>>>> [glusterd-op-sm.c:6660:glusterd_op_ac_send_commit_op] 0-glusterd: Sent op
>>>>> req to 0 peers
>>>>> [2011-10-07 16:11:40.471014] I
>>>>> [glusterd-op-sm.c:7077:glusterd_op_txn_complete] 0-glusterd: Cleared local
>>>>> lock
>>>>> [2011-10-07 16:11:40.473012] W
>>>>> [socket.c:1494:__socket_proto_state_machine] 0-socket.management: reading
>>>>> from socket failed. Error (Transport endpoint is not connected), peer
>>>>> (127.0.0.1:1020)
>>>>> [2011-10-07 16:11:40.513669] W
>>>>> [socket.c:1494:__socket_proto_state_machine] 0-socket.management: reading
>>>>> from socket failed. Error (Transport endpoint is not connected), peer
>>>>> (192.168.1.149:1019)
>>>>>
>>>>> (after about 190 seconds of quiet)
>>>>>
>>>>> [2011-10-07 16:14:51.321590] I [glusterd-utils.c:243:glusterd_lock]
>>>>> 0-glusterd: Cluster lock held by 887e7be3-f9c2-4ed7-8cbb-6b622c54322e
>>>>> [2011-10-07 16:14:51.321615] I
>>>>> [glusterd-handler.c:420:glusterd_op_txn_begin] 0-glusterd: Acquired local
>>>>> lock
>>>>> [2011-10-07 16:14:51.328047] I
>>>>> [glusterd-op-sm.c:6543:glusterd_op_ac_send_stage_op] 0-glusterd: Sent op req
>>>>> to 0 peers
>>>>> [2011-10-07 16:14:51.344550] I
>>>>> [glusterd-utils.c:907:glusterd_service_stop] 0-: Stopping gluster nfsd
>>>>> running in pid: 31971
>>>>> [2011-10-07 16:14:52.347314] I
>>>>> [glusterd-utils.c:2295:glusterd_nfs_pmap_deregister] 0-: De-registered
>>>>> MOUNTV3 successfully
>>>>> [2011-10-07 16:14:52.347597] I
>>>>> [glusterd-utils.c:2300:glusterd_nfs_pmap_deregister] 0-: De-registered
>>>>> MOUNTV1 successfully
>>>>> [2011-10-07 16:14:52.347796] I
>>>>> [glusterd-utils.c:2305:glusterd_nfs_pmap_deregister] 0-: De-registered NFSV3
>>>>> successfully
>>>>> [2011-10-07 16:14:52.463885] I
>>>>> [glusterd-op-sm.c:6660:glusterd_op_ac_send_commit_op] 0-glusterd: Sent op
>>>>> req to 0 peers
>>>>> [2011-10-07 16:14:52.464044] I
>>>>> [glusterd-op-sm.c:7077:glusterd_op_txn_complete] 0-glusterd: Cleared local
>>>>> lock
>>>>> [2011-10-07 16:14:52.466576] W
>>>>> [socket.c:1494:__socket_proto_state_machine] 0-socket.management: reading
>>>>> from socket failed. Error (Transport endpoint is not connected), peer
>>>>> (127.0.0.1:1021)
>>>>> [2011-10-07 16:14:52.492851] W
>>>>> [socket.c:1494:__socket_proto_state_machine] 0-socket.management: reading
>>>>> from socket failed. Error (Transport endpoint is not connected), peer
>>>>> (192.168.1.149:1020)
>>>>> [2011-10-07 16:14:52.680218] I [glusterd-utils.c:243:glusterd_lock]
>>>>> 0-glusterd: Cluster lock held by 887e7be3-f9c2-4ed7-8cbb-6b622c54322e
>>>>> [2011-10-07 16:14:52.680243] I
>>>>> [glusterd-handler.c:420:glusterd_op_txn_begin] 0-glusterd: Acquired local
>>>>> lock
>>>>> [2011-10-07 16:14:52.686914] I
>>>>> [glusterd-op-sm.c:6543:glusterd_op_ac_send_stage_op] 0-glusterd: Sent op req
>>>>> to 0 peers
>>>>> [2011-10-07 16:14:52.695690] I
>>>>> [glusterd-utils.c:907:glusterd_service_stop] 0-: Stopping gluster nfsd
>>>>> running in pid: 32242
>>>>> [2011-10-07 16:14:53.699548] I
>>>>> [glusterd-utils.c:2295:glusterd_nfs_pmap_deregister] 0-: De-registered
>>>>> MOUNTV3 successfully
>>>>> [2011-10-07 16:14:53.699743] I
>>>>> [glusterd-utils.c:2300:glusterd_nfs_pmap_deregister] 0-: De-registered
>>>>> MOUNTV1 successfully
>>>>> [2011-10-07 16:14:53.699899] I
>>>>> [glusterd-utils.c:2305:glusterd_nfs_pmap_deregister] 0-: De-registered NFSV3
>>>>> successfully
>>>>> [2011-10-07 16:14:53.810978] I
>>>>> [glusterd-op-sm.c:6660:glusterd_op_ac_send_commit_op] 0-glusterd: Sent op
>>>>> req to 0 peers
>>>>> [2011-10-07 16:14:53.811164] I
>>>>> [glusterd-op-sm.c:7077:glusterd_op_txn_complete] 0-glusterd: Cleared local
>>>>> lock
>>>>> [2011-10-07 16:14:53.813282] W
>>>>> [socket.c:1494:__socket_proto_state_machine] 0-socket.management: reading
>>>>> from socket failed. Error (Transport endpoint is not connected), peer
>>>>> (127.0.0.1:1019)
>>>>> [2011-10-07 16:14:53.840204] W
>>>>> [socket.c:1494:__socket_proto_state_machine] 0-socket.management: reading
>>>>> from socket failed. Error (Transport endpoint is not connected), peer
>>>>> (192.168.1.149:1018)
>>>>>
>>>>> Best,
>>>>
>>>> _______________________________________________
>>>> Gluster-users mailing list
>>>> Gluster-users at gluster.org
>>>> http://gluster.org/cgi-bin/mailman/listinfo/gluster-users
>>>
>



