[Gluster-devel] How to get rid of NFS on 3.7.0?
Niels de Vos
ndevos at redhat.com
Fri May 15 15:07:38 UTC 2015
On Fri, May 15, 2015 at 02:41:05PM +0000, Emmanuel Dreyfus wrote:
> Hi
>
> Upgrading a production machine to 3.7.0, I face this problem:
> baril# gluster volume status
> Another transaction is in progress. Please try again after sometime.
>
> glusterd logs say:
> [2015-05-15 14:34:50.560251] E [glusterd-utils.c:164:glusterd_lock]
> 0-management: Unable to get lock for uuid:
> 85eb78cd-8ffa-49ca-b3e7-d5030bc3124d, lock held by:
> 85eb78cd-8ffa-49ca-b3e7-d5030bc3124d
>
> I am not sure how I can discover who is 85eb78cd-8ffa-49ca-b3e7-d5030bc3124d
> but I also have a lot of this filling my logs:
>
> [2015-05-15 14:39:11.488984] W [socket.c:642:__socket_rwv] 0-nfs:
> readv on /var/run/gluster/bc6a69125824a8fbb766577137d102d6.socket
> failed (No message available)
> [2015-05-15 14:39:14.510314] W [socket.c:3059:socket_connect] 0-nfs:
> Ignore failed connection attempt on
> /var/run/gluster/bc6a69125824a8fbb766577137d102d6.socket,
> (No such file or directory)
>
> The volume has nfs.disable set, but how do I really get rid of it?
nfs.disable should be set for all volumes on all your Gluster servers.
If that is indeed done, you should be able to kill the NFS-server.
# cat /var/lib/glusterd/nfs/run/nfs.pid | xargs kill
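Since nfs.disable is a per-volume option, a minimal sketch for switching it on for every volume (wrapped in a function purely for illustration; `gluster volume list` and `gluster volume set` are standard CLI calls, and this assumes the gluster CLI can reach glusterd):

```shell
# Sketch: turn off the built-in NFS server on every volume.
# Assumes the gluster CLI is on PATH and glusterd is reachable;
# the function name is illustrative, not part of any tool.
disable_gluster_nfs() {
    for vol in $(gluster volume list); do
        gluster volume set "$vol" nfs.disable on
    done
}
```

Once that is done on all servers, any still-running gluster NFS process can be stopped via the pid file as shown above.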
I do not know why the NFS-server would still be running when nfs.disable
is set. Maybe glusterd failed to communicate with the NFS-server, so it
never received the instruction to shut down?
HTH,
Niels
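On the earlier question of identifying who 85eb78cd-8ffa-49ca-b3e7-d5030bc3124d is: a glusterd node records its own identity in /var/lib/glusterd/glusterd.info, and peer UUIDs appear in `gluster peer status`. Note the log shows the same UUID as requester and holder, which suggests the lock is held by that node's own glusterd. A minimal sketch, assuming the default glusterd path; the helper name is made up:

```shell
# Sketch: print the UUID a glusterd node identifies itself with.
# Default location is /var/lib/glusterd/glusterd.info; the file
# holds simple KEY=value lines, one of them "UUID=<uuid>".
# The helper name is hypothetical.
local_glusterd_uuid() {
    info=${1:-/var/lib/glusterd/glusterd.info}
    sed -n 's/^UUID=//p' "$info"
}
```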