[Gluster-users] Gluster client v3.0.4-1 appears to be terminating and then restarting itself.
Elbert Lai
theelbster at gmail.com
Mon Apr 11 17:23:18 UTC 2011
My servers and clients are all running glusterfs v3.0.4-1 in this
environment, and on some hosts the client appears to be restarting
itself at regular intervals. I've found occasional log lines like the
ones below in the client log, which suggest the client is hitting an
error and restarting. The problem is that each time this occurs, the
previous client process doesn't exit and a new one gets started, so one
extra client process is left running. So far it doesn't look like this
causes a real production problem, but it's setting off my Nagios alerts,
which monitor the number of processes that should be running.
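To illustrate the symptom the alerts pick up: after a restart event, `ps` shows two client processes for the same mountpoint. A minimal sketch of that check, using canned `ps` output in place of the real process table (the mountpoint and volfile path are taken from my config below, the PIDs are made up):

```shell
# Simulated `ps` output showing the symptom: two glusterfs client
# processes alive for the same mountpoint after a restart event.
ps_output='root  1234  glusterfs -f /etc/glusterfs/glusterfs-client.vol /mnt/gluster
root  5678  glusterfs -f /etc/glusterfs/glusterfs-client.vol /mnt/gluster'

# Count client processes for this mount; in practice pipe real `ps aux`
# output through the same grep.
count=$(printf '%s\n' "$ps_output" | grep -c 'glusterfs-client.vol')
if [ "$count" -gt 1 ]; then
  echo "WARNING: $count client processes for one mount"
fi
```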
[2011-04-06 01:22:23] N [fuse-bridge.c:3140:fuse_thread_proc]
glusterfs-fuse: terminating upon getting ENODEV when reading /dev/fuse
[2011-04-11 01:18:33] N [fuse-bridge.c:3140:fuse_thread_proc]
glusterfs-fuse: terminating upon getting EBADF when reading /dev/fuse
However, these lines don't always accompany a client restart. Does anyone
know whether this has been fixed in the current version of Gluster? Is it
even a bug? I've done some searching, but I haven't found anything
conclusive. Alternatively, is it something that can be fixed via
configuration?
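For anyone trying to reproduce this, here is how I've been tallying the terminating events by error code. The sketch below runs over the two sample lines quoted above; on a real host you would pipe the client log file through the same grep:

```shell
# The two sample lines quoted above; substitute the real client log,
# e.g. `grep ... /var/log/glusterfs/<mountpoint>.log` (path varies by setup).
log_lines='[2011-04-06 01:22:23] N [fuse-bridge.c:3140:fuse_thread_proc] glusterfs-fuse: terminating upon getting ENODEV when reading /dev/fuse
[2011-04-11 01:18:33] N [fuse-bridge.c:3140:fuse_thread_proc] glusterfs-fuse: terminating upon getting EBADF when reading /dev/fuse'

# Tally terminating events per errno to see which error dominates.
summary=$(printf '%s\n' "$log_lines" | grep -o 'getting [A-Z]*' | sort | uniq -c)
echo "$summary"
```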
I've included my client config file in case it helps.
# Gluster Client configuration /etc/glusterfs/glusterfs-client.vol

volume remote1
  type protocol/client
  option transport-type tcp
  option transport.socket.nodelay on
  option remote-host host1
  option remote-subvolume brick
end-volume

volume remote2
  type protocol/client
  option transport-type tcp
  option transport.socket.nodelay on
  option remote-host host2
  option remote-subvolume brick
end-volume

volume remote3
  type protocol/client
  option transport-type tcp
  option transport.socket.nodelay on
  option remote-host host3
  option remote-subvolume brick
end-volume

volume remote4
  type protocol/client
  option transport-type tcp
  option transport.socket.nodelay on
  option remote-host host4
  option remote-subvolume brick
end-volume

volume remote5
  type protocol/client
  option transport-type tcp
  option transport.socket.nodelay on
  option remote-host host5
  option remote-subvolume brick
end-volume

volume remote6
  type protocol/client
  option transport-type tcp
  option transport.socket.nodelay on
  option remote-host host6
  option remote-subvolume brick
end-volume

volume remote7
  type protocol/client
  option transport-type tcp
  option transport.socket.nodelay on
  option remote-host host7
  option remote-subvolume brick
end-volume

volume remote8
  type protocol/client
  option transport-type tcp
  option transport.socket.nodelay on
  option remote-host host8
  option remote-subvolume brick
end-volume

volume replicate1
  type cluster/replicate
  subvolumes remote1 remote2
end-volume

volume replicate2
  type cluster/replicate
  subvolumes remote3 remote4
end-volume

volume replicate3
  type cluster/replicate
  subvolumes remote5 remote6
end-volume

volume replicate4
  type cluster/replicate
  subvolumes remote7 remote8
end-volume

volume distribute
  type cluster/distribute
  subvolumes replicate1 replicate2 replicate3 replicate4
end-volume

volume writebehind
  type performance/write-behind
  #option aggregate-size 128KB
  #option window-size 1MB
  option cache-size 512MB
  subvolumes distribute
end-volume

volume cache
  type performance/io-cache
  option cache-size 512MB
  subvolumes writebehind
end-volume

volume stat-prefetch
  type performance/stat-prefetch
  subvolumes cache
end-volume

volume readahead
  type performance/read-ahead
  option page-count 8
  subvolumes stat-prefetch
end-volume
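One quick sanity check I run after editing the volfile is making sure every `volume` stanza has a matching `end-volume`. A minimal sketch, using a trimmed two-stanza sample in place of the full file above:

```shell
# Trimmed sample standing in for /etc/glusterfs/glusterfs-client.vol;
# point the greps at the real file in practice.
sample=$(mktemp)
cat > "$sample" <<'EOF'
volume remote1
  type protocol/client
  option transport-type tcp
  option remote-host host1
  option remote-subvolume brick
end-volume
volume replicate1
  type cluster/replicate
  subvolumes remote1
end-volume
EOF

# Count opening and closing lines; they must match for a valid volfile.
opens=$(grep -c '^volume ' "$sample")
closes=$(grep -c '^end-volume' "$sample")
if [ "$opens" -eq "$closes" ]; then
  echo "balanced: $opens stanzas"
else
  echo "unbalanced: $opens open vs $closes close"
fi
rm -f "$sample"
```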
Thanks,
-elb-