[Gluster-users] cluster/afr issue with "option volume-filename.default" on server volfile
Bernard Li
bernard at vanhpc.org
Tue Jun 22 18:55:58 UTC 2010
Hi all:
I have a simple cluster/afr setup but am having trouble mounting the
volume when the client retrieves the default volfile from the server
via the "volume-filename.default" option.
Here are the volfiles:
[server]

volume posix
  type storage/posix
  option directory /export/gluster
end-volume

volume locks
  type features/locks
  subvolumes posix
end-volume

volume brick
  type performance/io-threads
  option thread-count 8
  subvolumes locks
end-volume

volume server
  type protocol/server
  option transport-type tcp
  option auth.addr.brick.allow *
  option listen-port 6996
  option volume-filename.default /etc/glusterfs/glusterfs.vol
  subvolumes brick
end-volume
[client]

volume gluster1
  type protocol/client
  option transport-type tcp
  option remote-host 192.168.1.10
  option remote-port 6996
  option remote-subvolume brick
end-volume

volume gluster2
  type protocol/client
  option transport-type tcp
  option remote-host 192.168.1.11
  option remote-port 6996
  option remote-subvolume brick
end-volume

volume gluster
  type cluster/afr
  subvolumes gluster1 gluster2
end-volume

volume writebehind
  type performance/write-behind
  option cache-size 4MB
  subvolumes gluster
end-volume

volume io-cache
  type performance/io-cache
  option cache-size 1GB
  subvolumes writebehind
end-volume
When I mount via `glusterfs -s 192.168.1.10 /mnt/glusterfs` (on
192.168.1.10), I get the following in the logs:
[2010-06-22 11:37:55] N [glusterfsd.c:1408:main] glusterfs: Successfully started
[2010-06-22 11:37:55] N [client-protocol.c:6288:client_setvolume_cbk] gluster1: Connected to 192.168.1.10:6996, attached to remote volume 'brick'.
[2010-06-22 11:37:55] N [afr.c:2636:notify] gluster: Subvolume 'gluster1' came back up; going online.
[2010-06-22 11:37:55] N [client-protocol.c:6288:client_setvolume_cbk] gluster1: Connected to 192.168.1.10:6996, attached to remote volume 'brick'.
[2010-06-22 11:37:55] N [afr.c:2636:notify] gluster: Subvolume 'gluster1' came back up; going online.
[2010-06-22 11:37:55] N [fuse-bridge.c:2953:fuse_init] glusterfs-fuse: FUSE inited with protocol versions: glusterfs 7.13 kernel 7.10
Note that the log only mentions "gluster1" and says nothing about
"gluster2". If I touch a file in /mnt/glusterfs, the file shows up
only on the gluster1 backend, not on gluster2.
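To double-check what actually lands on the bricks, this is the kind of
inspection I have in mind (a sketch; it assumes the bricks live under
/export/gluster as configured above, that this version maintains the
trusted.afr.* changelog xattrs, and "testfile" is just an example name):

  # run on each of 192.168.1.10 and 192.168.1.11
  ls -l /export/gluster
  # dump the AFR changelog xattrs for the test file
  getfattr -d -m trusted.afr -e hex /export/gluster/testfile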
When I mount via `glusterfs -s 192.168.1.11 /mnt/glusterfs` (on
192.168.1.10), I get the following in the logs:
[2010-06-22 11:46:24] N [glusterfsd.c:1408:main] glusterfs: Successfully started
[2010-06-22 11:46:24] N [fuse-bridge.c:2953:fuse_init] glusterfs-fuse: FUSE inited with protocol versions: glusterfs 7.13 kernel 7.10
[2010-06-22 11:46:30] W [fuse-bridge.c:725:fuse_attr_cbk] glusterfs-fuse: 2: LOOKUP() / => -1 (Transport endpoint is not connected)
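To get more detail on this failing case, I can remount with verbose
logging and see what the client actually receives (a sketch, assuming
the usual -L/--log-level and -l/--log-file options of the glusterfs
binary):

  glusterfs -s 192.168.1.11 -L DEBUG -l /var/log/glusterfs/debug.log /mnt/glusterfs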
When I mount directly via the volfile, as in `glusterfs -f
/etc/glusterfs/glusterfs.vol /mnt/glusterfs` (on 192.168.1.10),
everything works as expected. Here's the log:
[2010-06-22 11:39:47] N [glusterfsd.c:1408:main] glusterfs: Successfully started
[2010-06-22 11:39:47] N [client-protocol.c:6288:client_setvolume_cbk] gluster1: Connected to 192.168.1.10:6996, attached to remote volume 'brick'.
[2010-06-22 11:39:47] N [afr.c:2636:notify] gluster: Subvolume 'gluster1' came back up; going online.
[2010-06-22 11:39:47] N [client-protocol.c:6288:client_setvolume_cbk] gluster1: Connected to 192.168.1.10:6996, attached to remote volume 'brick'.
[2010-06-22 11:39:47] N [afr.c:2636:notify] gluster: Subvolume 'gluster1' came back up; going online.
[2010-06-22 11:39:47] N [fuse-bridge.c:2953:fuse_init] glusterfs-fuse: FUSE inited with protocol versions: glusterfs 7.13 kernel 7.10
[2010-06-22 11:39:47] N [client-protocol.c:6288:client_setvolume_cbk] gluster2: Connected to 192.168.1.11:6996, attached to remote volume 'brick'.
[2010-06-22 11:39:47] N [client-protocol.c:6288:client_setvolume_cbk] gluster2: Connected to 192.168.1.11:6996, attached to remote volume 'brick'.
[2010-06-22 11:41:03] N [fuse-bridge.c:3143:fuse_thread_proc] glusterfs-fuse: terminating upon getting ENODEV when reading /dev/fuse
Is this a known issue? Or am I doing something unsupported?
Thanks,
Bernard