[Gluster-users] nfs/server not loading
Jesse Caldwell
jesse.caldwell at Colorado.EDU
Thu Aug 19 19:36:26 UTC 2010
hi all,
I just built glusterfs-nfs_beta_rc10 on FreeBSD 8.1. I configured
glusterfs as follows:
./configure --disable-fuse-client --prefix=/usr/local/glusterfs
I also ran this on the source tree before building, since FreeBSD does not
define EBADFD:
for file in $(find . -type f -exec grep -l EBADFD {} \;); do
    sed -i -e 's/EBADFD/EBADF/g' ${file};
done
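Assuming the loop caught every occurrence, re-running the same find
afterwards should turn up nothing:

find . -type f -exec grep -l EBADFD {} \;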
I used glusterfs-volgen to create some config files:
glusterfs-volgen -n export --raid 1 --nfs 10.0.0.10:/pool 10.0.0.20:/pool
glusterfsd will start up with 10.0.0.10-export-export.vol or
10.0.0.20-export-export.vol without any complaints. When I try to start
the nfs server, I get:
nfs1:~ $ sudo /usr/local/glusterfs/sbin/glusterfsd -f ./export-tcp.vol
Volume 'nfsxlator', line 31: type 'nfs/server' is not valid or not found on this machine
error in parsing volume file ./export-tcp.vol
exiting
The module is present, though, and truss shows that glusterfsd is finding
and opening it:
open("/usr/local/glusterfs/lib/glusterfs/nfs_beta_rc10/xlator/nfs/server.so",O_RDONLY,0106) = 7 (0x7)
nfs/server.so doesn't seem to be tragically mangled:
nfs1:~ $ ldd /usr/local/glusterfs/lib/glusterfs/nfs_beta_rc10/xlator/nfs/server.so
/usr/local/glusterfs/lib/glusterfs/nfs_beta_rc10/xlator/nfs/server.so:
libglrpcsvc.so.0 => /usr/local/glusterfs/lib/libglrpcsvc.so.0 (0x800c00000)
libglusterfs.so.0 => /usr/local/glusterfs/lib/libglusterfs.so.0 (0x800d17000)
libthr.so.3 => /lib/libthr.so.3 (0x800e6a000)
libc.so.7 => /lib/libc.so.7 (0x800647000)
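ldd is happy with its dependencies, anyway. My guess at this point is that
dlsym() rather than dlopen() is what fails; I believe the loader looks up the
xlator's init/fini symbols after opening it (that is an assumption on my
part, I have not read the loader code), so the next thing I planned to check
is whether they are actually exported:

nfs1:~ $ nm -D /usr/local/glusterfs/lib/glusterfs/nfs_beta_rc10/xlator/nfs/server.so | egrep ' (init|fini)$'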
Is this a FreeBSD-ism, or did I screw up something obvious? The config
file I am using is nothing special, but here it is:
nfs1:~ $ grep -v '^#' export-tcp.vol
volume 10.0.0.20-1
  type protocol/client
  option transport-type tcp
  option remote-host 10.0.0.20
  option transport.socket.nodelay on
  option transport.remote-port 6996
  option remote-subvolume brick1
end-volume

volume 10.0.0.10-1
  type protocol/client
  option transport-type tcp
  option remote-host 10.0.0.10
  option transport.socket.nodelay on
  option transport.remote-port 6996
  option remote-subvolume brick1
end-volume

volume mirror-0
  type cluster/replicate
  subvolumes 10.0.0.10-1 10.0.0.20-1
end-volume

volume nfsxlator
  type nfs/server
  subvolumes mirror-0
  option rpc-auth.addr.mirror-0.allow *
end-volume
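In case it helps anyone reproduce this, a stripped-down volfile with a single
protocol/client under nfs/server should (I would expect) hit the same error,
since the failure seems to be in loading the xlator itself rather than
anything about the replicate setup:

volume 10.0.0.10-1
  type protocol/client
  option transport-type tcp
  option remote-host 10.0.0.10
  option transport.socket.nodelay on
  option transport.remote-port 6996
  option remote-subvolume brick1
end-volume

volume nfsxlator
  type nfs/server
  subvolumes 10.0.0.10-1
  option rpc-auth.addr.10.0.0.10-1.allow *
end-volume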
thanks,
jesse