[Gluster-users] gluster nfs-ganesha enable fails and is driving me crazy

Marco Antonio Carcano mc at carcano.ch
Tue Dec 8 08:46:51 UTC 2015


Hi,

I hope someone can help me with this issue, which is driving me crazy: I'm 
trying to set up a highly available NFS server using Gluster and 
NFS-Ganesha. All the required steps seem to work, except that when I issue 
the following command on glstr02.carcano.local:

gluster nfs-ganesha enable

I get the following output with an error:

Enabling NFS-Ganesha requires Gluster-NFS to be disabled across the 
trusted pool. Do you still want to continue?
  (y/n) y
This will take a few minutes to complete. Please wait ..
nfs-ganesha: failed: Commit failed on glstr01.carcano.local. Error: 
Failed to set up HA config for NFS-Ganesha. Please check the log file 
for details

and in the log files:

==> etc-glusterfs-glusterd.vol.log <==
[2015-12-07 20:42:43.888793] E [MSGID: 106062] 
[glusterd-op-sm.c:3647:glusterd_op_ac_lock] 0-management: Unable to 
acquire volname
[2015-12-07 20:42:44.244133] W [common-utils.c:1685:gf_string2boolean] 
(-->/usr/lib64/glusterfs/3.7.6/xlator/mgmt/glusterd.so(glusterd_op_stage_validate+0x143) 
[0x7fa3da411223] 
-->/usr/lib64/glusterfs/3.7.6/xlator/mgmt/glusterd.so(glusterd_op_stage_set_ganesha+0xa8) 
[0x7fa3da45f8d8] 
-->/usr/lib64/libglusterfs.so.0(gf_string2boolean+0x157) 
[0x7fa3e59efde7] ) 0-management: argument invalid [Invalid argument]
[2015-12-07 20:42:44.428305] I [MSGID: 106132] 
[glusterd-proc-mgmt.c:83:glusterd_proc_stop] 0-management: nfs already 
stopped
[2015-12-07 20:42:44.432079] I [MSGID: 106474] 
[glusterd-ganesha.c:403:check_host_list] 0-management: ganesha host 
found Hostname is glstr01.carcano.local
[2015-12-07 20:42:44.705564] I [MSGID: 106474] 
[glusterd-ganesha.c:349:is_ganesha_host] 0-management: ganesha host 
found Hostname is glstr01.carcano.local
[2015-12-07 20:42:48.525223] E [MSGID: 106470] 
[glusterd-ganesha.c:264:glusterd_op_set_ganesha] 0-management: Initial 
NFS-Ganesha set up failed
[2015-12-07 20:42:48.525320] E [MSGID: 106123] 
[glusterd-op-sm.c:5311:glusterd_op_ac_commit_op] 0-management: Commit of 
operation 'Volume (null)' failed: -1
[2015-12-07 20:42:48.541707] E [MSGID: 106062] 
[glusterd-op-sm.c:3710:glusterd_op_ac_unlock] 0-management: Unable to 
acquire volname
[2015-12-07 20:42:49.674994] I [MSGID: 106164] 
[glusterd-handshake.c:1248:__server_get_volume_info] 0-glusterd: 
Received get volume info req
[2015-12-07 20:42:44.705289] I [MSGID: 106474] 
[glusterd-ganesha.c:403:check_host_list] 0-management: ganesha host 
found Hostname is glstr01.carcano.local

I suppose the actual error is "E [MSGID: 106062] 
[glusterd-op-sm.c:3647:glusterd_op_ac_lock] 0-management: Unable to 
acquire volname"
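
If I read the glusterd log correctly, the commit step on glstr01 simply 
runs the HA setup script, and it is that script that fails. My next idea 
is to run it by hand with tracing to see which step stops; I assume the 
invocation glusterd uses is roughly the following (the --setup argument 
and the /etc/ganesha config directory are my guess from the 3.7 sources, 
so please correct me if that is wrong):

# trace the HA setup script directly on glstr01 to see which step fails
bash -x /usr/libexec/ganesha/ganesha-ha.sh --setup /etc/ganesha

Is that a sensible way to debug this?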

This is really driving me crazy: if I start the nfs-ganesha service on 
its own, configured to use the gluster volume, it works. If I issue the 
following command on another client:

showmount -e glstr01.carcano.local

it shows my "vol1", which is the exported volume.
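
For completeness, the client mount I plan to use once HA works would be 
something along these lines (assuming NFSv4 against the pseudo path and 
one of the virtual IPs from ganesha-ha.conf):

# mount the ganesha export through one of the VIPs
mount -t nfs -o vers=4 192.168.65.250:/vol1 /mnt/test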

I really don't know what I am doing wrong.

I'm using gluster 3.7.6-1 and nfs-ganesha-2.3.0-1 on CentOS 6.7. IPv6 is 
enabled, and NetworkManager as well as iptables and ip6tables are 
disabled. I tried with SELinux in both permissive and enforcing mode.

I installed the packages by issuing:

yum install -y glusterfs-ganesha pacemaker pacemaker-cli pacemaker-libs 
corosync
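
Besides the packages, I followed what I understood to be the HA 
prerequisites from the 3.7 admin guide on both nodes; I'm pasting them 
here in case I got one of them wrong (the hacluster password is of course 
a placeholder and the secret.pem path is the one the guide mentions, so 
treat both as my assumptions):

# pcsd must be running on both nodes
service pcsd start
chkconfig pcsd on

# same password for the hacluster user on both nodes, then authenticate them
echo 'hacluster-password' | passwd --stdin hacluster
pcs cluster auth glstr01.carcano.local glstr02.carcano.local -u hacluster -p 'hacluster-password'

# passwordless SSH with the key ganesha-ha.sh expects
ssh-keygen -f /var/lib/glusterd/nfs/secret.pem -t rsa -N ''
ssh-copy-id -i /var/lib/glusterd/nfs/secret.pem.pub root@glstr02.carcano.local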

Here are the configuration files:

/etc/hosts
127.0.0.1   localhost.localdomain localhost.localdomain localhost4 localhost4.localdomain4 localhost
::1         localhost.localdomain localhost.localdomain localhost6 localhost6.localdomain6 localhost
192.168.66.250 glstr01.carcano.local glstr01
192.168.66.251 glstr02.carcano.local glstr02
192.168.65.250 glstr01v.carcano.local
192.168.65.251 glstr02v.carcano.local

/etc/ganesha/ganesha-ha.conf

HA_NAME="ganesha-ha-360"
HA_VOL_SERVER="glstr01.carcano.local"
HA_CLUSTER_NODES="glstr01.carcano.local,glstr02.carcano.local"
VIP_server1="192.168.65.250"
VIP_server2="192.168.65.251"
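
One thing I am not sure about is the VIP_ lines: the sample in the docs 
uses VIP_server1/VIP_server2, but I don't know whether those keys have to 
match the node names listed in HA_CLUSTER_NODES, i.e. something like this 
(just my guess, with the dots turned into underscores):

VIP_glstr01_carcano_local="192.168.65.250"
VIP_glstr02_carcano_local="192.168.65.251"

Can someone confirm which form ganesha-ha.sh expects?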

/etc/ganesha/ganesha.conf

EXPORT
{
     Export_Id = 77;
     Path = /vol1;
     Pseudo = /vol1;
     Access_Type = RW;
     FSAL {
         Name = GLUSTER;
         Hostname = localhost;
         Volume = vol1;
     }
}
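
By the way, this ganesha.conf is the one I used for the standalone test; 
my understanding is that once the HA setup is in place the export is 
supposed to be generated per volume with something like the following 
(again, this is just what I gathered from the docs):

# let glusterd generate the export block for the volume
gluster volume set vol1 ganesha.enable on

Is that correct, or does the manual EXPORT block get in the way?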

gluster volume info

Volume Name: gluster_shared_storage
Type: Replicate
Volume ID: a899b299-a43b-4119-9411-4c68fd72a550
Status: Started
Number of Bricks: 1 x 2 = 2
Transport-type: tcp
Bricks:
Brick1: glstr02:/var/lib/glusterd/ss_brick
Brick2: glstr01.carcano.local:/var/lib/glusterd/ss_brick
Options Reconfigured:
nfs.disable: on
performance.readdir-ahead: on
cluster.enable-shared-storage: enable

Volume Name: vol1
Type: Replicate
Volume ID: d5632acb-972c-4840-9dd2-527723312419
Status: Started
Number of Bricks: 1 x 2 = 2
Transport-type: tcp
Bricks:
Brick1: glstr01:/gluster/disk1/contents
Brick2: glstr02:/gluster/disk1/contents
Options Reconfigured:
nfs.disable: on
performance.readdir-ahead: on
cluster.enable-shared-storage: enable
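
Another thing I could not find clearly documented is whether the shared 
storage volume has to be mounted locally on both nodes before running the 
enable command; I assume it would show up like this (the mount point is 
the one I have seen mentioned for 3.7, so it may be different on CentOS 6):

# check that the shared storage meta-volume is mounted
mount | grep gluster_shared_storage
df -h /var/run/gluster/shared_storage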

gluster volume status
Status of volume: gluster_shared_storage
Gluster process                                          TCP Port  RDMA Port  Online  Pid
------------------------------------------------------------------------------
Brick glstr02:/var/lib/glusterd/ss_brick                 49152     0          Y       1898
Brick glstr01.carcano.local:/var/lib/glusterd/ss_brick   49152     0          Y       1934
Self-heal Daemon on localhost                            N/A       N/A        Y       2125
Self-heal Daemon on glstr02                              N/A       N/A        Y       2038

Task Status of Volume gluster_shared_storage
------------------------------------------------------------------------------ 

There are no active volume tasks

Status of volume: vol1
Gluster process                             TCP Port  RDMA Port  Online  Pid
------------------------------------------------------------------------------
Brick glstr01:/gluster/disk1/contents       49153     0          Y       2094
Brick glstr02:/gluster/disk1/contents       49153     0          Y       1945
Self-heal Daemon on localhost               N/A       N/A        Y       2125
Self-heal Daemon on glstr02                 N/A       N/A        Y       2038

Task Status of Volume vol1
------------------------------------------------------------------------------ 

There are no active volume tasks

/usr/libexec/ganesha/ganesha-ha.sh --status
Error: cluster is not currently running on this node
Error: cluster is not currently running on this node
Error: cluster is not currently running on this node
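
I suppose this simply reflects that the pacemaker cluster was never 
created on this node; I assume the equivalent direct checks would be the 
standard pacemaker tools (just my guess as to what ganesha-ha.sh relies on):

# check pacemaker/corosync state directly
pcs status
crm_mon -1

But I wanted to ask here before poking at pacemaker by hand.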

Kind regards

Marco






