[Gluster-users] nfs-ganesha HA with arbiter volume
Tiemen Ruiten
t.ruiten at rdmedia.com
Mon Sep 21 08:26:57 UTC 2015
Hello Soumya, Kaleb, list,
On Friday I created the gluster_shared_storage volume manually; I have now
also tried it with the command you supplied, but both approaches give the
same result.
From etc-glusterfs-glusterd.vol.log on the node where I issued the command:
[2015-09-21 07:59:47.756845] I [MSGID: 106474]
[glusterd-ganesha.c:403:check_host_list] 0-management: ganesha host found
Hostname is cobalt
[2015-09-21 07:59:48.071755] I [MSGID: 106474]
[glusterd-ganesha.c:349:is_ganesha_host] 0-management: ganesha host found
Hostname is cobalt
[2015-09-21 07:59:48.653879] E [MSGID: 106470]
[glusterd-ganesha.c:264:glusterd_op_set_ganesha] 0-management: Initial
NFS-Ganesha set up failed
[2015-09-21 07:59:48.653912] E [MSGID: 106123]
[glusterd-syncop.c:1404:gd_commit_op_phase] 0-management: Commit of
operation 'Volume (null)' failed on localhost : Failed to set up HA config
for NFS-Ganesha. Please check the log file for details
[2015-09-21 07:59:45.402458] I [MSGID: 106006]
[glusterd-svc-mgmt.c:323:glusterd_svc_common_rpc_notify] 0-management: nfs
has disconnected from glusterd.
[2015-09-21 07:59:48.071578] I [MSGID: 106474]
[glusterd-ganesha.c:403:check_host_list] 0-management: ganesha host found
Hostname is cobalt
From etc-glusterfs-glusterd.vol.log on the other node:
[2015-09-21 08:12:50.111877] E [MSGID: 106062]
[glusterd-op-sm.c:3698:glusterd_op_ac_unlock] 0-management: Unable to
acquire volname
[2015-09-21 08:14:50.548087] E [MSGID: 106062]
[glusterd-op-sm.c:3635:glusterd_op_ac_lock] 0-management: Unable to acquire
volname
[2015-09-21 08:14:50.654746] I [MSGID: 106132]
[glusterd-proc-mgmt.c:83:glusterd_proc_stop] 0-management: nfs already
stopped
[2015-09-21 08:14:50.655095] I [MSGID: 106474]
[glusterd-ganesha.c:403:check_host_list] 0-management: ganesha host found
Hostname is cobalt
[2015-09-21 08:14:51.287156] E [MSGID: 106062]
[glusterd-op-sm.c:3698:glusterd_op_ac_unlock] 0-management: Unable to
acquire volname
From etc-glusterfs-glusterd.vol.log on the arbiter node:
[2015-09-21 08:18:50.934713] E [MSGID: 101075]
[common-utils.c:3127:gf_is_local_addr] 0-management: error in getaddrinfo:
Name or service not known
[2015-09-21 08:18:51.504694] E [MSGID: 106062]
[glusterd-op-sm.c:3698:glusterd_op_ac_unlock] 0-management: Unable to
acquire volname
I have put the hostnames of all servers in my /etc/hosts file, including
the arbiter node.
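For reference, a quick way to confirm resolution on each node (neon being the
arbiter's hostname; just a rough check) is:
  getent hosts cobalt iron neon
which should print the /etc/hosts entry for each of the three hostnames.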
On 18 September 2015 at 16:52, Soumya Koduri <skoduri at redhat.com> wrote:
> Hi Tiemen,
>
> One of the prerequisites for setting up nfs-ganesha HA is to create
> and mount the shared_storage volume. Use the CLI below for that:
>
> "gluster volume set all cluster.enable-shared-storage enable"
>
> It creates the volume and mounts it on all the nodes (including the
> arbiter node). Note that this volume will be mounted on all the nodes of
> the gluster storage pool, even though a node (here, the arbiter) may not
> be part of the nfs-ganesha cluster.
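>
> To verify afterwards (a rough check; the mount point shown is the default
> one), you can run on each node:
>
>   gluster volume info gluster_shared_storage
>   mount | grep /var/run/gluster/shared_storage
>
> The volume should be listed with Status: Started and the mount should be
> present on every node in the pool.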
>
> So instead of manually creating those directory paths, please use the
> above CLI and try re-configuring the setup.
>
> Thanks,
> Soumya
>
> On 09/18/2015 07:29 PM, Tiemen Ruiten wrote:
>
>> Hello Kaleb,
>>
>> I don't:
>>
>> # Name of the HA cluster created.
>> # must be unique within the subnet
>> HA_NAME="rd-ganesha-ha"
>> #
>> # The gluster server from which to mount the shared data volume.
>> HA_VOL_SERVER="iron"
>> #
>> # N.B. you may use short names or long names; you may not use IP addrs.
>> # Once you select one, stay with it as it will be mildly unpleasant to
>> # clean up if you switch later on. Ensure that all names - short and/or
>> # long - are in DNS or /etc/hosts on all machines in the cluster.
>> #
>> # The subset of nodes of the Gluster Trusted Pool that form the ganesha
>> # HA cluster. Hostname is specified.
>> HA_CLUSTER_NODES="cobalt,iron"
>> #HA_CLUSTER_NODES="server1.lab.redhat.com,server2.lab.redhat.com,..."
>> #
>> # Virtual IPs for each of the nodes specified above.
>> VIP_server1="10.100.30.101"
>> VIP_server2="10.100.30.102"
>> #VIP_server1_lab_redhat_com="10.0.2.1"
>> #VIP_server2_lab_redhat_com="10.0.2.2"
>>
>> Hosts cobalt and iron are the data nodes; the arbiter IP/hostname (neon)
>> isn't mentioned anywhere in this config file.
>>
>>
>> On 18 September 2015 at 15:56, Kaleb S. KEITHLEY <kkeithle at redhat.com> wrote:
>>
>> On 09/18/2015 09:46 AM, Tiemen Ruiten wrote:
>> > Hello,
>> >
>> > I have a Gluster cluster with a single replica 3, arbiter 1 volume (so
>> > two nodes with actual data and one arbiter node). I would like to set up
>> > NFS-Ganesha HA for this volume, but I'm having some difficulties.
>> >
>> > - I needed to create the directory /var/run/gluster/shared_storage
>> > manually on all nodes (workaround sketched below), or the command
>> > 'gluster nfs-ganesha enable' would fail with the following error:
>> > [2015-09-18 13:13:34.690416] E [MSGID: 106032]
>> > [glusterd-ganesha.c:708:pre_setup] 0-THIS->name: mkdir() failed on path
>> > /var/run/gluster/shared_storage/nfs-ganesha, [No such file or directory]
>> >
>> > - Then I found out that the command connects to the arbiter node as
>> > well, but obviously I don't want to set up NFS-Ganesha there. Is it
>> > actually possible to set up NFS-Ganesha HA with an arbiter node? If it's
>> > possible, is there any documentation on how to do that?
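>> >
>> > (For completeness, the manual workaround for the first issue was simply
>> > to run, on each node:
>> >   mkdir -p /var/run/gluster/shared_storage
>> > before retrying 'gluster nfs-ganesha enable'.)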
>> >
>>
>> Please send the /etc/ganesha/ganesha-ha.conf file you're using.
>>
>> You have probably included the arbiter in your HA config; that would
>> be a mistake.
>>
>> --
>>
>> Kaleb
>>
>> --
>> Tiemen Ruiten
>> Systems Engineer
>> R&D Media
>>
>>
--
Tiemen Ruiten
Systems Engineer
R&D Media