[Gluster-users] glusterfs, ganesh, and pcs rules

Hetz Ben Hamo hetz at hetz.biz
Sun Dec 24 09:33:16 UTC 2017


I checked, and I have it like this:

# Name of the HA cluster created.
# must be unique within the subnet
HA_NAME="ganesha-nfs"
#
# The gluster server from which to mount the shared data volume.
HA_VOL_SERVER="tlxdmz-nfs1"
#
# N.B. you may use short names or long names; you may not use IP addrs.
# Once you select one, stay with it as it will be mildly unpleasant to
# clean up if you switch later on. Ensure that all names - short and/or
# long - are in DNS or /etc/hosts on all machines in the cluster.
#
# The subset of nodes of the Gluster Trusted Pool that form the ganesha
# HA cluster. Hostname is specified.
HA_CLUSTER_NODES="tlxdmz-nfs1,tlxdmz-nfs2"
#HA_CLUSTER_NODES="server1.lab.redhat.com,server2.lab.redhat.com,..."
#
# Virtual IPs for each of the nodes specified above.
VIP_server1="10.X.X.181"
VIP_server2="10.X.X.182"
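
For comparison, here is a sketch of the same VIP entries with the key suffix matching each node name from HA_CLUSTER_NODES instead of the literal server1/server2 placeholders (an assumption based on the reply quoted below; the 10.X.X addresses are the masked values from above, kept as-is):

```
# Virtual IPs keyed by node name, matching HA_CLUSTER_NODES above
VIP_tlxdmz-nfs1="10.X.X.181"
VIP_tlxdmz-nfs2="10.X.X.182"
```

With VIP_server1/VIP_server2, the HA setup scripts find no VIP for the actual node names, which would leave the IPaddr resources without their mandatory ip parameter, as in the pcs status output below.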

Thanks,
*Hetz Ben Hamo*
You are welcome to visit the consulting blog <http://linvirtstor.net/> or my
personal blog <http://benhamo.org>

On Thu, Dec 21, 2017 at 3:47 PM, Renaud Fortier <
Renaud.Fortier at fsaa.ulaval.ca> wrote:

> Hi,
> In your ganesha-ha.conf, do you have your virtual IP addresses set like
> this:
>
> VIP_tlxdmz-nfs1="192.168.22.33"
> VIP_tlxdmz-nfs2="192.168.22.34"
>
> Renaud
>
> From: gluster-users-bounces at gluster.org [mailto:gluster-users-bounces at gluster.org] On behalf of Hetz Ben Hamo
> Sent: December 20, 2017 04:35
> To: gluster-users at gluster.org
> Subject: [Gluster-users] glusterfs, ganesh, and pcs rules
>
> Hi,
>
> I've just recreated the Gluster setup with NFS-Ganesha. GlusterFS version 3.8.
>
> When I run the command gluster nfs-ganesha enable, it returns success.
> However, looking at the pcs status, I see this:
>
> [root at tlxdmz-nfs1 ~]# pcs status
> Cluster name: ganesha-nfs
> Stack: corosync
> Current DC: tlxdmz-nfs2 (version 1.1.16-12.el7_4.5-94ff4df) - partition
> with quorum
> Last updated: Wed Dec 20 09:20:44 2017
> Last change: Wed Dec 20 09:19:27 2017 by root via cibadmin on tlxdmz-nfs1
>
> 2 nodes configured
> 8 resources configured
>
> Online: [ tlxdmz-nfs1 tlxdmz-nfs2 ]
>
> Full list of resources:
>
>  Clone Set: nfs_setup-clone [nfs_setup]
>      Started: [ tlxdmz-nfs1 tlxdmz-nfs2 ]
>  Clone Set: nfs-mon-clone [nfs-mon]
>      Started: [ tlxdmz-nfs1 tlxdmz-nfs2 ]
>  Clone Set: nfs-grace-clone [nfs-grace]
>      Started: [ tlxdmz-nfs1 tlxdmz-nfs2 ]
>  tlxdmz-nfs1-cluster_ip-1       (ocf::heartbeat:IPaddr):        Stopped
>  tlxdmz-nfs2-cluster_ip-1       (ocf::heartbeat:IPaddr):        Stopped
>
> Failed Actions:
> * tlxdmz-nfs1-cluster_ip-1_monitor_0 on tlxdmz-nfs2 'not configured' (6):
> call=23, status=complete, exitreason='IP address (the ip parameter) is
> mandatory',
>     last-rc-change='Wed Dec 20 09:19:28 2017', queued=0ms, exec=26ms
> * tlxdmz-nfs2-cluster_ip-1_monitor_0 on tlxdmz-nfs2 'not configured' (6):
> call=27, status=complete, exitreason='IP address (the ip parameter) is
> mandatory',
>     last-rc-change='Wed Dec 20 09:19:28 2017', queued=0ms, exec=26ms
> * tlxdmz-nfs1-cluster_ip-1_monitor_0 on tlxdmz-nfs1 'not configured' (6):
> call=23, status=complete, exitreason='IP address (the ip parameter) is
> mandatory',
>     last-rc-change='Wed Dec 20 09:19:28 2017', queued=0ms, exec=24ms
> * tlxdmz-nfs2-cluster_ip-1_monitor_0 on tlxdmz-nfs1 'not configured' (6):
> call=27, status=complete, exitreason='IP address (the ip parameter) is
> mandatory',
>     last-rc-change='Wed Dec 20 09:19:28 2017', queued=0ms, exec=61ms
>
>
> Daemon Status:
>   corosync: active/disabled
>   pacemaker: active/disabled
>   pcsd: active/enabled
>
> Any suggestion on how this can be fixed when enabling nfs-ganesha with the
> above command, or anything else I can do to fix the failed actions?
>
> Thanks
>
>