[Gluster-users] UCARP with NFS

anthony garnier sokar6012 at hotmail.com
Fri Sep 9 14:41:06 UTC 2011


I found the problem this morning !
It's because TCP connections are not reset on the Master server, so when the clients come back to the master, both sides enter a "TCP DUP ACK" storm. You need to kill all gluster processes when the Master comes back up.
More info : https://bugzilla.redhat.com/show_bug.cgi?id=369991#c31
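A minimal sketch of what the master's up-script could do to avoid that storm. The VIP/netmask, the use of killall, and the "glusterd" service name are assumptions; adjust for your platform:

```shell
#!/bin/sh
# vip-up.sh sketch for the recovering master. Assumptions: ucarp passes
# the interface name as $1, the VIP/netmask below match this setup, and
# gluster is managed as the "glusterd" service.

# Take over the VIP.
/sbin/ip addr add 10.68.217.3/24 dev "$1"

# Kill the gluster processes so clients cannot run into stale TCP state
# left over from before the failover, then start glusterd again.
killall glusterfs glusterfsd glusterd 2>/dev/null
sleep 1
/sbin/service glusterd start
```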

Anthony


> From: mueller at tropenklinik.de
> To: sokar6012 at hotmail.com; whit.gluster at transpect.com
> CC: gluster-users at gluster.org
> Subject: Re: [Gluster-users] UCARP with NFS
> Date: Thu, 8 Sep 2011 16:13:24 +0200
> 
> Cmd on slave: 
> /usr/sbin/ucarp -z -B -M -b 1 -i bond0:0
> 
> Did you try "-b 7" in your command? That solved things for me in
> another configuration.
> 
> 
> 
> 
> EDV Daniel Müller
> 
> Leitung EDV
> Tropenklinik Paul-Lechler-Krankenhaus
> Paul-Lechler-Str. 24
> 72076 Tübingen 
> Tel.: 07071/206-463, Fax: 07071/206-499
> eMail: mueller at tropenklinik.de
> Internet: www.tropenklinik.de 
> 
> From: gluster-users-bounces at gluster.org
> [mailto:gluster-users-bounces at gluster.org] On behalf of anthony garnier
> Sent: Thursday, 8 September 2011 15:55
> To: whit.gluster at transpect.com
> Cc: gluster-users at gluster.org
> Subject: Re: [Gluster-users] UCARP with NFS
> 
> Whit,
> 
> Here is my conf file : 
> #
> # Location of the ucarp pid file
> UCARP_PIDFILE=/var/run/ucarp0.pid
> 
> # Define if this host is the preferred MASTER (this adds or removes
> # the -P option)
> UCARP_MASTER="yes"
> 
> #
> # ucarp base, interval monitoring time
> # lower number will be the preferred master
> # set to the same value to have the master stay alive as long as possible
> UCARP_BASE=1
> 
> #Priority [0-255]
> # lower number will be the preferred master
> ADVSKEW=0
> 
> 
> #
> # Interface for Ipaddress
> INTERFACE=bond0:0
> 
> #
> # Instance id
> # any number from 1 to 255
> # Master and Backup need to be the same
> INSTANCE_ID=42
> 
> #
> # Password so servers can trust who they are talking to
> PASSWORD=glusterfs
> 
> #
> # The Application Address that will failover
> VIRTUAL_ADDRESS=10.68.217.3
> VIRTUAL_BROADCAST=10.68.217.255
> VIRTUAL_NETMASK=255.255.255.0
> #
> 
> #Script for configuring interface
> UPSCRIPT=/etc/ucarp/script/vip-up.sh
> DOWNSCRIPT=/etc/ucarp/script/vip-down.sh
> 
> # The Maintenance Address of the local machine
> SOURCE_ADDRESS=10.68.217.85
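For completeness, the scripts referenced by UPSCRIPT/DOWNSCRIPT above are often just a one-line ip(8) call each; a sketch, assuming ucarp passes the interface name as the first argument:

```shell
#!/bin/sh
# /etc/ucarp/script/vip-up.sh -- add the VIP when this node becomes master
/sbin/ip addr add 10.68.217.3/24 dev "$1"
```

```shell
#!/bin/sh
# /etc/ucarp/script/vip-down.sh -- drop the VIP when this node loses mastership
/sbin/ip addr del 10.68.217.3/24 dev "$1"
```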
> 
> 
> Cmd on master: 
> /usr/sbin/ucarp -z -B -P -b 1 -i bond0:0 -v 42 -p glusterfs -k 0 \
>     -a 10.68.217.3 -s 10.68.217.85 \
>     --upscript=/etc/ucarp/script/vip-up.sh \
>     --downscript=/etc/ucarp/script/vip-down.sh
> 
> Cmd on slave: 
> /usr/sbin/ucarp -z -B -M -b 1 -i bond0:0 -v 42 -p glusterfs -k 50 \
>     -a 10.68.217.3 -s 10.68.217.86 \
>     --upscript=/etc/ucarp/script/vip-up.sh \
>     --downscript=/etc/ucarp/script/vip-down.sh
> 
> 
> To me, having a preferred master is necessary because I'm using RR DNS and I
> want to do a kind of "active/active" failover. I'll explain the whole idea: 
> 
> SERVER 1<---------------> SERVER 2
> VIP1                             VIP2
> 
> When I access the URL glusterfs.preprod.inetpsa.com, RR DNS gives me one of
> the VIPs (load balancing). The main problem is that with RR DNS alone, if a
> server goes down, the clients currently bound to that server will fail too.
> To avoid that I need a VIP failover. 
> This way, if a server goes down, all the clients on that server will be
> bound to the other one. Because I want load balancing, I need a preferred
> master: by default VIP 1 should stay on server 1 and VIP 2 should stay on
> server 2.
> Currently I'm trying to make it work with one VIP only.
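The two-VIP layout described above could be sketched as two ucarp instances per server with mirrored skews. The second VIP address (10.68.217.4) and vhid 43 are assumptions, not values from this thread:

```shell
# Sketch of the "active/active" two-VIP setup: each server is the
# preferred master (-P, -k 0) for one VIP and backup (-k 50) for the other.

# Server 1 (10.68.217.85): preferred master for VIP1, backup for VIP2
/usr/sbin/ucarp -z -B -P -b 1 -i bond0 -v 42 -p glusterfs -k 0 \
    -a 10.68.217.3 -s 10.68.217.85 \
    --upscript=/etc/ucarp/script/vip-up.sh \
    --downscript=/etc/ucarp/script/vip-down.sh
/usr/sbin/ucarp -z -B -b 1 -i bond0 -v 43 -p glusterfs -k 50 \
    -a 10.68.217.4 -s 10.68.217.85 \
    --upscript=/etc/ucarp/script/vip-up.sh \
    --downscript=/etc/ucarp/script/vip-down.sh

# Server 2 (10.68.217.86): the mirror image, so each VIP has exactly one
# preferred master and each server normally holds one VIP.
/usr/sbin/ucarp -z -B -P -b 1 -i bond0 -v 43 -p glusterfs -k 0 \
    -a 10.68.217.4 -s 10.68.217.86 \
    --upscript=/etc/ucarp/script/vip-up.sh \
    --downscript=/etc/ucarp/script/vip-down.sh
/usr/sbin/ucarp -z -B -b 1 -i bond0 -v 42 -p glusterfs -k 50 \
    -a 10.68.217.3 -s 10.68.217.86 \
    --upscript=/etc/ucarp/script/vip-up.sh \
    --downscript=/etc/ucarp/script/vip-down.sh
```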
> 
> 
> Anthony
> 
> > Date: Thu, 8 Sep 2011 09:32:59 -0400
> > From: whit.gluster at transpect.com
> > To: sokar6012 at hotmail.com
> > CC: gluster-users at gluster.org
> > Subject: Re: [Gluster-users] UCARP with NFS
> > 
> > On Thu, Sep 08, 2011 at 01:02:41PM +0000, anthony garnier wrote:
> > 
> > > I got a client mounted on the VIP; when the Master falls, the client
> > > switches automatically to the Slave with almost no delay, it works like
> > > a charm. But when the Master comes back up, the mount point on the
> > > client freezes.
> > > I've done some monitoring with tcpdump: when the master came up, the
> > > client sent packets to the master but the master did not seem to
> > > establish the TCP connection.
> > 
> > Anthony,
> > 
> > Your UCARP command line choices and scripts would be worth looking at here.
> > There are different UCARP behavior options for when the master comes back
> > up. If the initial failover works fine, it may be that you'll have better
> > results if you don't have a preferred master. That is, you can either have
> > UCARP set so that the slave relinquishes the IP back to the master when the
> > master comes back up, or you can have UCARP set so that the slave becomes
> > the new master, until such time as the new master goes down, in which case
> > the former master becomes master again.
> > 
> > If you're doing it the first way, there may be a brief overlap, where both
> > systems claim the VIP. That may be where your mount is failing. By doing it
> > the second way, where the VIP is held by whichever system has it until that
> > system actually goes down, there's no overlap. There shouldn't be a reason,
> > in the Gluster context, to care which system is master, is there?
> > 
> > Whit
> 