[Gluster-users] Gluster 3.2.0 and ucarp not working
Joshua Baker-LePain
jlb17 at duke.edu
Wed Jun 8 21:20:45 UTC 2011
On Wed, 8 Jun 2011 at 4:44pm, Joe Landman wrote
> On 06/08/2011 04:37 PM, Joshua Baker-LePain wrote:
>
>>> BTW: You need a virtual ip for ucarp
>>
>> As I said, that's what I'm doing now -- using the virtual IP address
>> managed by ucarp in my fstab line. But Craig Carl from Gluster told the
>> OP in this thread specifically to mount using the real IP address of a
>> server when using the GlusterFS client, *not* to use the ucarp VIP.
>>
>> So I'm officially confused.
>
> GlusterFS client side gets its config from the server, and makes connections
> to each server. Any of the GlusterFS servers may be used for the mount, and
> the client will connect to all of them. If one of the servers goes away, and
> you have a replicated or HA setup, you shouldn't see any client side issues.
Hrm, apparently I'm not making myself clear. I fully understand the
redundancy of a replicated glusterfs volume mounted on a client. After
the mount, the client will not see any issues unless both members of a
replica pair (or all 4 members of a replica quad, etc.) go down.
My concern is at mount time. Mounting via the glusterfs client (at the
command line or via fstab) requires a single IP address. That server is
contacted to get the volume config (which includes the IP addresses of the
rest of the servers). If that IP address is a regular IP address that
points at a single server and that server is down *at client mount time*,
then the mount will fail.
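To make that concrete, here's roughly the sort of fstab line I mean (the
address and the volume name "gv0" are placeholders, not my real config):

  # mounting via one real server's IP -- if 192.168.1.11 happens to be
  # down at mount time, the mount fails even though the volume is fine
  192.168.1.11:/gv0  /mnt/gluster  glusterfs  defaults,_netdev  0 0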
I have setup ucarp for the sole purpose of using the ucarp managed VIP in
my fstab lines, so that mounts will succeed even if some of the servers
are down. All the "gluster" commands to create the volumes were done
using real IP addresses. Does Craig Carl's advice not to use ucarp with
the native glusterfs client apply in my situation?
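In other words (again with made-up addresses -- say the ucarp VIP is
192.168.1.10 and the real servers are .11 and .12), the volume was created
against the real IPs:

  gluster volume create gv0 replica 2 192.168.1.11:/export/brick1 \
      192.168.1.12:/export/brick1

but the fstab line points at the VIP, so the mount should succeed as long
as whichever server currently holds the VIP is up:

  192.168.1.10:/gv0  /mnt/gluster  glusterfs  defaults,_netdev  0 0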
> ucarp would be needed for the NFS side of the equation. round robin DNS is
> useful in both cases.
Again, I don't use DNS for my cluster, so that solution is out.
--
Joshua Baker-LePain
QB3 Shared Cluster Sysadmin
UCSF