[Gluster-users] How to configure with mixed LAN addresses?

Whit Blauvelt whit.gluster at transpect.com
Tue Apr 19 16:07:09 UTC 2011


Hi,

I didn't title this well, so let me phrase it better. With a two-system
replicated mirror, there's a speed and reliability advantage in joining the
two systems with a crossover cable and putting those two NICs in their own
private address space. Gluster-as-server is happy to do that.
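
For concreteness, the setup looks roughly like this - the 10.0.0.x
crossover addresses and the brick path are just my example values:

    # on server1, probing and building the volume over the crossover link
    gluster peer probe 10.0.0.2
    gluster volume create mirror replica 2 \
        10.0.0.1:/export/brick 10.0.0.2:/export/brick
    gluster volume start mirror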

Gluster-as-client isn't happy in this arrangement when the client comes in
from the normal LAN addresses rather than the private addresses the
mirroring is using. This appears to be because the default behavior is for
the client to download the volume file from the servers, and that file
contains the private IPs, which are unreachable from the client's side of
the network.
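
The one workaround I can think of is to create the volume with hostnames
instead of raw IPs, and let /etc/hosts resolve those names differently on
each side - something like this, with made-up names and addresses:

    # /etc/hosts on each server: names resolve to the crossover link
    10.0.0.1     gluster1
    10.0.0.2     gluster2

    # /etc/hosts on the clients: same names resolve to the LAN
    192.168.1.1  gluster1
    192.168.1.2  gluster2

That way the volfile the client downloads refers to gluster1 and gluster2,
and each machine connects over whichever network it can actually reach. But
that's a DNS trick rather than a Gluster feature, so I'd still like to know
the supported way.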

I know that in past versions of gluster you could choose between the client
downloading that file and keeping its own local copy. It's not clear from
the current stripped-down documentation whether that's still the case, and
if it is, I can't find any discussion of how to configure the client to
work correctly in this sort of setup. As it is, the client just hangs on
the initial mount request and sits there consuming 100% of a CPU until
killed. Better behavior would be for the client to throw an error and quit
politely - hanging at 100% CPU is just wrong.
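
If a client-side volfile is still supported, I'd expect to be able to write
one by hand using the LAN addresses and point the client at it directly.
This is a sketch adapted from the old 2.x-style volfile syntax - the LAN
addresses are mine, the brick path is a guess at what glusterd generates:

    # /etc/glusterfs/mirror-lan.vol - hypothetical client volfile
    volume client1
      type protocol/client
      option transport-type tcp
      option remote-host 192.168.1.1     # server1's LAN address
      option remote-subvolume /export/brick
    end-volume

    volume client2
      type protocol/client
      option transport-type tcp
      option remote-host 192.168.1.2     # server2's LAN address
      option remote-subvolume /export/brick
    end-volume

    volume mirror
      type cluster/replicate
      subvolumes client1 client2
    end-volume

and then mount with the local volfile instead of a server address:

    glusterfs -f /etc/glusterfs/mirror-lan.vol /mnt/mirror

Is that still a supported configuration, or does glusterd now insist on
serving the volfile itself?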

The error the client does throw is "dangling volume. check volfile". When I
search on that, all I find are discussions of how it's often a bogus error.
Maybe it's real here? If so, I can't find instructions on how to fix it, or
even a definition of "dangling volume."

Happily, an NFS client works just fine in this arrangement. Maybe gluster
just isn't designed for two mirrors to share a private link, but not having
a switch between them removes a point of failure, and it's faster. Even
where there's a private LAN for storage, it should be better to have yet
another address space for the crossover between mirrors, so I'm hoping
there's a way to make remote clients happy with this setup.
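
For reference, the NFS mount that does work from the LAN is plain NFSv3
over TCP (gluster's built-in NFS server doesn't speak v4, as I understand
it):

    mount -t nfs -o vers=3,tcp 192.168.1.1:/mirror /mnt/mirror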

Best,
Whit


