[Gluster-devel] Multi-network support proposal

Jeff Darcy jdarcy at redhat.com
Sat Feb 14 20:19:17 UTC 2015

> It's really important for glusterfs not to require that the clients mount
> volumes using the same subnet that is used by the servers, and clearly your very
> general-purpose proposal could address that.  For example, in a site where
> non-glusterfs protocols are used, there are already good reasons for using
> multiple subnets, and we want glusterfs to be able to coexist with
> non-glusterfs protocols at a site.
> However, is there a simpler way to allow glusterfs clients to connect to
> servers through more than one subnet?  For example, suppose your Gluster
> volume subnet is subnet A and your "public" network used by glusterfs
> clients is subnet B, but one of the servers also has an interface on a
> third subnet, C.  So at the time that the volume is either created or
> bricks are added/removed:
> - determine what servers are actually in the volume
> - ask each server to return the subnet for each of its active network
> interfaces
> - determine set of subnets that are directly accessible to ALL the volume's
> servers
> - write a glusterfs volfile for each of these subnets and save it
> This process is O(N) where N is the number of servers, but it only happens at
> volume creation or addition/removal of bricks, and these events do not happen
> very often (do they?).  In this example, the two subnets shared by all the
> servers would have glusterfs volfiles, but the subnet present on only one
> server would not.
> So now when a client connects, the server knows which subnet the request came
> through (via getsockname()), so it can just return the volfile for that subnet.
> If there is no volfile for that subnet, the client mount request is
> rejected.  But what about existing Gluster volumes?  When software is
> upgraded, we should provide a mechanism for triggering this volfile
> generation process to open up additional subnets for glusterfs clients.
> This proposal requires additional work to be done where volfiles are
> generated and where glusterfs mount processing is done, but does not require
> any additional configuration commands or extra user knowledge of Gluster.
> glusterfs clients can then use *any* subnet that is accessible to all the
> servers.
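The subnet-intersection and volfile-lookup steps in the quoted proposal could be sketched roughly as below. This is only an illustration of the logic, not glusterd code; helper names and the volfile representation are hypothetical, and the CIDR strings are RFC 5737 documentation addresses.

```python
# Sketch of the proposed per-subnet volfile scheme.  All names here are
# hypothetical placeholders, not real glusterd APIs.
import ipaddress

def common_subnets(per_server_subnets):
    """Return the subnets directly accessible to ALL servers.

    per_server_subnets: one set of CIDR strings per server, as reported
    by each server for its active network interfaces.
    """
    if not per_server_subnets:
        return set()
    common = set(per_server_subnets[0])
    for subnets in per_server_subnets[1:]:
        common &= set(subnets)   # keep only subnets every server has
    return common

def volfile_for_client(client_addr, volfiles_by_subnet):
    """Pick the volfile whose subnet contains the client's address.

    Returns None (i.e. reject the mount) when no generated volfile
    covers the subnet the request came through.
    """
    addr = ipaddress.ip_address(client_addr)
    for subnet, volfile in volfiles_by_subnet.items():
        if addr in ipaddress.ip_network(subnet):
            return volfile
    return None

# Example: three servers, one of which has an extra interface.  Only the
# two subnets common to all three get volfiles.
servers = [
    {"192.0.2.0/24", "198.51.100.0/24"},
    {"192.0.2.0/24", "198.51.100.0/24"},
    {"192.0.2.0/24", "198.51.100.0/24", "203.0.113.0/24"},
]
usable = common_subnets(servers)   # {"192.0.2.0/24", "198.51.100.0/24"}
```

A client mounting from 203.0.113.x would then find no matching volfile and be rejected, exactly as the proposal describes.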

That does have the advantage of not requiring any special configuration,
and might work well enough for front-end traffic, but it has the
drawback of not giving any control over back-end traffic.  How do
*servers* choose which interfaces to use for NSR normal traffic,
reconciliation/self-heal, DHT rebalance, and so on?  Which network
should Ganesha/Samba servers use to communicate with bricks?  Even on
the front end, what happens when we do get around to adding per-subnet
access control or options?  For those kinds of use cases we need
networks to be explicit parts of our model, not implicit or inferred.
So maybe we need to reconcile the two approaches, and hope that the
combined result isn't too complicated.  I'm open to suggestions.
