[Gluster-users] Offering a private backend Gluster volume to a public network.

Matthew Temple mht at mail.dfci.harvard.edu
Thu Nov 1 13:39:41 UTC 2012


Mark,

Finding best practices isn't always easy.  It would be great to discuss
this with Harry if he's doing the same thing, or even something close.
What I'm doing must be commonplace in the nextgen sequencing world.

Environment (for now):  4 bricks, replicated and distributed; gf1-1,
gf1-2, gf1-1r, gf1-2r on the public side and gf1-ib-1, gf1-ib-2, ... etc.
on the private side, offering the Gluster volume gf1.
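For reference, something like this is roughly the shape of the volume as
created over the IB hostnames (the brick paths are placeholders, and I'm
guessing at the "-r" hostnames on the IB side):

    # replica pairs are adjacent in the brick list (2x2 distribute/replicate)
    gluster volume create gf1 replica 2 transport tcp \
        gf1-ib-1:/export/brick1  gf1-ib-1r:/export/brick1 \
        gf1-ib-2:/export/brick1  gf1-ib-2r:/export/brick1
    gluster volume start gf1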

We probably also want to use CTDB for this, since we want failover.

Question:

Do I have each brick server also act as a native Gluster client, use the IB
side to mount the Gluster volume at, say, /mnt/gf1, and then have the public
side, using the public CTDB addresses, mount that Gluster-mounted directory
over NFS or CIFS?  (For example, an NFS export or CIFS share might be
defined on /mnt/gf1/sequencer_files.)
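Here's a rough sketch of what I'm picturing for that model.  The paths and
share name are just examples, and I'm aware the kernel NFS server and
Gluster's built-in NFS server may not coexist happily on the same box:

    # on each brick server: mount the volume locally over the IB side
    mount -t glusterfs gf1-ib-1:/gf1 /mnt/gf1

    # /etc/exports: re-export the FUSE mount via kernel NFS
    # (FUSE filesystems need an explicit fsid= to be exportable)
    /mnt/gf1/sequencer_files  *(rw,no_root_squash,fsid=10)

    # or, for CIFS, a share in smb.conf
    [sequencer_files]
        path = /mnt/gf1/sequencer_files
        read only = no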

Can a client on the public side even use NFS to mount the Gluster volume
gf1-1:/gf1 directly, via Gluster's built-in NFS server?
(On the private side, the HPC cluster nodes are in the private network and
will mount the volume as FUSE clients.)
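In other words, is something like the following supposed to work?  (My
understanding is that Gluster's NFS server only speaks NFSv3 over TCP, so
that is what I'd ask for on the client.)

    # on the servers: make sure the built-in Gluster NFS server is enabled
    gluster volume set gf1 nfs.disable off

    # on a public-side client: mount the volume by its public hostname
    mount -t nfs -o vers=3,proto=tcp gf1-1:/gf1 /mnt/gf1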

As you can see, I'm having some trouble sorting out who sees what, what
CTDB does, and so on.
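To make the CTDB part of the question concrete, this is roughly the
configuration I imagine, with made-up addresses: the nodes file on the
private/IB side, the floating public addresses on the gigE side, and the
recovery lock on the shared volume.

    # /etc/ctdb/nodes: fixed (private) addresses of the brick servers
    192.168.10.1
    192.168.10.2
    192.168.10.3
    192.168.10.4

    # /etc/ctdb/public_addresses: floating IPs that CTDB moves on failover
    10.1.1.101/24 eth0
    10.1.1.102/24 eth0

    # /etc/sysconfig/ctdb (excerpt)
    CTDB_RECOVERY_LOCK=/mnt/gf1/.ctdb/lockfile   # must live on shared storage
    CTDB_NODES=/etc/ctdb/nodes
    CTDB_PUBLIC_ADDRESSES=/etc/ctdb/public_addresses
    CTDB_MANAGES_NFS=yes        # or CTDB_MANAGES_SAMBA=yes for CIFS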
========================================

Here's another question.  Suppose a client has an NFS connection, through
a CTDB public address, to one of the bricks.  If that brick fails, the
connection should fail over to one of the other CTDB nodes.  And since the
failed brick is part of a replicated volume, the NFS connection will still
be to a fully functioning replicated/distributed Gluster storage cluster.
Is that the right picture?

==============================================
I also have questions about RDMA.  I understand that RDMA should be faster
than TCP, yet there are conflicting reports about this in real-world terms.
Also, for best performance, do people use RDMA or SDP?
I can set up IPoIB in connected mode without any trouble.  But on the
system side, how do I configure RDMA connections instead?  How do I then
define the network address for RDMA?  Further, if I'm using RDMA for the
bricks to talk to each other, will the bricks still be reachable over TCP
as well, say for ssh?  Or do I have to create a separate child IB interface
that uses TCP?  I guess I'm asking this:  for any given configurable IB
device, would using it for RDMA and for TCP be mutually exclusive?

You can see that at this low level my networking knowledge isn't deep.
A real example of both the RDMA configuration and the Gluster-over-RDMA
configuration would be a very good thing to see.
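For example, is the following more or less the right shape?  This is just
my best guess at the syntax; I gather the hostnames would resolve to the
IPoIB addresses, and that RDMA and plain TCP over IPoIB can coexist on the
same HCA, so ssh would keep working.

    # volume created with both transports available
    gluster volume create gf1 replica 2 transport tcp,rdma \
        gf1-ib-1:/export/brick1  gf1-ib-1r:/export/brick1 \
        gf1-ib-2:/export/brick1  gf1-ib-2r:/export/brick1

    # a FUSE client asking for the rdma transport explicitly
    mount -t glusterfs -o transport=rdma gf1-ib-1:/gf1 /mnt/gf1
    # (some versions want the transport as a suffix on the volume name)
    mount -t glusterfs gf1-ib-1:/gf1.rdma /mnt/gf1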

Any help is greatly appreciated.  I should go through the list archives to
see if this has been solved already, but I've looked all over gmane and the
answer still isn't clear, at least to me.

Matt Temple

------
Matt Temple
Director, Research Computing
Dana-Farber Cancer Institute.



On Wed, Oct 31, 2012 at 4:57 PM, John Mark Walker <johnmark at redhat.com>wrote:

> Hi Matthew - did you get a response to this? Harry Mangalam from UC Irvine
> has been doing what sounds like similar things. I would be happy to put you
> guys in touch.
>
> Also, I guess my main question is that I don't understand why or how
> getting "data from sequencers and other sources living on the public
> network /into/ the Gluster volume" is a problem. Are you saying that, as
> structured now, you can't get the data into GlusterFS? Or that you can but
> it's not performing well?
>
> If your GlusterFS servers have gigE NICs on the public network, couldn't
> you just use the NFS server in GlusterFS? Wouldn't that also be available
> over the public network?
>
> -JM
>
>
> ------------------------------
>
> We have a distributed/replicated gluster volume running on IPoIB.    We
> don't yet know much about its performance in practice.   The volume will be
> mounted natively  to our HPC compute cluster and used for nextgen sequence
> analysis.   The HPC compute cluster and the Gluster volume are in the same
> private IB network.
>
> The problem is, we need a way to get data from sequencers and other
> sources living on the public network /into/ the Gluster volume.   The
> Gluster bricks have gigE NICs on the public side in addition to the
> Infiniband connections.   My first thought is to have each Gluster brick
> also act as a Gluster client, mount its own volume, then re-export the
> mount point  by NFS or CIFS to the public network.
>         Alternatively, I could set up some number of servers that are
> /not/ Gluster bricks, but are Gluster clients, and those servers would have
> IB and GigE -- then have those servers re-export the mounted Gluster
> volumes by NFS or CIFS.   Neither of these models seems terribly efficient,
> but getting data into the volume won't be as intense as running analysis
> software against the volume.
>
> 1. Has anyone done this (or something similar)?
> 2. Did it work acceptably?
> 3. Does anyone have a better solution?
>
> (There is an article in which there is a suggestion that Gluster volumes
> be accessible by multiple addresses natively, but it's not implemented
> anywhere as far as I know.)
>
> Matt
>
> ------
> Matt Temple
> Director, Research Computing
> Dana-Farber Cancer Institute.
>
>
> _______________________________________________
> Gluster-users mailing list
> Gluster-users at gluster.org
> http://supercolony.gluster.org/mailman/listinfo/gluster-users
>
>
>