[Gluster-users] Newbie Question: GlusterFS on Compute Cluster?

James purpleidea at gmail.com
Sat May 11 00:49:59 UTC 2013


On Fri, 2013-05-10 at 17:57 -0500, Adam Tygart wrote:
> James,
Hey!

> 
> vrrp works fine, but from what I understand, the Gluster FUSE mount
> process will read the whole of the rrdns response and keep trying to
> get the volfile from the other servers if the first one fails to
> respond.
Interesting... I didn't know that it did this. Can someone verify?
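
For what it's worth, if you'd rather not depend on DNS at all, the FUSE
client can be given an explicit fallback volfile server at mount time. A
rough sketch, assuming a volume gv1 and hosts gluster1/gluster2/gluster3
(the option spelling has varied between releases, so check the
mount.glusterfs man page for your version):

    # Native mount, with a fallback server for fetching the volfile:
    mount -t glusterfs -o backupvolfile-server=gluster2 gluster1:/gv1 /mnt/gv1

    # Newer releases take a colon-separated list instead:
    # mount -t glusterfs -o backup-volfile-servers=gluster2:gluster3 gluster1:/gv1 /mnt/gv1

Either way, the named server only matters for fetching the volfile; once
mounted, the client talks to all of the bricks directly.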

Also, you could still have problems with cached DNS records. I'm sure
rr-dns is a fine solution, and I might be being a bit pedantic. The nice
thing is that you get to choose!
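
If you do go the keepalived route, the configuration is tiny. A rough
sketch (the interface name and floating address below are made up; run it
on two or more of the storage nodes and have clients mount the floating
address):

    vrrp_instance gluster_vip {
        state MASTER               # BACKUP on the other node(s)
        interface eth0             # NIC facing the clients
        virtual_router_id 51
        priority 150               # use a lower priority on the backups
        advert_int 1
        virtual_ipaddress {
            192.168.1.250/24       # floating address the clients mount
        }
    }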

Good luck,
James

> 
> --
> Adam
> 
> 
> On Fri, May 10, 2013 at 5:53 PM, James <purpleidea at gmail.com> wrote:
> 
> > On Fri, May 10, 2013 at 6:45 PM, Adam Tygart <mozes at k-state.edu> wrote:
> > > Randy,
> > >
> > > On my compute cluster we use round-robin dns (for HA of the volume
> > > definition) and mount the GlusterFS volume via the FUSE (native)
> > > client. All of the I/O would go directly to the nodes, rather than
> > > through an intermediary (NFS) server.
> > I've mentioned this once before, but in my opinion, using something
> > like vrrp (e.g. keepalived) is better than using rr-dns. Also, it's
> > cooler.
> >
> > James
> >
> > >
> > > --
> > > Adam Tygart
> > > Beocat Sysadmin
> > > www.beocat.cis.ksu.edu
> > >
> > >
> > > On Fri, May 10, 2013 at 5:38 PM, Bradley, Randy
> > > <Randy.Bradley at ars.usda.gov> wrote:
> > >>
> > >>
> > >> I've got a 24-node compute cluster.  Each node has one extra terabyte
> > >> drive.  It seemed reasonable to install Gluster on each of the compute
> > >> nodes and the head node.  I created a volume from the head node:
> > >>
> > >> gluster volume create gv1 replica 2 transport tcp compute000:/export/brick1
> > >> compute001:/export/brick1 compute002:/export/brick1
> > >> compute003:/export/brick1 compute004:/export/brick1
> > >> compute005:/export/brick1 compute006:/export/brick1
> > >> compute007:/export/brick1 compute008:/export/brick1
> > >> compute009:/export/brick1 compute010:/export/brick1
> > >> compute011:/export/brick1 compute012:/export/brick1
> > >> compute013:/export/brick1 compute014:/export/brick1
> > >> compute015:/export/brick1 compute016:/export/brick1
> > >> compute017:/export/brick1 compute018:/export/brick1
> > >> compute019:/export/brick1 compute020:/export/brick1
> > >> compute021:/export/brick1 compute022:/export/brick1
> > >> compute023:/export/brick1
> > >>
> > >> And then I mounted the volume on the head node.  So far, so good.
> > >> Approx. 10 TB available.
> > >>
> > >> Now I would like each compute node to be able to access files on this
> > >> volume.  Would this be done by NFS mount from the head node to the
> > >> compute nodes, or is there a better way?
> > >>
> > >>
> > >> Thanks,
> > >>
> > >> Randy
> > >>
> > >>
> >
