[Gluster-users] Newbee Question: GlusterFS on Compute Cluster?

Adam Tygart mozes at k-state.edu
Fri May 10 22:57:23 UTC 2013


James,

vrrp works fine, but from what I understand of the GlusterFS FUSE mount
process, it will read the whole of the rrdns response and keep trying to
fetch the volfile from the other servers if the first one fails to
respond.
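
If you want that fallback to be explicit rather than relying on the rrdns
behaviour, the native client also accepts extra volfile servers at mount
time. Something like the following should do it (untested here, and the
option spelling depends on the client version; older releases use the
singular backupvolfile-server; the mount point is just an example):

  mount -t glusterfs -o backup-volfile-servers=compute001:compute002 \
        compute000:/gv1 /mnt/gluster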

--
Adam


On Fri, May 10, 2013 at 5:53 PM, James <purpleidea at gmail.com> wrote:

> On Fri, May 10, 2013 at 6:45 PM, Adam Tygart <mozes at k-state.edu> wrote:
> > Randy,
> >
> > On my compute cluster we use round-robin DNS (for HA of the volume
> > definition) and mount the GlusterFS volume via the FUSE (native)
> > client. All of the I/O would go directly to the nodes, rather than
> > through an intermediary (NFS) server.
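
As a concrete sketch of the above, each compute node would just mount the
volume itself with the native client, e.g. via an fstab entry like this
(the rrdns name gluster.cluster.local and the mount point are placeholders,
not from the thread):

  gluster.cluster.local:/gv1  /gluster  glusterfs  defaults,_netdev  0 0
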
> I've mentioned this once before, but in my opinion, using something
> like vrrp (e.g. keepalived) is better than using rr-dns. Also, it's
> cooler.
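
For anyone following along, the keepalived side of that is just a small
vrrp_instance stanza floating one virtual IP across two or more of the
servers, and the clients mount from that IP. The address, interface and
router id below are placeholders, not values from this thread:

  vrrp_instance gluster_vip {
      state BACKUP             # let priority pick the initial master
      interface eth0
      virtual_router_id 51
      priority 100
      advert_int 1
      virtual_ipaddress {
          192.0.2.10/24
      }
  }
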
>
> James
>
> >
> > --
> > Adam Tygart
> > Beocat Sysadmin
> > www.beocat.cis.ksu.edu
> >
> >
> > On Fri, May 10, 2013 at 5:38 PM, Bradley, Randy
> > <Randy.Bradley at ars.usda.gov> wrote:
> >>
> >>
> >> I've got a 24 node compute cluster.  Each node has one extra terabyte
> >> drive.  It seemed reasonable to install Gluster on each of the compute
> >> nodes and the head node.  I created a volume from the head node:
> >>
> >> gluster volume create gv1 rep 2 transport tcp compute000:/export/brick1
> >> compute001:/export/brick1 compute002:/export/brick1
> >> compute003:/export/brick1 compute004:/export/brick1
> >> compute005:/export/brick1 compute006:/export/brick1
> >> compute007:/export/brick1 compute008:/export/brick1
> >> compute009:/export/brick1 compute010:/export/brick1
> >> compute011:/export/brick1 compute012:/export/brick1
> >> compute013:/export/brick1 compute014:/export/brick1
> >> compute015:/export/brick1 compute016:/export/brick1
> >> compute017:/export/brick1 compute018:/export/brick1
> >> compute019:/export/brick1 compute020:/export/brick1
> >> compute021:/export/brick1 compute022:/export/brick1
> >> compute023:/export/brick1
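
(One step that is easy to miss if anyone reproduces this: after the
create, the volume also has to be started before it can be mounted,
for example

  gluster volume start gv1
  gluster volume info gv1

to start it and confirm the brick layout.)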
> >>
> >> And then I mounted the volume on the head node.  So far, so good.
> >> Approx. 10 TB available.
> >>
> >> Now I would like each compute node to be able to access files on this
> >> volume.  Would this be done by NFS mount from the head node to the
> >> compute nodes or is there a better way?
> >>
> >>
> >> Thanks,
> >>
> >> Randy
> >>
> >>
> >>
> >>

