[Gluster-users] Newbee Question: GlusterFS on Compute Cluster?

elvinas.piliponis at barclays.com
Sun May 12 16:14:29 UTC 2013

Hello Adam,

> On my compute cluster we use round-robin dns (for HA of the volume definition) and mount the GlusterFS volume via the FUSE (native) client
Can you please give more detail on this? I have a somewhat similar compute-storage setup and have pointed each compute node to itself as its GlusterFS volume server via a FUSE mount. Would there be any difference?
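
For reference, the self-pointing mount I am describing looks roughly like this on each node (the volume name gv1 is taken from the thread below; the backup server and exact option names are illustrative and depend on the GlusterFS version):

    # /etc/fstab on a compute node that is also a brick server --
    # the node fetches the volume definition from itself at mount time
    localhost:/gv1  /mnt/gv1  glusterfs  defaults,_netdev,backupvolfile-server=compute001  0 0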

Thank you
From: gluster-users-bounces at gluster.org On Behalf Of Adam Tygart
Sent: 11 May 2013 01:46
To: gluster-users at gluster.org
Subject: Re: [Gluster-users] Newbee Question: GlusterFS on Compute Cluster?


On my compute cluster we use round-robin dns (for HA of the volume definition) and mount the GlusterFS volume via the FUSE (native) client. All of the I/O would go directly to the nodes, rather than through an intermediary (NFS) server.
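
As a rough sketch (the gluster-rr name and the addresses below are made up for illustration; on our cluster the round-robin record simply resolves to several of the brick-hosting nodes):

    ; round-robin A record in the DNS zone, pointing at a few of the servers
    gluster-rr  IN  A  10.0.0.10
    gluster-rr  IN  A  10.0.0.11
    gluster-rr  IN  A  10.0.0.12

    # on every node that needs the volume
    mount -t glusterfs gluster-rr:/gv1 /mnt/gv1

The server named in the mount is only contacted to fetch the volume definition; after that the FUSE client talks to all of the bricks directly, which is why any resolvable server is good enough for the mount.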

Adam Tygart
Beocat Sysadmin

On Fri, May 10, 2013 at 5:38 PM, Bradley, Randy <Randy.Bradley at ars.usda.gov> wrote:

I've got a 24 node compute cluster.  Each node has one extra terabyte drive.  It seemed reasonable to install Gluster on each of the compute nodes and the head node.  I created a volume from the head node:

gluster volume create gv1 rep 2 transport tcp \
    compute000:/export/brick1 compute001:/export/brick1 \
    compute002:/export/brick1 compute003:/export/brick1 \
    compute004:/export/brick1 compute005:/export/brick1 \
    compute006:/export/brick1 compute007:/export/brick1 \
    compute008:/export/brick1 compute009:/export/brick1 \
    compute010:/export/brick1 compute011:/export/brick1 \
    compute012:/export/brick1 compute013:/export/brick1 \
    compute014:/export/brick1 compute015:/export/brick1 \
    compute016:/export/brick1 compute017:/export/brick1 \
    compute018:/export/brick1 compute019:/export/brick1 \
    compute020:/export/brick1 compute021:/export/brick1 \
    compute022:/export/brick1 compute023:/export/brick1

And then I mounted the volume on the head node.  So far, so good.  Approximately 10 TB available.
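
(That is about what the replica count predicts: 24 bricks x 1 TB each, divided by 2 for replication, is roughly 12 TB, or about 10.9 TiB in the binary units df reports, a little less after filesystem overhead.)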

Now I would like each compute node to be able to access files on this volume.  Would this be done by an NFS mount from the head node to the compute nodes, or is there a better way?
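
For concreteness, I imagine the two options looking something like this from a compute node (hostnames taken from the volume above; the NFS variant would use Gluster's built-in NFS server, which speaks NFSv3):

    # native (FUSE) client -- the client talks to all bricks directly
    mount -t glusterfs compute000:/gv1 /mnt/gv1

    # NFS mount -- all traffic goes through whichever server is mounted
    mount -t nfs -o vers=3 compute000:/gv1 /mnt/gv1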


