[Gluster-users] Using local writes with gluster for temporary storage
Frank Sonntag
f.sonntag at metocean.co.nz
Wed Nov 14 19:16:10 UTC 2012
Hello,
We are interested in exactly the same use case and I am very keen to hear how to do this with gluster after v3.2.
Thanks,
Frank
Frank Sonntag
Meteorologist, MetOcean Solutions Ltd
PO Box 441, New Plymouth, New Zealand 4340
T: +64 7-825 0540
www.metocean.co.nz
On 15/11/2012, at 6:45 AM, Pat Haley wrote:
>
> Hi,
>
> We have a cluster of 130 compute nodes with NAS-type
> central storage under gluster (3 bricks, ~50TB). When we
> run a large number of ocean models we can run into
> bottlenecks, with many jobs trying to write to our central
> storage at once.
> It was suggested to us that we could also use gluster to
> unite the disks on the compute nodes into a single "disk"
> to which files would be written locally. Then, after the
> runs complete, we could move the files to central storage
> in a more sequential manner (thus avoiding overloading the
> network).
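For what it's worth, the closest we have come so far is a plain
distributed volume built from the compute nodes' local disks,
drained back to central storage with rsync after each run. A rough
sketch only, assuming each node exports a local directory as a
brick; the volume name "scratch", the hostnames node001..node130
and all paths below are placeholders:

    # create a distributed volume with one brick per compute node
    # (all 130 bricks would be listed; only three shown here)
    gluster volume create scratch \
        node001:/data/brick \
        node002:/data/brick \
        node003:/data/brick
    gluster volume start scratch

    # on every compute node, mount the volume locally
    mkdir -p /mnt/scratch
    mount -t glusterfs localhost:/scratch /mnt/scratch

    # after a run completes, copy results to central storage one
    # directory at a time, to avoid flooding the network
    rsync -a /mnt/scratch/run001/ central-store:/storage/run001/

The catch is that plain distribute hashes files across all bricks,
so this alone does not keep writes local to the node doing the
writing; that is exactly what NUFA was for.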
>
> What was originally suggested (the NUFA policy) has since
> been deprecated. What would be the recommended method
> of accomplishing our goal in the latest version of Gluster?
> And where can we find documentation on it?
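The only way we have seen to get NUFA behaviour in the 3.2/3.3
series is the volfile hack discussed in the first link below:
hand-edit the generated client volfile and swap the distribute
translator for NUFA. A sketch, assuming a volume named "scratch";
the translator and option names here are from memory and may
differ by version, so treat it as illustrative only:

    # fragment of the client volfile (e.g. scratch-fuse.vol):
    # change the type from cluster/distribute to cluster/nufa and
    # name the subvolume that is local to this node
    volume scratch-dht
        type cluster/nufa
        option local-volume-name scratch-client-0
        subvolumes scratch-client-0 scratch-client-1
    end-volume

The edit has to be done per node (the local subvolume differs on
each) and re-applied whenever gluster regenerates the volfile,
e.g. after an add-brick. So we too would love to hear the
recommended way in current versions.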
>
> We have seen the following links, but would be interested
> in any more pointers you may have. Thanks.
>
> http://thr3ads.net/gluster-users/2012/06/1941337-how-to-enable-nufa-in-3.2.6
>
> http://blog.aeste.my/2012/05/15/glusterfs-3-2-updates/
>
> http://www.gluster.org/2012/05/back-door-async-replication/
>
> https://github.com/jdarcy/bypass
>
> -=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-
> Pat Haley Email: phaley at mit.edu
> Center for Ocean Engineering Phone: (617) 253-6824
> Dept. of Mechanical Engineering Fax: (617) 253-8125
> MIT, Room 5-213 http://web.mit.edu/phaley/www/
> 77 Massachusetts Avenue
> Cambridge, MA 02139-4301
> _______________________________________________
> Gluster-users mailing list
> Gluster-users at gluster.org
> http://supercolony.gluster.org/mailman/listinfo/gluster-users