[Gluster-users] New to GlusterFS

Jay Vyas jayunit100 at gmail.com
Tue Oct 22 11:42:46 UTC 2013


So I'm sure it would work for you the same as any distributed FS would, and you could always optimize things later if it isn't fast enough. Gluster is very tunable.
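Almost all of that tuning is per-volume options you can set at runtime. Just as a sketch (the DATA volume name matches your output below, and the values are placeholders to show the knobs, not recommendations):

    # list all tunables with their descriptions and defaults
    gluster volume set help

    # a few commonly adjusted performance options (illustrative values only)
    gluster volume set DATA performance.cache-size 256MB
    gluster volume set DATA performance.io-thread-count 32
    gluster volume set DATA performance.write-behind-window-size 4MB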

I'm curious -- do you know what your access patterns are going to be?  Is this for testing or for a real production system?

1) If the KVM boxes are simply a way for you to run different services that might scale up/down, and you want to unify their storage, then it sounds plausible. Certainly it's easier than unifying a bunch of manually curated virtual disks.

2) And remember: Gluster is very stackable and Hadoop friendly, so you can put it underneath MapReduce if you want to process your data in parallel. Or, by splitting your data across volumes, you can increase/decrease replication for certain areas as needed (a rough sketch follows this list).

3) Compared to alternatives like NFS or HDFS, Gluster gives you fast lookups, no centralized metadata server, and no SPOF... and it is super easy to install (a rough install sketch also follows).
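On 2), per-volume replication looks roughly like this; the extra hostnames and brick paths here are only placeholders:

    # a plain distributed volume (no replication) for scratch data
    gluster volume create SCRATCH glusterfs1.example.com:/bricks/scratch \
        glusterfs2.example.com:/bricks/scratch

    # a 2-way replicated volume for data that matters
    gluster volume create IMPORTANT replica 2 \
        glusterfs1.example.com:/bricks/important \
        glusterfs2.example.com:/bricks/important

    # later, raise the replica count by adding a brick from a (hypothetical) third node
    gluster volume add-brick IMPORTANT replica 3 \
        glusterfs3.example.com:/bricks/important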
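On 3), the whole install is roughly this on a two-node setup like yours (a sketch assuming RHEL/CentOS-style packages; adjust the package manager and service commands for your distro):

    # on every node
    yum install glusterfs-server
    service glusterd start

    # from one node: form the trusted pool, then create and start a replicated volume
    gluster peer probe glusterfs2.example.com
    gluster volume create DATA replica 2 \
        glusterfs1.example.com:/data glusterfs2.example.com:/data
    gluster volume start DATA

    # clients mount the volume via the FUSE client
    mount -t glusterfs glusterfs1.example.com:/DATA /mnt/gluster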

FYI, we have Vagrant setups for two-node VMs, and we would love to automate Gluster-on-KVM configurations for people like yourself to try out.

> On Oct 22, 2013, at 5:57 AM, JC Putter <jcputter at gmail.com> wrote:
> 
> Hi,
> 
> I am new to GlusterFS. I am trying to accomplish something which I am
> not 100% sure is the correct use case, but hear me out.
> 
> I want to use GlusterFS to host KVM VMs. From what I've read this was
> not recommended due to poor write performance; however, since
> libgfapi/QEMU 1.3, is this now viable?
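(Yes: with QEMU >= 1.3 built with GlusterFS support, the guest disk can point straight at the volume over libgfapi instead of going through the FUSE mount. A rough sketch, reusing the DATA volume and hostname from the output below; the image name is made up:)

    # create the disk image directly on the volume (gluster://host/volname/path)
    qemu-img create -f qcow2 gluster://glusterfs1.example.com/DATA/vm1.qcow2 20G

    # boot the guest with the image accessed via libgfapi rather than the FUSE mount
    qemu-system-x86_64 -enable-kvm -m 2048 \
        -drive file=gluster://glusterfs1.example.com/DATA/vm1.qcow2,if=virtio,cache=none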
> 
> 
> Currently I am testing out GlusterFS with two nodes, both running as
> server and client.
> 
> I have the following volume:
> 
> Volume Name: DATA
> Type: Replicate
> Volume ID: eaa7746b-a1c1-4959-ad7d-743ac519f86a
> Status: Started
> Number of Bricks: 1 x 2 = 2
> Transport-type: tcp
> Bricks:
> Brick1: glusterfs1.example.com:/data
> Brick2: glusterfs2.example.com:/data
> 
> 
> I am mounting the volume locally on each server as /mnt/gluster.
> Replication works and everything, but as soon as I kill one node the
> directory /mnt/gluster/ becomes unavailable for 30-40 seconds.
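(That 30-40 second hang is almost certainly the client waiting out network.ping-timeout, which defaults to 42 seconds, before it gives up on the dead brick. You can shorten it on the volume; just a sketch, and note that very low values can cause spurious disconnects under load:)

    # value is in seconds; pick it to match how fast you need failover
    gluster volume set DATA network.ping-timeout 10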
> 
> The log shows:
> 
> [2013-10-22 11:55:48.055571] W [socket.c:514:__socket_rwv]
> 0-DATA-client-0: readv failed (No data available)
> 
> 
> Thanks in advance!
> _______________________________________________
> Gluster-users mailing list
> Gluster-users at gluster.org
> http://supercolony.gluster.org/mailman/listinfo/gluster-users


