[Gluster-users] PLEASE READ ! We need your opinion. GSOC-2014 and the Gluster community

André Bauer abauer at magix.net
Mon Mar 17 16:39:15 UTC 2014


Hi,

I vote for 3, 2, 1.

But I don't like the idea of having a dedicated extra node for 3, which
means the bandwidth/speed of the whole cluster is limited to the network
interface of that single cache node (as in Ceph).

I had a similar wish in mind, but wanted an SSD cache in front of each
brick. I know this means you need 4 SSDs in a 4-node cluster, but IMHO
that is still better than one caching node that limits the whole cluster.
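
Just to make the bottleneck argument concrete, here is a rough
back-of-the-envelope in Python (every number is a made-up assumption,
not a benchmark):

    NODES = 4
    NIC_GBPS = 10        # assumed per-node network interface speed
    SSD_GBPS = 4         # assumed throughput of one cache SSD

    # Dedicated accelerator node: every cached byte has to cross
    # that one node's interface.
    accel_node_limit = NIC_GBPS                        # 10 Gbit/s total

    # One SSD per brick: the caches work in parallel, each behind
    # its own NIC, so the limit scales with the node count.
    per_brick_limit = NODES * min(NIC_GBPS, SSD_GBPS)  # 16 Gbit/s total

    print(accel_node_limit, per_brick_limit)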


Kind regards

André Bauer

MAGIX Software GmbH
André Bauer
Administrator
Postfach 200914
01194 Dresden

Tel. Support Deutschland: 0900/1771115 (1,24 Euro/Min.)
Tel. Support Österreich:  0900/454571 (1,56 Euro/Min.)
Tel. Support Schweiz:     0900/454571 (1,50 CHF/Min.)

Email: abauer at magix.net
Web:   http://www.magix.com

Managing Directors: Dr. Arnd Schröder, Erhard Rein, Michael Keith,
Tilman Herberger
Commercial Register: Berlin Charlottenburg, HRB 127205


On 13.03.2014 12:10, Carlos Capriotti wrote:
> Hello, all.
> 
> I am a little surprised by the lack of action on this topic. I hate to
> be "that guy", especially being new here, but it has to be done.
> 
> If I've got this right, we have a chance here to develop Gluster even
> further, sponsored by Google, with a dedicated programmer for the summer.
> 
> In other words, if we play our cards right, we can get a free programmer
> and at least a good start on, or advance of, this fantastic project.
> 
> Well, I've checked the Trello board, and there is a fair number of things
> there.
> 
> There are also a couple of things that are not there yet.
> 
> I think it would be nice to listen to the COMMUNITY (yes, that means YOU),
> for either suggestions or at least a vote.
> 
> My opinion, which is also my vote, in order of PERSONAL preference:
> 
> 1) There is a project going on (https://forge.gluster.org/disperse) that
> consists of rewriting the stripe module in Gluster. This is especially
> important because it has a HUGE impact on total cost of implementation
> (customer side) and total cost of ownership, and also on matching what
> the competition has to offer. Among other things, it would allow Gluster
> to implement a RAIDZ/RAID5 type of fault tolerance, which is much more
> efficient, and would, as far as I understand, allow a minimum of 3 nodes
> for stripe+replication. That means 25% less money spent on hardware,
> with increased data safety/resilience. (A back-of-the-envelope sketch
> follows after item 3.)
> 
> 2) We have a recurring issue with split-brain resolution. There is an
> entry on Trello asking for/suggesting a mechanism that arbitrates this
> resolution automatically. I think this could come together with another
> feature: a file replication consistency check. (See the arbitration
> sketch after item 3.)
> 
> 3) Accelerator node project. Some storage solutions out there offer an
> "accelerator node", which is, in short, an extra node with a lot of RAM
> and possibly fast disks (SSDs), working as a proxy in front of the
> regular volumes. Active chunks of files are moved there, and logs (ZIL
> style) are recorded on fast media, among other things. There is NO
> active project or Trello entry for this, because it is something I
> started discussing with a few fellows just a couple of days ago. I
> thought of starting to play with RAM disks (tmpfs) as scratch disks,
> but since we have an opportunity to do something more efficient, or at
> the very least to start it, why not? (A sketch of the logging idea
> follows below.)
> 
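> Back-of-the-envelope for item 1 (a sketch; every number below is an
> assumption, not a measurement), comparing the usable capacity of
> replica 2 with a RAID5-like 2+1 dispersed layout, one brick per node:
> 
>     BRICK_TB = 10
> 
>     # replica 2: the usual minimum for distribute+replicate is 4 nodes;
>     # half of the raw capacity is spent on copies.
>     replica_nodes  = 4
>     replica_usable = replica_nodes * BRICK_TB / 2        # 20 TB, survives 1 node
> 
>     # dispersed 2+1 (2 data + 1 redundancy) runs on only 3 nodes.
>     disperse_nodes  = 3
>     disperse_usable = disperse_nodes * BRICK_TB * 2 / 3  # 20 TB, survives 1 node
> 
>     saving = 1 - disperse_nodes / replica_nodes          # 0.25 -> "25% less hardware"
>     print(replica_usable, disperse_usable, saving)
> 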
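> A sketch of the kind of automatic arbitration item 2 asks for (the
> policy and the field names are hypothetical, not GlusterFS internals):
> prefer the version a majority of replicas agree on, and fall back to
> the newest mtime when there is no majority.
> 
>     from collections import Counter
> 
>     def arbitrate(replicas):
>         # replicas: [{'node': 'n1', 'checksum': ..., 'mtime': ...}, ...]
>         votes = Counter(r['checksum'] for r in replicas)
>         checksum, count = votes.most_common(1)[0]
>         if count > len(replicas) / 2:                   # clear majority wins
>             return next(r for r in replicas if r['checksum'] == checksum)
>         return max(replicas, key=lambda r: r['mtime'])  # tie-break: newest copy
> 
>     winner = arbitrate([
>         {'node': 'n1', 'checksum': 'aaa', 'mtime': 100},
>         {'node': 'n2', 'checksum': 'bbb', 'mtime': 120},
>         {'node': 'n3', 'checksum': 'aaa', 'mtime': 100},
>     ])
>     print(winner['node'])   # the 'aaa' content wins 2-to-1
> 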
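> And a minimal sketch of the ZIL-style logging idea from item 3:
> acknowledge a write once the intent record is durable on fast media
> (SSD or tmpfs), and flush it to the slow volume later. Paths and the
> record format are made up for illustration.
> 
>     import json, os
> 
>     FAST_LOG = '/mnt/ssd/intent.log'        # assumed fast device mount
> 
>     def log_write(path, data):
>         # Persist an intent record on fast media, then ack the client.
>         with open(FAST_LOG, 'a') as log:
>             log.write(json.dumps({'path': path, 'data': data}) + '\n')
>             log.flush()
>             os.fsync(log.fileno())          # durable -> safe to acknowledge
>         return 'ACK'
> 
>     def replay(volume_root):
>         # Background flush: apply logged writes to the slow volume.
>         with open(FAST_LOG) as log:
>             for line in log:
>                 rec = json.loads(line)
>                 with open(os.path.join(volume_root, rec['path']), 'w') as f:
>                     f.write(rec['data'])
>         os.truncate(FAST_LOG, 0)            # log fully applied, start fresh
> 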
> Now, c'mon! Time is running out. We need all hands on deck here, for a
> simple vote!
> 
> Can you share 3 lines with your thoughts?
> 
> Thanks
> 
> 
> 
> _______________________________________________
> Gluster-users mailing list
> Gluster-users at gluster.org
> http://supercolony.gluster.org/mailman/listinfo/gluster-users
> 


