[Gluster-users] [Gluster-devel] PLEASE READ ! We need your opinion. GSOC-2014 and the Gluster community

Dan Lambright dlambrig at redhat.com
Thu Mar 13 13:45:18 UTC 2014


I had a chat with Carlos on this subject (#3) the other day; from my point of view, it was very interesting to look at how Isilon leverages new backend storage technologies. Ceph is of course making claims in this area as well, so we (Gluster) do indeed need to make some progress here. I think the reason you do not see a formal project in this space is that we are just starting to research how to use SSDs (put metadata/change logs on them?) and we need to get proofs of concept underway. There have been some early performance tests with mixed results.

Note there is a "data classification/tiering" project underway, which (I believe) will place data according to rules. This dovetails nicely with SSDs, as data could be placed on fast or slow storage according to configuration. You might even consider that a step toward SSD support.
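To make the rule idea concrete, here is a toy sketch of rule-based placement using plain shell commands. The directory names are made up: "fast" stands in for an SSD-backed brick and "slow" for spinning disk; the real tiering project would do this inside a translator, not with `find` and `mv`.

```shell
# Toy sketch of rule-based data placement, in the spirit of the data
# classification/tiering project. Directory names are illustrative only.
mkdir -p pool fast slow
dd if=/dev/zero of=pool/hot.dat  bs=1k count=1   2>/dev/null
dd if=/dev/zero of=pool/cold.dat bs=1k count=100 2>/dev/null

# Rule 1: files under 4 KB go to the fast (SSD) tier.
find pool -type f -size -4k -exec mv {} fast/ \;
# Rule 2 (default): everything left goes to the slow tier.
find pool -type f -exec mv {} slow/ \;
```

The point is only that placement reduces to a small ordered rule list; swap the size predicate for access time (`-atime -1`) and you get hot/cold tiering.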

I think RAM disks as a stand-in for SSDs are a good way to prototype ideas, particularly since tmpfs supports extended attributes, as you pointed out when we chatted. People in the Gluster family may want to consider that as a testing approach and broadcast their findings to the mailing list or a blog :)
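For anyone who wants to try this, a minimal setup sketch (requires root; paths and sizes are illustrative). GlusterFS keeps its metadata in extended attributes, so it is worth verifying xattr support on the RAM disk before pointing a brick at it:

```shell
# Mount a tmpfs RAM disk to stand in for an SSD brick.
mount -t tmpfs -o size=2g tmpfs /mnt/ramdisk

# Confirm extended attributes work on it.
touch /mnt/ramdisk/probe
setfattr -n user.probe -v ok /mnt/ramdisk/probe
getfattr -n user.probe /mnt/ramdisk/probe   # prints user.probe="ok"

# Then use it as a brick in a throwaway volume for experiments, e.g.:
# gluster volume create ramvol myhost:/mnt/ramdisk/brick force
```

Obviously the contents vanish on reboot, which is fine for performance experiments and exactly why it should stay a prototyping tool.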

----- Original Message -----
From: "Sabuj Pattanayek" <sabujp at gmail.com>
To: "Jeff Darcy" <jdarcy at redhat.com>
Cc: "Carlos Capriotti" <capriotti.carlos at gmail.com>, "gluster-users Discussion List" <gluster-users at gluster.org>, "Gluster Devel" <gluster-devel at nongnu.org>
Sent: Thursday, March 13, 2014 9:12:30 AM
Subject: Re: [Gluster-devel] [Gluster-users] PLEASE READ ! We need your opinion. GSOC-2014 and the Gluster community

Has the 32 group limit been fixed yet? If not, how about that? :) https://bugzilla.redhat.com/show_bug.cgi?id=789961


On Thu, Mar 13, 2014 at 8:01 AM, Jeff Darcy <jdarcy at redhat.com> wrote:



> 3) Accelerator node project. Some storage solutions out there offer an
> "accelerator node", which is, in short, an extra node with a lot of RAM,
> possibly fast disks (SSDs), that works like a proxy to the regular
> volumes. Active chunks of files are moved there, logs (ZIL style) are
> recorded on fast media, among other things. There is NO active project for
> this, or Trello entry, because it is something I started discussing with a
> few fellows just a couple of days ago. I thought of starting to play with
> RAM disks (tmpfs) as scratch disks, but since we have an opportunity to do
> something more efficient, or at the very least to start it, why not?

Looks like somebody has read the Isilon marketing materials. ;) 

A full production-level implementation of this, with cache consistency and 
so on, would be a major project. However, a non-consistent prototype good 
for specific use cases - especially Hadoop, as Jay mentions - would be 
pretty easy to build. Having a GlusterFS server (for the real clients) 
also be a GlusterFS client (to the real cluster) is pretty straightforward. 
Testing performance would also be a significant component of this, and IMO 
that's something more developers should learn about early in their careers. 
I encourage you to keep thinking about how this could be turned into a real 
GSoC proposal. 
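The server-that-is-also-a-client shape can be sketched with ordinary gluster/mount commands. All hostnames, volume names, and paths below are made up for illustration:

```shell
# Rough sketch of the "accelerator node" idea: the accelerator is a
# GlusterFS client of the real cluster and a GlusterFS server to the
# real clients.

# On the accelerator: mount the backing volume as an ordinary client.
mount -t glusterfs realcluster:/bigvol /mnt/backing

# Export a fast local volume (SSD- or tmpfs-backed) to the real clients.
gluster volume create fastvol accel:/ssd/brick force
gluster volume start fastvol

# Real clients mount the accelerator instead of the cluster directly.
mount -t glusterfs accel:/fastvol /mnt/data
```

What is NOT shown, and is the hard part, is the logic that moves active chunks between /mnt/backing and the fastvol brick while keeping the two views consistent; the commands above only wire up the non-consistent prototype topology.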


Keep the ideas coming! 
_______________________________________________ 
Gluster-users mailing list 
Gluster-users at gluster.org 
http://supercolony.gluster.org/mailman/listinfo/gluster-users 


_______________________________________________
Gluster-devel mailing list
Gluster-devel at nongnu.org
https://lists.nongnu.org/mailman/listinfo/gluster-devel


