[Gluster-devel] [Gluster-users] PLEASE READ ! We need your opinion. GSOC-2014 and the Gluster community

John Mark Walker johnmark at gluster.org
Tue Mar 18 15:38:21 UTC 2014


Thanks, Kaushal!

The next steps are the following:

- Find out the deadlines for student proposals. If we're going with the Fedora umbrella, we need to get any student proposals submitted very soon.
- Write up proposals on the Fedora project ideas page.
- Submit proposals, along with proposed mentors.
- Track down Vipul and see if he's still interested - I believe he said he was working with KP? KP - can you confirm this?

We need to move quickly if we have any hope of getting projects submitted for this year. Please ask me for any help you need - if you don't know the Fedora folks involved, I can intro you.

Thanks, everyone!

-JM


----- Original Message -----
> I had a discussion with some developers here in the office regarding
> this. We put together a list of ideas which we thought could be suitable
> for student projects. I've added these to [1], but I'm also posting
> them here for more visibility.
> 
> (I've tried to arrange the list in descending order of difficulty, as I
> see it.)
> 
> . Glusterd services high availability
>     Glusterd should restart the processes it manages (bricks, NFS
> server, self-heal daemon and quota daemon) whenever it detects that
> they have died. (A minimal supervisor sketch follows this list.)
> . glusterfsiostat - top-like utility for GlusterFS
>     These are client-side tools which will display stats from the
> io-stats translator. I'm not currently sure of the difference between
> the two.
> . oVirt GUI for stats
>     Have pretty graphs and tables in oVirt for the GlusterFS top and
> profile commands.
> . monitoring integrations - Munin and others
>     The more monitoring support we have for GlusterFS, the better.
> . More compression algorithms for the compression xlator
>     The on-wire compression translator should be extended to support
> more compression algorithms. Ideally it should be pluggable. (See the
> plugin-registry sketch after this list.)
> . Cinder GlusterFS backup driver
>     Write a driver for Cinder, the OpenStack block storage service, to
> allow backups onto GlusterFS volumes.
> . rsockets - sockets for RDMA transport
>     Coding for RDMA using the familiar sockets API should lead to a
> more robust RDMA transport.
> . data import tool
>     Create a tool which will allow importing already-existing data in
> the brick directories into the Gluster volume. This is most likely
> going to be a special rebalance process.
> . rebalance improvements
>     Improve rebalance performance.
> . Improve the meta translator
>     The meta xlator provides a /proc-like interface to GlusterFS
> xlators. We could further improve this and make it a part of the
> standard volume graph.
> . geo-rep using a REST API
>     This might be suitable for geo-replication over a WAN. Using
> rsync/ssh over a WAN isn't ideal.
> . quota using the underlying fs quota
>     GlusterFS quota is currently maintained completely in GlusterFS's
> namespace using xattrs. We could make use of the quota capabilities of
> the underlying fs (XFS) for better performance.
> . snapshot pluggability
>     Snapshot should be able to make use of snapshot support provided
> by btrfs for example.
> . compression at rest
>     Lessons learnt while implementing encryption at rest can be applied
> to compression at rest.
> . file-level deduplication
>     GlusterFS works on files, so why not have dedup at the file level
>     as well?
> . composition xlator for small files
>     Merge small files into a designated large file using our own custom
> semantics. This can improve our small-file performance. (A toy packing
> sketch follows this list.)
> . multi master geo-rep
>     Nothing much to say here. This has been discussed many times.
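> 
> As a rough illustration of the glusterd high-availability item at the top
> of the list, here's a minimal supervisor-loop sketch in Python. This is
> not glusterd code; the pidfile paths and restart commands are invented
> for illustration, and the real work would happen in C inside glusterd.
> 
>     import os, subprocess, time
> 
>     MANAGED = {
>         # name -> (pidfile, restart command); values here are made up
>         "brick":     ("/var/run/gluster/brick.pid", ["glusterfsd", "..."]),
>         "self-heal": ("/var/run/gluster/shd.pid",   ["glusterfs", "..."]),
>     }
> 
>     def is_alive(pid):
>         """Return True if a process with this pid still exists."""
>         try:
>             os.kill(pid, 0)          # signal 0 only checks for existence
>             return True
>         except OSError:
>             return False
> 
>     while True:
>         for name, (pidfile, cmd) in MANAGED.items():
>             try:
>                 pid = int(open(pidfile).read().strip())
>             except (IOError, ValueError):
>                 pid = None
>             if pid is None or not is_alive(pid):
>                 print("%s is not running, respawning" % name)
>                 try:
>                     subprocess.Popen(cmd)   # glusterd would respawn it here
>                 except OSError:
>                     pass                    # binary not present in this sketch
>         time.sleep(5)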
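> 
> Similarly, for the "pluggable" compression idea, here's a toy Python
> registry showing the shape of a plugin interface. The actual xlator is C
> code inside GlusterFS; the names below (register_codec and friends) are
> made up for illustration.
> 
>     import zlib, bz2
> 
>     _codecs = {}
> 
>     def register_codec(name, compress_fn, decompress_fn):
>         """Register a compression algorithm under a short name."""
>         _codecs[name] = (compress_fn, decompress_fn)
> 
>     # Two built-in examples; a new algorithm plugs in the same way.
>     register_codec("zlib", zlib.compress, zlib.decompress)
>     register_codec("bz2",  bz2.compress,  bz2.decompress)
> 
>     def compress(name, payload):
>         return _codecs[name][0](payload)
> 
>     def decompress(name, payload):
>         return _codecs[name][1](payload)
> 
>     # The xlator would pick the codec from a volume option (name illustrative).
>     data = b"x" * 4096
>     assert decompress("zlib", compress("zlib", data)) == data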
> 
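> And for the small-file composition idea near the end of the list, a toy
> sketch of the core trick: append small files into one container file and
> keep an index of (offset, length). The format and class below are
> invented purely to illustrate the concept.
> 
>     class Pack(object):
>         """Toy container: small files appended to one big file plus an index."""
>         def __init__(self, path):
>             self.path = path
>             self.index = {}              # name -> (offset, length)
> 
>         def add(self, name, data):
>             with open(self.path, "ab") as f:
>                 offset = f.tell()        # current end of the container
>                 f.write(data)
>             self.index[name] = (offset, len(data))
> 
>         def read(self, name):
>             offset, length = self.index[name]
>             with open(self.path, "rb") as f:
>                 f.seek(offset)
>                 return f.read(length)
> 
>     p = Pack("/tmp/smallfiles.pack")
>     p.add("a.txt", b"hello")
>     p.add("b.txt", b"world")
>     assert p.read("a.txt") == b"hello"
> 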
> Any comments on this list?
> ~kaushal
> 
> [1] http://www.gluster.org/community/documentation/index.php/Projects
> 
> On Tue, Mar 18, 2014 at 9:07 AM, Lalatendu Mohanty <lmohanty at redhat.com>
> wrote:
> > On 03/13/2014 11:49 PM, John Mark Walker wrote:
> >>
> >> ----- Original Message -----
> >>
> >>> Welcome, Carlos.  I think it's great that you're taking initiative here.
> >>
> >> +1 - I love enthusiastic fresh me^H^H^H^H^H^H^H^Hcommunity members! :)
> >>
> >>
> >>> However, it's also important to set proper expectations for what a GSoC
> >>> intern
> >>> could reasonably be expected to achieve.  I've seen some amazing stuff
> >>> out of
> >>> GSoC, but if we set the bar too high then we end up with incomplete code
> >>> and
> >>> the student doesn't learn much except frustration.
> >>
> >> This. The reason we haven't really participated in GSoC is not because we
> >> don't want to - it's because it's exceptionally difficult for a project of
> >> our scope, but that doesn't mean there aren't any possibilities. As an
> >> example, last year the Open Source Lab at OSU worked with a student to
> >> create an integration with Ganeti, which was mostly successful, and I
> >> think
> >> work has continued on that project. That's an example of a project with
> >> the
> >> right scope.
> >
> >
> > IMO integration projects are ideal fits for GSoC. I can see some
> > information in the Trello backlog, under "Ecosystem Integration", but
> > I'm not sure of their current status. I think we should take another
> > look at these and see if something can be done through GSoC.
> >
> >
> >>>> 3) Accelerator node project. Some storage solutions out there offer an
> >>>> "accelerator node", which is, in short, an extra node with a lot of RAM,
> >>>> possibly fast disks (SSDs), that works like a proxy to the regular
> >>>> volumes. Active chunks of files are moved there, logs (ZIL-style) are
> >>>> recorded on fast media, among other things. There is NO active project
> >>>> or Trello entry for this, because it is something I started discussing
> >>>> with a few fellows just a couple of days ago. I thought of starting to
> >>>> play with RAM disks (tmpfs) as scratch disks, but since we have an
> >>>> opportunity to do something more efficient, or at the very least start
> >>>> it, why not?
> >>>
> >>> Looks like somebody has read the Isilon marketing materials.  ;)
> >>>
> >>> A full production-level implementation of this, with cache consistency
> >>> and
> >>> so on, would be a major project.  However, a non-consistent prototype
> >>> good
> >>> for specific use cases - especially Hadoop, as Jay mentions - would be
> >>> pretty easy to build.  Having a GlusterFS server (for the real clients)
> >>> also be a GlusterFS client (to the real cluster) is pretty
> >>> straightforward.
> >>> Testing performance would also be a significant component of this, and
> >>> IMO
> >>> that's something more developers should learn about early in their
> >>> careers.
> >>> I encourage you to keep thinking about how this could be turned into a
> >>> real
> >>> GSoC proposal.
> >>
> >> Excellent. This has possibilities.
> >>
> >> Another possibility is in the mobile app space. I think it would be
> >> awesome to port GFAPI to Android, for example. Or to make use of the
> >> Python or Ruby bindings for GFAPI to create a server-side RESTful API
> >> that a mobile app can access.
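> >>
> >> To make the GFAPI idea a bit more concrete, here's a rough Python sketch
> >> of a read-only REST endpoint on top of the libgfapi Python bindings. The
> >> binding's module and method names below (gluster.gfapi.Volume, mount,
> >> fopen) are assumptions and may differ between releases; treat it as a
> >> sketch of the shape rather than a tested example.
> >>
> >>     from wsgiref.simple_server import make_server
> >>     from gluster import gfapi            # assumed binding module name
> >>
> >>     vol = gfapi.Volume("gluster.example.com", "appdata")
> >>     vol.mount()                          # assumed: attach to the volume
> >>
> >>     def app(environ, start_response):
> >>         """GET /<path> returns that file's contents from the volume."""
> >>         path = environ.get("PATH_INFO", "/").lstrip("/")
> >>         try:
> >>             with vol.fopen(path, "rb") as f:     # assumed file helper
> >>                 body = f.read()
> >>             start_response("200 OK",
> >>                            [("Content-Type", "application/octet-stream")])
> >>             return [body]
> >>         except OSError:
> >>             start_response("404 Not Found",
> >>                            [("Content-Type", "text/plain")])
> >>             return [b"not found"]
> >>
> >>     make_server("", 8080, app).serve_forever()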
> >>
> >> -JM
> >>
> >>
> >>
> >> _______________________________________________
> >> Gluster-users mailing list
> >> Gluster-users at gluster.org
> >> http://supercolony.gluster.org/mailman/listinfo/gluster-users
> >
> >
> > _______________________________________________
> > Gluster-users mailing list
> > Gluster-users at gluster.org
> > http://supercolony.gluster.org/mailman/listinfo/gluster-users
> 
> _______________________________________________
> Gluster-devel mailing list
> Gluster-devel at nongnu.org
> https://lists.nongnu.org/mailman/listinfo/gluster-devel
> 



