[Gluster-devel] Introducing Heketi: Storage Management Framework with Plugins for GlusterFS volumes

Jay Vyas jayunit100 at gmail.com
Thu Jun 18 13:33:12 UTC 2015


Thanks for letting us know about this so early on; I don't fully understand the plugin functionality, so I have some general questions, but I'm very interested in this.

0) I notice it's largely written in Go, compared to the Gluster ecosystem, which is mostly Python and C. Any reason why? I like Go, but I'm curious whether there are technical requirements driving this and whether Go might become a more integral part of the Gluster core in the future.

1) Is Heketi something that would allow things like RAID and LVM to be moved from core dependencies into extensions, thus modularizing Gluster's core? (A rough sketch of what I'm imagining is just after these questions.)

2) Will Heketi actually be co-evolving / working hand in hand with Gluster, so that some of the storage administration functionality in Gluster's code base is moved into a broader framework?
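
To make questions 1 and 2 a bit more concrete, here is the kind of plugin interface I'm imagining, as a rough Go sketch (all names hypothetical, not taken from the Heketi code base):

// Rough, hypothetical sketch of a pluggable "brick provider" API --
// LVM, plain directories, ZFS, etc. could each sit behind it.
package plugin

// BrickSpec describes what the framework asks a backend for.
type BrickSpec struct {
    Node string // host that will serve the brick
    Size uint64 // requested size in bytes
}

// Brick is what the backend hands back to be given to glusterd.
type Brick struct {
    Node string
    Path string // mount point or directory to use as the brick
}

// BrickProvider is the single API standard a backend would implement.
type BrickProvider interface {
    CreateBrick(spec BrickSpec) (Brick, error)
    DeleteBrick(b Brick) error
}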


> On Jun 18, 2015, at 9:07 AM, Joseph Fernandes <josferna at redhat.com> wrote:
> 
> Agreed, we need not be dependent on ONE technology for the above.
> But LVM is a strong contender as a single, stable underlying technology that provides all of the following.
> We can make it plugin-based :), so that people who have LVM and are happy with it can use it.
> And we can still have other technology plugins developed in parallel, but let's have a single API standard defined for all.
> 
> ~Joe
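
For illustration, an LVM plugin behind such a single API standard might reduce to something like this sketch (hypothetical names; it assumes lvcreate and an existing volume group, and elides mkfs, mounting, and real error handling):

// Hypothetical sketch of an LVM-backed plugin behind a common API;
// not actual Heketi code.
package lvmplugin

import (
    "fmt"
    "os/exec"
)

// Provider carves bricks out of a single volume group.
type Provider struct {
    VolumeGroup string
}

// CreateBrick provisions a logical volume of the requested size (in MiB)
// and returns its device path; a real plugin would also run mkfs.xfs,
// mount it on the target node, and report the mount point instead.
func (p *Provider) CreateBrick(name string, sizeMiB uint64) (string, error) {
    cmd := exec.Command("lvcreate",
        "-L", fmt.Sprintf("%dm", sizeMiB),
        "-n", name,
        p.VolumeGroup)
    if out, err := cmd.CombinedOutput(); err != nil {
        return "", fmt.Errorf("lvcreate failed: %v: %s", err, out)
    }
    return "/dev/" + p.VolumeGroup + "/" + name, nil
}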
> 
> ----- Original Message -----
> From: "Jeff Darcy" <jdarcy at redhat.com>
> To: "Joseph Fernandes" <josferna at redhat.com>
> Cc: "Luis Pabon" <lpabon at redhat.com>, "Gluster Devel" <gluster-devel at gluster.org>, "John Spray" <jspray at redhat.com>
> Sent: Thursday, June 18, 2015 5:15:37 PM
> Subject: Re: Introducing Heketi: Storage Management Framework with Plugins for GlusterFS volumes
> 
>> LVM or Volume Manager Dependencies:
>> 1) SNAPSHOTS: Gluster snapshots are LVM-based
> 
> The current implementation is LVM-centric, which is one reason uptake has
> been so low.  The intent was always to make it more generic, so that other
> mechanisms could be used as well.
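
For example, the snapshot path could sit behind a small provider interface, with LVM thin snapshots as just one backend. A minimal Go sketch, hypothetical names only, not the current glusterd code:

// Hypothetical sketch: decouple snapshot handling from LVM so that
// ZFS, btrfs, or other mechanisms can plug in behind the same calls.
package snapshot

// Provider is what a snapshot backend would implement.
type Provider interface {
    // Create takes a point-in-time snapshot of the brick at brickPath
    // and returns a path where the snapshot can be mounted or browsed.
    Create(brickPath, snapName string) (string, error)

    // Delete removes a previously created snapshot.
    Delete(brickPath, snapName string) error
}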
> 
> 
> 
>> 2) PROVISIONING and ENFORCEMENT:
>> As of today, Gluster does not have any control over the size of a brick. It
>> will consume the (XFS) mount point given to it as a brick without checking
>> how much it needs to consume. LVM (or another volume manager) is required to
>> do space provisioning per brick and to enforce limits on brick sizes.
> 
> Some file systems have quota, or we can enforce our own.
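
As a sketch of the "enforce our own" option, assuming one brick per backing filesystem, this hypothetical check uses statfs rather than anything LVM-specific:

// Hypothetical sketch: enforce a brick size limit in userspace by
// checking how much of the brick's backing filesystem is in use.
// Linux-specific (uses golang.org/x/sys/unix).
package enforce

import "golang.org/x/sys/unix"

// OverLimit reports whether the filesystem behind brickPath has used
// more than limitBytes, i.e. the brick should stop accepting new data.
func OverLimit(brickPath string, limitBytes uint64) (bool, error) {
    var st unix.Statfs_t
    if err := unix.Statfs(brickPath, &st); err != nil {
        return false, err
    }
    used := (st.Blocks - st.Bfree) * uint64(st.Bsize)
    return used > limitBytes, nil
}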
> 
>> 3) STORAGE SEGREGATION:
>> LVM pools can be used for storage segregation, i.e. having primary storage
>> pools and secondary pools (for Gluster replicas), so that we can carve out
>> proper space from the physical disks attached to each node.
>> At a high level (i.e. for Heketi's user), disk space can be viewed as storage
>> pools (i.e. by aggregating disk space per pool per node using glusterd).
>> To start with, we can have a primary pool and a secondary pool (for Gluster
>> replicas), where each file-serving node in the cluster participates in these
>> pools via its local LVM pools.
> 
> This functionality in no way depends on LVM.  In many cases, mere
> subdirectories are sufficient.
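
For illustration, a pool in that scheme could be nothing more than a directory convention (hypothetical sketch, no volume manager involved):

// Hypothetical sketch: "primary" and "secondary" pools as plain
// subdirectories of an existing brick filesystem -- no LVM required.
package pools

import (
    "os"
    "path/filepath"
)

// MakePools creates one subdirectory per pool under root; each one can
// then be handed to glusterd as a brick path, e.g.
// MakePools("/bricks/disk1", "primary", "secondary").
func MakePools(root string, pools ...string) error {
    for _, p := range pools {
        if err := os.MkdirAll(filepath.Join(root, p), 0o755); err != nil {
            return err
        }
    }
    return nil
}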
> 
>> 4) DATA PROTECTION:
>>   Further data protection using LVM RAID. Pools can be marked to have RAID
>>   support on them, courtesy of LVM RAID.
>>   https://access.redhat.com/documentation/en-US/Red_Hat_Enterprise_Linux/6/html/Logical_Volume_Manager_Administration/raid_volumes.html
> 
> Given that we already have replication, erasure coding, etc. many users would
> prefer not to reduce storage utilization even further with RAID.  Others
> would prefer to get the same functionality without LVM, e.g. with ZFS.  That's
> why RAID has always been - and should remain - optional.
> 
> It's fine that we *can* use LVM features when and where they're available.
> Building in *dependencies* on it has been a mistake every time, and repeating
> a mistake doesn't make it anything else.
> 
> _______________________________________________
> Gluster-devel mailing list
> Gluster-devel at gluster.org
> http://www.gluster.org/mailman/listinfo/gluster-devel

