[Gluster-devel] metadata-volume for gluster

Kotresh Hiremath Ravishankar khiremat at redhat.com
Wed Apr 15 06:55:33 UTC 2015


Hi Rajesh,

A couple more questions:

1. How is automounting of the metadata-volume taken care of when a node reboots? Is it advisable to add it to /etc/fstab?
2. How does it get mounted when a new peer (node) is added?
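
For context, one common way to get a glusterfs mount back after a node
reboot is an /etc/fstab entry along these lines (the volume name and
mount point below are purely illustrative; nothing is decided yet):

    # hypothetical metadata volume; _netdev defers mounting until the
    # network is up
    localhost:/meta_vol  /var/run/gluster/metadata  glusterfs  defaults,_netdev  0 0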

Thanks and Regards,
Kotresh H R

----- Original Message -----
> From: "Rajesh Joseph" <rjoseph at redhat.com>
> To: "Jeff Darcy" <jdarcy at redhat.com>
> Cc: "Gluster Devel" <gluster-devel at gluster.org>
> Sent: Wednesday, April 8, 2015 4:11:24 PM
> Subject: Re: [Gluster-devel] metadata-volume for gluster
> 
> 
> 
> ----- Original Message -----
> > From: "Jeff Darcy" <jdarcy at redhat.com>
> > To: "Rajesh Joseph" <rjoseph at redhat.com>
> > Cc: "Gluster Devel" <gluster-devel at gluster.org>
> > Sent: Wednesday, April 8, 2015 1:53:38 AM
> > Subject: Re: [Gluster-devel] metadata-volume for gluster
> > 
> > > In gluster 3.7, multiple features (Snapshot scheduler, NFS Ganesha,
> > > Geo-rep) are planning to use an additional volume to store metadata
> > > related to these features. This volume needs to be manually created
> > > and explicitly managed by an admin.
> > > 
> > > I think creating and managing this many metadata volumes would be an
> > > overhead for an admin. Instead, I am proposing a single unified
> > > metadata-volume which can be used by all these features.
> > > 
> > > For simplicity and easier management we are proposing to have a
> > > pre-defined volume name. If needed, this name can be configured using a
> > > global gluster option.
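
For illustration, if the name does become configurable, a cluster-wide
option might look something like this (the option name below is
hypothetical; no such option exists at the time of this thread):

    # hypothetical global option; "meta_vol" is an example name
    gluster volume set all cluster.metadata-volume-name meta_vol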
> > > 
> > > Please let me know if you have any suggestions or comments.
> > 
> > Do these metadata volumes already exist, or are they being added to designs
> > as we speak?  There seem to be a lot of unanswered questions that suggest
> > the latter.  For example...
> 
> This is being added as we speak.
> 
> > 
> > * What replication level(s) do we need?  What "performance" translators
> >   should be left out to ensure consistency?
> 
> Ideally we would want a higher replica count here, but since we only
> support up to x3 replication, we would recommend x3 replication.
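
For illustration, creating such a replica-3 metadata volume would look
roughly like this (host, brick, and volume names are examples only):

    # one brick per node, replicated three ways
    gluster volume create meta_vol replica 3 \
        host1:/bricks/meta host2:/bricks/meta host3:/bricks/meta
    gluster volume start meta_vol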
> 
> It is still under design, so we have not finalized the performance
> translators part. But I think we can leave out most of the performance
> translators for this.
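
For example, most of the client-side performance translators can be
turned off with standard volume options (volume name illustrative):

    # disable caching/performance xlators in favour of consistency
    gluster volume set meta_vol performance.quick-read off
    gluster volume set meta_vol performance.io-cache off
    gluster volume set meta_vol performance.write-behind off
    gluster volume set meta_vol performance.read-ahead off
    gluster volume set meta_vol performance.stat-prefetch off
    gluster volume set meta_vol performance.open-behind off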
> 
> > 
> > * How much storage will we need?  How will it be provisioned and tracked?
> 
> As I said earlier, in the current release it would be created manually by
> system administrators, and therefore managed by them. In a subsequent
> release we can work towards a management framework.
> 
> > 
> > * What nodes would this volume be hosted on?  Does the user have to
> >   (or get to) decide, or do we decide automatically?  What happens as
> >   the cluster grows or shrinks?
> > 
> > * How are the necessary daemons managed?  From glusterd?  What if we
> >   want glusterd itself to use this facility?
> > 
> > * Will there be an API, so the implementation can be changed to be
> >   compatible with similar facilities already scoped out for 4.0?
> > 
> > I like the idea of this being shared infrastructure.  It would also be
> > nice if it can be done with a minimum of administrative overhead.  To
> > do that, though, I think we need a more detailed exploration of the
> > problem(s) we're trying to solve and of the possible solutions.
> 
> I agree that we should do this with minimal administrative overhead, but
> it really requires more detailed investigation and a thorough design. We
> have started exploring in that direction, but we have no workable solution
> as of now.
> 
> My current proposal is simply to reduce the number of manually managed
> metadata volumes required to run gluster.
> 
> Thanks & Regards,
> Rajesh
> _______________________________________________
> Gluster-devel mailing list
> Gluster-devel at gluster.org
> http://www.gluster.org/mailman/listinfo/gluster-devel
> 

