[Gluster-devel] Data classification proposal

Dan Lambright dlambrig at redhat.com
Thu Jun 26 15:54:22 UTC 2014


Implementing brick splitting using LVM would allow you to treat each logical volume (split) as an independent brick. Each split would have its own .glusterfs subdirectory. I think this would help with taking snapshots as well.
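As a rough sketch of what that might look like (the volume group name, sizes, and paths below are made up), each split would be carved out as its own logical volume, given its own filesystem, and mounted as an independent brick; LVM snapshots (lvcreate -s) would then work per split:

    # Hypothetical sketch of LVM-based brick splitting. Assumes a volume
    # group "gluster_vg" with free space already exists and that this runs
    # as root. Names and sizes are illustrative only.
    import os
    import subprocess

    def create_split(vg, name, size, mountpoint):
        # Carve a logical volume out of the volume group for this split.
        subprocess.check_call(["lvcreate", "-L", size, "-n", name, vg])
        dev = "/dev/%s/%s" % (vg, name)
        # Give the split its own filesystem, so when it is started as a
        # brick it gets its own independent .glusterfs subdirectory.
        subprocess.check_call(["mkfs.xfs", dev])
        os.makedirs(mountpoint, exist_ok=True)
        subprocess.check_call(["mount", dev, mountpoint])

    # Two splits of the original brick, each usable as an independent
    # brick and each snapshottable on its own via LVM.
    create_split("gluster_vg", "split1", "100G", "/data/bricka_split1")
    create_split("gluster_vg", "split2", "100G", "/data/bricka_split2")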

----- Original Message -----
From: "Shyamsundar Ranganathan" <srangana at redhat.com>
To: "Krishnan Parthasarathi" <kparthas at redhat.com>
Cc: "Gluster Devel" <gluster-devel at gluster.org>
Sent: Thursday, June 26, 2014 11:13:48 AM
Subject: Re: [Gluster-devel] Data classification proposal

> > > For the short term, wouldn't it be OK to disallow adding a set of
> > > bricks whose count is not a multiple of the group size?
> > 
> > In the *very* short term, yes.  However, I think that will quickly
> > become an issue for users who try to deploy erasure coding because those
> > group sizes will be quite large.  As soon as we implement tiering, our
> > very next task - perhaps even before tiering gets into a release -
> > should be to implement automatic brick splitting.  That will bring other
> > benefits as well, such as variable replication levels to handle the
> > sanlock case, or overlapping replica sets to spread a failed brick's
> > load over more peers.
> > 
> 
> OK. Do you have some initial ideas on how we could 'split' bricks? I ask
> this to see if I can work on splitting bricks while the data
> classification format is being ironed out.

I see brick splitting as creating a logical space for the new aggregate that the brick belongs to. This may not need any data movement, just a logical branch at the root of the brick for each aggregate membership. Are there counterexamples to this?

The exception would be if splitting changes the weightage of the brick across its aggregates, for example size-based weightage for layout assignments, if we are considering schemes of that nature.
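To make that concrete, here is a toy sketch (Python, purely illustrative, not the DHT code) of how size-based weightage could translate into hash-range assignments, where a split contributing half the space of a full brick gets half the range:

    # Illustrative only: give each brick a share of the 32-bit hash space
    # proportional to its size, the way a size-weighted layout might.
    def layout_ranges(brick_sizes):
        total = sum(brick_sizes.values())
        ranges, start = {}, 0
        for brick, size in sorted(brick_sizes.items()):
            span = (2**32 * size) // total
            ranges[brick] = (start, start + span - 1)
            start += span
        # Hand any rounding remainder to the last brick so the full
        # hash space is covered.
        last = sorted(brick_sizes)[-1]
        lo, _ = ranges[last]
        ranges[last] = (lo, 2**32 - 1)
        return ranges

    # A split contributing 100 units to an aggregate next to a 200-unit
    # brick gets one third of the hash range:
    print(layout_ranges({"brickb": 200, "bricka/agg_1_ID": 100}))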

So I see this as follows:

THE_Brick: /data/bricka

Belongs to: aggregate 1 and aggregate 2, so it gets the following structure beneath it:

/data/bricka/agg_1_ID/<data from aggregate 1 goes here>
/data/bricka/agg_2_ID/<data from aggregate 2 goes here>

Future splits of the brick add more aggregate-ID parents (I am not stating where or what this ID is, but assume it is something that distinguishes aggregates), and I would expect the xlator to send requests into its aggregate parent rather than the brick root.
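For illustration, the redirection amounts to something like the sketch below (Python for brevity; the real implementation would live in an xlator, and the aggregate IDs are the placeholders from the example above):

    # Toy sketch of the path redirection idea, not an xlator.
    import os

    BRICK_ROOT = "/data/bricka"

    def brick_path(aggregate_id, volume_relative_path):
        # Requests from an aggregate land under that aggregate's parent
        # directory, never directly under the brick root.
        return os.path.join(BRICK_ROOT, aggregate_id,
                            volume_relative_path.lstrip("/"))

    print(brick_path("agg_1_ID", "/dir1/file1"))
    # -> /data/bricka/agg_1_ID/dir1/file1
    print(brick_path("agg_2_ID", "/dir1/file1"))
    # -> /data/bricka/agg_2_ID/dir1/file1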

One issue that I see with this: if we wanted to snapshot an aggregate, we would end up snapshotting the entire brick.
Another: how do we distinguish the .glusterfs space across the aggregates?

Shyam
_______________________________________________
Gluster-devel mailing list
Gluster-devel at gluster.org
http://supercolony.gluster.org/mailman/listinfo/gluster-devel

