[Gluster-devel] Release 3.10 feature proposal : Volume expansion on tiered volumes.

Shyam srangana at redhat.com
Thu Dec 8 12:35:27 UTC 2016


Hi Hari,

Thanks for posting this issue to be considered part of 3.10.

I have a few questions inline.

Shyam

On 12/08/2016 01:23 AM, Hari Gowtham wrote:
> Hi,
>
> To support add/remove brick on tiered volumes, we are planning to separate
> the tier into a separate process in the service framework and add
> add/remove brick support. Later, users will be able to trigger rebalance
> on tiered volumes (which is not currently possible).

I assume "tier as a separate process" is from the rebalance daemon 
perspective, right? Or is it about separating the xlator code from DHT?
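
For my own understanding, here is a rough sketch of the user-visible
sequence I think this enables, assuming the existing add-brick and
rebalance CLI forms are simply accepted on tiered volumes (the volume
and brick names below are made up):

# Sketch only: assumes the proposal lets the existing add-brick and
# rebalance commands work on a tiered volume; today glusterd rejects
# them once a tier is attached.
import subprocess

def gluster(*args):
    # Run a gluster volume command and fail loudly if it is rejected.
    cmd = ["gluster", "volume"] + list(args)
    print("+ " + " ".join(cmd))
    subprocess.check_call(cmd)

VOL = "tiervol"                      # hypothetical tiered volume
NEW_BRICK = "server1:/bricks/cold3"  # hypothetical new brick

gluster("add-brick", VOL, NEW_BRICK)   # expand the volume
gluster("rebalance", VOL, "start")     # move data onto the new brick
gluster("rebalance", VOL, "status")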

Also, Dan, we would like your comments, as Tier maintainer, on the maturity 
of the proposal below for 3.10 inclusion. Could you also add the 
required labels [2] to the issue as you see fit? If this passes your 
inspection, let us know and I can mark it for the 3.10 milestone in github.

>
> The following are the steps planed to be performed:
>
> *) tier as a service (final stages of code review)

Can we get links to the code, and to the design spec if available, for 
the above (and possibly for the proposal as a whole)?

> *) separating attach tier from add brick, and detach tier from
>    remove brick.
> *) infra to support add/remove brick.
> *) rebalance process on a tiered volume.
> *) a few patches to take care of the issues that will arise,
>    e.g. while adding a brick on a tiered volume, the tier process has to
>    be stopped while the graph switch occurs, and other issues like this.
>
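
On the last point above, just to confirm I am reading the ordering 
constraint correctly, something along these lines (every helper below is 
a hypothetical stand-in, not an actual glusterd function):

# Illustrative only: the callables passed in are hypothetical stand-ins
# for glusterd internals. The point is the ordering: the tier daemon
# must be down across the graph switch that add-brick triggers, and
# restarted once the new graph is in place.
def expand_tiered_volume(volname, new_bricks, stop_tierd, add_bricks,
                         wait_for_graph_switch, start_tierd):
    stop_tierd(volname)                  # quiesce promotion/demotion first
    try:
        add_bricks(volname, new_bricks)  # this triggers the graph switch
        wait_for_graph_switch(volname)
    finally:
        start_tierd(volname)             # resume tiering on the new graph
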
> The whole volume expansion feature will be in an experimental state, while
> the separation of the tier into a separate process in the service framework
> and the separation of attach/detach tier from add/remove brick should be
> stable before the release of 3.10.

What is the mitigation plan in case this does not stabilize in time? Would 
you keep all commits ready but unmerged until it is stable?

This looks like a big change, and also something that has been going on 
for some time now, based on your comments above.

>
> [1] https://github.com/gluster/glusterfs/issues/54
[2] https://github.com/gluster/glusterfs/labels

