[Gluster-users] rebalance and volume commit hash

Nithya Balachandran nbalacha at redhat.com
Tue Jan 24 10:35:00 UTC 2017


On 20 January 2017 at 01:15, Shyam <srangana at redhat.com> wrote:

>
>
> On 01/17/2017 11:40 AM, Piotr Misiak wrote:
>
>>
>> On 17 Jan 2017 at 17:10, Jeff Darcy <jdarcy at redhat.com> wrote:
>>
>>>
>>>> Do you think it is wise to run the rebalance process manually on
>>>> every brick with the current commit hash value?
>>>>
>>>> I didn't do anything with the bricks after the previous rebalance
>>>> run, and I have cluster.weighted-rebalance=off.
>>>>
>>>> My problem is that I have a very big directory structure (millions
>>>> of directories and files) and I have never once completed the
>>>> rebalance process, because I guess it would take weeks or months.
>>>> I'd like to speed it up a bit by not generating a new commit hash
>>>> for the volume during a new rebalance run. Then directories
>>>> rebalanced in the previous run would be untouched during the new
>>>> run. Is that possible?
>>>>
>>>
>>> I'm pretty sure that trying to rebalance on each brick separately will
>>> not work correctly.  Rebalancing smaller pieces of the directory
>>> hierarchy separately, by issuing the appropriate setxattr calls on them
>>> instead of using the CLI, *might* work.  Either way, I think the DHT
>>> experts could provide a better answer.
>>>
>>>
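
A minimal sketch of the per-directory setxattr approach described
above, offered as a sketch rather than a tested procedure: the DHT
virtual xattr key "distribute.fix.layout" and the mount path below are
assumptions to verify against your glusterfs version. Setting the key
through a client mount asks DHT to recompute that one directory's
layout, without migrating any file data.

    # Sketch: recompute one directory's DHT layout via a virtual xattr.
    # Assumption: the key "distribute.fix.layout"; run against a path
    # on a glusterfs client (FUSE) mount, not directly on a brick.
    import os

    def fix_layout(directory):
        # Any non-empty value will do; DHT reacts to the key itself.
        os.setxattr(directory, "distribute.fix.layout", b"yes")

    fix_layout("/mnt/glustervol/some/subdir")  # hypothetical path
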
>> Is it possible to start rebalancing from a particular subdirectory?
>> Do you know how? It would be very useful for me.
>>
>
> Rebalancing a sub-directory is not supported.
>
> Having said that, if we were to support it, then we could rebalance at
> a sub-directory level and retain the volume commit hash as is. The
> commit hash makes a statement only about a directory and its immediate
> children, so retaining the volume commit hash when rebalancing a
> sub-directory is a possibility (it needs a little more thought, but at
> the outset it looks possible).
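
As a rough, conceptual illustration of what the commit hash buys (a
sketch of the idea only, not Gluster source code): when a directory's
layout carries the volume's current commit hash, DHT can trust that
layout and skip the expensive lookup-everywhere for entries missing
from their hashed subvolume.

    # Conceptual sketch only, not Gluster source code.
    def can_skip_lookup_everywhere(dir_commit_hash, volume_commit_hash):
        # A layout stamped with the volume's current commit hash is in
        # step with the last full rebalance, so a miss on the hashed
        # subvolume really means "entry does not exist".
        return dir_commit_hash == volume_commit_hash

    # A sub-directory rebalance that retains the volume commit hash
    # would keep this check valid for directories balanced earlier.
    print(can_skip_lookup_everywhere(0x2A, 0x2A))  # True: skip
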
>
> Further, in your case it looks like rebalance takes a long time. So
> one other option could be to just create the link-to files (or
> complete a tree walk and look up everything) and not move any data.
> This should be faster than moving data around, and it satisfies the
> pre-condition for lookup-optimize to function. Of course, this will
> not balance the data, so if you are really looking to expand the
> volume size a full rebalance may be required.
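
A minimal sketch of the lookup-only crawl suggested above, assuming a
glusterfs client mount at a hypothetical /mnt/glustervol and
cluster.lookup-optimize enabled on the volume; stat-ing every entry
forces a named lookup through DHT, which should create link-to files
where needed without moving any data.

    # Sketch of a lookup-only tree walk; paths are illustrative.
    import os

    def lookup_everything(root):
        for dirpath, dirnames, filenames in os.walk(root):
            for name in dirnames + filenames:
                try:
                    # lstat forces a named lookup through DHT for
                    # every entry, without reading or moving data.
                    os.lstat(os.path.join(dirpath, name))
                except OSError:
                    pass  # entry changed mid-walk; skip it

    lookup_everything("/mnt/glustervol")  # hypothetical mount point
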
>
> May I request that a couple of issues be raised for these here [1]?
> Based on other DHT work, we can check and see when we can accommodate
> this (or, as always, patches are welcome).
>
> [1] https://github.com/gluster/glusterfs/issues/new
>
Yes, please do raise an issue and we can take a look at what can be done
here.
