[Gluster-devel] Rebalance improvement design

Benjamin Turner bennyturns at gmail.com
Wed Apr 8 20:40:30 UTC 2015


I have been working on some rebalance perf regression testing. Is there an
RPM with these patches anywhere so that I can try them on my systems? If
not, I'll just build from:

git fetch git://review.gluster.org/glusterfs refs/changes/57/9657/8 && git
cherry-pick FETCH_HEAD
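For reference, turning that checkout into RPMs would look roughly like the
sketch below. This is only an outline under the usual glusterfs build
conventions; the `extras/LinuxRPM` make target and the autotools steps are
assumed to be present in the branch being built.

```shell
# Clone the tree and apply the patch set under review.
git clone git://review.gluster.org/glusterfs && cd glusterfs
git fetch git://review.gluster.org/glusterfs refs/changes/57/9657/8 \
    && git cherry-pick FETCH_HEAD

# Build RPMs (assumes rpm-build and the autotools toolchain are installed).
./autogen.sh && ./configure
make -C extras/LinuxRPM glusterrpms
```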

I will have _at_least_ 10TB of storage, how many TBs of data should I run
with?

-b

On Tue, Apr 7, 2015 at 9:07 AM, Vijay Bellur <vbellur at redhat.com> wrote:

> On 04/07/2015 03:08 PM, Susant Palai wrote:
>
>> Here is one test performed on a 300GB data set; roughly a 100%
>> improvement (about half the run time) was seen.
>>
>> [root at gprfs031 ~]# gluster v i
>>
>> Volume Name: rbperf
>> Type: Distribute
>> Volume ID: 35562662-337e-4923-b862-d0bbb0748003
>> Status: Started
>> Number of Bricks: 4
>> Transport-type: tcp
>> Bricks:
>> Brick1: gprfs029-10ge:/bricks/gprfs029/brick1
>> Brick2: gprfs030-10ge:/bricks/gprfs030/brick1
>> Brick3: gprfs031-10ge:/bricks/gprfs031/brick1
>> Brick4: gprfs032-10ge:/bricks/gprfs032/brick1
>>
>>
>> Added server 32 and started rebalance force.
>>
>> Rebalance stat for new changes:
>> [root at gprfs031 ~]# gluster v rebalance rbperf status
>>          Node   Rebalanced-files      size   scanned   failures   skipped      status   run time in secs
>>     localhost              74639    36.1GB    297319          0         0   completed            1743.00
>>  172.17.40.30              67512    33.5GB    269187          0         0   completed            1395.00
>> gprfs029-10ge              79095    38.8GB    284105          0         0   completed            1559.00
>> gprfs032-10ge                  0    0Bytes         0          0         0   completed             402.00
>> volume rebalance: rbperf: success:
>>
>> Rebalance stat for old model:
>> [root at gprfs031 ~]# gluster v rebalance rbperf status
>>          Node   Rebalanced-files      size   scanned   failures   skipped      status   run time in secs
>>     localhost              86493    42.0GB    634302          0         0   completed            3329.00
>> gprfs029-10ge              94115    46.2GB    687852          0         0   completed            3328.00
>> gprfs030-10ge              74314    35.9GB    651943          0         0   completed            3072.00
>> gprfs032-10ge                  0    0Bytes    594166          0         0   completed            1943.00
>> volume rebalance: rbperf: success:
>>
>>
> This is interesting. Thanks for sharing & well done! Maybe we should
> attempt a much larger data set and see how we fare there :).
>
> Regards,
>
> Vijay
>
>
> _______________________________________________
> Gluster-devel mailing list
> Gluster-devel at gluster.org
> http://www.gluster.org/mailman/listinfo/gluster-devel
>
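The roughly 2x speedup reported above can be sanity-checked from the
per-node run times in the two status outputs. A quick sketch (run times
copied from the tables above; overall wall-clock time is taken to be
bounded by the slowest node, which is an assumption about how rebalance
completes):

```python
# Per-node rebalance run times in seconds, from the status output above.
new_model = {"localhost": 1743.0, "172.17.40.30": 1395.0,
             "gprfs029-10ge": 1559.0, "gprfs032-10ge": 402.0}
old_model = {"localhost": 3329.0, "gprfs029-10ge": 3328.0,
             "gprfs030-10ge": 3072.0, "gprfs032-10ge": 1943.0}

# Wall-clock completion is bounded by the slowest node in each run.
speedup = max(old_model.values()) / max(new_model.values())
print(f"speedup: {speedup:.2f}x")  # ~1.91x, i.e. roughly half the run time
```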