[Gluster-devel] Rebalance improvement design

Susant Palai spalai at redhat.com
Mon Apr 13 04:25:07 UTC 2015


Hi Ben,
  Uploaded a new patch here: http://review.gluster.org/#/c/9657/. We can start perf test on it. :)

Susant

----- Original Message -----
From: "Susant Palai" <spalai at redhat.com>
To: "Benjamin Turner" <bennyturns at gmail.com>
Cc: "Gluster Devel" <gluster-devel at gluster.org>
Sent: Thursday, 9 April, 2015 3:40:09 PM
Subject: Re: [Gluster-devel] Rebalance improvement design

Thanks Ben. An RPM is not available, and I am planning to refresh the patch in two days with some more regression fixes. I think we can run the tests after that. Any larger data set would be good (say 3 to 5 TB).

Thanks,
Susant

----- Original Message -----
From: "Benjamin Turner" <bennyturns at gmail.com>
To: "Vijay Bellur" <vbellur at redhat.com>
Cc: "Susant Palai" <spalai at redhat.com>, "Gluster Devel" <gluster-devel at gluster.org>
Sent: Thursday, 9 April, 2015 2:10:30 AM
Subject: Re: [Gluster-devel] Rebalance improvement design


I have some rebalance perf regression tests I have been working on. Is there an RPM with these patches anywhere so that I can try it on my systems? If not, I'll just build from: 


git fetch git://review.gluster.org/glusterfs refs/changes/57/9657/8 && git cherry-pick FETCH_HEAD



I will have _at_least_ 10TB of storage; how many TB of data should I run with? 


-b 


On Tue, Apr 7, 2015 at 9:07 AM, Vijay Bellur <vbellur at redhat.com> wrote: 




On 04/07/2015 03:08 PM, Susant Palai wrote: 


Here is one test performed on a 300GB data set, where roughly a 100% improvement (about half the run time) was seen. 

[root at gprfs031 ~]# gluster v i 

Volume Name: rbperf 
Type: Distribute 
Volume ID: 35562662-337e-4923-b862-d0bbb0748003 
Status: Started 
Number of Bricks: 4 
Transport-type: tcp 
Bricks: 
Brick1: gprfs029-10ge:/bricks/gprfs029/brick1 
Brick2: gprfs030-10ge:/bricks/gprfs030/brick1 
Brick3: gprfs031-10ge:/bricks/gprfs031/brick1 
Brick4: gprfs032-10ge:/bricks/gprfs032/brick1 


Added server 32 and started rebalance force. 

Rebalance stat for new changes: 
[root at gprfs031 ~]# gluster v rebalance rbperf status 
Node             Rebalanced-files    size      scanned    failures    skipped    status       run time in secs
---------        ----------------    ------    -------    --------    -------    ---------    ----------------
localhost        74639               36.1GB    297319     0           0          completed    1743.00
172.17.40.30     67512               33.5GB    269187     0           0          completed    1395.00
gprfs029-10ge    79095               38.8GB    284105     0           0          completed    1559.00
gprfs032-10ge    0                   0Bytes    0          0           0          completed    402.00
volume rebalance: rbperf: success: 

Rebalance stat for old model: 
[root at gprfs031 ~]# gluster v rebalance rbperf status 
Node             Rebalanced-files    size      scanned    failures    skipped    status       run time in secs
---------        ----------------    ------    -------    --------    -------    ---------    ----------------
localhost        86493               42.0GB    634302     0           0          completed    3329.00
gprfs029-10ge    94115               46.2GB    687852     0           0          completed    3328.00
gprfs030-10ge    74314               35.9GB    651943     0           0          completed    3072.00
gprfs032-10ge    0                   0Bytes    594166     0           0          completed    1943.00
volume rebalance: rbperf: success: 
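A quick sanity check on the two status tables above (a rough sketch; it assumes the rebalance wall-clock time is bounded by the slowest node, and treats 172.17.40.30 in the new run as the same node reported as gprfs030-10ge in the old run):

```python
# Per-node "run time in secs" values copied from the two tables above.
new_run = {"localhost": 1743.0, "172.17.40.30": 1395.0,
           "gprfs029-10ge": 1559.0, "gprfs032-10ge": 402.0}
old_run = {"localhost": 3329.0, "gprfs029-10ge": 3328.0,
           "gprfs030-10ge": 3072.0, "gprfs032-10ge": 1943.0}

# Overall rebalance time is gated by the slowest participating node.
wall_new = max(new_run.values())   # 1743 s
wall_old = max(old_run.values())   # 3329 s
speedup = wall_old / wall_new

print(f"old: {wall_old:.0f}s  new: {wall_new:.0f}s  speedup: {speedup:.2f}x")
```

With these numbers the speedup comes out to about 1.91x, consistent with the "around 100% improvement (half the time)" claim.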


This is interesting. Thanks for sharing & well done! Maybe we should attempt a much larger data set and see how we fare there :). 

Regards, 


Vijay 


_______________________________________________
Gluster-devel mailing list 
Gluster-devel at gluster.org 
http://www.gluster.org/mailman/listinfo/gluster-devel 


