[Bugs] [Bug 1175214] [RFE] Rebalance Performance Improvements

bugzilla at redhat.com bugzilla at redhat.com
Wed Mar 18 14:16:46 UTC 2015


https://bugzilla.redhat.com/show_bug.cgi?id=1175214



--- Comment #9 from Susant Kumar Palai <spalai at redhat.com> ---
Ran rebalance on a 300GB data set this time, and collected various system
stats (thanks to Manoj Pillai for the tool recommendation).

The stats can be found here:
http://perf1.perf.lab.eng.bos.redhat.com/mpillai/susant_rebalance/perf_reb_results/

Note: the number prefixed to each file name identifies the server it was collected on.


Here is the rebalance status output:

[root at gprfs029 ~]# gluster v rebalance rbperf status
         Node   Rebalanced-files      size   scanned   failures   skipped      status   run time in secs
    localhost              93726    44.6GB    687528          0         0   completed            6108.00
gprfs032-10ge                  0    0Bytes    594060          0         0   completed            2913.00
gprfs030-10ge              74492    37.7GB    651780          0         0   completed            5525.00
gprfs031-10ge              87034    45.0GB    631859          0         0   completed            6065.00
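As a rough sanity check on these numbers, the per-node migration throughput implied by the status output can be computed directly (a quick sketch; it assumes gluster's "GB" here means GiB, i.e. 2^30 bytes):

```shell
# Sketch: per-node migration rate implied by the rebalance status above.
# Assumption: the "size" column is in GiB; values are copied from the table.
for row in "localhost 44.6 6108" "gprfs030-10ge 37.7 5525" "gprfs031-10ge 45.0 6065"; do
    set -- $row
    awk -v node="$1" -v gib="$2" -v secs="$3" \
        'BEGIN { printf "%s: %.1f MiB/s\n", node, gib * 1024 / secs }'
done
```

On these figures each migrating node moved data at roughly 7 to 7.6 MiB/s, which is the kind of baseline this RFE is trying to improve on.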

Here is the volume configuration:
Volume Name: rbperf
Type: Distributed-Replicate
Volume ID: 5bc10510-1092-40d4-b57f-929a5846603e
Status: Started
Snap Volume: no
Number of Bricks: 4 x 2 = 8
Transport-type: tcp
Bricks:
Brick1: gprfs029-10ge:/bricks/gprfs029/brick3
Brick2: gprfs029-10ge:/bricks/gprfs029/brick4
Brick3: gprfs030-10ge:/bricks/gprfs030/brick1
Brick4: gprfs030-10ge:/bricks/gprfs030/brick2
Brick5: gprfs031-10ge:/bricks/gprfs031/brick1
Brick6: gprfs031-10ge:/bricks/gprfs031/brick2
Brick7: gprfs032-10ge:/bricks/gprfs032/brick1
Brick8: gprfs032-10ge:/bricks/gprfs032/brick2
Options Reconfigured:
performance.readdir-ahead: on
snap-max-hard-limit: 256
snap-max-soft-limit: 90
auto-delete: disable
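For anyone reproducing this setup, a volume with the same 4 x 2 layout could be created along these lines (a sketch, not the exact commands used; hostnames and brick paths are taken from the listing above, and replica pairs follow the brick order shown):

```shell
# Sketch: recreate an equivalent distributed-replicate (4 x 2) volume.
# These commands must run on a host in the gluster trusted storage pool.
gluster volume create rbperf replica 2 \
    gprfs029-10ge:/bricks/gprfs029/brick3 gprfs029-10ge:/bricks/gprfs029/brick4 \
    gprfs030-10ge:/bricks/gprfs030/brick1 gprfs030-10ge:/bricks/gprfs030/brick2 \
    gprfs031-10ge:/bricks/gprfs031/brick1 gprfs031-10ge:/bricks/gprfs031/brick2 \
    gprfs032-10ge:/bricks/gprfs032/brick1 gprfs032-10ge:/bricks/gprfs032/brick2
gluster volume start rbperf

# Kick off and monitor the rebalance:
gluster volume rebalance rbperf start
gluster volume rebalance rbperf status
```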


Regards,
Susant
