[Gluster-users] 3.4 volume rebalance status outputting nonsense

Pierre-Francois Laquerre pierre.francois at nec-labs.com
Wed Jul 17 18:29:53 UTC 2013


Ever since upgrading my 25x2 distributed-replicate volume from 3.3.1 to
3.4, "gluster volume rebalance $myvolume status" has been outputting
nonsensical hostnames and statistics:

[root@ml54 ~]# gluster volume rebalance bigdata status
     Node  Rebalanced-files     size   scanned  failures       status  run time in secs
---------  ----------------  -------  --------  --------  -----------  ----------------
localhost                 0   0Bytes   1407934         5  in progress          60013.00
localhost                 0   0Bytes   1407934         5  in progress          60013.00
localhost                 0   0Bytes   1407934         5  in progress          60013.00
localhost                 0   0Bytes   1407934         5  in progress          60013.00
localhost                 0   0Bytes   1407934         5  in progress          60013.00
localhost                 0   0Bytes   1407934         5  in progress          60013.00
localhost                 0   0Bytes   1407934         5  in progress          60013.00
localhost                 0   0Bytes   1407934         5  in progress          60013.00
localhost                 0   0Bytes   1407934         5  in progress          60013.00
localhost                 0   0Bytes   1407934         5  in progress          60013.00
localhost                 0   0Bytes   1407934         5  in progress          60013.00
localhost                 0   0Bytes   1407934         5  in progress          60013.00
localhost                 0   0Bytes   1407934         5  in progress          60013.00
localhost                 0   0Bytes   1407934         5  in progress          60013.00
localhost                 0   0Bytes   1407934         5  in progress          60013.00
localhost                 0   0Bytes   1407934         5  in progress          60013.00
localhost                 0   0Bytes   1407934         5  in progress          60013.00
localhost                 0   0Bytes   1407934         5  in progress          60013.00
localhost                 0   0Bytes   1407934         5  in progress          60013.00
localhost                 0   0Bytes   1407934         5  in progress          60013.00
localhost                 0   0Bytes   1407934         5  in progress          60013.00
     ml59             22698    3.0GB   1022093     23956  in progress          60013.00
volume rebalance: bigdata: success:

The first obvious problem here is that every node is listed as localhost,
when one would expect one line per server. The other weird thing is that
almost every counter is stuck at 0 even though the rebalance has been
running for ~16 hours.
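
For what it's worth, running the status command on every peer over ssh at
least shows whether the nodes agree with each other. A rough sketch of what
I mean (assuming passwordless ssh as root; the peer list below is just a
placeholder for the real pool members):

    #!/bin/bash
    # Run the rebalance status from each peer and compare the outputs.
    # PEERS is a placeholder -- substitute the actual servers in the pool.
    VOLUME=bigdata
    PEERS="ml01 ml26 ml54 ml59"
    for peer in $PEERS; do
        echo "=== $peer ==="
        ssh "$peer" "gluster volume rebalance $VOLUME status"
    done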

Here is the same command run from another server at almost the same time:
[root@ml01 ~]# gluster volume rebalance bigdata status
     Node  Rebalanced-files     size   scanned  failures       status  run time in secs
---------  ----------------  -------  --------  --------  -----------  ----------------
localhost               670    9.4MB   2917849     30813  in progress          60020.00
localhost               670    9.4MB   2917849     30813  in progress          60020.00
localhost               670    9.4MB   2917849     30813  in progress          60020.00
localhost               670    9.4MB   2917849     30813  in progress          60020.00
localhost               670    9.4MB   2917849     30813  in progress          60020.00
localhost               670    9.4MB   2917849     30813  in progress          60020.00
localhost               670    9.4MB   2917849     30813  in progress          60020.00
localhost               670    9.4MB   2917849     30813  in progress          60020.00
localhost               670    9.4MB   2917849     30813  in progress          60020.00
localhost               670    9.4MB   2917849     30813  in progress          60020.00
localhost               670    9.4MB   2917849     30813  in progress          60020.00
localhost               670    9.4MB   2917849     30813  in progress          60020.00
localhost               670    9.4MB   2917849     30813  in progress          60020.00
localhost               670    9.4MB   2917849     30813  in progress          60020.00
localhost               670    9.4MB   2917849     30813  in progress          60020.00
localhost               670    9.4MB   2917849     30813  in progress          60020.00
localhost               670    9.4MB   2917849     30813  in progress          60020.00
localhost               670    9.4MB   2917849     30813  in progress          60020.00
localhost               670    9.4MB   2917849     30813  in progress          60020.00
localhost               670    9.4MB   2917849     30813  in progress          60020.00
localhost               670    9.4MB   2917849     30813  in progress          60020.00
     ml26                 0   0Bytes   1406422         0  in progress          60019.00
volume rebalance: bigdata: success:

Running the command yet again from ml01 a few minutes later:

[root@ml01 ~]# gluster volume rebalance bigdata status
     Node  Rebalanced-files     size   scanned  failures       status  run time in secs
---------  ----------------  -------  --------  --------  -----------  ----------------
localhost               670    9.4MB   2918822     30813  in progress          60458.00
     ml57                 0   0Bytes   1418620         0  in progress          60457.00
     ml59             22908    3.0GB   1025934     24101  in progress          60457.00
     ml47             19142    6.8GB   1119561     44194  in progress          60457.00
     ml56              9789    1.4GB   1276682     78928  in progress          60457.00
     ml55             23265    3.1GB   1002220      8771  in progress          60457.00
     ml26                 0   0Bytes   1419357         0  in progress          60457.00
     ml30             23844    2.6GB    957613     15464  in progress          60457.00
     ml29                 0   0Bytes   1398930         0  in progress          60457.00
     ml46                 0   0Bytes   1414131         0  in progress          60457.00
     ml44                 0   0Bytes   2948809         0  in progress          60457.00
     ml31                 0   0Bytes   1419496         0  in progress          60457.00
     ml25              3711    1.0GB   2929044     48441  in progress          60457.00
     ml43             26180    6.6GB    844032     11576  in progress          60457.00
     ml54                 0   0Bytes   1419523         5  in progress          60457.00
     ml45             26230    2.7GB    732983     19163  in progress          60457.00
     ml40             20623   19.9GB   1452570     38991  in progress          60457.00
     ml52                 0   0Bytes   2932022         0  in progress          60457.00
     ml48                 0   0Bytes   2918224         0  in progress          60457.00
     ml41                 0   0Bytes   2950754         0  in progress          60457.00
     ml51             27097    5.4GB    564416      1206  in progress          60457.00

That output makes more sense, although the zeros for ml57, ml29, ml46, ml44,
ml31, ml54, ml52, ml48 and ml41 still seem a bit worrisome. I couldn't find
anything abnormal in the logs. As of this writing, ml54 is still outputting
the all-localhost nonsense.
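
In case it helps with reproducing this, here is roughly how I went through
the logs (the rebalance log path is what I have on my install and may be
different elsewhere; again the peer list is just a placeholder):

    #!/bin/bash
    # Show the most recent error-level entries from each node's rebalance log.
    # Log path and peer list are assumptions -- adjust for your setup.
    VOLUME=bigdata
    PEERS="ml01 ml26 ml54 ml59"
    for peer in $PEERS; do
        echo "=== $peer ==="
        ssh "$peer" "grep ' E ' /var/log/glusterfs/${VOLUME}-rebalance.log | tail -n 20"
    done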

Has anyone else encountered this issue?

Pierre-Francois



