[Bugs] [Bug 1764119] gluster rebalance status doesn't show detailed information when a node is rebooted

bugzilla at redhat.com bugzilla at redhat.com
Tue Oct 22 09:40:12 UTC 2019


https://bugzilla.redhat.com/show_bug.cgi?id=1764119

Sanju <srakonde at redhat.com> changed:

           What    |Removed                     |Added
----------------------------------------------------------------------------
           Assignee|bugs at gluster.org            |srakonde at redhat.com



--- Comment #1 from Sanju <srakonde at redhat.com> ---
Description of problem:
=================
When a rebalance is in progress, we can see detailed info as below:
[root at rhs-gp-srv11 glusterfs]# gluster v rebal ctime-distrep-rebal status
                                    Node Rebalanced-files          size       scanned      failures       skipped               status  run time in h:m:s
                               ---------      -----------   -----------   -----------   -----------   -----------         ------------     --------------
                                 server2                0        0Bytes             0             0             0          in progress        0:00:00
                                 server3             6744        73.7MB         48580             0             0          in progress        0:04:41
                               localhost             6209        97.5MB         45174             0             0          in progress        0:04:41
The estimated time for rebalance to complete will be unavailable for the first
10 minutes.
volume rebalance: ctime-distrep-rebal: success


However, when a node is rebooted, this detailed info is no longer shown; the
command only displays the following:
[root at rhs-gp-srv11 glusterfs]# gluster v rebal ctime-distrep-rebal status
volume rebalance: ctime-distrep-rebal: success


This is a problem if a user wants to know exactly how many files have been
rebalanced and how many have failed.

Version-Release number of selected component (if applicable):
=============
mainline

How reproducible:
=============
consistent

Steps to Reproduce:
1. create a 3x3 volume
2. do some I/O from a client
3. issue a remove-brick to shrink it to 2x3
4. while the rebalance is in progress, reboot one of the nodes
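The steps above can be sketched as a shell session. The host names and brick paths below are hypothetical placeholders, and the commands assume a trusted storage pool has already been formed between the servers; this is a repro sketch, not a verbatim transcript from the report.

```shell
# Hypothetical hosts (server1..server3) and brick paths; assumes peers are
# already probed into a trusted storage pool.
VOL=ctime-distrep-rebal

# 1. Create a 3x3 distributed-replicated volume and start it.
gluster volume create $VOL replica 3 \
    server1:/bricks/b1 server2:/bricks/b1 server3:/bricks/b1 \
    server1:/bricks/b2 server2:/bricks/b2 server3:/bricks/b2 \
    server1:/bricks/b3 server2:/bricks/b3 server3:/bricks/b3
gluster volume start $VOL

# 2. Mount the volume on a client and generate some I/O.
mount -t glusterfs server1:/$VOL /mnt/$VOL
for i in $(seq 1 1000); do
    dd if=/dev/zero of=/mnt/$VOL/file$i bs=64k count=16
done

# 3. Remove one replica set to shrink the volume to 2x3; this starts a
#    rebalance that migrates data off the removed bricks.
gluster volume remove-brick $VOL \
    server1:/bricks/b3 server2:/bricks/b3 server3:/bricks/b3 start

# 4. While the migration is running, reboot one of the nodes, then check
#    the per-node status from a surviving node.
gluster volume rebalance $VOL status
```

For a remove-brick-driven migration, the per-node progress can also be checked with `gluster volume remove-brick <bricks> status`; the bug is that after a reboot the detailed per-node rows are dropped from the output.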

Actual results:
================
rebalance status doesn't show detailed info

Expected results:
=============
detailed info should be shown even if a node has been rebooted
