[Gluster-users] Remove Brick Rebalance Hangs With No Activity

Timothy Orme torme at ancestry.com
Fri Oct 25 18:51:18 UTC 2019


Hello All,

I'm trying to remove a set of bricks from our cluster.  I've done this operation a few times now with success, but on one set of bricks the operation starts and never seems to make progress.  It just sits here:

                                    Node Rebalanced-files          size       scanned      failures       skipped               status  run time in h:m:s
                               ---------      -----------   -----------   -----------   -----------   -----------         ------------     --------------
             ip-10-158-10-1.ec2.internal                0        0Bytes             0             0             0          in progress        0:22:35
             ip-10-158-10-2.ec2.internal                0        0Bytes             0             0             0          in progress        0:22:35
             ip-10-158-10-3.ec2.internal                0        0Bytes             0             0             0          in progress        0:22:35
Rebalance estimated time unavailable. Please try again later.
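
For reference, the removal was started and monitored with the usual remove-brick commands, roughly like the following (the volume name is taken from the logs below; the brick paths here are placeholders, not our actual brick paths):

    # start removing the bricks; Gluster should begin migrating data off them
    gluster volume remove-brick scratch \
        ip-10-158-10-1.ec2.internal:/data/brick1/scratch \
        ip-10-158-10-2.ec2.internal:/data/brick1/scratch \
        ip-10-158-10-3.ec2.internal:/data/brick1/scratch \
        start

    # check migration progress; this is what produced the output above
    gluster volume remove-brick scratch \
        ip-10-158-10-1.ec2.internal:/data/brick1/scratch \
        ip-10-158-10-2.ec2.internal:/data/brick1/scratch \
        ip-10-158-10-3.ec2.internal:/data/brick1/scratch \
        status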

The rebalance logs on the servers don't seem to indicate any issues.  I see no error statements or anything.  The servers themselves also seem very idle.  CPU and network activity are stuck near 0, whereas during other removals they spiked almost immediately.

There's almost no activity in the log either.  The only thing I've seen is a set of messages like:

[2019-10-25 18:42:21.000753] I [MSGID: 0] [dht-rebalance.c:4309:gf_defrag_total_file_size] 0-scratch-dht: local subvol: scratch-replicate-2,cnt = 596361801728
[2019-10-25 18:42:21.000799] I [MSGID: 0] [dht-rebalance.c:4313:gf_defrag_total_file_size] 0-scratch-dht: Total size files = 596361801728
[2019-10-25 18:42:21.000808] I [dht-rebalance.c:4355:dht_file_counter_thread] 0-dht: tmp data size =596361801728
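
For completeness, this is roughly how I'm watching that log (assuming the default per-volume rebalance log location; the path may differ on other setups):

    # follow the rebalance log for the "scratch" volume on one of the servers
    tail -f /var/log/glusterfs/scratch-rebalance.log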

Any idea what might be happening?

Thanks,
Tim


