[Gluster-users] Rebalance never seems to start

Atin Mukherjee amukherj at redhat.com
Wed Mar 11 08:50:38 UTC 2015


Nithya/Susant/Raghavendra G/Shyam can answer this; CCing them. To
analyze the issue, please attach the glusterd & rebalance
logs as well.
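
The glusterd log is typically /var/log/glusterfs/etc-glusterfs-glusterd.vol.log
and the rebalance log /var/log/glusterfs/<VOLNAME>-rebalance.log on each node.
Something like the following, run on every node, should collect both for this
volume (the default file names assumed here can vary with version/packaging):

# tar czf gluster-logs-$(hostname -s).tar.gz \
      /var/log/glusterfs/etc-glusterfs-glusterd.vol.log \
      /var/log/glusterfs/rhevtst_dr2_g_data_01-rebalance.log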

~Atin

On 03/11/2015 01:50 PM, Jesper Led Lauridsen TS Infra server wrote:
> Hi,
> 
> I forced a rebalance on a volume yesterday, but it never seemed to start. I did it for two reasons.
> 
> - One, I suspected something was not right, because prior to running this forced rebalance a rebalance seemed to have been running forever and never ended. When I asked for its status, all I got was "volume rebalance: rhevtst_dr2_g_data_01: success:", with no information on files, run time etc. I ended up restarting the gluster service, which resulted in some of my RHEV guests running on this volume now failing to start.
> 
> - Two, after the restart of the gluster service I added 4 new bricks (brick2) and wanted to test whether my assumption that the rebalance never ends was true (commands roughly as sketched below).
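> 
> (Roughly, what I ran was the standard add-brick and rebalance commands, something like:
> 
> # gluster volume add-brick rhevtst_dr2_g_data_01 \
>       glustore01.net.dr.dk:/bricks/brick2/rhevtst_dr2_g_data_01 \
>       glustore02.net.dr.dk:/bricks/brick2/rhevtst_dr2_g_data_01 \
>       glustore03.net.dr.dk:/bricks/brick2/rhevtst_dr2_g_data_01 \
>       glustore04.net.dr.dk:/bricks/brick2/rhevtst_dr2_g_data_01
> # gluster volume rebalance rhevtst_dr2_g_data_01 start force
> 
> with the brick paths as listed in the volume info below.)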
> 
> Current status is that the rebalance never seems to start or stop. Any help on what is causing this and how to fix it is much appreciated. I can't find anything in the logs.
> 
> Regards
> Jesper
> 
> # gluster volume rebalance rhevtst_dr2_g_data_01 status
>                                     Node Rebalanced-files          size       scanned      failures       skipped               status   run time in secs
>                                ---------      -----------   -----------   -----------   -----------   -----------         ------------     --------------
>                                localhost                0        0Bytes             0             0             0          in progress               0.00
>                     glustore04.net.dr.dk                0        0Bytes             0             0             0          in progress               0.00
>                     glustore03.net.dr.dk                0        0Bytes             0             0             0          in progress               0.00
>                     glustore02.net.dr.dk                0        0Bytes             0             0             0          in progress               0.00
> volume rebalance: rhevtst_dr2_g_data_01: success:
> 
> # gluster volume info rhevtst_dr2_g_data_01
> Volume Name: rhevtst_dr2_g_data_01
> Type: Distributed-Replicate
> Volume ID: c7f03606-623a-4808-91bf-71e1a77dc390
> Status: Started
> Number of Bricks: 4 x 2 = 8
> Transport-type: tcp
> Bricks:
> Brick1: glustore01.net.dr.dk:/bricks/brick1/rhevtst_dr2_g_data_01
> Brick2: glustore02.net.dr.dk:/bricks/brick1/rhevtst_dr2_g_data_01
> Brick3: glustore03.net.dr.dk:/bricks/brick1/rhevtst_dr2_g_data_01
> Brick4: glustore04.net.dr.dk:/bricks/brick1/rhevtst_dr2_g_data_01
> Brick5: glustore01.net.dr.dk:/bricks/brick2/rhevtst_dr2_g_data_01
> Brick6: glustore02.net.dr.dk:/bricks/brick2/rhevtst_dr2_g_data_01
> Brick7: glustore03.net.dr.dk:/bricks/brick2/rhevtst_dr2_g_data_01
> Brick8: glustore04.net.dr.dk:/bricks/brick2/rhevtst_dr2_g_data_01
> Options Reconfigured:
> features.quota-deem-statfs: on
> features.quota: on
> storage.owner-gid: 36
> storage.owner-uid: 36
> cluster.server-quorum-type: server
> cluster.quorum-type: none
> network.remote-dio: enable
> cluster.eager-lock: enable
> performance.stat-prefetch: off
> performance.io-cache: off
> performance.read-ahead: off
> performance.quick-read: off
> auth.allow: 10.101.13.*,10.101.40.*
> user.cifs: disable
> nfs.disable: on
> network.ping-timeout: 20
> 
> # gluster volume status rhevtst_dr2_g_data_01
> Status of volume: rhevtst_dr2_g_data_01
> Gluster process                                         Port    Online  Pid
> ------------------------------------------------------------------------------
> Brick glustore01.net.dr.dk:/bricks/brick1/rhevtst_dr2_g
> _data_01                                                49154   Y       2711
> Brick glustore02.net.dr.dk:/bricks/brick1/rhevtst_dr2_g
> _data_01                                                49154   Y       2630
> Brick glustore03.net.dr.dk:/bricks/brick1/rhevtst_dr2_g
> _data_01                                                49152   Y       2766
> Brick glustore04.net.dr.dk:/bricks/brick1/rhevtst_dr2_g
> _data_01                                                49152   Y       2664
> Brick glustore01.net.dr.dk:/bricks/brick2/rhevtst_dr2_g
> _data_01                                                49155   Y       13208
> Brick glustore02.net.dr.dk:/bricks/brick2/rhevtst_dr2_g
> _data_01                                                49156   Y       35645
> Brick glustore03.net.dr.dk:/bricks/brick2/rhevtst_dr2_g
> _data_01                                                49154   Y       27491
> Brick glustore04.net.dr.dk:/bricks/brick2/rhevtst_dr2_g
> _data_01                                                49154   Y       58593
> Self-heal Daemon on localhost                           N/A     Y       13230
> Quota Daemon on localhost                               N/A     Y       34236
> Self-heal Daemon on glustore03.net.dr.dk                N/A     Y       27515
> Quota Daemon on glustore03.net.dr.dk                    N/A     Y       44608
> Self-heal Daemon on glustore04.net.dr.dk                N/A     Y       58613
> Quota Daemon on glustore04.net.dr.dk                    N/A     Y       10585
> Self-heal Daemon on glustore02.net.dr.dk                N/A     Y       36132
> Quota Daemon on glustore02.net.dr.dk                    N/A     Y       53737
> 
> Task Status of Volume rhevtst_dr2_g_data_01
> ------------------------------------------------------------------------------
> Task                 : Rebalance
> ID                   : 7a4a6099-73cd-49c8-957e-bb207cf8137e
> Status               : in progress
> 
> 
> 
> 

-- 
~Atin

