[Gluster-users] slave is rebalancing, master is not?
M S Vishwanath Bhat
msvbhat at gmail.com
Mon Jun 8 07:07:43 UTC 2015
On 5 June 2015 at 20:46, Dr. Michael J. Chudobiak <mjc at avtechpulse.com>
wrote:
> I seem to have an issue with my replicated setup.
>
> The master says no rebalancing is happening, but the slave says there is
> (sort of). The master notes the issue:
>
> [2015-06-05 15:11:26.735361] E
> [glusterd-utils.c:9993:glusterd_volume_status_aggregate_tasks_status]
> 0-management: Local tasks count (0) and remote tasks count (1) do not
> match. Not aggregating tasks status.
>
> The slave shows some odd messages like this:
> [2015-06-05 14:44:56.525402] E [glusterfsd-mgmt.c:1494:mgmt_getspec_cbk]
> 0-glusterfs: failed to get the 'volume file' from server
>
> I want the supposed rebalancing to stop, so I can add bricks.
>
> Any idea what is going on, and how to fix it?
>
> Both servers were recently upgraded from Fedora 21 to 22.
>
> Status output is below.
>
> - Mike
>
>
>
> Master: [root@karsh ~]# /usr/sbin/gluster volume status
> Status of volume: volume1
> Gluster process                      Port     Online  Pid
> ------------------------------------------------------------------------------
> Brick karsh:/gluster/brick1/data     49152    Y       4023
> Brick xena:/gluster/brick2/data      49152    Y       1719
> Brick karsh:/gluster/brick3/data     49153    Y       4015
> Brick xena:/gluster/brick4/data      49153    Y       1725
> NFS Server on localhost              2049     Y       4022
> Self-heal Daemon on localhost        N/A      Y       4034
> NFS Server on xena                   2049     Y       24550
> Self-heal Daemon on xena             N/A      Y       24557
>
> Task Status of Volume volume1
> ------------------------------------------------------------------------------
> There are no active volume tasks
>
>
> [root@xena glusterfs]# /usr/sbin/gluster volume status
> Status of volume: volume1
> Gluster process                        Port     Online  Pid
> ------------------------------------------------------------------------------
> Brick karsh:/gluster/brick1/data       49152    Y       4023
> Brick xena:/gluster/brick2/data        49152    Y       1719
> Brick karsh:/gluster/brick3/data       49153    Y       4015
> Brick xena:/gluster/brick4/data        49153    Y       1725
> NFS Server on localhost                2049     Y       24550
> Self-heal Daemon on localhost          N/A      Y       24557
> NFS Server on 192.168.0.240            2049     Y       4022
> Self-heal Daemon on 192.168.0.240      N/A      Y       4034
>
> Task Status of Volume volume1
> ------------------------------------------------------------------------------
> Task : Rebalance
> ID : f550b485-26c4-49f8-b7dc-055c678afce8
> Status : in progress
>
> [root@xena glusterfs]# gluster volume rebalance volume1 status
> volume rebalance: volume1: success:
>
This is weird. Did you start the rebalance yourself? What does "gluster volume
rebalance volume1 status" say? Also check that both nodes are properly
connected, using "gluster peer status".
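For reference, the checks would look roughly like this (the exact output
formatting varies a bit between gluster versions):

    # on either node; the other peer should show
    # "State: Peer in Cluster (Connected)"
    gluster peer status

    # shows per-node progress of the rebalance task
    gluster volume rebalance volume1 status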
If it says completed/stopped, you can go ahead and add the bricks. Can you
also check whether a rebalance process is running on your second server (xena)?
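If a stale task really is stuck on xena, something along these lines should
expose it and clear it (the brick paths in add-brick are only placeholders
for whatever new bricks you plan to use):

    # on xena: the rebalance daemon runs as a glusterfs process,
    # so a lingering one should show up here
    ps aux | grep -i '[r]ebalance'

    # explicitly stopping the task may clear the stale entry
    gluster volume rebalance volume1 stop

    # once no task is active, add bricks as usual
    # (if volume1 is replica 2, new bricks go in pairs)
    gluster volume add-brick volume1 karsh:/gluster/brick5/data xena:/gluster/brick6/data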
BTW, there is *no* master and slave in a single gluster volume :)
Best Regards,
Vishwanath