[Gluster-users] Out of space, rebalance and possibly split-brain issues

Lysa Milch Lysa.Milch at blackboard.com
Tue Mar 18 00:58:24 UTC 2014


Hello All,

I ran the rebalance on the second server while the first one was still in progress.  The status shows progress on both, but on the second server one of the bricks is still 100% full after hours of rebalancing.
The first server has redistributed data across its bricks much better.

Is this a wait and see scenario, or has something gone wrong?
Please advise,

Thank you.

From: Lysa Milch <lysa.milch at blackboard.com>
Date: Monday, March 17, 2014 3:04 PM
To: Kaushal M <kshlmster at gmail.com>
Cc: gluster-users at gluster.org
Subject: Re: [Gluster-users] Out of space, rebalance and possibly split-brain issues

Great.   Given this information, is it safe/recommended to run the rebalance on the second peer while the first one is still going?

From: Kaushal M <kshlmster at gmail.com>
Date: Tue, 18 Mar 2014 00:05:52 +0530
To: Lysa Milch <lysa.milch at blackboard.com>
Cc: gluster-users at gluster.org
Subject: Re: [Gluster-users] Out of space, rebalance and possibly split-brain issues

Rebalance in gluster-3.2 isn't distributed. The rebalance process is
started only on the peer where the rebalance start command was run.
This is unlike gluster-3.3 and above, where rebalance is distributed
and a rebalance process is started on every peer that has bricks of
the volume.
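So on 3.2 the rebalance would need to be started and checked separately on each server that holds full bricks. A rough sketch (using the volume name myvol from your report; not verbatim 3.2 output):

# on the first server, where a rebalance is already running:
gluster volume rebalance myvol status

# on the second server, start (and then watch) its own rebalance process:
gluster volume rebalance myvol start
gluster volume rebalance myvol status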

~kaushal

On Mon, Mar 17, 2014 at 11:58 PM, Lysa Milch <Lysa.Milch at blackboard.com> wrote:
All,

This morning I noticed that although my gluster volume was only at 60% usage, two
individual bricks (one on each replication server) were at 100%.  I ran a
rebalance on the first server, and am seeing (what I hope to be) good
progress.

/dev/xvdc    1.0T  399G  626G  39%  /brick1
/dev/xvdf    1.0T  398G  627G  39%  /brick3
/dev/xvdg    1.0T  299G  726G  30%  /brick5

brick5 was initially at 100%, so this is all well.
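The matching bricks on the second server can be checked the same way (a sketch, assuming /brick2, /brick4 and /brick6 are separate mounts on that box, matching the brick paths in the volume info below):

# second server's bricks, per "gluster volume info" below; assumes each is its own mount
df -h /brick2 /brick4 /brick6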

My question is: the full brick on my second gluster server is still at 100%,
and if I run

gluster volume rebalance myvol status

on the second box, no rebalance shows as running.

1)  Is this normal, or do I have another problem I am not aware of?
2)  Should I start a rebalance on the second box while the first is running,
or should I wait?

Thank you for any information you can share.

gluster --version
glusterfs 3.2.5 built on Jan 31 2012 07:39:59


gluster peer  status
Number of Peers: 1

Hostname: myhost.domain.com
Uuid: 00000000-0000-0000-0000-000000000000
State: Establishing Connection (Connected)


gluster volume info

Volume Name: myvol
Type: Distributed-Replicate
Status: Started
Number of Bricks: 3 x 2 = 6
Transport-type: tcp
Bricks:
Brick1: host1.domain.com:/brick1
Brick2: host2.domain.com:/brick2
Brick3: host1.domain.com:/brick3
Brick4: host2.domain.com:/brick4
Brick5: host1.domain.com:/brick5
Brick6: host2.domain.com:/brick6




