[Gluster-users] Out of space, rebalance and possibly split-brain issues

Lysa Milch Lysa.Milch at blackboard.com
Mon Mar 17 18:28:28 UTC 2014


All,

This morning I noticed that although my gluster volume was only at 60% usage overall, two individual bricks (one on each replica server) were at 100%. I ran a rebalance on the first server, and am seeing what I hope is good progress:

/dev/xvdc                                          1.0T  399G  626G  39% /brick1
/dev/xvdf                                          1.0T  398G  627G  39% /brick3
/dev/xvdg                                          1.0T  299G  726G  30% /brick5

brick5 was initially at 100%, so this is all well.
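For anyone else trying to spot this, a quick way to flag nearly-full bricks is to filter df output by the usage column (a minimal sketch; the sample output and the 90% threshold are just illustrations, not from my actual system):

```shell
# Flag any brick mount at or above 90% usage.
# Sample df output is inlined so the snippet is self-contained;
# in practice you would pipe real `df /brick*` output into awk.
df_output="/dev/xvdc 1.0T 399G 626G 39% /brick1
/dev/xvdf 1.0T 398G 627G 39% /brick3
/dev/xvdg 1.0T 299G 726G 100% /brick5"

# Field 5 is Use%, field 6 is the mount point; strip the % before comparing.
echo "$df_output" | awk '{ gsub(/%/, "", $5); if ($5 + 0 >= 90) print $6, "is at", $5 "%" }'
```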

My question is: the 100% brick on my second gluster server is still at 100%, and if I run

gluster volume rebalance myvol status

on the second box, it reports that no rebalance is running.

1)  Is this normal, or do I have another problem I am not aware of?
2)  Should I start a rebalance on the second box while the first is running, or should I wait?
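For concreteness, the check I am running on each box looks like the following (the `gluster` function here is a stub standing in for the real CLI, so the snippet runs without a live cluster; on a real node you would drop the stub and call the real binary):

```shell
# Stub for the real gluster CLI so the control flow can be shown standalone.
# Here it pretends no rebalance is running on this node.
gluster() {
  echo "rebalance not started"
}

status=$(gluster volume rebalance myvol status)
case "$status" in
  *"in progress"*)
    echo "a rebalance is already running on this node" ;;
  *)
    # Would then consider: gluster volume rebalance myvol start
    echo "no rebalance running on this node" ;;
esac
```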

Thank you for any information you can share.
gluster --version
glusterfs 3.2.5 built on Jan 31 2012 07:39:59


gluster peer  status
Number of Peers: 1

Hostname: myhost.domain.com
Uuid: 00000000-0000-0000-0000-000000000000
State: Establishing Connection (Connected)


gluster volume info

Volume Name: myvol
Type: Distributed-Replicate
Status: Started
Number of Bricks: 3 x 2 = 6
Transport-type: tcp
Bricks:
Brick1: host1.domain.com:/brick1
Brick2: host2.domain.com:/brick2
Brick3: host1.domain.com:/brick3
Brick4: host2.domain.com:/brick4
Brick5: host1.domain.com:/brick5
Brick6: host2.domain.com:/brick6



