[Gluster-users] replication and balancing issues
Kiebzak, Jason M.
jk3149 at cumc.columbia.edu
Thu Dec 4 15:08:05 UTC 2014
It seems that I have two issues:
1) Data is not balanced between all bricks
2) One replication "pair" is not staying in sync
I have four servers/peers, each with one brick, all running 3.6.1. There are two volumes, each set up as a distributed-replicated volume. Below, I've included some info. All daemons are running. The four peers were all added at the same time.
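For reference, I'm judging "all daemons are running" from the usual status commands; I can post their full output if that would help:

  # gluster peer status
  # gluster volume status volume1
  # gluster volume status volume2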
Problem 1) For volume1, the peer1/peer2 set has 236G, while peer3 has 3.9T. Shouldn't it be split more evenly, close to 2T on each set of servers? A similar issue is seen with volume2, but the total data set (and thus the difference) is not as large.
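My guess is that the fix here involves a rebalance, since DHT places whole files by filename hash, so I assume the relevant commands would be along these lines (I have not run them yet):

  # gluster volume rebalance volume1 start
  # gluster volume rebalance volume1 status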
Problem 2) Peer3 and peer4 should be replicas of each other. Peer1 and peer2 have identical disk usage, whereas peer3 and peer4 are egregiously out of sync. Data on both peer3 and peer4 continues to grow (I am actively migrating 50T to volume1).
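For the replica pair, I assume self-heal is the thing to check, something like the following (again, just a sketch; I haven't triggered a full heal yet):

  # gluster volume heal volume1 info
  # gluster volume heal volume1 statistics heal-count
  # gluster volume heal volume1 full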
`gluster volume info` gives this:
Volume Name: volume1
Type: Distributed-Replicate
Volume ID: bf461760-c412-42df-9e1d-7db7f793d344
Status: Started
Number of Bricks: 2 x 2 = 4
Transport-type: tcp
Bricks:
Brick1: ip1:/data/volume1
Brick2: ip2:/data/volume1
Brick3: ip3:/data/volume1
Brick4: ip4:/data/volume1
Options Reconfigured:
features.quota: on
auth.allow: serverip
Volume Name: volume2
Type: Distributed-Replicate
Volume ID: 54a8dbee-387f-4a61-9f67-3e2accb83072
Status: Started
Number of Bricks: 2 x 2 = 4
Transport-type: tcp
Bricks:
Brick1: ip1:/data/volume2
Brick2: ip2:/data/volume2
Brick3: ip3:/data/volume2
Brick4: ip4:/data/volume2
Options Reconfigured:
auth.allow: serverip
If I do a `# du -h --max-depth=1` on each peer, I get this:
Peer1
236G /data/volume1
177G /data/volume2
Peer2
236G /data/volume1
177G /data/volume2
Peer3
3.9T /data/volume1
179G /data/volume2
Peer4
524G /data/volume1
102G /data/volume2
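If it's more useful than du, I assume I can also pull per-brick usage directly from gluster with:

  # gluster volume status volume1 detail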