[Gluster-users] replication and balancing issues

Kiebzak, Jason M. jk3149 at cumc.columbia.edu
Thu Dec 4 20:43:45 UTC 2014


As a follow-up:

I created another distributed-replicated volume - two-brick replica sets, distributed across two pairs of servers (four servers in all) - same config as mentioned below. I started pouring data into it, and here's the output from `du`:
    peer1 - 47G
    peer2 - 47G
    peer3 - 47G
    peer4 - 24G

Peer1 and Peer2 should be mirrored, and peer3 and peer4 should be mirrored.
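
(For reference, a 2x2 layout like this one would be created along the following lines - the volume name, server names, and brick paths are placeholders, not the real ones:)

    # gluster volume create newvol replica 2 server1:/data/newvol server2:/data/newvol server3:/data/newvol server4:/data/newvol

With replica 2, bricks pair up in the order given, so server1/server2 form one replica set and server3/server4 the other.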

This time, the data seems more balanced - except that peer4 continues to lag far behind, just as in the example below.
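
If I understand the heal commands right, `gluster volume heal <volname> info` lists the entries still pending self-heal on each brick (the volume name below is a placeholder for this new volume). A long list on the peer3/peer4 bricks would suggest self-heal lag rather than uneven distribution:

    # gluster volume heal newvol info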

Any suggestions would be appreciated.
Jason

From: gluster-users-bounces at gluster.org [mailto:gluster-users-bounces at gluster.org] On Behalf Of Kiebzak, Jason M.
Sent: Thursday, December 04, 2014 10:08 AM
To: gluster-users at gluster.org
Subject: [Gluster-users] replication and balancing issues

It seems that I have two issues:


1) Data is not balanced between all bricks

2) One replication "pair" is not staying in sync

I have four servers/peers, each with one brick, all running 3.6.1. There are two volumes, each a distributed-replicated volume. Below, I've included some info. All daemons are running. The four peers were all added at the same time.

Problem 1) For volume1, the peer1/peer2 set has 236G, while peer3 has 3.9T. Shouldn't it be split more evenly - close to 2T on each set of servers? A similar issue is seen with volume2, but the total data set (and thus the difference) is not as large.
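
If this is DHT hash skew - files are placed by a hash of the filename, so a few very large files can all land on the same replica pair - then, as far as I understand, a rebalance can be checked and kicked off like this (a sketch only; rebalance mainly matters after adding or removing bricks):

    # gluster volume rebalance volume1 status
    # gluster volume rebalance volume1 start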

Problem 2) Peer3 and peer4 should be replicas of each other. Peer1 and peer2 have identical disk usage, whereas peer3 and peer4 are egregiously out of sync. Data on both peer3 and peer4 continues to grow (I am actively migrating 50T to volume1).
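
If it helps, I believe the pending self-heals on that pair can be listed per brick (and any split-brain files shown), and a full heal forced, with:

    # gluster volume heal volume1 info
    # gluster volume heal volume1 info split-brain
    # gluster volume heal volume1 full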


`gluster volume info` gives this:
    Volume Name: volume1
    Type: Distributed-Replicate
    Volume ID: bf461760-c412-42df-9e1d-7db7f793d344
    Status: Started
    Number of Bricks: 2 x 2 = 4
    Transport-type: tcp
    Bricks:
    Brick1: ip1:/data/volume1
    Brick2: ip2:/data/volume1
    Brick3: ip3:/data/volume1
    Brick4: ip4:/data/volume1
    Options Reconfigured:
    features.quota: on
    auth.allow: serverip

    Volume Name: volume2
    Type: Distributed-Replicate
    Volume ID: 54a8dbee-387f-4a61-9f67-3e2accb83072
    Status: Started
    Number of Bricks: 2 x 2 = 4
    Transport-type: tcp
    Bricks:
    Brick1: ip1:/data/volume2
    Brick2: ip2:/data/volume2
    Brick3: ip3:/data/volume2
    Brick4: ip4:/data/volume2
    Options Reconfigured:
    auth.allow: serverip

If I do a `# du -h --max-depth=1` on each peer, I get this:
    Peer1
        236G    /data/volume1
        177G    /data/volume2
    Peer2
        236G    /data/volume1
        177G    /data/volume2
    Peer3
        3.9T    /data/volume1
        179G    /data/volume2
    Peer4
        524G    /data/volume1
        102G    /data/volume2
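
(As a cross-check on those numbers, per-brick `df` can be compared as well, since `du` and `df` can diverge, e.g. with sparse files; /data is assumed here to be the brick mount point:)

    # df -h /data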

