[Gluster-users] Problems repairing volume

Kevin Bridges kevin@cyberswat.com
Fri Jul 17 14:15:01 UTC 2015


I lost one of my bricks and attempted to rebuild it.  I did not understand
what I was doing and created a mess.  I'm looking for guidance so that I
don't create a bigger mess.

I believe that the gluster setup is relatively simple.  It's a two-brick
replicated volume (gluster01 & gluster02).  I lost gluster02 and attempted
to replace it.  Now that it is replaced, the files on it do not seem to match
what is on the brick that I did not lose.  I would like to repair these
bricks and then add more storage capacity to the volume.
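
For context, my (possibly wrong) understanding is that on a replicated
volume the self-heal commands are the normal way to compare the two copies
and pull a rebuilt brick back in sync, so this is roughly what I was
planning to run:

  gluster volume heal nmd info    # list entries that still differ between the bricks
  gluster volume heal nmd full    # trigger a full self-heal crawl of the volume

If that is the wrong approach for a brick that was recreated from scratch,
please let me know.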

Below is the output of `df -h`, `gluster peer status`, `gluster volume
info`, and `gluster volume rebalance nmd status` for each of the servers.
I'm concerned by the output of `gluster peer status` and `gluster volume
rebalance nmd status`.
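
If I'm reading the rebalance error below correctly, rebalance only applies
once a volume distributes data across more than one replica set, so my
assumption is that adding capacity would mean adding bricks in pairs (to
keep replica 2) and then starting a rebalance, roughly like this
(gluster03/gluster04 are hypothetical new servers, not machines I have yet):

  gluster volume add-brick nmd gluster03.newmediadenver.com:/srv/sdb1/nmd gluster04.newmediadenver.com:/srv/sdb1/nmd
  gluster volume rebalance nmd start

Please correct me if expanding the volume before the replicas are back in
sync is a bad idea.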

Any help is greatly appreciated.

glusterfs 3.7.2 built on Jun 23 2015 12:13:13

[root@gluster01 /]# df -h
Filesystem      Size  Used Avail Use% Mounted on
/dev/xvde       7.9G  2.3G  5.3G  30% /
tmpfs           3.7G     0  3.7G   0% /dev/shm
/dev/xvdf1       63G   38G   23G  62% /srv/sdb1
[root@gluster01 /]# gluster peer status
Number of Peers: 1

Hostname: 10.0.2.85
Uuid: 5f75bd77-0faf-4fb8-9819-83326c4f77f7
State: Peer in Cluster (Connected)
Other names:
gluster02.newmediadenver.com
[root@gluster01 /]# gluster volume info all

Volume Name: nmd
Type: Replicate
Volume ID: 62bec597-b479-4bfd-88dc-44f5bb88d737
Status: Started
Number of Bricks: 1 x 2 = 2
Transport-type: tcp
Bricks:
Brick1: gluster01.newmediadenver.com:/srv/sdb1/nmd
Brick2: gluster02.newmediadenver.com:/srv/sdb1/nmd
Options Reconfigured:
performance.readdir-ahead: on
[root@gluster01 /]# gluster volume info nmd

Volume Name: nmd
Type: Replicate
Volume ID: 62bec597-b479-4bfd-88dc-44f5bb88d737
Status: Started
Number of Bricks: 1 x 2 = 2
Transport-type: tcp
Bricks:
Brick1: gluster01.newmediadenver.com:/srv/sdb1/nmd
Brick2: gluster02.newmediadenver.com:/srv/sdb1/nmd
Options Reconfigured:
performance.readdir-ahead: on
[root@gluster01 /]# gluster volume rebalance nmd status
volume rebalance: nmd: failed: Volume nmd is not a distribute volume or
contains only 1 brick.
Not performing rebalance

[root@gluster02 /]# df -h
Filesystem      Size  Used Avail Use% Mounted on
/dev/xvde       7.9G  2.5G  5.1G  33% /
tmpfs           3.7G     0  3.7G   0% /dev/shm
/dev/xvdh1       63G   30G   31G  50% /srv/sdb1
[root@gluster02 /]# gluster peer status
Number of Peers: 1

Hostname: gluster01.newmediadenver.com
Uuid: afb3e1c3-de9e-4c06-ba5c-5551b1d7030e
State: Peer in Cluster (Connected)
[root@gluster02 /]# gluster volume info all

Volume Name: nmd
Type: Replicate
Volume ID: 62bec597-b479-4bfd-88dc-44f5bb88d737
Status: Started
Number of Bricks: 1 x 2 = 2
Transport-type: tcp
Bricks:
Brick1: gluster01.newmediadenver.com:/srv/sdb1/nmd
Brick2: gluster02.newmediadenver.com:/srv/sdb1/nmd
Options Reconfigured:
performance.readdir-ahead: on
[root@gluster02 /]# gluster volume info nmd

Volume Name: nmd
Type: Replicate
Volume ID: 62bec597-b479-4bfd-88dc-44f5bb88d737
Status: Started
Number of Bricks: 1 x 2 = 2
Transport-type: tcp
Bricks:
Brick1: gluster01.newmediadenver.com:/srv/sdb1/nmd
Brick2: gluster02.newmediadenver.com:/srv/sdb1/nmd
Options Reconfigured:
performance.readdir-ahead: on
[root@gluster02 /]# gluster volume rebalance nmd status
volume rebalance: nmd: failed: Volume nmd is not a distribute volume or
contains only 1 brick.
Not performing rebalance

Thanks,
Kevin Bridges