[Gluster-users] Pre Validation failed when adding bricks
Cedric Lemarchand
yipikai7 at gmail.com
Tue Dec 13 15:26:44 UTC 2016
Hello,
When I try to add 3 bricks to a working cluster of 3 nodes / 3 bricks in dispersed mode (2+1), it fails like this:
root@gl1:~# gluster volume add-brick vol1 gl4:/data/br1 gl5:/data/br1 gl6:/data/br1
volume add-brick: failed: Pre Validation failed on gl4. Host gl5 not connected
However, all peers are connected and there are no networking issues:
root@gl1:~# gluster peer status
Number of Peers: 5
Hostname: gl2
Uuid: 616f100f-a3f4-46e4-b161-ee5db5a60e26
State: Peer in Cluster (Connected)
Hostname: gl3
Uuid: acb828b8-f4b3-42ab-a9d2-b3e7b917dc9a
State: Peer in Cluster (Connected)
Hostname: gl4
Uuid: 813ad056-5e84-4fdb-ac13-38d24c748bc4
State: Peer in Cluster (Connected)
Hostname: gl5
Uuid: a7933aeb-b08b-4ebb-a797-b8ecbe5a03c6
State: Peer in Cluster (Connected)
Hostname: gl6
Uuid: 63c9a6c1-0adf-4cf5-af7b-b28a60911c99
State: Peer in Cluster (Connected)
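Since the "Host gl5 not connected" message reflects gl4's point of view rather than gl1's, I also probed glusterd reachability directly from gl4 (a quick sketch; it assumes nc is installed and that glusterd listens on its default management port, 24007/tcp):

```shell
# Run on gl4: probe the glusterd management port (24007/tcp by default)
# on every other node; hostnames are those from the peer list above.
for h in gl1 gl2 gl3 gl5 gl6; do
  if nc -z -w 2 "$h" 24007; then
    echo "$h: glusterd reachable"
  else
    echo "$h: glusterd NOT reachable"
  fi
done
```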
When I try a second time, the error is different:
root@gl1:~# gluster volume add-brick vol1 gl4:/data/br1 gl5:/data/br1 gl6:/data/br1
volume add-brick: failed: Pre Validation failed on gl5. /data/br1 is already part of a volume
Pre Validation failed on gl6. /data/br1 is already part of a volume
Pre Validation failed on gl4. /data/br1 is already part of a volume
It seems the previous attempt, even though it failed, created the Gluster attributes on the file system, as shown by attr on gl4/5/6:
Attribute "glusterfs.volume-id" has a 16 byte value for /data/br1
I have already purged Gluster and reformatted the bricks on gl4/5/6, but the issue persists. Any ideas? Did I miss something?
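As a lighter alternative to reformatting, the leftover brick metadata can usually be cleared with setfattr (a sketch; it assumes the attributes live in the trusted namespace, which is where GlusterFS normally stores them, and an error from setfattr simply means that attribute was not present):

```shell
# On each of gl4/gl5/gl6: strip the xattrs and internal metadata that the
# failed add-brick attempt left on the brick directory.
setfattr -x trusted.glusterfs.volume-id /data/br1
setfattr -x trusted.gfid /data/br1
rm -rf /data/br1/.glusterfs   # per-brick GlusterFS metadata directory
```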
Some information:
root@gl1:~# gluster volume info
Volume Name: vol1
Type: Disperse
Volume ID: bb563884-0e2a-4757-9fd5-cb851ba113c3
Status: Started
Snapshot Count: 0
Number of Bricks: 1 x (2 + 1) = 3
Transport-type: tcp
Bricks:
Brick1: gl1:/data/br1
Brick2: gl2:/data/br1
Brick3: gl3:/data/br1
Options Reconfigured:
features.scrub-freq: hourly
features.scrub: Inactive
features.bitrot: off
cluster.disperse-self-heal-daemon: enable
transport.address-family: inet
performance.readdir-ahead: on
nfs.disable: on
root@gl1:~# gluster volume status
Status of volume: vol1
Gluster process TCP Port RDMA Port Online Pid
------------------------------------------------------------------------------
Brick gl1:/data/br1 49152 0 Y 23403
Brick gl2:/data/br1 49152 0 Y 14545
Brick gl3:/data/br1 49152 0 Y 11348
Self-heal Daemon on localhost N/A N/A Y 24766
Self-heal Daemon on gl4 N/A N/A Y 1087
Self-heal Daemon on gl5 N/A N/A Y 1080
Self-heal Daemon on gl3 N/A N/A Y 12321
Self-heal Daemon on gl2 N/A N/A Y 15496
Self-heal Daemon on gl6 N/A N/A Y 1091
Task Status of Volume vol1
------------------------------------------------------------------------------
There are no active volume tasks