[Gluster-users] brick does not exist in volume
empty chai
empty.chai at gmail.com
Mon Jul 1 12:09:42 UTC 2013
Hi all,
I have a Gluster cluster consisting of three machines. Now I want to
move a brick onto a new machine, but the command reports an error.
# ./sbin/gluster volume status datastores
Status of volume: datastores
Gluster process Port Online Pid
------------------------------------------------------------------------------
Brick 192.168.1.1:/opt/data1 24013 Y 2137
Brick 192.168.1.2:/opt/data1 24013 Y 2657
Brick 192.168.1.3:/opt/data1 24013 Y 2014
Brick 192.168.1.1:/opt/data2 24014 Y 2143
Brick 192.168.1.2:/opt/data2 24014 Y 2663
Brick 192.168.1.3:/opt/data2 24014 Y 2020
Brick 192.168.1.1:/opt/data3 24015 Y 2149
Brick 192.168.1.2:/opt/data3 24015 Y 2669
Brick 192.168.1.3:/opt/data3 24015 Y 2026
Brick 192.168.1.1:/opt/data4 24016 Y 2155
Brick 192.168.1.2:/opt/data4 24016 Y 2675
Brick 192.168.1.3:/opt/data4 24016 Y 2032
NFS Server on localhost 38467 Y 9116
NFS Server on 192.168.1.2 38467 Y 22887
NFS Server on 192.168.1.3 38467 Y 5442
NFS Server on 192.168.1.4 38467 Y 52421
# ./sbin/gluster volume replace-brick datastores 192.168.1.3:/opt/data3 192.168.1.4:/opt/data5 start
brick: 192.168.1.3:/opt/data3 does not exist in volume: datastores
Checking the status of individual bricks, every brick apart from Brick1
is also reported as not existing in the volume:
# ./sbin/gluster volume status datastores 192.168.1.2:/opt/data2 mem
No brick 192.168.1.2:/opt/data2 in volume datastores
# ./sbin/gluster volume info
Volume Name: datastores
Type: Distributed-Replicate
Volume ID: 01c4d32f-90cb-458a-8e31-24819d77cd93
Status: Started
Number of Bricks: 6 x 2 = 12
Transport-type: tcp
Bricks:
Brick1: 192.168.1.1:/opt/data1
Brick2: 192.168.1.2:/opt/data1
Brick3: 192.168.1.3:/opt/data1
Brick4: 192.168.1.1:/opt/data2
Brick5: 192.168.1.2:/opt/data2
Brick6: 192.168.1.3:/opt/data2
Brick7: 192.168.1.1:/opt/data3
Brick8: 192.168.1.2:/opt/data3
Brick9: 192.168.1.3:/opt/data3
Brick10: 192.168.1.1:/opt/data4
Brick11: 192.168.1.2:/opt/data4
Brick12: 192.168.1.3:/opt/data4
Options Reconfigured:
diagnostics.brick-log-level: INFO
cluster.self-heal-daemon: off
Who can help me fix this problem?
The systems are running Ubuntu 12.04.2 server with the 64-bit 3.5.0-34
kernel, GlusterFS version 3.3.1.
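For context, the replace-brick sequence I was planning to run follows the
usual GlusterFS 3.3 lifecycle (start the migration, check its status, then
commit); it is the `start` step above that fails. A sketch of the intended
sequence, using my volume and brick paths:

```shell
# Begin migrating data from the old brick to the new one.
gluster volume replace-brick datastores \
    192.168.1.3:/opt/data3 192.168.1.4:/opt/data5 start

# Poll until the migration reports completion.
gluster volume replace-brick datastores \
    192.168.1.3:/opt/data3 192.168.1.4:/opt/data5 status

# Once migration has finished, make the replacement permanent.
gluster volume replace-brick datastores \
    192.168.1.3:/opt/data3 192.168.1.4:/opt/data5 commit
```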