[Gluster-users] Help! Can't replace brick.
Tao Lin
linbaiye at gmail.com
Wed Oct 24 16:31:09 UTC 2012
I've already detached 10.67.15.27, since I found I could not get the replace
operation to move on.
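For completeness, the detach was done with the usual peer command, something
like this (run from one of the remaining nodes):

  # remove the would-be replacement node from the trusted pool
  gluster peer detach 10.67.15.27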
2012/10/25 Tao Lin <linbaiye at gmail.com>
> Hello, glusterfs experts:
> I've been using glusterfs-3.2.6 for months, and it has worked fine. Now I'm
> facing a full-disk (brick) problem. For various reasons, I have to expand
> capacity by replacing the current bricks with new ones instead of adding new bricks.
>
> It seemed okay when I used this command:
> gluster volume replace-brick volume1 10.67.15.35:/media/data1/brick1
> 10.67.15.27:/data1/brick1 start
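> (For reference, my understanding of the full replace-brick lifecycle from
> the 3.2 docs is roughly the sequence below; I never got past the status
> step:)
>
> # begin migrating data from the old brick to the new one
> gluster volume replace-brick volume1 10.67.15.35:/media/data1/brick1 10.67.15.27:/data1/brick1 start
> # poll until migration is reported complete
> gluster volume replace-brick volume1 10.67.15.35:/media/data1/brick1 10.67.15.27:/data1/brick1 status
> # once migration is complete, make the new brick permanent
> gluster volume replace-brick volume1 10.67.15.35:/media/data1/brick1 10.67.15.27:/data1/brick1 commit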
> When I tried to use the command:
> gluster volume replace-brick volume1 10.67.15.35:/media/data1/store4appstore1
> 10.67.15.27:/data1/store4appstore1 status
> however, glusterfs reported "replace-brick status unknown", and
> glusterd's log said:
> [2012-10-24 23:49:34.322215] E
> [glusterd-handler.c:1491:glusterd_handle_replace_brick] 0-: Unable to set
> cli op: 16
> [2012-10-24 23:50:09.606707] E
> [glusterd-handler.c:1491:glusterd_handle_replace_brick] 0-: Unable to set
> cli op: 16
> [2012-10-24 23:50:47.824502] E
> [glusterd-handler.c:1491:glusterd_handle_replace_brick] 0-: Unable to set
> cli op: 16
> [2012-10-24 23:52:12.236797] E
> [glusterd-handler.c:1491:glusterd_handle_replace_brick] 0-: Unable to set
> cli op: 16
> [2012-10-24 23:52:34.137408] E
> [glusterd-handler.c:1491:glusterd_handle_replace_brick] 0-: Unable to set
> cli op: 16
> [2012-10-24 23:54:32.98237] E
> [glusterd-handler.c:1491:glusterd_handle_replace_brick] 0-: Unable to set
> cli op: 16
> [2012-10-24 23:54:48.834254] E
> [glusterd-handler.c:1491:glusterd_handle_replace_brick] 0-: Unable to set
> cli op: 16
> [2012-10-24 23:54:59.119209] E
> [glusterd-handler.c:1491:glusterd_handle_replace_brick] 0-: Unable to set
> cli op: 16
> [2012-10-24 23:56:12.962426] E
> [glusterd-handler.c:1491:glusterd_handle_replace_brick] 0-: Unable to set
> cli op: 16
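>
> Would it be safe to clear the half-finished operation with an abort before
> retrying, i.e. something like:
>
> gluster volume replace-brick volume1 10.67.15.35:/media/data1/store4appstore1 10.67.15.27:/data1/store4appstore1 abort
>
> Or could that make things worse?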
>
> I've also found a strange problem: a node has probed itself.
>
> [root@yq33 ~]# gluster peer status
> Number of Peers: 2
>
> Hostname: 10.67.15.33                        <<<=== it shows itself.
> Uuid: c213f7ab-18c1-40e1-85c8-dd7ae97fad03
> State: Peer in Cluster (Connected)
>
> Hostname: 10.67.15.35
> Uuid: 98949cd6-2b61-4ba8-8b67-76d8b58d4ce8
> State: Peer in Cluster (Connected)
>
> But on another node, it shows correctly.
>
> [root@yq35 ~]# gluster peer status
> Number of Peers: 1
>
> Hostname: 10.67.15.33
> Uuid: c213f7ab-18c1-40e1-85c8-dd7ae97fad03
> State: Accepted peer request (Connected)
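>
> If it helps with diagnosing, the persisted peer info can also be inspected
> directly on disk; on 3.2.x I believe it lives under /etc/glusterd/peers
> (path assumed), with one file per peer named by its UUID:
>
> ls /etc/glusterd/peers
> cat /etc/glusterd/peers/*   # each file carries uuid=, state= and hostname1= lines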
>
> Now I'm really stuck with this problem; can anyone help me out, please?
>
> Regards.
>