[Gluster-users] Issue in Adding/Removing the gluster node

Gaurav Garg ggarg at redhat.com
Fri Feb 19 09:55:13 UTC 2016


Abhishek,

When it sometimes works fine, that means the 2nd board's network connection is reachable from the first node at that moment. You can confirm this by executing the same #gluster peer status command.
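
For example, a healthy run would look roughly like this (a minimal sketch; the hostname and UUID are taken from the logs in this thread, and the exact wording of the output can differ between gluster versions):

# gluster peer status
Number of Peers: 1

Hostname: 10.32.1.144
Uuid: 6adf57dc-c619-4e56-ae40-90e6aef75fe9
State: Peer in Cluster (Connected)

If the 2nd board is unreachable at that moment, the same peer will show up as Disconnected, and brick operations against it can fail.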

Thanks,
Gaurav

----- Original Message -----
From: "ABHISHEK PALIWAL" <abhishpaliwal at gmail.com>
To: "Gaurav Garg" <ggarg at redhat.com>
Cc: gluster-users at gluster.org
Sent: Friday, February 19, 2016 3:12:22 PM
Subject: Re: [Gluster-users] Issue in Adding/Removing the gluster node

Hi Gaurav,

Yes, you are right. I am actually forcefully detaching the node from the
slave, and when we remove the board it gets disconnected from the other board.

But my question is: I am doing this process multiple times; sometimes it works
fine, but sometimes it gives these errors.


You can see the following entries from the cmd_history.log file:

[2016-02-18 10:03:34.497996]  : volume set c_glusterfs nfs.disable on :
SUCCESS
[2016-02-18 10:03:34.915036]  : volume start c_glusterfs force : SUCCESS
[2016-02-18 10:03:40.250326]  : volume status : SUCCESS
[2016-02-18 10:03:40.273275]  : volume status : SUCCESS
[2016-02-18 10:03:40.601472]  : volume remove-brick c_glusterfs replica 1
10.32.1.144:/opt/lvmdir/c2/brick force : SUCCESS
[2016-02-18 10:03:40.885973]  : peer detach 10.32.1.144 : SUCCESS
[2016-02-18 10:03:42.065038]  : peer probe 10.32.1.144 : SUCCESS
[2016-02-18 10:03:44.563546]  : volume add-brick c_glusterfs replica 2
10.32.1.144:/opt/lvmdir/c2/brick force : SUCCESS
[2016-02-18 10:30:53.297415]  : volume status : SUCCESS
[2016-02-18 10:30:53.313096]  : volume status : SUCCESS
[2016-02-18 10:37:02.748714]  : volume status : SUCCESS
[2016-02-18 10:37:02.762091]  : volume status : SUCCESS
[2016-02-18 10:37:02.816089]  : volume remove-brick c_glusterfs replica 1
10.32.1.144:/opt/lvmdir/c2/brick force : FAILED : Incorrect brick
10.32.1.144:/opt/lvmdir/c2/brick for volume c_glusterfs
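
A rough sketch of how the same sequence could be guarded so remove-brick is only
attempted while the peer is reachable (the volume name and brick path are taken
from the logs above; the grep pattern assumes the usual "State: ... (Connected)"
wording of gluster peer status, which may vary between versions):

#!/bin/sh
# Sketch: skip the brick removal if the peer is not currently connected.
PEER=10.32.1.144
BRICK=$PEER:/opt/lvmdir/c2/brick

if gluster peer status | grep -A2 "Hostname: $PEER" | grep -q "(Connected)"; then
    # --mode=script suppresses the interactive y/n confirmation
    gluster --mode=script volume remove-brick c_glusterfs replica 1 $BRICK force
    gluster --mode=script peer detach $PEER
else
    echo "Peer $PEER is not connected; skipping remove-brick" >&2
fi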


On Fri, Feb 19, 2016 at 3:05 PM, Gaurav Garg <ggarg at redhat.com> wrote:

> Hi Abhishek,
>
> It seems your peer 10.32.1.144 got disconnected while doing the remove-brick.
> See the logs below from glusterd:
>
> [2016-02-18 10:37:02.816009] E [MSGID: 106256]
> [glusterd-brick-ops.c:1047:__glusterd_handle_remove_brick] 0-management:
> Incorrect brick 10.32.1.144:/opt/lvmdir/c2/brick for volume c_glusterfs
> [Invalid argument]
> [2016-02-18 10:37:02.816061] E [MSGID: 106265]
> [glusterd-brick-ops.c:1088:__glusterd_handle_remove_brick] 0-management:
> Incorrect brick 10.32.1.144:/opt/lvmdir/c2/brick for volume c_glusterfs
> The message "I [MSGID: 106004]
> [glusterd-handler.c:5065:__glusterd_peer_rpc_notify] 0-management: Peer
> <10.32.1.144> (<6adf57dc-c619-4e56-ae40-90e6aef75fe9>), in state <Peer in
> Cluster>, has disconnected from glusterd." repeated 25 times between
> [2016-02-18 10:35:43.131945] and [2016-02-18 10:36:58.160458]
>
>
>
> If you are facing the same issue now, could you paste the output of the
> # gluster peer status command here?
>
> Thanks,
> ~Gaurav
>
> ----- Original Message -----
> From: "ABHISHEK PALIWAL" <abhishpaliwal at gmail.com>
> To: gluster-users at gluster.org
> Sent: Friday, February 19, 2016 2:46:35 PM
> Subject: [Gluster-users] Issue in Adding/Removing the gluster node
>
> Hi,
>
>
> I am working on a two-board setup where the boards are connected to each
> other. Gluster version 3.7.6 is running, and I have added two bricks in
> replica 2 mode, but when I manually remove (detach) one board from the
> setup I get the following error.
>
> volume remove-brick c_glusterfs replica 1 10.32.1.144:/opt/lvmdir/c2/brick
> force : FAILED : Incorrect brick 10.32.1.144:/opt/lvmdir/c2/brick for
> volume c_glusterfs
>
> Please find the log files attached.
>
>
> Regards,
> Abhishek
>
>
> _______________________________________________
> Gluster-users mailing list
> Gluster-users at gluster.org
> http://www.gluster.org/mailman/listinfo/gluster-users
>



-- 

Regards
Abhishek Paliwal

