[Gluster-users] Issue in Adding/Removing the gluster node
ABHISHEK PALIWAL
abhishpaliwal at gmail.com
Fri Feb 19 11:57:21 UTC 2016
Hi Gaurav,
After the add-brick failure, the following is the output of the
"gluster peer status" command:
Number of Peers: 2
Hostname: 10.32.1.144
Uuid: bbe2a458-ad3d-406d-b233-b6027c12174e
State: Peer in Cluster (Connected)
Hostname: 10.32.1.144
Uuid: bbe2a458-ad3d-406d-b233-b6027c12174e
State: Peer in Cluster (Connected)
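The same hostname and UUID appearing twice suggests a stale duplicate
entry in glusterd's peer store rather than a real second peer. A minimal
check (assuming the default glusterd working directory):

# each peer normally has exactly one file here, named by its UUID
ls /var/lib/glusterd/peers/
cat /var/lib/glusterd/peers/*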
Regards,
Abhishek
On Fri, Feb 19, 2016 at 5:21 PM, ABHISHEK PALIWAL <abhishpaliwal at gmail.com>
wrote:
> Hi Gaurav,
>
> Both boards are connected through the backplane using Ethernet.
>
> This inconsistency also occurs when I am bringing the node back into its
> slot. Sometimes add-brick executes without failure, but sometimes the
> following error occurs:
>
> volume add-brick c_glusterfs replica 2 10.32.1.144:/opt/lvmdir/c2/brick
> force : FAILED : Another transaction is in progress for c_glusterfs. Please
> try again after sometime.
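> This error usually means a previous glusterd transaction still holds the
> cluster lock for the volume. A minimal retry sketch (the retry count and
> sleep interval are arbitrary choices, not gluster defaults):
>
> for i in 1 2 3 4 5; do
>   # retry until the lock is released, then stop
>   gluster volume add-brick c_glusterfs replica 2 10.32.1.144:/opt/lvmdir/c2/brick force && break
>   sleep 10
> done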
>
>
> You can also see the attached logs for add-brick failure scenario.
>
> Please let me know if you need more logs.
>
> Regards,
> Abhishek
>
>
> On Fri, Feb 19, 2016 at 5:03 PM, Gaurav Garg <ggarg at redhat.com> wrote:
>
>> Hi Abhishek,
>>
>> How are you connecting the two boards, and how are you removing one
>> manually? We need to know this because if you remove your 2nd board from
>> the cluster abruptly (hard shutdown), you should not be able to perform a
>> remove-brick operation for the 2nd node from the first node, yet in your
>> case it is happening successfully. Could you check your network connection
>> once again while removing and bringing back your node?
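>> A quick pre-check sketch to run before each remove-brick (the peer
>> address is taken from your logs):
>>
>> ping -c 3 10.32.1.144
>> gluster peer status | grep -A 2 10.32.1.144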
>>
>> Thanks,
>> Gaurav
>>
>> ------------------------------
>> From: "ABHISHEK PALIWAL" <abhishpaliwal at gmail.com>
>> To: "Gaurav Garg" <ggarg at redhat.com>
>> Cc: gluster-users at gluster.org
>> Sent: Friday, February 19, 2016 3:36:21 PM
>>
>> Subject: Re: [Gluster-users] Issue in Adding/Removing the gluster node
>>
>> Hi Gaurav,
>>
>> Thanks for the reply.
>>
>> 1. Here I removed the board manually, and this time it worked fine:
>>
>> [2016-02-18 10:03:40.601472] : volume remove-brick c_glusterfs replica 1
>> 10.32.1.144:/opt/lvmdir/c2/brick force : SUCCESS
>> [2016-02-18 10:03:40.885973] : peer detach 10.32.1.144 : SUCCESS
>>
>> Yes, this time the board was reachable, but how? I don't know, because
>> the board was detached.
>>
>> 2. Here I attached the board, and this time add-brick worked fine:
>>
>> [2016-02-18 10:03:42.065038] : peer probe 10.32.1.144 : SUCCESS
>> [2016-02-18 10:03:44.563546] : volume add-brick c_glusterfs replica 2
>> 10.32.1.144:/opt/lvmdir/c2/brick force : SUCCESS
>>
>> 3. Here I removed the board again, and this time a failure occurred:
>>
>> [2016-02-18 10:37:02.816089] : volume remove-brick c_glusterfs replica 1
>> 10.32.1.144:/opt/lvmdir/c2/brick force : FAILED : Incorrect brick
>> 10.32.1.144:/opt/lvmdir/c2/brick for volume c_glusterfs
>>
>> But here the board was not reachable.
>>
>> Why does this inconsistency occur when I do the same steps multiple times?
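>>
>> One way to make the removal step defensive is to check first whether
>> glusterd still lists the brick, since "Incorrect brick" suggests it no
>> longer does. A hedged sketch (volume name and brick path from the logs):
>>
>> if gluster volume info c_glusterfs | grep -q '10.32.1.144:/opt/lvmdir/c2/brick'
>> then
>>   gluster volume remove-brick c_glusterfs replica 1 10.32.1.144:/opt/lvmdir/c2/brick force
>> fi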
>>
>> Hope you are getting my point.
>>
>> Regards,
>> Abhishek
>>
>> On Fri, Feb 19, 2016 at 3:25 PM, Gaurav Garg <ggarg at redhat.com> wrote:
>>
>>> Abhishek,
>>>
>>> When it sometimes works fine, that means the 2nd board's network
>>> connection is reachable from the first node. You can confirm this by
>>> executing the same #gluster peer status command.
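>>>
>>> For scripting, the same check can be reduced to counting connected peers
>>> (the grep pattern is the state string gluster peer status prints):
>>>
>>> gluster peer status | grep -c 'State: Peer in Cluster (Connected)'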
>>>
>>> Thanks,
>>> Gaurav
>>>
>>> ----- Original Message -----
>>> From: "ABHISHEK PALIWAL" <abhishpaliwal at gmail.com>
>>> To: "Gaurav Garg" <ggarg at redhat.com>
>>> Cc: gluster-users at gluster.org
>>> Sent: Friday, February 19, 2016 3:12:22 PM
>>> Subject: Re: [Gluster-users] Issue in Adding/Removing the gluster node
>>>
>>> Hi Gaurav,
>>>
>>> Yes, you are right; I am forcefully detaching the node from the slave,
>>> and when we remove the board it disconnects from the other board.
>>>
>>> But my question is: I am doing this process multiple times; sometimes it
>>> works fine, but sometimes it gives these errors.
>>>
>>>
>>> You can see the following logs from the cmd_history.log file:
>>>
>>> [2016-02-18 10:03:34.497996] : volume set c_glusterfs nfs.disable on :
>>> SUCCESS
>>> [2016-02-18 10:03:34.915036] : volume start c_glusterfs force : SUCCESS
>>> [2016-02-18 10:03:40.250326] : volume status : SUCCESS
>>> [2016-02-18 10:03:40.273275] : volume status : SUCCESS
>>> [2016-02-18 10:03:40.601472] : volume remove-brick c_glusterfs replica 1
>>> 10.32.1.144:/opt/lvmdir/c2/brick force : SUCCESS
>>> [2016-02-18 10:03:40.885973] : peer detach 10.32.1.144 : SUCCESS
>>> [2016-02-18 10:03:42.065038] : peer probe 10.32.1.144 : SUCCESS
>>> [2016-02-18 10:03:44.563546] : volume add-brick c_glusterfs replica 2
>>> 10.32.1.144:/opt/lvmdir/c2/brick force : SUCCESS
>>> [2016-02-18 10:30:53.297415] : volume status : SUCCESS
>>> [2016-02-18 10:30:53.313096] : volume status : SUCCESS
>>> [2016-02-18 10:37:02.748714] : volume status : SUCCESS
>>> [2016-02-18 10:37:02.762091] : volume status : SUCCESS
>>> [2016-02-18 10:37:02.816089] : volume remove-brick c_glusterfs replica 1
>>> 10.32.1.144:/opt/lvmdir/c2/brick force : FAILED : Incorrect brick
>>> 10.32.1.144:/opt/lvmdir/c2/brick for volume c_glusterfs
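>>>
>>> For reference, the cycle those log entries correspond to, as one command
>>> sequence (taken verbatim from the log above):
>>>
>>> gluster volume remove-brick c_glusterfs replica 1 10.32.1.144:/opt/lvmdir/c2/brick force
>>> gluster peer detach 10.32.1.144
>>> gluster peer probe 10.32.1.144
>>> gluster volume add-brick c_glusterfs replica 2 10.32.1.144:/opt/lvmdir/c2/brick force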
>>>
>>>
>>> On Fri, Feb 19, 2016 at 3:05 PM, Gaurav Garg <ggarg at redhat.com> wrote:
>>>
>>> > Hi Abhishek,
>>> >
>>> > It seems your peer 10.32.1.144 disconnected while doing the remove-brick.
>>> > See the logs below from glusterd:
>>> >
>>> > [2016-02-18 10:37:02.816009] E [MSGID: 106256]
>>> > [glusterd-brick-ops.c:1047:__glusterd_handle_remove_brick] 0-management:
>>> > Incorrect brick 10.32.1.144:/opt/lvmdir/c2/brick for volume c_glusterfs
>>> > [Invalid argument]
>>> > [2016-02-18 10:37:02.816061] E [MSGID: 106265]
>>> > [glusterd-brick-ops.c:1088:__glusterd_handle_remove_brick] 0-management:
>>> > Incorrect brick 10.32.1.144:/opt/lvmdir/c2/brick for volume c_glusterfs
>>> > The message "I [MSGID: 106004]
>>> > [glusterd-handler.c:5065:__glusterd_peer_rpc_notify] 0-management: Peer
>>> > <10.32.1.144> (<6adf57dc-c619-4e56-ae40-90e6aef75fe9>), in state <Peer in
>>> > Cluster>, has disconnected from glusterd." repeated 25 times between
>>> > [2016-02-18 10:35:43.131945] and [2016-02-18 10:36:58.160458]
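>>> >
>>> > To see how often that peer dropped, a grep over the glusterd log should
>>> > show the disconnect pattern (default log path assumed):
>>> >
>>> > grep 'has disconnected from glusterd' /var/log/glusterfs/etc-glusterfs-glusterd.vol.log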
>>> >
>>> >
>>> >
>>> > If you are facing the same issue now, could you paste your # gluster
>>> > peer status command output here?
>>> >
>>> > Thanks,
>>> > ~Gaurav
>>> >
>>> > ----- Original Message -----
>>> > From: "ABHISHEK PALIWAL" <abhishpaliwal at gmail.com>
>>> > To: gluster-users at gluster.org
>>> > Sent: Friday, February 19, 2016 2:46:35 PM
>>> > Subject: [Gluster-users] Issue in Adding/Removing the gluster node
>>> >
>>> > Hi,
>>> >
>>> >
>>> > I am working on a two-board setup where the boards connect to each
>>> > other. Gluster version 3.7.6 is running, and I added two bricks in
>>> > replica 2 mode, but when I manually removed (detached) one board from
>>> > the setup I got the following error:
>>> >
>>> > volume remove-brick c_glusterfs replica 1 10.32.1.144:/opt/lvmdir/c2/brick
>>> > force : FAILED : Incorrect brick 10.32.1.144:/opt/lvmdir/c2/brick for
>>> > volume c_glusterfs
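>>> >
>>> > For reference, the volume's current brick list and brick states can be
>>> > dumped with (standard gluster CLI, shown here as a sketch):
>>> >
>>> > gluster volume info c_glusterfs
>>> > gluster volume status c_glusterfs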
>>> >
>>> > Please find the log files attached.
>>> >
>>> >
>>> > Regards,
>>> > Abhishek
>>> >
>>> >
>>> > _______________________________________________
>>> > Gluster-users mailing list
>>> > Gluster-users at gluster.org
>>> > http://www.gluster.org/mailman/listinfo/gluster-users
>>> >
>>>
>>>
>>>
>>> --
>>>
>>>
>>>
>>>
>>> Regards
>>> Abhishek Paliwal
>>>
>>
>>
>>
>> --
>>
>>
>>
>>
>> Regards
>> Abhishek Paliwal
>>
>>
>
>
>
>
--
Regards
Abhishek Paliwal