[Gluster-users] add bricks on distributed replicated volume failed

Mohit Anchlia mohitanchlia at gmail.com
Thu Sep 1 18:34:21 UTC 2011


Can you also list the files in /etc/glusterd/peers on node07 and
node05? Can you check whether the entry for node07 is missing on node05?

You could also try doing the detach again, removing everything from
peers, and re-doing the steps to see if that helps.
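
For reference, a rough way to compare the two peer stores, assuming the
3.2 on-disk layout where each file under /etc/glusterd/peers is named
after a peer UUID and carries that peer's uuid and hostname:

  # run on both node05 and node07 and compare the output
  ls /etc/glusterd/peers
  grep . /etc/glusterd/peers/*
  # each node's own identity
  cat /etc/glusterd/glusterd.info

If node05 has no entry for node07 at all, that would fit the failure in
your logs.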

On Thu, Sep 1, 2011 at 11:01 AM, Laurent DOUCHY <Laurent.Douchy at unige.ch> wrote:
> see below.
>
> Cheers,
> Laurent DOUCHY.
>
> System Administrator
> ISDC Data Centre for Astrophysics
> 16, ch. d'Ecogia
> CH-1290 VERSOIX
>
> Tel.: +41 (0)22 379 21 31
>
>
> On 9/1/11 7:11 PM, Mohit Anchlia wrote:
>>
>> Can you paste the error logs from node05 and from the node you are using
>> to perform the add?
>
> the one I'm using to perform the add:
> [2011-09-01 19:44:13.903186] I
> [glusterd-handler.c:1305:glusterd_handle_add_brick] 0-glusterd: Received add
> brick req
> [2011-09-01 19:44:13.903274] I [glusterd-utils.c:243:glusterd_lock]
> 0-glusterd: Cluster lock held by a35fb0a1-af35-4a04-b38a-434f68369508
> [2011-09-01 19:44:13.903284] I
> [glusterd-handler.c:420:glusterd_op_txn_begin] 0-glusterd: Acquired local
> lock
> [2011-09-01 19:44:13.903795] I
> [glusterd-rpc-ops.c:742:glusterd3_1_cluster_lock_cbk] 0-glusterd: Received
> ACC from uuid: dd114546-5b94-4a62-9301-260703bf5707
> [2011-09-01 19:44:13.903834] I
> [glusterd-rpc-ops.c:742:glusterd3_1_cluster_lock_cbk] 0-glusterd: Received
> ACC from uuid: f73fee83-8d47-4f07-bfac-b8a8592eff04
> [2011-09-01 19:44:13.904069] I
> [glusterd-rpc-ops.c:742:glusterd3_1_cluster_lock_cbk] 0-glusterd: Received
> ACC from uuid: 3142fb9a-0a6b-46ec-9262-ede95e8f798a
> [2011-09-01 19:44:13.904167] I
> [glusterd-rpc-ops.c:742:glusterd3_1_cluster_lock_cbk] 0-glusterd: Received
> ACC from uuid: 212ce5a0-de51-4a98-9262-ae071c2d63a0
> [2011-09-01 19:44:13.904188] I
> [glusterd-rpc-ops.c:742:glusterd3_1_cluster_lock_cbk] 0-glusterd: Received
> ACC from uuid: 13ffcf87-6e8d-4c6b-814a-cbc14d15d88b
> [2011-09-01 19:44:13.904214] I
> [glusterd-utils.c:776:glusterd_volume_brickinfo_get_by_brick] 0-: brick:
> node05:/gluster2
> [2011-09-01 19:44:13.904231] I
> [glusterd-utils.c:776:glusterd_volume_brickinfo_get_by_brick] 0-: brick:
> node06:/gluster2
> [2011-09-01 19:44:13.904305] I
> [glusterd-op-sm.c:6453:glusterd_op_ac_send_stage_op] 0-glusterd: Sent op req
> to 5 peers
> [2011-09-01 19:44:13.904670] I
> [glusterd-rpc-ops.c:1040:glusterd3_1_stage_op_cbk] 0-glusterd: Received ACC
> from uuid: dd114546-5b94-4a62-9301-260703bf5707
> [2011-09-01 19:44:13.904703] I
> [glusterd-rpc-ops.c:1040:glusterd3_1_stage_op_cbk] 0-glusterd: Received ACC
> from uuid: f73fee83-8d47-4f07-bfac-b8a8592eff04
> [2011-09-01 19:44:13.904725] I
> [glusterd-rpc-ops.c:1040:glusterd3_1_stage_op_cbk] 0-glusterd: Received ACC
> from uuid: 212ce5a0-de51-4a98-9262-ae071c2d63a0
> [2011-09-01 19:44:13.905542] I
> [glusterd-rpc-ops.c:1040:glusterd3_1_stage_op_cbk] 0-glusterd: Received ACC
> from uuid: 3142fb9a-0a6b-46ec-9262-ede95e8f798a
> [2011-09-01 19:44:13.905655] I
> [glusterd-rpc-ops.c:1040:glusterd3_1_stage_op_cbk] 0-glusterd: Received RJT
> from uuid: 13ffcf87-6e8d-4c6b-814a-cbc14d15d88b
> [2011-09-01 19:44:13.905929] I
> [glusterd-rpc-ops.c:801:glusterd3_1_cluster_unlock_cbk] 0-glusterd: Received
> ACC from uuid: 212ce5a0-de51-4a98-9262-ae071c2d63a0
> [2011-09-01 19:44:13.905988] I
> [glusterd-rpc-ops.c:801:glusterd3_1_cluster_unlock_cbk] 0-glusterd: Received
> ACC from uuid: dd114546-5b94-4a62-9301-260703bf5707
> [2011-09-01 19:44:13.906019] I
> [glusterd-rpc-ops.c:801:glusterd3_1_cluster_unlock_cbk] 0-glusterd: Received
> ACC from uuid: 3142fb9a-0a6b-46ec-9262-ede95e8f798a
> [2011-09-01 19:44:13.906039] I
> [glusterd-rpc-ops.c:801:glusterd3_1_cluster_unlock_cbk] 0-glusterd: Received
> ACC from uuid: 13ffcf87-6e8d-4c6b-814a-cbc14d15d88b
> [2011-09-01 19:44:13.906058] I
> [glusterd-rpc-ops.c:801:glusterd3_1_cluster_unlock_cbk] 0-glusterd: Received
> ACC from uuid: f73fee83-8d47-4f07-bfac-b8a8592eff04
> [2011-09-01 19:44:13.906070] I
> [glusterd-op-sm.c:6987:glusterd_op_txn_complete] 0-glusterd: Cleared local
> lock
> [2011-09-01 19:44:13.906677] W [socket.c:1494:__socket_proto_state_machine]
> 0-socket.management: reading from socket failed. Error (Transport endpoint
> is not connected), peer (127.0.0.1:1016)
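
Side note on the log above: every peer ACKs the lock and the stage request
except uuid 13ffcf87-6e8d-4c6b-814a-cbc14d15d88b, which answers RJT; per
the peer status output further down, that UUID is node05. If you ever need
to map such a UUID back to a host, the peer store makes it easy (again
assuming files under /etc/glusterd/peers are named by peer UUID):

  cat /etc/glusterd/peers/13ffcf87-6e8d-4c6b-814a-cbc14d15d88b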
>
> node05:
> [2011-09-01 20:57:03.419202] I
> [glusterd-handler.c:448:glusterd_handle_cluster_lock] 0-glusterd: Received
> LOCK from uuid: a35fb0a1-af35-4a04-b38a-434f68369508
> [2011-09-01 20:57:03.419292] I [glusterd-utils.c:243:glusterd_lock]
> 0-glusterd: Cluster lock held by a35fb0a1-af35-4a04-b38a-434f68369508
> [2011-09-01 20:57:03.419320] I
> [glusterd-handler.c:2641:glusterd_op_lock_send_resp] 0-glusterd: Responded,
> ret: 0
> [2011-09-01 20:57:03.419818] I
> [glusterd-handler.c:488:glusterd_req_ctx_create] 0-glusterd: Received op
> from uuid: a35fb0a1-af35-4a04-b38a-434f68369508
> [2011-09-01 20:57:03.419896] I
> [glusterd-utils.c:776:glusterd_volume_brickinfo_get_by_brick] 0-: brick:
> node05:/gluster2
> [2011-09-01 20:57:03.420141] E
> [glusterd-op-sm.c:715:glusterd_op_stage_add_brick] 0-glusterd: resolve brick
> failed
> [2011-09-01 20:57:03.420157] E
> [glusterd-op-sm.c:7107:glusterd_op_ac_stage_op] 0-: Validate failed: 1
> [2011-09-01 20:57:03.420183] I
> [glusterd-handler.c:2733:glusterd_op_stage_send_resp] 0-glusterd: Responded
> to stage, ret: 0
> [2011-09-01 20:57:03.420508] I
> [glusterd-handler.c:2683:glusterd_handle_cluster_unlock] 0-glusterd:
> Received UNLOCK from uuid: a35fb0a1-af35-4a04-b38a-434f68369508
> [2011-09-01 20:57:03.420553] I
> [glusterd-handler.c:2661:glusterd_op_unlock_send_resp] 0-glusterd: Responded
> to unlock, ret: 0
> [2011-09-01 21:43:43.584850] I
> [glusterd-handler.c:448:glusterd_handle_cluster_lock] 0-glusterd: Received
> LOCK from uuid: a35fb0a1-af35-4a04-b38a-434f68369508
> [2011-09-01 21:43:43.585045] I [glusterd-utils.c:243:glusterd_lock]
> 0-glusterd: Cluster lock held by a35fb0a1-af35-4a04-b38a-434f68369508
> [2011-09-01 21:43:43.585077] I
> [glusterd-handler.c:2641:glusterd_op_lock_send_resp] 0-glusterd: Responded,
> ret: 0
> [2011-09-01 21:43:43.585341] I
> [glusterd-handler.c:488:glusterd_req_ctx_create] 0-glusterd: Received op
> from uuid: a35fb0a1-af35-4a04-b38a-434f68369508
> [2011-09-01 21:43:43.585424] I
> [glusterd-utils.c:776:glusterd_volume_brickinfo_get_by_brick] 0-: brick:
> node05:/gluster2
> [2011-09-01 21:43:43.586524] E
> [glusterd-op-sm.c:715:glusterd_op_stage_add_brick] 0-glusterd: resolve brick
> failed
> [2011-09-01 21:43:43.586540] E
> [glusterd-op-sm.c:7107:glusterd_op_ac_stage_op] 0-: Validate failed: 1
> [2011-09-01 21:43:43.586567] I
> [glusterd-handler.c:2733:glusterd_op_stage_send_resp] 0-glusterd: Responded
> to stage, ret: 0
> [2011-09-01 21:43:43.586873] I
> [glusterd-handler.c:2683:glusterd_handle_cluster_unlock] 0-glusterd:
> Received UNLOCK from uuid: a35fb0a1-af35-4a04-b38a-434f68369508
> [2011-09-01 21:43:43.586910] I
> [glusterd-handler.c:2661:glusterd_op_unlock_send_resp] 0-glusterd: Responded
> to unlock, ret: 0
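
The step that actually fails on node05 is the "resolve brick failed" in
glusterd_op_stage_add_brick: while validating the request it cannot match
the brick host (node05 itself) against a known peer or local identity. A
minimal sanity check on node05, assuming only that glusterd.info holds the
local UUID:

  cat /etc/glusterd/glusterd.info   # expect 13ffcf87-6e8d-4c6b-814a-cbc14d15d88b
  hostname
  getent hosts node05

If the UUID there is not the one the other nodes have on file for node05,
stale peer/volume state would explain the reject.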
>
> I have to fix the time issue.
>
> I fixed the time issue, but I still get the same message:
> [root@node07 ~]# gluster volume add-brick cluster node05:/gluster2
> node06:/gluster2
> Operation failed on node05
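
On the clock skew (the node05 timestamps do not line up with node07's): a
quick way to resync everything, assuming ntpdate is installed and
substituting your own NTP server for the pool:

  service ntpd stop
  ntpdate pool.ntp.org
  service ntpd start

The skew mostly just makes the logs hard to correlate, but it is worth
fixing anyway.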
>>
>> Last thing you could try is gluster peer detach node 5 and 6 and then
>> add them back and try again.
>
> I've done the detach, restarted glusterd, probed, checked the status,
> restarted glusterd, and checked the status again. Everything looks OK ... but:
>
> [root@node07 ~]# gluster volume add-brick cluster node05:/gluster2
> node06:/gluster2
> Operation failed on node05
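
For completeness, the detach/re-probe sequence I had in mind is roughly
the following, run from node07 (node05 and node06 hold no bricks of this
volume yet, so the detach is safe). The peer-store wipe is the "remove
everything from peers" part and assumes the /etc/glusterd/peers layout:

  gluster peer detach node05
  gluster peer detach node06
  # then on node05 and node06:
  service glusterd stop
  rm -rf /etc/glusterd/peers/*
  service glusterd start
  # back on node07:
  gluster peer probe node05
  gluster peer probe node06
  gluster peer status

If add-brick still fails after a clean probe, the problem is more likely
in how node05 resolves the brick host than in the membership itself.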
>>
>> On Thu, Sep 1, 2011 at 9:58 AM, Laurent DOUCHY<Laurent.Douchy at unige.ch>
>>  wrote:
>>>
>>> ping is ok
>>>
>>> Restart of glusterd done:
>>> [root@node00 ~]# for i in `seq -w 1 10` ; do echo ; echo node$i ; echo ;
>>> ssh node$i "service glusterd restart" ; done
>>>
>>> node01
>>>
>>> Stopping glusterd:[  OK  ]
>>> Starting glusterd:[  OK  ]
>>>
>>> node02
>>>
>>> Stopping glusterd:[  OK  ]
>>> Starting glusterd:[  OK  ]
>>>
>>> node03
>>>
>>> Stopping glusterd:[  OK  ]
>>> Starting glusterd:[  OK  ]
>>>
>>> node04
>>>
>>> Stopping glusterd:[  OK  ]
>>> Starting glusterd:[  OK  ]
>>>
>>> node05
>>>
>>> Stopping glusterd:[  OK  ]
>>> Starting glusterd:[  OK  ]
>>>
>>> node06
>>>
>>> Stopping glusterd:[  OK  ]
>>> Starting glusterd:[  OK  ]
>>>
>>> node07
>>>
>>> Stopping glusterd:[  OK  ]
>>> Starting glusterd:[  OK  ]
>>>
>>> node08
>>>
>>> Stopping glusterd:[  OK  ]
>>> Starting glusterd:[  OK  ]
>>>
>>> node09
>>>
>>> Stopping glusterd:[  OK  ]
>>> Starting glusterd:[  OK  ]
>>>
>>> node10
>>>
>>> Stopping glusterd:[  OK  ]
>>> Starting glusterd:[  OK  ]
>>>
>>>
>>> But I still get the same error message ...
>>>
>>> [root@node07 ~]# gluster volume add-brick cluster node05:/gluster2
>>> node06:/gluster2
>>> Operation failed on node05
>>>
>>> Cheers,
>>> Laurent DOUCHY.
>>>
>>>
>>> On 9/1/11 6:54 PM, Mohit Anchlia wrote:
>>>>
>>>> Can you ping node05 from node07, where you are trying to do the
>>>> add? Also, try restarting the gluster processes on every node and try again.
>>>>
>>>> On Thu, Sep 1, 2011 at 9:39 AM, Laurent DOUCHY<Laurent.Douchy at unige.ch>
>>>>  wrote:
>>>>>
>>>>> see below
>>>>>
>>>>> Cheers,
>>>>> Laurent DOUCHY.
>>>>>
>>>>>
>>>>> On 9/1/11 6:01 PM, Mohit Anchlia wrote:
>>>>>>
>>>>>> You can check a few things on nodes 5 and 6:
>>>>>>
>>>>>> 1) gluster processes are running on node05 and node06
>>>>>
>>>>> yes:
>>>>>
>>>>> node05
>>>>>
>>>>> root      4902     1  0 Aug31 ?        00:00:00
>>>>> /opt/glusterfs/3.2.2/sbin/glusterd
>>>>> root      9626     1  0 19:55 ?        00:00:00
>>>>> /opt/glusterfs/3.2.2/sbin/glusterfs -f /etc/glusterd/nfs/nfs-server.vol
>>>>> -p
>>>>> /etc/glusterd/nfs/run/nfs.pid -l /var/log/glusterfs/nfs.log
>>>>> root      9690  9686  0 20:04 ?        00:00:00 bash -c ps -edf | grep
>>>>> gluster
>>>>> root      9704  9690  0 20:04 ?        00:00:00 grep gluster
>>>>>
>>>>> node06
>>>>>
>>>>> root      4441     1  0 Aug31 ?        00:00:00
>>>>> /opt/glusterfs/3.2.2/sbin/glusterd
>>>>> root      9178     1  0 19:55 ?        00:00:00
>>>>> /opt/glusterfs/3.2.2/sbin/glusterfs -f /etc/glusterd/nfs/nfs-server.vol
>>>>> -p
>>>>> /etc/glusterd/nfs/run/nfs.pid -l /var/log/glusterfs/nfs.log
>>>>> root      9242  9238  0 20:04 ?        00:00:00 bash -c ps -edf | grep
>>>>> gluster
>>>>> root      9256  9242  0 20:04 ?        00:00:00 grep gluster
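
Besides the process list, I would also confirm glusterd is actually
listening on the management port (24007/tcp in 3.2) on node05 and node06;
a minimal check, assuming net-tools is installed:

  netstat -tlnp | grep glusterd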
>>>>>
>>>>>> 2) both nodes are able to see each other
>>>>>
>>>>> yes:
>>>>>
>>>>> [root@node05 ~]# ping node06
>>>>> PING node06.isdc.unige.ch (129.194.168.70) 56(84) bytes of data.
>>>>> 64 bytes from node06.isdc.unige.ch (129.194.168.70): icmp_seq=1 ttl=64
>>>>> time=0.376 ms
>>>>>
>>>>> [root@node06 ~]# ping node05
>>>>> PING node05.isdc.unige.ch (129.194.168.69) 56(84) bytes of data.
>>>>> 64 bytes from node05.isdc.unige.ch (129.194.168.69): icmp_seq=1 ttl=64
>>>>> time=0.337 ms
>>>>>>
>>>>>> 3) do gluster peer status on both the nodes and see what you see
>>>>>
>>>>> node05 trusts node06 and node06 trusts node05:
>>>>>
>>>>> [root@node05 ~]# gluster peer status
>>>>> Number of Peers: 5
>>>>>
>>>>> Hostname: node08
>>>>> Uuid: dd114546-5b94-4a62-9301-260703bf5707
>>>>> State: Peer in Cluster (Connected)
>>>>>
>>>>> Hostname: node06
>>>>> Uuid: 3142fb9a-0a6b-46ec-9262-ede95e8f798a
>>>>> State: Peer in Cluster (Connected)
>>>>>
>>>>> Hostname: node10
>>>>> Uuid: 212ce5a0-de51-4a98-9262-ae071c2d63a0
>>>>> State: Peer in Cluster (Connected)
>>>>>
>>>>> Hostname: 129.194.168.71
>>>>> Uuid: a35fb0a1-af35-4a04-b38a-434f68369508
>>>>> State: Peer in Cluster (Connected)
>>>>>
>>>>> Hostname: node09
>>>>> Uuid: f73fee83-8d47-4f07-bfac-b8a8592eff04
>>>>> State: Peer in Cluster (Connected)
>>>>>
>>>>>
>>>>>
>>>>>
>>>>> [root@node06 ~]# gluster peer status
>>>>> Number of Peers: 5
>>>>>
>>>>> Hostname: node08
>>>>> Uuid: dd114546-5b94-4a62-9301-260703bf5707
>>>>> State: Peer in Cluster (Connected)
>>>>>
>>>>> Hostname: node09
>>>>> Uuid: f73fee83-8d47-4f07-bfac-b8a8592eff04
>>>>> State: Peer in Cluster (Connected)
>>>>>
>>>>> Hostname: node05
>>>>> Uuid: 13ffcf87-6e8d-4c6b-814a-cbc14d15d88b
>>>>> State: Peer in Cluster (Connected)
>>>>>
>>>>> Hostname: node10
>>>>> Uuid: 212ce5a0-de51-4a98-9262-ae071c2d63a0
>>>>> State: Peer in Cluster (Connected)
>>>>>
>>>>> Hostname: 129.194.168.71
>>>>> Uuid: a35fb0a1-af35-4a04-b38a-434f68369508
>>>>> State: Peer in Cluster (Connected)
>>>>>
>>>>>
>>>>>> 4) check iptables
>>>>>
>>>>> same file on each node (the installation is managed by Puppet)
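
Even with identical files, it is worth confirming the rules actually let
gluster traffic through: glusterd uses 24007/tcp and the brick daemons use
24009/tcp and up in 3.2 (plus 38465-38467/tcp for the built-in NFS). For
example, from node07 (assuming telnet is installed; any TCP connect test
will do):

  telnet node05 24007

Your logs show node05 answering the lock and stage requests, so 24007 at
least looks open already.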
>>>>>>
>>>>>> On Thu, Sep 1, 2011 at 8:57 AM, Laurent
>>>>>> DOUCHY<Laurent.Douchy at unige.ch>
>>>>>>  wrote:
>>>>>>>
>>>>>>> It works ...
>>>>>>>
>>>>>>> [root@node07 ~]# gluster volume add-brick cluster node09:/gluster3
>>>>>>> node10:/gluster3
>>>>>>> Add Brick successful
>>>>>>>
>>>>>>>
>>>>>>> On 9/1/11 5:39 PM, Mohit Anchlia wrote:
>>>>>>>>
>>>>>>>> Can you try with node09:/gluster3 and node10:/gluster3 instead?
>>>>>>>>
>>>>>>>> On Thu, Sep 1, 2011 at 2:49 AM, Laurent
>>>>>>>> DOUCHY<Laurent.Douchy at unige.ch>
>>>>>>>>  wrote:
>>>>>>>>>
>>>>>>>>> Hi,
>>>>>>>>>
>>>>>>>>> I'm working on node07, so it doesn't appear in the list.
>>>>>>>>>
>>>>>>>>> I created a folder /gluster3 on node05 and node06 and tried to add
>>>>>>>>> them to my volume, but it failed with the same message :(
>>>>>>>>>
>>>>>>>>> [root@node07 ~]# gluster volume add-brick cluster node05:/gluster3
>>>>>>>>> node06:/gluster3
>>>>>>>>> Operation failed on node05
>>>>>>>>>
>>>>>>>>> The next step is to reinstall the node from scratch; I hope I can
>>>>>>>>> avoid that.
>>>>>>>>>
>>>>>>>>> On 8/31/11 9:08 PM, Mohit Anchlia wrote:
>>>>>>>>>>
>>>>>>>>>> I don't see node07 in the above output of gluster peer status.
>>>>>>>>>>
>>>>>>>>>> Can you try to add bricks on the hosts that already have gluster1
>>>>>>>>>> and gluster2? That is, add gluster3 and see if that works.
>>>>>>>>>>
>>>>>>>>>> On Wed, Aug 31, 2011 at 11:56 AM, Laurent DOUCHY
>>>>>>>>>> <Laurent.Douchy at unige.ch>          wrote:
>>>>>>>>>>>
>>>>>>>>>>> Hi,
>>>>>>>>>>>
>>>>>>>>>>> I tried adding 2 bricks or 4 bricks, with the same effect.
>>>>>>>>>>>
>>>>>>>>>>> I tried reinstalling gluster, without success.
>>>>>>>>>>>
>>>>>>>>>>> Cheers,
>>>>>>>>>>> Laurent DOUCHY.
>>>>>>>>>>>
>>>>>>>>>>>
>>>>>>>>>>> On 8/31/11 8:07 PM, Burnash, James wrote:
>>>>>>>>>>>>
>>>>>>>>>>>> Hi Laurent.
>>>>>>>>>>>>
>>>>>>>>>>>> Since your configuration specifies replication, you must add
>>>>>>>>>>>> bricks in multiples of your replica count.
>>>>>>>>>>>>
>>>>>>>>>>>> For instance, if you have 2 replicas (the most common case), you
>>>>>>>>>>>> would need to do something like this:
>>>>>>>>>>>>
>>>>>>>>>>>> gluster volume add-brick cluster node05:/gluster1
>>>>>>>>>>>> node06:/gluster1
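
In other words, add-brick takes the bricks in consecutive groups of
<replica count>, so for this volume each call needs a full pair (one brick
on node05 mirrored by its partner on node06). After a successful add, the
new pair should show up as Brick9/Brick10 in:

  gluster volume info cluster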
>>>>>>>>>>>>
>>>>>>>>>>>> James Burnash
>>>>>>>>>>>> Unix Engineer
>>>>>>>>>>>> Knight Capital Group
>>>>>>>>>>>>
>>>>>>>>>>>>
>>>>>>>>>>>> -----Original Message-----
>>>>>>>>>>>> From: gluster-users-bounces at gluster.org
>>>>>>>>>>>> [mailto:gluster-users-bounces at gluster.org] On Behalf Of Laurent
>>>>>>>>>>>> DOUCHY
>>>>>>>>>>>> Sent: Wednesday, August 31, 2011 12:49 PM
>>>>>>>>>>>> To: gluster-users at gluster.org
>>>>>>>>>>>> Subject: [Gluster-users] add bricks on distributed replicated
>>>>>>>>>>>> volume
>>>>>>>>>>>> failed
>>>>>>>>>>>>
>>>>>>>>>>>> Hi,
>>>>>>>>>>>>
>>>>>>>>>>>> I'm using gluster 3.2.2 on 10 nodes. Each node has 2x2 TB disks
>>>>>>>>>>>> for gluster.
>>>>>>>>>>>>
>>>>>>>>>>>> I managed to configure a distributed and replicated volume on 4
>>>>>>>>>>>> nodes:
>>>>>>>>>>>>
>>>>>>>>>>>> [root@node07 ~]# gluster volume info cluster
>>>>>>>>>>>>
>>>>>>>>>>>> Volume Name: cluster
>>>>>>>>>>>> Type: Distributed-Replicate
>>>>>>>>>>>> Status: Started
>>>>>>>>>>>> Number of Bricks: 4 x 2 = 8
>>>>>>>>>>>> Transport-type: tcp
>>>>>>>>>>>> Bricks:
>>>>>>>>>>>> Brick1: node09:/gluster1
>>>>>>>>>>>> Brick2: node10:/gluster1
>>>>>>>>>>>> Brick3: node09:/gluster2
>>>>>>>>>>>> Brick4: node10:/gluster2
>>>>>>>>>>>> Brick5: node07:/gluster1
>>>>>>>>>>>> Brick6: node08:/gluster1
>>>>>>>>>>>> Brick7: node07:/gluster2
>>>>>>>>>>>> Brick8: node08:/gluster2
>>>>>>>>>>>>
>>>>>>>>>>>> But I can't add bricks on new nodes to this volume:
>>>>>>>>>>>>
>>>>>>>>>>>> [root@node07 ~]# gluster peer status
>>>>>>>>>>>> Number of Peers: 5
>>>>>>>>>>>>
>>>>>>>>>>>> Hostname: node10
>>>>>>>>>>>> Uuid: 212ce5a0-de51-4a98-9262-ae071c2d63a0
>>>>>>>>>>>> State: Peer in Cluster (Connected)
>>>>>>>>>>>>
>>>>>>>>>>>> Hostname: node08
>>>>>>>>>>>> Uuid: dd114546-5b94-4a62-9301-260703bf5707
>>>>>>>>>>>> State: Peer in Cluster (Connected)
>>>>>>>>>>>>
>>>>>>>>>>>> Hostname: node09
>>>>>>>>>>>> Uuid: f73fee83-8d47-4f07-bfac-b8a8592eff04
>>>>>>>>>>>> State: Peer in Cluster (Connected)
>>>>>>>>>>>>
>>>>>>>>>>>> Hostname: node06
>>>>>>>>>>>> Uuid: 3142fb9a-0a6b-46ec-9262-ede95e8f798a
>>>>>>>>>>>> State: Peer in Cluster (Connected)
>>>>>>>>>>>>
>>>>>>>>>>>> Hostname: node05
>>>>>>>>>>>> Uuid: 13ffcf87-6e8d-4c6b-814a-cbc14d15d88b
>>>>>>>>>>>> State: Peer in Cluster (Connected)
>>>>>>>>>>>> [root@node07 ~]# gluster volume add-brick cluster node05:/gluster1
>>>>>>>>>>>> node06:/gluster1 node05:/gluster2 node06:/gluster2
>>>>>>>>>>>> Operation failed on node05
>>>>>>>>>>>>
>>>>>>>>>>>> I tried detaching nodes 5 and 6, restarting glusterd, then doing
>>>>>>>>>>>> the probe and the add-brick, but still nothing ...
>>>>>>>>>>>>
>>>>>>>>>>>> Does someone have any idea how to fix this?
>>>>>>>>>>>>
>>>>>>>>>>>> Thanks in advance,
>>>>>>>>>>>> Laurent.
>>>>>>>>>>>>
>>>>>>>>>>>
>>>>>>>>>>>
>


