[Gluster-users] add bricks on distributed replicated volume failed

Mohit Anchlia mohitanchlia at gmail.com
Thu Sep 1 17:11:54 UTC 2011


Can you paste the error logs from node05 and from the node you are using to
perform the add?
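
For example (assuming the default log location for a 3.2.x install; the
exact glusterd log file name under /var/log/glusterfs/ may differ):

tail -n 50 /var/log/glusterfs/etc-glusterfs-glusterd.vol.log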

The last thing you could try is to gluster peer detach nodes 5 and 6, then
add them back and try again.
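
A sketch of that sequence (a plain detach should work here only because
node05 and node06 do not yet hold any bricks of the volume):

gluster peer detach node05
gluster peer detach node06
gluster peer probe node05
gluster peer probe node06
gluster volume add-brick cluster node05:/gluster2 node06:/gluster2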

On Thu, Sep 1, 2011 at 9:58 AM, Laurent DOUCHY <Laurent.Douchy at unige.ch> wrote:
> Ping is OK.
>
> Restart of gluster done:
> [root@node00 ~]# for i in `seq -w 1 10` ; do echo ; echo node$i ; echo ; ssh
> node$i "service glusterd restart";done
>
> node01
>
> Stopping glusterd:[  OK  ]
> Starting glusterd:[  OK  ]
>
> node02
>
> Stopping glusterd:[  OK  ]
> Starting glusterd:[  OK  ]
>
> node03
>
> Stopping glusterd:[  OK  ]
> Starting glusterd:[  OK  ]
>
> node04
>
> Stopping glusterd:[  OK  ]
> Starting glusterd:[  OK  ]
>
> node05
>
> Stopping glusterd:[  OK  ]
> Starting glusterd:[  OK  ]
>
> node06
>
> Stopping glusterd:[  OK  ]
> Starting glusterd:[  OK  ]
>
> node07
>
> Stopping glusterd:[  OK  ]
> Starting glusterd:[  OK  ]
>
> node08
>
> Stopping glusterd:[  OK  ]
> Starting glusterd:[  OK  ]
>
> node09
>
> Stopping glusterd:[  OK  ]
> Starting glusterd:[  OK  ]
>
> node10
>
> Stopping glusterd:[  OK  ]
> Starting glusterd:[  OK  ]
>
>
> ... but I get the same error message:
>
> [root@node07 ~]# gluster volume add-brick cluster node05:/gluster2
> node06:/gluster2
> Operation failed on node05
>
> Cheers,
> Laurent DOUCHY.
>
>
> On 9/1/11 6:54 PM, Mohit Anchlia wrote:
>>
>> Can you ping node05 from node07, where you are trying to do the add?
>> Also, try restarting the gluster process on every node and try again.
>>
>> On Thu, Sep 1, 2011 at 9:39 AM, Laurent DOUCHY<Laurent.Douchy at unige.ch>
>>  wrote:
>>>
>>> see below
>>>
>>> Cheers,
>>> Laurent DOUCHY.
>>>
>>>
>>> On 9/1/11 6:01 PM, Mohit Anchlia wrote:
>>>>
>>>> You can check a few things on 5 and 6:
>>>>
>>>> 1) gluster processes are running on node5 and 6
>>>
>>> yes:
>>>
>>> node05
>>>
>>> root      4902     1  0 Aug31 ?        00:00:00
>>> /opt/glusterfs/3.2.2/sbin/glusterd
>>> root      9626     1  0 19:55 ?        00:00:00
>>> /opt/glusterfs/3.2.2/sbin/glusterfs -f /etc/glusterd/nfs/nfs-server.vol
>>> -p
>>> /etc/glusterd/nfs/run/nfs.pid -l /var/log/glusterfs/nfs.log
>>> root      9690  9686  0 20:04 ?        00:00:00 bash -c ps -edf | grep
>>> gluster
>>> root      9704  9690  0 20:04 ?        00:00:00 grep gluster
>>>
>>> node06
>>>
>>> root      4441     1  0 Aug31 ?        00:00:00
>>> /opt/glusterfs/3.2.2/sbin/glusterd
>>> root      9178     1  0 19:55 ?        00:00:00
>>> /opt/glusterfs/3.2.2/sbin/glusterfs -f /etc/glusterd/nfs/nfs-server.vol
>>> -p
>>> /etc/glusterd/nfs/run/nfs.pid -l /var/log/glusterfs/nfs.log
>>> root      9242  9238  0 20:04 ?        00:00:00 bash -c ps -edf | grep
>>> gluster
>>> root      9256  9242  0 20:04 ?        00:00:00 grep gluster
>>>
>>>> 2) both nodes are able to see each other
>>>
>>> yes:
>>>
>>> [root@node05 ~]# ping node06
>>> PING node06.isdc.unige.ch (129.194.168.70) 56(84) bytes of data.
>>> 64 bytes from node06.isdc.unige.ch (129.194.168.70): icmp_seq=1 ttl=64
>>> time=0.376 ms
>>>
>>> [root@node06 ~]# ping node05
>>> PING node05.isdc.unige.ch (129.194.168.69) 56(84) bytes of data.
>>> 64 bytes from node05.isdc.unige.ch (129.194.168.69): icmp_seq=1 ttl=64
>>> time=0.337 ms
>>>>
>>>> 3) do gluster peer status on both nodes and see what you see
>>>
>>> node 5 trusts node 6 and node 6 trusts node 5
>>>
>>> [root@node05 ~]# gluster peer status
>>> Number of Peers: 5
>>>
>>> Hostname: node08
>>> Uuid: dd114546-5b94-4a62-9301-260703bf5707
>>> State: Peer in Cluster (Connected)
>>>
>>> Hostname: node06
>>> Uuid: 3142fb9a-0a6b-46ec-9262-ede95e8f798a
>>> State: Peer in Cluster (Connected)
>>>
>>> Hostname: node10
>>> Uuid: 212ce5a0-de51-4a98-9262-ae071c2d63a0
>>> State: Peer in Cluster (Connected)
>>>
>>> Hostname: 129.194.168.71
>>> Uuid: a35fb0a1-af35-4a04-b38a-434f68369508
>>> State: Peer in Cluster (Connected)
>>>
>>> Hostname: node09
>>> Uuid: f73fee83-8d47-4f07-bfac-b8a8592eff04
>>> State: Peer in Cluster (Connected)
>>>
>>>
>>>
>>>
>>> [root@node06 ~]# gluster peer status
>>> Number of Peers: 5
>>>
>>> Hostname: node08
>>> Uuid: dd114546-5b94-4a62-9301-260703bf5707
>>> State: Peer in Cluster (Connected)
>>>
>>> Hostname: node09
>>> Uuid: f73fee83-8d47-4f07-bfac-b8a8592eff04
>>> State: Peer in Cluster (Connected)
>>>
>>> Hostname: node05
>>> Uuid: 13ffcf87-6e8d-4c6b-814a-cbc14d15d88b
>>> State: Peer in Cluster (Connected)
>>>
>>> Hostname: node10
>>> Uuid: 212ce5a0-de51-4a98-9262-ae071c2d63a0
>>> State: Peer in Cluster (Connected)
>>>
>>> Hostname: 129.194.168.71
>>> Uuid: a35fb0a1-af35-4a04-b38a-434f68369508
>>> State: Peer in Cluster (Connected)
>>>
>>>
>>>> 4) check iptables
>>>
>>> Same file on each node (the installation is managed by Puppet).
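>>>
>>> For reference, a quick way to double-check on each node (assuming the
>>> default ports: 24007 for glusterd and 24009 onward for the brick
>>> processes in 3.2.x):
>>>
>>> iptables -L -n | grep 2400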
>>>>
>>>> On Thu, Sep 1, 2011 at 8:57 AM, Laurent DOUCHY<Laurent.Douchy at unige.ch>
>>>>  wrote:
>>>>>
>>>>> It works ...
>>>>>
>>>>> [root@node07 ~]# gluster volume add-brick cluster node09:/gluster3
>>>>> node10:/gluster3
>>>>> Add Brick successful
>>>>>
>>>>>
>>>>> On 9/1/11 5:39 PM, Mohit Anchlia wrote:
>>>>>>
>>>>>> Can you try with node09:/gluster3 and node10:/gluster3 instead?
>>>>>>
>>>>>> On Thu, Sep 1, 2011 at 2:49 AM, Laurent
>>>>>> DOUCHY<Laurent.Douchy at unige.ch>
>>>>>>  wrote:
>>>>>>>
>>>>>>> Hi,
>>>>>>>
>>>>>>> I work on node7, so it doesn't appear in the list.
>>>>>>>
>>>>>>> I created a folder /gluster3 on node5 and node6 and tried to add them
>>>>>>> to my volume, but it failed with the same message :(
>>>>>>>
>>>>>>> [root@node07 ~]# gluster volume add-brick cluster node05:/gluster3
>>>>>>> node06:/gluster3
>>>>>>> Operation failed on node05
>>>>>>>
>>>>>>> The next step is to reinstall the node from scratch; I hope I can
>>>>>>> avoid this.
>>>>>>>
>>>>>>> On 8/31/11 9:08 PM, Mohit Anchlia wrote:
>>>>>>>>
>>>>>>>> I don't see node07 in the above output of gluster peer status.
>>>>>>>>
>>>>>>>> Can you try to add bricks on the hosts that already have gluster1 and
>>>>>>>> gluster2? So add gluster3 and see if that works.
>>>>>>>>
>>>>>>>> On Wed, Aug 31, 2011 at 11:56 AM, Laurent DOUCHY
>>>>>>>> <Laurent.Douchy at unige.ch>        wrote:
>>>>>>>>>
>>>>>>>>> Hi,
>>>>>>>>>
>>>>>>>>> I tried to add 2 bricks or 4 bricks, with the same effect.
>>>>>>>>>
>>>>>>>>> I tried to reinstall gluster, without success.
>>>>>>>>>
>>>>>>>>> Cheers,
>>>>>>>>> Laurent DOUCHY.
>>>>>>>>>
>>>>>>>>>
>>>>>>>>> On 8/31/11 8:07 PM, Burnash, James wrote:
>>>>>>>>>>
>>>>>>>>>> Hi Laurent.
>>>>>>>>>>
>>>>>>>>>> Since your configuration specifies replication, you must add bricks
>>>>>>>>>> in multiples of your replica count.
>>>>>>>>>>
>>>>>>>>>> For instance - if you have 2 replicas (the most common case), you
>>>>>>>>>> would need to do something like this:
>>>>>>>>>>
>>>>>>>>>> gluster volume add-brick cluster node05:/gluster1 node06:/gluster1
>>>>>>>>>>
>>>>>>>>>> James Burnash
>>>>>>>>>> Unix Engineer
>>>>>>>>>> Knight Capital Group
>>>>>>>>>>
>>>>>>>>>>
>>>>>>>>>> -----Original Message-----
>>>>>>>>>> From: gluster-users-bounces at gluster.org
>>>>>>>>>> [mailto:gluster-users-bounces at gluster.org] On Behalf Of Laurent
>>>>>>>>>> DOUCHY
>>>>>>>>>> Sent: Wednesday, August 31, 2011 12:49 PM
>>>>>>>>>> To: gluster-users at gluster.org
>>>>>>>>>> Subject: [Gluster-users] add bricks on distributed replicated
>>>>>>>>>> volume
>>>>>>>>>> failed
>>>>>>>>>>
>>>>>>>>>> Hi,
>>>>>>>>>>
>>>>>>>>>> I'm using gluster 3.2.2 on 10 nodes. Each node has 2x2 TB disks
>>>>>>>>>> for gluster.
>>>>>>>>>>
>>>>>>>>>> I managed to configure a distributed and replicated volume on 4
>>>>>>>>>> nodes:
>>>>>>>>>>
>>>>>>>>>> [root@node07 ~]# gluster volume info cluster
>>>>>>>>>>
>>>>>>>>>> Volume Name: cluster
>>>>>>>>>> Type: Distributed-Replicate
>>>>>>>>>> Status: Started
>>>>>>>>>> Number of Bricks: 4 x 2 = 8
>>>>>>>>>> Transport-type: tcp
>>>>>>>>>> Bricks:
>>>>>>>>>> Brick1: node09:/gluster1
>>>>>>>>>> Brick2: node10:/gluster1
>>>>>>>>>> Brick3: node09:/gluster2
>>>>>>>>>> Brick4: node10:/gluster2
>>>>>>>>>> Brick5: node07:/gluster1
>>>>>>>>>> Brick6: node08:/gluster1
>>>>>>>>>> Brick7: node07:/gluster2
>>>>>>>>>> Brick8: node08:/gluster2
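>>>>>>>>>>
>>>>>>>>>> For reference, the volume was created with a command along these
>>>>>>>>>> lines (a reconstruction from the brick list above, assuming replica
>>>>>>>>>> 2 over tcp):
>>>>>>>>>>
>>>>>>>>>> gluster volume create cluster replica 2 transport tcp \
>>>>>>>>>>   node09:/gluster1 node10:/gluster1 node09:/gluster2 node10:/gluster2 \
>>>>>>>>>>   node07:/gluster1 node08:/gluster1 node07:/gluster2 node08:/gluster2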
>>>>>>>>>>
>>>>>>>>>> But I can't add new nodes to this volume
>>>>>>>>>>
>>>>>>>>>> [root@node07 ~]# gluster peer status
>>>>>>>>>> Number of Peers: 5
>>>>>>>>>>
>>>>>>>>>> Hostname: node10
>>>>>>>>>> Uuid: 212ce5a0-de51-4a98-9262-ae071c2d63a0
>>>>>>>>>> State: Peer in Cluster (Connected)
>>>>>>>>>>
>>>>>>>>>> Hostname: node08
>>>>>>>>>> Uuid: dd114546-5b94-4a62-9301-260703bf5707
>>>>>>>>>> State: Peer in Cluster (Connected)
>>>>>>>>>>
>>>>>>>>>> Hostname: node09
>>>>>>>>>> Uuid: f73fee83-8d47-4f07-bfac-b8a8592eff04
>>>>>>>>>> State: Peer in Cluster (Connected)
>>>>>>>>>>
>>>>>>>>>> Hostname: node06
>>>>>>>>>> Uuid: 3142fb9a-0a6b-46ec-9262-ede95e8f798a
>>>>>>>>>> State: Peer in Cluster (Connected)
>>>>>>>>>>
>>>>>>>>>> Hostname: node05
>>>>>>>>>> Uuid: 13ffcf87-6e8d-4c6b-814a-cbc14d15d88b
>>>>>>>>>> State: Peer in Cluster (Connected)
>>>>>>>>>> [root@node07 ~]# gluster volume add-brick cluster node05:/gluster1
>>>>>>>>>> node06:/gluster1 node05:/gluster2 node06:/gluster2
>>>>>>>>>> Operation failed on node05
>>>>>>>>>>
>>>>>>>>>> I tried to detach nodes 5 and 6, restart glusterd, do the probe and
>>>>>>>>>> the add-brick, but still nothing ...
>>>>>>>>>>
>>>>>>>>>> Does anyone have an idea how to fix this?
>>>>>>>>>>
>>>>>>>>>> Thanks in advance,
>>>>>>>>>> Laurent.
>>>>>>>>>>
>>>>>>>>>
>


