[Gluster-users] <host> not in 'Peer in Cluster' state

Kaushal M kshlmster at gmail.com
Sat Feb 15 11:40:32 UTC 2014


Peer status showing node1's elastic IP suggests that you probed the
other peers from node1. This would mean that the other peers don't
know node1 by its hostname. Even though you've edited the hosts file
on the peers, a reverse resolution of node1's IP wouldn't return the
hostnames you've set. Gluster uses reverse resolution to match
hostnames when it doesn't have a direct match in its peer list.
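
You can check what reverse resolution of node1's IP returns on a
peer. For example (the placeholder below stands for node1's elastic
IP; getent consults /etc/hosts as well as DNS, while dig queries DNS
directly):

# getent hosts <node1-elastic-ip>
# dig -x <node1-elastic-ip> +short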

To recover from this, just probe node1 from another peer: run '#
gluster peer probe node1.ec2' on one of the other peers. This will
update Gluster's peer list to contain the name node1.ec2. After this,
other operations will continue successfully.
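
For example, from node2:

# gluster peer probe node1.ec2
# gluster peer status

After the probe, peer status on the other nodes should show
node1.ec2 by hostname instead of the elastic IP.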

~kaushal

On Sat, Feb 15, 2014 at 5:23 AM, Jon Cope <jcope at redhat.com> wrote:
> Hi all,
>
> I'm attempting to create a 4-node cluster on EC2.  I'm fairly new to this, so I may not be seeing something obvious.
>
> - Established passwordless SSH between the nodes.
> - Edited /etc/sysconfig/network, setting HOSTNAME=node#.ec2 on each node to satisfy the FQDN requirement.
> - Mounted xfs-formatted /dev/xvdh at /mnt/brick1.
> - Stopped iptables (commands sketched below).
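>
> Roughly, those steps as commands on each node (node numbers here are
> placeholders; device and mount point as above):
>
> # ssh-keygen && ssh-copy-id root@node2.ec2    # repeated for each peer
> # vi /etc/sysconfig/network                   # set HOSTNAME=node1.ec2
> # mkfs.xfs /dev/xvdh
> # mkdir -p /mnt/brick1
> # mount -t xfs /dev/xvdh /mnt/brick1
> # service iptables stop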
>
>
> The error I'm getting occurs when invoking the following, where <volume> is the volume name:
>
> # gluster volume create <volume> replica 2 node1.ec2:/mnt/brick1 node2.ec2:/mnt/brick1 node3.ec2:/mnt/brick1 node4.ec2:/mnt/brick1
> volume create: <volume>: failed: Host node1.ec2 is not in 'Peer in Cluster' state
>
> Checking peer status from node{2..4}.ec2 produces the following.  Note that node1.ec2's elastic IP appears instead of its FQDN; I'm not sure whether that's relevant.
>
> [root at node2 ~]# gluster peer status
> Number of Peers: 3
>
> Hostname: node4.ec2
> Uuid: ab2bcdd8-2c0b-439d-b685-3be457988abc
> State: Peer in Cluster (Connected)
>
> Hostname: node3.ec2
> Uuid: 4f128213-3549-494a-af04-822b5e2f2b96
> State: Peer in Cluster (Connected)
>
> Hostname: ###.##.##.###                     #node1.ec2 elastic IP
> Uuid: 09d81803-e5e1-43b1-9faf-e94f730acc3e
> State: Peer in Cluster (Connected)
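>
> For reference, glusterd's stored record for that peer can be
> inspected directly on node2 (assuming the usual gluster 3.x layout
> of one file per peer UUID under /var/lib/glusterd/peers):
>
> # cat /var/lib/glusterd/peers/09d81803-e5e1-43b1-9faf-e94f730acc3e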
>
> The error as it appears in etc-glusterfs-glusterd.vol.log:
>
> [2014-02-14 23:28:44.634663] E [glusterd-utils.c:5351:glusterd_new_brick_validate] 0-management: Host node1.ec2 is not in 'Peer in Cluster' state
> [2014-02-14 23:28:44.634699] E [glusterd-volume-ops.c:795:glusterd_op_stage_create_volume] 0-management: Host node1.ec2 is not in 'Peer in Cluster' state
> [2014-02-14 23:28:44.634718] E [glusterd-syncop.c:890:gd_stage_op_phase] 0-management: Staging of operation 'Volume Create' failed on localhost : Host node1.ec2 is not in 'Peer in Cluster' state
>
> Can someone suggest a possible cause of this error or point me in a viable direction?
>
>
>
> _______________________________________________
> Gluster-users mailing list
> Gluster-users at gluster.org
> http://supercolony.gluster.org/mailman/listinfo/gluster-users


