[Gluster-users] rhs-hadoop-install Fails to create volume

Jon Cope <jcope@redhat.com>
Thu Feb 13 00:11:15 UTC 2014


Hi All,

I'm trying to configure a Gluster/Hadoop volume on a 4-node EC2 cluster using the automated configuration process provided by:

rhs-hadoop-install-0_65-2.el6rhs.noarch.rpm
rhs-hadoop-2.1.6-2.noarch.rpm
command "./install /dev/SomeDevice"

I begin with 4 nodes, each with an attached, formatted EBS volume at /dev/xvdh.  Using "./install.sh /dev/SomeDevice", the script successfully:
1. creates a dir on each node called /mnt/brick1
2. uses mkfs.xfs on each device
3. mounts each filesystem to /mnt/brick1
4. edits /etc/fstab accordingly
5. probes peers listed in /usr/share/rhs-hadoop-install/hosts
6. attempts to create the volume
7. explodes (see below)
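
For anyone trying to reproduce this outside the script, my understanding is that steps 2-5 boil down to roughly the following (a sketch only; the script may pass mkfs/mount options I haven't traced):

    # on every node; device name per the script output below:
    mkfs.xfs /dev/xvdh
    mkdir -p /mnt/brick1 /mnt/glusterfs
    echo "/dev/xvdh /mnt/brick1 xfs defaults 0 0" >> /etc/fstab
    mount /mnt/brick1

    # from node1.ec2 only:
    gluster peer probe node2.ec2    # and likewise node3.ec2, node4.ec2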

The nodes themselves appear okay: hostnames are all fully qualified and resolve, and every peer shows as connected (see the gluster peer status output below).

I've run out of ideas.  Does anyone have anything?
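
My best guess at a next step (untested, and the log path assumes the stock RHS/glusterfs layout) is to pull the real failure reason out of glusterd's log on node1 and sanity-check the brick mount:

    tail -n 50 /var/log/glusterfs/etc-glusterfs-glusterd.vol.log
    df -h /mnt/brick1        # brick should sit on the XFS mount, not the root fs
    getfattr -d -m . -e hex /mnt/brick1/HadoopVol    # leftover gluster xattrs from the earlier delete?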


----------------------------------------
--    Begin cluster configuration     --
----------------------------------------

-- Cleaning up (un-mounting, deleting volume, etc.)
  -- un-mounting /mnt/glusterfs on all nodes...
  -- from node node1.ec2:
       stopping HadoopVol volume...
       deleting HadoopVol volume...
  -- from node node1.ec2:
       detaching all other nodes from trusted pool...
  -- on all nodes:
       rm /mnt/glusterfs...
       umount /mnt/brick1...
       rm /mnt/brick1 and /mnt/brick1/mapredlocal...

-- Setting up brick and volume mounts, creating and starting volume
  -- on all nodes:
       mkfs.xfs /dev/xvdh...
       mkdir /mnt/brick1, /mnt/glusterfs and /mnt/brick1/mapredlocal...
       append mount entries to /etc/fstab...
       mount /mnt/brick1...
  -- from node node1.ec2:
       creating trusted pool...
       creating HadoopVol volume...
       starting HadoopVol volume...
   ERROR: Volume "HadoopVol" creation failed with error 1
          Bricks=" node1.ec2:/mnt/brick1/HadoopVol node2.ec2:/mnt/brick1/HadoopVol node3.ec2:/mnt/brick1/HadoopVol node4.ec2:/mnt/brick1/HadoopVol"

######## All nodes appear okay.

[root@node1 rhs-hadoop-install]# gluster peer status
Number of Peers: 3

Hostname: node2.ec2
Uuid: 888d8c52-dcec-42c4-96a8-e7fbf1e04de0
State: Peer in Cluster (Connected)

Hostname: node3.ec2
Uuid: 34d0c158-3021-4187-94d1-63adaa1a3a3d
State: Peer in Cluster (Connected)

Hostname: node4.ec2
Uuid: 2d9ae6c0-9dc1-4080-ab0b-dfd12e3f108e
State: Peer in Cluster (Connected)
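
Re-running just the create step by hand from node1.ec2 should at least surface the CLI's actual error text instead of a bare exit status; something like this (the replica count is a guess, since I don't know what the script requests):

    gluster volume create HadoopVol replica 2 \
        node1.ec2:/mnt/brick1/HadoopVol node2.ec2:/mnt/brick1/HadoopVol \
        node3.ec2:/mnt/brick1/HadoopVol node4.ec2:/mnt/brick1/HadoopVol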


