[Gluster-users] [Gluster-devel] Volume Create Failed

Joe Julian joe at julianfamily.org
Thu May 8 05:32:06 UTC 2014


"netstat -tlnp" is a useful command to know. That shows what tcp ports 
are listening and the pids and command names of those processes.

More specifically to gluster, "gluster volume status" will show what 
ports each brick is listening on.
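For example, you can pull just the brick ports out of that output. This is a hedged sketch; the sample text below is hypothetical output in the 3.4-era column layout, where the port is the third field of each "Brick" line:

```shell
# Hypothetical `gluster volume status` output (3.4-era column layout assumed):
sample='Brick server1:/brick1          49152   Y       12345
Brick server2:/brick1          49153   Y       12346'

# The third whitespace-separated field is the TCP port the brick listens on.
printf '%s\n' "$sample" | awk '/^Brick/ {print $3}'
```

Cross-check those numbers against what "netstat -tlnp" shows actually listening.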

"@ports" from the IRC channel will trigger a factoid that says:

    glusterd's management port is 24007/tcp and 24008/tcp if you use
    rdma. Bricks (glusterfsd) use 24009 & up for <3.4 and 49152 & up for
    3.4. (Deleted volumes do not reset this counter.) Additionally it
    will listen on 38465-38467/tcp for nfs, also 38468 for NLM since
    3.3.0. NFS also depends on rpcbind/portmap on port 111 and 2049
    since 3.4.
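As a quick sanity check against those numbers, here's a sketch of computing the brick port range to open for 3.4. It assumes ports are handed out sequentially from 49152 and that no deleted volumes have already consumed earlier ports (per the factoid, the counter doesn't reset):

```shell
# Sketch: brick port range for a host serving NUM_BRICKS bricks on 3.4+.
# Assumption: sequential assignment from 49152, no ports consumed by
# previously deleted volumes.
BASE=49152
NUM_BRICKS=4                      # bricks hosted on this server
LAST=$((BASE + NUM_BRICKS - 1))
echo "allow tcp ${BASE}:${LAST}"
# which would translate to a firewall rule along the lines of:
#   iptables -I INPUT -p tcp --dport ${BASE}:${LAST} -j ACCEPT
```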


The documentation states that:

    Brick ports will now listen from 49152 onwards (instead of 24009
    onwards as with previous releases). The brick port assignment scheme
    is now compliant with IANA guidelines.

https://forge.gluster.org/gluster-docs-project/pages/GlusterFS_34_Release_Notes#Brick+port+changes

That is, however, the only place where the documentation is correct; the rest of it still refers to port 24009 and needs a patch. Whatever documentation you were looking at that mentioned 34865 would have been talking about nfs (38465-38467 -- the digits in your rule look transposed).
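A quick way to tell whether iptables is actually blocking a brick port is to probe it. This is a sketch using bash's /dev/tcp; the host and port below are placeholders, so substitute your server and the port reported by "gluster volume status":

```shell
# Sketch: report whether a TCP port accepts connections, via bash's /dev/tcp.
# Host and port are placeholders -- use your server and brick port.
check_port() {
  if timeout 2 bash -c "exec 3<>/dev/tcp/$1/$2" 2>/dev/null; then
    echo "open"
  else
    echo "closed"    # refused, filtered by iptables, or timed out
  fi
}
check_port 127.0.0.1 49152
```

"closed" from a remote client but "open" from the server itself points squarely at the firewall.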

On 5/5/2014 7:47 PM, Thing wrote:
> Using iptraf and dd to create a 2GB file, it looks like data is being 
> transferred from port 970 to port 49152.  Yet the docs say 34865?
>
> ?
>
> On 6 May 2014 14:20, Thing <thing.thing at gmail.com> wrote:
>
>     Seem iptables is blocking sync, so what have I missed please?
>
>     ========
>     Chain IN_public_allow (1 references)
>     target     prot opt source destination
>     ACCEPT     tcp  --  0.0.0.0/0    0.0.0.0/0    tcp dpt:2049 ctstate NEW
>     ACCEPT     tcp  --  0.0.0.0/0    0.0.0.0/0    tcp dpt:22 ctstate NEW
>     ACCEPT     tcp  --  0.0.0.0/0    0.0.0.0/0    tcp dpts:24009:24012 ctstate NEW
>     ACCEPT     udp  --  0.0.0.0/0    0.0.0.0/0    udp dpt:111 ctstate NEW
>     ACCEPT     tcp  --  0.0.0.0/0    0.0.0.0/0    tcp dpts:34865:34867 ctstate NEW
>     ACCEPT     tcp  --  0.0.0.0/0    0.0.0.0/0    tcp dpt:111 ctstate NEW
>     ACCEPT     tcp  --  0.0.0.0/0    0.0.0.0/0    tcp dpt:24007 ctstate NEW
>     =======
>
>
>     On 6 May 2014 13:18, Thing <thing.thing at gmail.com> wrote:
>
>         For RHEL6.5 what else do I need to install to allow mount to work?
>
>         =======8><----========
>         Installed:
>           glusterfs.x86_64 0:3.4.0.57rhs-1.el6_5
>
>         Complete!
>         [root@8kxl72s ~]# mount -t glusterfs
>         rhel7rc-004.ods.vuw.ac.nz:gv0 /mnt/gluster1-gv0
>         mount: unknown filesystem type 'glusterfs'
>         ======
>
>
>
>         On 6 May 2014 12:28, Cary Tsai <f4lens at gmail.com> wrote:
>
>             # gluster peer status
>             Number of Peers: 3
>
>             Hostname: us-east-2
>             Uuid: 3b102df3-74a7-4794-b300-b93bccfe8072
>             State: Peer in Cluster (Connected)
>
>             Hostname: us-west-1
>             Uuid: 98906a76-dd5b-4db9-99d5-1d51b1ee3d2a
>             State: Peer in Cluster (Connected)
>
>             Hostname: us-west-2
>             Uuid: 16eff965-ec88-4d12-adea-8512350bdaa7
>             State: Peer in Cluster (Connected)
>
>             # gluster volume  create  snoopy replica 4 transport tcp
>             192.168.255.5:/brick1 us-east-2:/brick1 us-west-1:/brick1
>             us-west-2:/brick1 force
>             volume create: snoopy: failed
>             -------------------------------------------------------------------
>             When I check the debug log, /var/log/glusterfs/cli.log ,
>             it shows:
>
>             [2014-05-06 00:17:29.988414] W
>             [rpc-transport.c:175:rpc_transport_load] 0-rpc-transport:
>             missing 'option transport-type'. defaulting to "socket"
>             [2014-05-06 00:17:29.988909] I [socket.c:3480:socket_init]
>             0-glusterfs: SSL support is NOT enabled
>             [2014-05-06 00:17:29.988930] I [socket.c:3495:socket_init]
>             0-glusterfs: using system polling thread
>             [2014-05-06 00:17:30.022545] I
>             [cli-cmd-volume.c:392:cli_cmd_volume_create_cbk] 0-cli:
>             Replicate cluster type found. Checking brick order.
>             [2014-05-06 00:17:30.022706] I
>             [cli-cmd-volume.c:304:cli_cmd_check_brick_order] 0-cli:
>             Brick order okay
>             [2014-05-06 00:17:30.273942] I
>             [cli-rpc-ops.c:805:gf_cli_create_volume_cbk] 0-cli:
>             Received resp to create volume
>             [2014-05-06 00:17:30.274027] I [input.c:36:cli_batch] 0-:
>             Exiting with: -1
>
>             What did I do wrong? Are there more details I can read to
>             figure out why my volume create failed?
>             Thanks
>
>             _______________________________________________
>             Gluster-users mailing list
>             Gluster-users at gluster.org
>             http://supercolony.gluster.org/mailman/listinfo/gluster-users
