[Gluster-users] Change transport-type on volume from tcp to rdma, tcp

Geoffrey Letessier geoffrey.letessier at cnrs.fr
Tue Jul 21 14:45:42 UTC 2015


Oops, I made this change on every volume I have, but I can't mount them with the other transport type… For example, with my vol_shared volume (whose transport type was previously set to RDMA), when I try to mount it with the TCP transport type, it fails, as you can read below:
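For reference, the mount command behind this attempt, as a sketch reconstructed from the glusterfs arguments visible in the log below (option names are standard mount.glusterfs options, not copied from my fstab):

  mount -t glusterfs -o transport=tcp,direct-io-mode=disable,enable-ino32 ib-storage2:/vol_shared /shared
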
[2015-07-21 14:36:30.473014] I [MSGID: 100030] [glusterfsd.c:2301:main] 0-/usr/sbin/glusterfs: Started running /usr/sbin/glusterfs version 3.7.2 (args: /usr/sbin/glusterfs --enable-ino32 --direct-io-mode=disable --volfile-server=ib-storage2 --volfile-server-transport=tcp --volfile-id=vol_shared.tcp /shared)
[2015-07-21 14:36:30.484964] W [socket.c:923:__socket_keepalive] 0-socket: failed to set TCP_USER_TIMEOUT 0 on socket 9, Protocol not available
[2015-07-21 14:36:30.485009] E [socket.c:3015:socket_connect] 0-glusterfs: Failed to set keep-alive: Protocol not available
[2015-07-21 14:36:30.485241] I [event-epoll.c:629:event_dispatch_epoll_worker] 0-epoll: Started thread with index 1
[2015-07-21 14:36:30.494467] I [event-epoll.c:629:event_dispatch_epoll_worker] 0-epoll: Started thread with index 2
[2015-07-21 14:36:30.495321] I [MSGID: 114020] [client.c:2118:notify] 0-vol_shared-client-0: parent translators are ready, attempting connect on transport
[2015-07-21 14:36:30.498989] W [socket.c:923:__socket_keepalive] 0-socket: failed to set TCP_USER_TIMEOUT 0 on socket 12, Protocol not available
[2015-07-21 14:36:30.499004] E [socket.c:3015:socket_connect] 0-vol_shared-client-0: Failed to set keep-alive: Protocol not available
[2015-07-21 14:36:30.499116] I [MSGID: 114020] [client.c:2118:notify] 0-vol_shared-client-1: parent translators are ready, attempting connect on transport
[2015-07-21 14:36:30.499761] E [MSGID: 114058] [client-handshake.c:1525:client_query_portmap_cbk] 0-vol_shared-client-0: failed to get the port number for remote subvolume. Please run 'gluster volume status' on server to see if brick process is running.
[2015-07-21 14:36:30.499809] I [MSGID: 114018] [client.c:2042:client_rpc_notify] 0-vol_shared-client-0: disconnected from vol_shared-client-0. Client process will keep trying to connect to glusterd until brick's port is available
[2015-07-21 14:36:30.502513] W [socket.c:923:__socket_keepalive] 0-socket: failed to set TCP_USER_TIMEOUT 0 on socket 12, Protocol not available
[2015-07-21 14:36:30.502529] E [socket.c:3015:socket_connect] 0-vol_shared-client-1: Failed to set keep-alive: Protocol not available
Final graph:
+------------------------------------------------------------------------------+
  1: volume vol_shared-client-0
  2:     type protocol/client
  3:     option ping-timeout 42
  4:     option remote-host ib-storage1
  5:     option remote-subvolume /export/brick_shared/data
  6:     option transport-type socket
  7:     option send-gids true
  8: end-volume
  9:  
 10: volume vol_shared-client-1
 11:     type protocol/client
 12:     option ping-timeout 42
 13:     option remote-host ib-storage2
 14:     option remote-subvolume /export/brick_shared/data
 15:     option transport-type socket
 16:     option send-gids true
 17: end-volume
 18:  
 19: volume vol_shared-replicate-0
 20:     type cluster/replicate
 21:     subvolumes vol_shared-client-0 vol_shared-client-1
 22: end-volume
 23:  
 24: volume vol_shared-dht
 25:     type cluster/distribute
 26:     option min-free-disk 5%
 27:     subvolumes vol_shared-replicate-0
 28: end-volume
 29:  
 30: volume vol_shared-write-behind
 31:     type performance/write-behind
 32:     subvolumes vol_shared-dht
 33: end-volume
 34:  
 35: volume vol_shared-readdir-ahead
 36:     type performance/readdir-ahead
 37:     subvolumes vol_shared-write-behind
 38: end-volume
 39:  
 40: volume vol_shared-io-cache
 41:     type performance/io-cache
 42:     option cache-size 1GB
 43:     subvolumes vol_shared-readdir-ahead
 44: end-volume
 45:  
 46: volume vol_shared-quick-read
 47:     type performance/quick-read
 48:     option cache-size 1GB
 49:     subvolumes vol_shared-io-cache
 50: end-volume
 51:  
 52: volume vol_shared-open-behind
 53:     type performance/open-behind
 54:     subvolumes vol_shared-quick-read
 55: end-volume
 56:  
 57: volume vol_shared-md-cache
 58:     type performance/md-cache
 59:     subvolumes vol_shared-open-behind
 60: end-volume
 61:  
 62: volume vol_shared
 63:     type debug/io-stats
 64:     option latency-measurement off
 65:     option count-fop-hits off
 66:     subvolumes vol_shared-md-cache
 67: end-volume
 68:  
 69: volume meta-autoload
 70:     type meta
 71:     subvolumes vol_shared
 72: end-volume
 73:  
+------------------------------------------------------------------------------+
[2015-07-21 14:36:30.503372] E [MSGID: 114058] [client-handshake.c:1525:client_query_portmap_cbk] 0-vol_shared-client-1: failed to get the port number for remote subvolume. Please run 'gluster volume status' on server to see if brick process is running.
[2015-07-21 14:36:30.503421] I [MSGID: 114018] [client.c:2042:client_rpc_notify] 0-vol_shared-client-1: disconnected from vol_shared-client-1. Client process will keep trying to connect to glusterd until brick's port is available
[2015-07-21 14:36:30.503439] E [MSGID: 108006] [afr-common.c:3922:afr_notify] 0-vol_shared-replicate-0: All subvolumes are down. Going offline until atleast one of them comes back up.
[2015-07-21 14:36:30.508393] I [fuse-bridge.c:5086:fuse_graph_setup] 0-fuse: switched to graph 0
[2015-07-21 14:36:30.509122] I [fuse-bridge.c:4012:fuse_init] 0-glusterfs-fuse: FUSE inited with protocol versions: glusterfs 7.22 kernel 7.13
[2015-07-21 14:36:30.509278] I [afr-common.c:4053:afr_local_init] 0-vol_shared-replicate-0: no subvolumes up
[2015-07-21 14:36:30.509432] I [afr-common.c:4053:afr_local_init] 0-vol_shared-replicate-0: no subvolumes up
[2015-07-21 14:36:30.509463] W [fuse-bridge.c:780:fuse_attr_cbk] 0-glusterfs-fuse: 2: LOOKUP() / => -1 (Transport endpoint is not connected)
[2015-07-21 14:36:30.523106] I [fuse-bridge.c:4933:fuse_thread_proc] 0-fuse: unmounting /shared
[2015-07-21 14:36:30.523807] W [glusterfsd.c:1219:cleanup_and_exit] (--> 0-: received signum (15), shutting down
[2015-07-21 14:36:30.523840] I [fuse-bridge.c:5628:fini] 0-fuse: Unmounting '/shared'.

# gluster volume info vol_shared
 
Volume Name: vol_shared
Type: Replicate
Volume ID: 64cdf649-e800-4f18-a940-398526775619
Status: Started
Number of Bricks: 1 x 2 = 2
Transport-type: tcp,rdma
Bricks:
Brick1: ib-storage1:/export/brick_shared/data
Brick2: ib-storage2:/export/brick_shared/data
Options Reconfigured:
config.transport: tcp,rdma
auth.allow: 10.0.*
cluster.min-free-disk: 5%
performance.cache-size: 1GB
performance.io-thread-count: 32
diagnostics.brick-log-level: CRITICAL
nfs.disable: on
performance.read-ahead: off
performance.readdir-ahead: on

Any ideas?

In addition, after restarting my volume in Gluster, I cannot mount it at all, whatever transport type I specify...
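
For what it's worth, the portmap errors above suggest double-checking on the servers that the brick processes are up and registered with glusterd; a minimal sketch of that check (output omitted):

  gluster volume status vol_shared
  # hypothetical recovery step if the bricks show as offline:
  gluster volume start vol_shared force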

Thanks in advance,
Geoffrey
------------------------------------------------------
Geoffrey Letessier
IT manager & systems engineer
UPR 9080 - CNRS - Laboratoire de Biochimie Théorique
Institut de Biologie Physico-Chimique
13, rue Pierre et Marie Curie - 75005 Paris
Tel: 01 58 41 50 93 - eMail: geoffrey.letessier at ibpc.fr

> On 21 Jul 2015, at 15:37, Soumya Koduri <skoduri at redhat.com> wrote:
> 
> 
> 
> On 07/21/2015 02:40 PM, Geoffrey Letessier wrote:
>> Dear all,
>> 
>> Is there a way to modify the transport-type setting of GlusterFS volumes?
>> Indeed, I previously set the transport-type parameter to tcp for my
>> main volume and I would like to change it from tcp to rdma,tcp.
>> 
> 
> [1] captures most of the details required to configure RDMA volumes and change the transport type. I think the command below should work in your case.
> 
> # gluster volume set volname config.transport tcp,rdma
> 
> [1] - http://gluster.readthedocs.org/en/latest/Administrator%20Guide/RDMA%20Transport/
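> 
> A minimal sketch of the whole sequence, assuming the procedure described in [1] (clients are unmounted first and the volume restarted for the change to take effect; volume and server names below are placeholders):
> 
> # umount /shared                 (on every client)
> # gluster volume stop volname
> # gluster volume set volname config.transport tcp,rdma
> # gluster volume start volname
> # mount -t glusterfs -o transport=rdma server:/volname /mnt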
> 
> HTH,
> Soumya
> 
>> Thanks in advance,
>> Cordially
>> Geoffrey
>> ------------------------------------------------------
>> Geoffrey Letessier
>> IT manager & systems engineer
>> UPR 9080 - CNRS - Laboratoire de Biochimie Théorique
>> Institut de Biologie Physico-Chimique
>> 13, rue Pierre et Marie Curie - 75005 Paris
>> Tel: 01 58 41 50 93 - eMail: geoffrey.letessier at ibpc.fr
>> <mailto:geoffrey.letessier at ibpc.fr>
>> 
>> 
>> 
>> _______________________________________________
>> Gluster-users mailing list
>> Gluster-users at gluster.org
>> http://www.gluster.org/mailman/listinfo/gluster-users
>> 
