[Gluster-users] NFS re-export of gluster mounted disks
Gerald Brandt
gbr at majentis.com
Thu Sep 29 19:49:03 UTC 2011
Just for the record, this: http://community.gluster.org/p/nfs-performance-with-fuse-client-redundancy/ is what I was trying to do.
I'll set up a test system next week and see how it works.
Gerald
----- Original Message -----
> From: "Gerald Brandt" <gbr at majentis.com>
> To: gluster-users at gluster.org
> Sent: Tuesday, September 13, 2011 11:14:06 AM
> Subject: [Gluster-users] NFS re-export of gluster mounted disks
>
> Hi,
>
> I hope I can explain this properly.
>
> 1. I have a two-brick system with the bricks replicating each other
> (10.1.4.181 and 10.1.40.2)
> 2. I have a third system that mounts the gluster fileshares
> (192.168.30.111)
> 3. I export the share on 192.168.30.111 as NFS to a XenServer.
>
> What I'm hoping for is that the aggregate speed of gluster across the
> two servers (roughly 2 GigE) shows up when re-exported as NFS.
>
> When I use the Linux kernel NFS server, my speeds are atrocious.
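> 
> (For reference, re-exporting the FUSE mount through the kernel NFS
> server needs an explicit fsid= in /etc/exports as far as I know,
> since FUSE filesystems don't have a device number/UUID the server
> can use. Roughly something like this -- paths and options are just
> an illustration, not my exact config:
> 
>     /filer1   *(rw,no_subtree_check,fsid=10)
> 
> and then "exportfs -ra" to reload it.)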
>
> When I use the gluster NFS server, I get complete failure.
>
> The error I get when I try to use the gluster NFS server on a
> gluster-mounted disk is:
>
> [2011-09-13 10:38:45.265011] E [socket.c:1685:socket_connect_finish]
> 0-datastore-client-0: connection to 192.168.30.111:24007 failed
> (Connection refused)
> [2011-09-13 10:39:06.742844] I [nfs.c:704:init] 0-nfs: NFS service
> started
> [2011-09-13 10:39:06.742980] W [write-behind.c:3030:init]
> 0-datastore-write-behind: disabling write-behind for first 0 bytes
> [2011-09-13 10:39:06.745879] I [client.c:1935:notify]
> 0-datastore-client-0: parent translators are ready, attempting
> connect on transport
> Given volfile:
> +------------------------------------------------------------------------------+
> 1: volume datastore-client-0
> 2: type protocol/client
> 3: option remote-host 192.168.30.111
> 4: option remote-subvolume /filer1
> 5: option transport-type tcp
> 6: end-volume
> 7:
> 8: volume datastore-write-behind
> 9: type performance/write-behind
> 10: subvolumes datastore-client-0
> 11: end-volume
> 12:
> 13: volume datastore-read-ahead
> 14: type performance/read-ahead
> 15: subvolumes datastore-write-behind
> 16: end-volume
> 17:
> 18: volume datastore-io-cache
> 19: type performance/io-cache
> 20: subvolumes datastore-read-ahead
> 21: end-volume
> 22:
> 23: volume datastore-quick-read
> 24: type performance/quick-read
> 25: subvolumes datastore-io-cache
> 26: end-volume
> 27:
> 28: volume datastore
> 29: type debug/io-stats
> 30: option latency-measurement off
> 31: option count-fop-hits off
> 32: subvolumes datastore-quick-read
> 33: end-volume
> 34:
> 35: volume nfs-server
> 36: type nfs/server
> 37: option nfs.dynamic-volumes on
> 38: option rpc-auth.addr.datastore.allow *
> 39: option nfs3.datastore.volume-id 1ce79bc5-0e1a-4ab9-98ba-be38166101fa
> 40: option nfs.port 2049
> 41: subvolumes datastore
> 42: end-volume
>
> +------------------------------------------------------------------------------+
> [2011-09-13 10:39:06.747340] I [rpc-clnt.c:1531:rpc_clnt_reconfig]
> 0-datastore-client-0: changing port to 24009 (from 0)
> [2011-09-13 10:39:09.748308] I
> [client-handshake.c:1082:select_server_supported_programs]
> 0-datastore-client-0: Using Program GlusterFS-3.1.0, Num (1298437),
> Version (310)
> [2011-09-13 10:39:09.751963] I
> [client-handshake.c:913:client_setvolume_cbk] 0-datastore-client-0:
> Connected to 192.168.30.111:24009, attached to remote volume
> '/filer1'.
> [2011-09-13 10:39:09.758352] I
> [client3_1-fops.c:2228:client3_1_lookup_cbk] 0-datastore-client-0:
> remote operation failed: No data available
> [2011-09-13 10:39:09.758383] C
> [nfs.c:240:nfs_start_subvol_lookup_cbk] 0-nfs: Failed to lookup
> root: No data available
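> 
> (That "No data available" looks like an extended-attribute lookup
> failing on the re-exported path. One check that might help -- just a
> guess at a useful diagnostic -- is dumping the xattrs visible on the
> mount point:
> 
>     getfattr -d -m . -e hex /filer1
> 
> to see whether the trusted.* attributes show up through the FUSE
> mount.)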
>
>
> This is the 'gluster volume info' for each server:
>
> The two backend gluster servers:
>
> Volume Name: datastore
> Type: Replicate
> Status: Started
> Number of Bricks: 2
> Transport-type: tcp
> Bricks:
> Brick1: 10.1.40.2:/nfs/disk3
> Brick2: 10.1.4.181:/glusetr
> Options Reconfigured:
> nfs.port: 2049
> nfs.trusted-sync: on
>
> The server that mounts the volume from the two gluster servers and
> re-exports it as NFS:
>
> fstab:
> 192.168.30.1:/datastore  /filer1  glusterfs  noatime  0  0
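> 
> (That fstab entry is equivalent to mounting by hand with something
> like:
> 
>     mount -t glusterfs -o noatime 192.168.30.1:/datastore /filer1
> )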
>
>
>
> Volume Name: datastore
> Type: Distribute
> Status: Started
> Number of Bricks: 1
> Transport-type: tcp
> Bricks:
> Brick1: 192.168.30.111:/filer1
> Options Reconfigured:
> nfs.port: 2049
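> 
> (For completeness: a single-brick volume like that would have been
> created on 192.168.30.111 with roughly the following -- my guess at
> the commands based on the volume info above, not a transcript:
> 
>     gluster volume create datastore 192.168.30.111:/filer1
>     gluster volume set datastore nfs.port 2049
>     gluster volume start datastore
> 
> i.e. the brick path /filer1 is itself the glusterfs FUSE mount.)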
>
>
>
> A rough diagram of the network:
>
> brick1            brick2           (physical machines 1 and 2)
>   |                 |
>   | (a)             | (b)
>   |                 |
>   --------||--------
>           ||
>           || (c)
>           ||
>   gluster client                    (XenServer VM - machine 3, virtual)
>           |
>           | (d)
>           |
>   NFS server                        (same XenServer VM as above - machine 3, virtual)
>           |
>           | (e)
>           |
>   XenServer                         (mounts the NFS export from the VM)
>
> Where:
>
> (a) 1 GigE TCP/IP (physical NIC) (10.1.4.181)
> (b) 1 GigE TCP/IP (physical NIC) (10.1.40.2)
> (c) aggregate ~2 GigE (2 physical NICs)
> (d) aggregate ~2 GigE (XenServer virtual NIC) (192.168.30.111)
> (e) aggregate ~2 GigE (XenServer virtual NIC)
>
>
> I know I'll take a performance hit going through multiple
> layers/machines, but I'm hoping the aggregate throughput offsets
> that.
>
> Does anyone know if gluster can export a gluster-mounted fileshare?
>
> Thanks,
> Gerald
>
>