[Gluster-users] Change NFS parameters post-start

Harry Mangalam hjmangalam at gmail.com
Fri Jul 27 22:29:41 UTC 2012


In trying to convert clients from using the gluster native client to
an NFS client, I'm trying to get the gluster volume mounted on a test
mount point on the same client that the native client has mounted the
volume.  The client refuses with the error:

 mount -t nfs bs1:/gl /mnt/glnfs
mount: bs1:/gl failed, reason given by server: No such file or directory
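(As an aside: since Gluster's built-in NFS server only speaks NFSv3
over TCP, I also tried forcing the version and protocol on the client
side to rule out negotiation problems; option names are from mount.nfs
and may vary slightly by distro. Same result:)

```shell
# Force NFSv3 over TCP for both the NFS and MOUNT protocols,
# since Gluster's gNFS does not do NFSv4 or UDP
mount -t nfs -o vers=3,proto=tcp,mountproto=tcp bs1:/gl /mnt/glnfs
```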

Looking at the gluster nfs.log, it appears the NFS server is trying to
reach the volume over RDMA only, which is odd, given that 3.3-1 does
not fully support RDMA.

The nfs.log is 99% these failure messages:

E [rdma.c:4458:tcp_connect_finish] 0-gl-client-2: tcp connect to <etc>

but the few messages that aren't show the client volume definition the
NFS server is using:

  1: volume gl-client-0
  2:     type protocol/client
  3:     option remote-host bs2
  4:     option remote-subvolume /raid1
  5:     option transport-type rdma                  <-------------------------
  6:     option username a2994eef-60d6-4609-a6d1-8d760cf82424
  7:     option password bbf8e05d-6ada-4371-99d0-09b4c55cc899
  8: end-volume

The volume was created tcp,rdma (before I realized that rdma was
temporarily deprecated):

Volume Name: gl
Type: Distribute
Volume ID: 21f480f7-fc5a-4fd8-a084-3964634a9332
Status: Started
Number of Bricks: 8
Transport-type: tcp,rdma
Bricks:
Brick1: bs2:/raid1
Brick2: bs2:/raid2
Brick3: bs3:/raid1
Brick4: bs3:/raid2
Brick5: bs4:/raid1
Brick6: bs4:/raid2
Brick7: bs1:/raid1
Brick8: bs1:/raid2
Options Reconfigured:
performance.write-behind-window-size: 1024MB
performance.flush-behind: on
performance.cache-size: 268435456
nfs.disable: off
performance.io-cache: on
performance.quick-read: on
performance.io-thread-count: 64
auth.allow: 10.2.*.*,10.1.*.*

and the native gluster clients talk to it just fine over IPoIB.

But the NFS side apparently insists on trying to use RDMA to reach the
bricks, even though RDMA isn't actually being used.


I didn't originally ask for or want the NFS subsystem and later turned
it off (nfs.disable = on), but now I want to use it, and I'd like to be
able to tell it to use sockets/TCP.  Is there a way to do this after
the fact?  That is, without destroying and re-creating the current
volume as tcp-only.
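If there is a supported knob for this, I'd guess it looks something
like one of the following, based on what `gluster volume set help`
lists; I haven't confirmed that either works on 3.3-1, so treat these
as guesses rather than a recipe:

```shell
# Guess 1: restrict only the built-in NFS server to TCP transport
gluster volume set gl nfs.transport-type tcp

# Guess 2: change the volume's own transport; the docs suggest the
# volume must be stopped (and clients unmounted) for this to apply
gluster volume stop gl
gluster volume set gl config.transport tcp
gluster volume start gl
```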

I have another gluster FS where the transport type is set to tcp and
it's working fine under NFS:

 1: volume gli-client-0
 2:     type protocol/client
 3:     option remote-host pbs1ib
 4:     option remote-subvolume /bducgl
 5:     option transport-type tcp
 6:     option username c173a866-a561-4da9-b977-93f8df4766a1
 7:     option password 09480722-0b0f-4b41-bc73-9970fe129d27
 8: end-volume


hjm

-- 
Harry Mangalam - Research Computing, OIT, Rm 225 MSTB, UC Irvine
[m/c 2225] / 92697 Google Voice Multiplexer: (949) 478-4487
415 South Circle View Dr, Irvine, CA, 92697 [shipping]
MSTB Lat/Long: (33.642025,-117.844414) (paste into Google Maps)


