[Gluster-users] Solaris NFS client and nfs.rpc

Olivier.Franki at smals.be
Thu Oct 10 10:58:55 UTC 2013


Hi,

we hit a strange security issue when connecting a Solaris NFS client to
Gluster volumes. Initially, we tried to share a volume between a Linux
client (10.1.99.200) and a Solaris client (10.1.99.201).

We created this volume:

[root@llsmagfs001a glusterfs]# gluster volume info vol1

Volume Name: vol1
Type: Distribute
Volume ID: 4abcee08-6172-441a-851b-53becb77c281
Status: Started
Number of Bricks: 1
Transport-type: tcp
Bricks:
Brick1: llsmagfs001a.cloud.testsc.sc:/export/vol1
Options Reconfigured:
diagnostics.client-log-level: DEBUG
diagnostics.brick-log-level: DEBUG
auth.allow: 10.1.99.200
nfs.rpc-auth-allow: 10.1.99.201
diagnostics.client-sys-log-level: WARNING
diagnostics.brick-sys-log-level: WARNING
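
(For reference, a sketch of how such a volume would typically be created
and restricted; the create/start steps below are assumed, only the two
allow options are confirmed by the volume info above.)

# create and start a single-brick distribute volume
gluster volume create vol1 llsmagfs001a.cloud.testsc.sc:/export/vol1
gluster volume start vol1
# restrict the native (FUSE) protocol to the Linux client
gluster volume set vol1 auth.allow 10.1.99.200
# restrict NFS mounts to the Solaris client
gluster volume set vol1 nfs.rpc-auth-allow 10.1.99.201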


The NFS export is restricted to the Solaris client only (via
nfs.rpc-auth-allow), which showmount confirms:

[root@llsmagfs001a glusterfs]# showmount -e 10.1.99.202
Export list for 10.1.99.202:
/vol1 10.1.99.201

If we try to mount this volume via NFS from the Linux client, we receive
an "access denied" error, as expected:

[root@llsmaofr001a mnt]# ifconfig eth0 | grep "inet addr"
          inet addr:10.1.99.200  Bcast:10.1.99.255  Mask:255.255.254.0
[root@llsmaofr001a mnt]# mount -t nfs -o vers=3 10.1.99.202:/vol1 /mnt/vol1
mount.nfs: access denied by server while mounting 10.1.99.202:/vol1
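
(As a sanity check, one can list the MOUNT/NFS services the Gluster NFS
server has registered with the portmapper; a standard diagnostic, output
omitted here.)

# confirm which MOUNT/NFS program versions the server has registered
rpcinfo -p 10.1.99.202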

But if we try to mount this volume from another Solaris client
(10.1.98.66), an address that appears in neither allow list, we do not
receive an "access denied" error; the mount succeeds:

# ifconfig -a
lo0: flags=2001000849<UP,LOOPBACK,RUNNING,MULTICAST,IPv4,VIRTUAL> mtu 8232 
index 1
        inet 127.0.0.1 netmask ff000000
bge0: flags=1000843<UP,BROADCAST,RUNNING,MULTICAST,IPv4> mtu 1500 index 2
        inet 10.1.98.66 netmask fffffe00 broadcast 10.1.99.255
        ether 0:14:4f:5e:32:aa
# mount -o vers=3 nfs://10.1.99.202/vol1 /mnt
# mount | grep nfs
/mnt on nfs://10.1.99.202/vol1 remote/read/write/setuid/devices/vers=3/xattr/dev=594001d on Thu Oct 10 11:48:15 2013
# echo "test from solaris" > /mnt/test.solaris
# ls /mnt
test.solaris
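
(Note: nfs://host/path is Solaris' URL-style mount syntax. A follow-up
worth trying, sketched here but not part of the tests above, is the
classic host:/path form, which for v3 mounts goes through the MOUNT
protocol:)

# classic Solaris NFS mount syntax, for comparison with the nfs:// URL form
mount -F nfs -o vers=3 10.1.99.202:/vol1 /mnt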

Tested with:
- Solaris 10 and Solaris 11
- RHEL6
- GlusterFS 3.3.1-1, GlusterFS 3.4.0-2 and GlusterFS 3.4.1-2

Do we have to set another option to enforce RPC authentication for
Solaris clients?
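
(One experiment, sketched here but not verified: explicitly reject the
second Solaris client with nfs.rpc-auth-reject and check whether the
reject list is honoured where the allow list is not.)

# untested sketch: explicitly reject the second Solaris client
gluster volume set vol1 nfs.rpc-auth-reject 10.1.98.66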


Debug messages (when trying to mount the volume from the Linux client
via NFS):

[2013-10-10 10:14:07.578302] D [socket.c:463:__socket_rwv] 
0-socket.nfs-server: would have passed zero length to read/write
[2013-10-10 10:14:07.579045] D [socket.c:486:__socket_rwv] 
0-socket.nfs-server: EOF on socket
[2013-10-10 10:14:07.579078] D [socket.c:2236:socket_event_handler] 
0-transport: disconnecting now
[2013-10-10 10:14:07.587459] D [socket.c:463:__socket_rwv] 
0-socket.nfs-server: would have passed zero length to read/write
[2013-10-10 10:14:07.588021] D [socket.c:486:__socket_rwv] 
0-socket.nfs-server: EOF on socket
[2013-10-10 10:14:07.588076] D [socket.c:2236:socket_event_handler] 
0-transport: disconnecting now
[2013-10-10 10:14:07.589570] D [socket.c:463:__socket_rwv] 
0-socket.nfs-server: would have passed zero length to read/write
[2013-10-10 10:14:07.590260] D [mount3.c:912:mnt3svc_mnt] 0-nfs-mount: 
dirpath: /vol1
[2013-10-10 10:14:07.590293] D [mount3.c:855:mnt3_find_export] 
0-nfs-mount: dirpath: /vol1
[2013-10-10 10:14:07.590309] D [mount3.c:749:mnt3_mntpath_to_export] 
0-nfs-mount: Found export volume: vol1
[2013-10-10 10:14:07.590339] I [mount3.c:787:mnt3_check_client_net] 
0-nfs-mount: Peer 10.1.99.200:860  not allowed
[2013-10-10 10:14:07.590353] D [mount3.c:934:mnt3svc_mnt] 0-nfs-mount: 
Client mount not allowed
[2013-10-10 10:14:07.591104] D [socket.c:486:__socket_rwv] 
0-socket.nfs-server: EOF on socket
[2013-10-10 10:14:07.591171] D [socket.c:2236:socket_event_handler] 
0-transport: disconnecting now


Debug messages (when trying to mount the volume from the Solaris client
via NFS). Note that, unlike the Linux trace above, no MOUNT-protocol
entries (mnt3svc_mnt / mnt3_check_client_net) appear at all; the client
goes straight to NFSv3 LOOKUP calls:

[2013-10-10 10:17:15.444951] D 
[nfs3-helpers.c:1641:nfs3_log_fh_entry_call] 0-nfs-nfsv3: XID: 5250f479, 
LOOKUP: args: FH: exportid 00000000-0000-0000-0000-000000000000, gfid 
00000000-0000-0000-0000-000000000000, name: vol1
[2013-10-10 10:17:15.446010] D [nfs3-helpers.c:3458:nfs3_log_newfh_res] 
0-nfs-nfsv3: XID: 5250f479, LOOKUP: NFS: 0(Call completed successfully.), 
POSIX: 117(Structure needs cleaning), FH: exportid 
4abcee08-6172-441a-851b-53becb77c281, gfid 
00000000-0000-0000-0000-000000000001
[2013-10-10 10:17:15.446539] D 
[nfs3-helpers.c:1641:nfs3_log_fh_entry_call] 0-nfs-nfsv3: XID: 5250f478, 
LOOKUP: args: FH: exportid 00000000-0000-0000-0000-000000000000, gfid 
00000000-0000-0000-0000-000000000000, name: vol1
[2013-10-10 10:17:15.447234] D [nfs3-helpers.c:3458:nfs3_log_newfh_res] 
0-nfs-nfsv3: XID: 5250f478, LOOKUP: NFS: 0(Call completed successfully.), 
POSIX: 117(Structure needs cleaning), FH: exportid 
4abcee08-6172-441a-851b-53becb77c281, gfid 
00000000-0000-0000-0000-000000000001
[2013-10-10 10:17:15.448077] D [socket.c:486:__socket_rwv] 
0-socket.nfs-server: EOF on socket
[2013-10-10 10:17:15.448133] D [socket.c:2236:socket_event_handler] 
0-transport: disconnecting now
[2013-10-10 10:17:15.469271] D [nfs3-helpers.c:1627:nfs3_log_common_call] 
0-nfs-nfsv3: XID: 5ed48474, FSINFO: args: FH: exportid 
4abcee08-6172-441a-851b-53becb77c281, gfid 
00000000-0000-0000-0000-000000000001
[2013-10-10 10:17:15.469601] D [nfs3-helpers.c:3389:nfs3_log_common_res] 
0-nfs-nfsv3: XID: 5ed48474, FSINFO: NFS: 0(Call completed successfully.), 
POSIX: 117(Structure needs cleaning)
[2013-10-10 10:17:15.470341] D [nfs3-helpers.c:1627:nfs3_log_common_call] 
0-nfs-nfsv3: XID: 5ed48475, FSSTAT: args: FH: exportid 
4abcee08-6172-441a-851b-53becb77c281, gfid 
00000000-0000-0000-0000-000000000001
[2013-10-10 10:17:15.471159] D [nfs3-helpers.c:3389:nfs3_log_common_res] 
0-nfs-nfsv3: XID: 5ed48475, FSSTAT: NFS: 0(Call completed successfully.), 
POSIX: 117(Structure needs cleaning)
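
(To confirm on the wire whether the Solaris client ever issues a MOUNT
call, its traffic can be captured on the Gluster server and compared
against the mountd/nfs ports that rpcinfo -p reports; a diagnostic
sketch, interface name assumed:)

# capture all traffic from the second Solaris client on the server
tcpdump -n -i eth0 host 10.1.98.66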

Regards,

Olivier
