[Gluster-users] nfs acces denied

VAN CAUSBROECK Wannes Wannes.VANCAUSBROECK at onprvp.fgov.be
Tue Apr 1 11:49:22 UTC 2014


Hello Carlos,

This is just one of the volumes that has the problem, but there are several:

[root at lpr-nas01 ~]# gluster vol info caviar_data11
Volume Name: caviar_data11
Type: Distribute
Volume ID: 4440ef2e-5d60-41a9-bac8-cf4751fc9be2
Status: Started
Number of Bricks: 1
Transport-type: tcp
Bricks:
Brick1: lpr-nas01:/brick-xiv/caviar_data11

[root at lpr-nas01 ~]# gluster vol status caviar_data11 detail
Status of volume: caviar_data11
------------------------------------------------------------------------------
Brick                : Brick lpr-nas01:/brick-xiv/caviar_data11
Port                 : 49186
Online               : Y
Pid                  : 4592
File System          : ext4
Device               : /dev/sdb
Mount Options        : rw
Inode Size           : 256
Disk Space Free      : 7.2TB
Total Disk Space     : 22.7TB
Inode Count          : 381786112
Free Inodes          : 115286880


Mounting with glusterfs works fine and I have no problems whatsoever.
The NFS service is switched off.
No auto-heal is running.
Name resolution shouldn't be a problem, as the bricks are mounted on the same host that runs the gluster daemon; I also test the mount from that server.

[root at lpr-nas01 ~]# ping lpr-nas01
PING lpr-nas01.onprvp.fgov.be (192.168.151.21) 56(84) bytes of data.
64 bytes from lpr-nas01.onprvp.fgov.be (192.168.151.21): icmp_seq=1 ttl=64 time=0.023 ms
[root at lpr-nas01 ~]# nslookup 192.168.151.21
Server:                 192.168.147.31
Address:             192.168.147.31#53

21.151.168.192.in-addr.arpa      name = lpr-nas01.onprvp.fgov.be.


The files were created on a Linux machine with those uid/gid; the Windows machine needs only read access, so it always falls into the "others" category.
SELinux and iptables are switched off.


Thanks for helping out!


From: Carlos Capriotti [mailto:capriotti.carlos at gmail.com]
Sent: maandag 31 maart 2014 18:03
To: VAN CAUSBROECK Wannes
Cc: gluster-users at gluster.org
Subject: Re: [Gluster-users] nfs acces denied

Maybe it would be nice to see your volume info for the affected volumes.

Also, on the server side, what happens if you mount the share using glusterfs instead of NFS?

Any chance the native NFS server is running on your server?

Are there any auto-heal processes running ?

There are a few name resolution messages in your logs that seem to refer to the nodes themselves. Any DNS conflicts? Maybe add the names of the servers to the hosts file?

Your MS client seems to be having issues with user/group translation. It seems to create files with gid 1003. (I could be wrong.)

Again, are SELinux/ACLs/iptables disabled?

All is very inconclusive so far.

On Mon, Mar 31, 2014 at 5:26 PM, VAN CAUSBROECK Wannes <Wannes.VANCAUSBROECK at onprvp.fgov.be> wrote:
Well, with 'client' I do actually mean the server itself.
I've tried forcing Linux and Windows to NFS v3 and TCP, and on Windows I played around with the uid and gid, but the result is always the same.


On 31 Mar 2014, at 17:22, "Carlos Capriotti" <capriotti.carlos at gmail.com> wrote:
Well, saying your client side is "linux" does not help much. Distro, flavor, etc. help a lot, but I'll take a wild guess here.

First, force your NFS mount (client) to use nfs version 3.

The same for Microsoft. (It is fair to say I have no idea if the MS client supports v4 or not).

Additionally, check that firewalls are disabled on both sides, just for testing. The same goes for SElinux.

Windows ACLs and user mapping might be in your way too. There is a TechNet document that describes how to handle this mapping, if I am not wrong.
Just for testing, mount your NFS share on your own server, using localhost:/nfs_share, and see how it goes.

It is a good start.

Kr,

Carlos

On Mon, Mar 31, 2014 at 3:58 PM, VAN CAUSBROECK Wannes <Wannes.VANCAUSBROECK at onprvp.fgov.be> wrote:
Hello all,

I've already tried to post this, but I'm not sure it arrived at the mailing list.

I have some issues with my NFS mounts. My setup is as follows:
RHEL 6.4, gluster 3.4.2-1 running on a VM (4 cores, 8 GB RAM) attached to a SAN. All the bricks are on one 25TB disk, formatted ext4 in 64-bit mode.
On the gluster side of things, everything works without issues. The trouble starts when I mount a volume as an nfs mount.
Lots of volumes work without issues, but others behave strangely. The volumes that act weird generally contain many files (which may be coincidental).
The volumes in question mount without issues, but when I try to go into a subdirectory it sometimes works and sometimes I get errors.

On Windows with the NFS client: access denied

In nfslog:
[2014-03-31 13:57:58.771241] I [dht-layout.c:638:dht_layout_normalize] 0-caviar_data11-dht: found anomalies in <gfid:c8d94120-6851-46ea-9f28-c629a44b1015>. holes=1 overlaps=0
[2014-03-31 13:57:58.771348] E [nfs3-helpers.c:3595:nfs3_fh_resolve_inode_lookup_cbk] 0-nfs-nfsv3: Lookup failed: <gfid:c8d94120-6851-46ea-9f28-c629a44b1015>: Invalid argument
[2014-03-31 13:57:58.771380] E [nfs3.c:1380:nfs3_lookup_resume] 0-nfs-nfsv3: Unable to resolve FH: (192.168.148.46:984) caviar_data11 : c8d94120-6851-46ea-9f28-c629a44b1015
[2014-03-31 13:57:58.771819] W [nfs3-helpers.c:3380:nfs3_log_common_res] 0-nfs-nfsv3: XID: 1ec28530, LOOKUP: NFS: 22(Invalid argument for operation), POSIX: 14(Bad address)
[2014-03-31 13:57:58.798967] I [dht-layout.c:638:dht_layout_normalize] 0-caviar_data11-dht: found anomalies in <gfid:14972193-1039-4d7a-aed5-0d7e7eccf57b>. holes=1 overlaps=0
[2014-03-31 13:57:58.799039] E [nfs3-helpers.c:3595:nfs3_fh_resolve_inode_lookup_cbk] 0-nfs-nfsv3: Lookup failed: <gfid:14972193-1039-4d7a-aed5-0d7e7eccf57b>: Invalid argument
[2014-03-31 13:57:58.799056] E [nfs3.c:1380:nfs3_lookup_resume] 0-nfs-nfsv3: Unable to resolve FH: (192.168.148.46:984) caviar_data11 : 14972193-1039-4d7a-aed5-0d7e7eccf57b
[2014-03-31 13:57:58.799088] W [nfs3-helpers.c:3380:nfs3_log_common_res] 0-nfs-nfsv3: XID: 1ec28531, LOOKUP: NFS: 22(Invalid argument for operation), POSIX: 14(Bad address)
....


On linux:
[root at lpr-nas01 brick-xiv2]# ll /media/2011/201105/20110530/
ls: /media/2011/201105/20110530/37: No such file or directory
total 332
...
drwxrwsr-x 2 nfsnobody 1003 4096 Jun  6  2011 32
drwxrwsr-x 2 nfsnobody 1003 4096 Jun  6  2011 34
drwxrwsr-x 2 nfsnobody 1003 4096 Jun  6  2011 35
drwxrwsr-x 2 nfsnobody 1003 4096 Jun  6  2011 36
drwxrwsr-x 2 nfsnobody 1003 4096 Jun  6  2011 37
...

[root at lpr-nas01 brick-xiv2]# ll /media/2011/201105/20110530/37
ls: /media/2011/201105/20110530/37/NN.0000073824357.00001.tif: No such file or directory
ls: /media/2011/201105/20110530/37/NN.0000073824357.00003.tif: No such file or directory
total 54
-rwxrwxr-x 0 nfsnobody 1003  9340 Jun  6  2011 NN.0000073824357.00001.tif
-rwxrwxr-x 1 nfsnobody 1003 35312 Jun  6  2011 NN.0000073824357.00002.tif
-rwxrwxr-x 0 nfsnobody 1003  9340 Jun  6  2011 NN.0000073824357.00003.tif


I see in the nfs log:
...
[2014-03-31 12:44:18.941083] I [dht-layout.c:638:dht_layout_normalize] 0-caviar_data11-dht: found anomalies in /2011/201107/20110716/55. holes=1 overlaps=0
[2014-03-31 12:44:18.958078] I [dht-layout.c:638:dht_layout_normalize] 0-caviar_data11-dht: found anomalies in /2011/201107/20110716/30. holes=1 overlaps=0
[2014-03-31 12:44:18.959980] I [dht-layout.c:638:dht_layout_normalize] 0-caviar_data11-dht: found anomalies in /2011/201107/20110716/90. holes=1 overlaps=0
[2014-03-31 12:44:18.961094] E [dht-helper.c:429:dht_subvol_get_hashed] (-->/usr/lib64/glusterfs/3.4.2/xlator/debug/io-stats.so(io_stats_lookup+0x157) [0x7fd6a61282e7] (-->/usr/lib64/libglusterfs.so.0(default_lookup+0x6d) [0x3dfe01c03d] (-->/usr/lib64/glusterfs/3.4.2/xlator/cluster/distribute.so(dht_lookup+0xa7e) [0x7fd6a656af2e]))) 0-caviar_data11-dht: invalid argument: loc->parent
[2014-03-31 12:44:18.961283] W [client-rpc-fops.c:2624:client3_3_lookup_cbk] 0-caviar_data11-client-0: remote operation failed: Invalid argument. Path: <gfid:00000000-0000-0000-0000-000000000000> (00000000-0000-0000-0000-000000000000)
[2014-03-31 12:44:18.961319] E [acl3.c:334:acl3_getacl_resume] 0-nfs-ACL: Unable to resolve FH: (192.168.151.21:740) caviar_data11 : 00000000-0000-0000-0000-000000000000
[2014-03-31 12:44:18.961338] E [acl3.c:342:acl3_getacl_resume] 0-nfs-ACL: unable to open_and_resume
...
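The "holes=1 overlaps=0" messages mean the DHT layout for that directory does not cover the full 32-bit hash space. A minimal sketch of the sanity check gluster performs, using hypothetical layout ranges (the real ranges are stored per brick in the trusted.glusterfs.dht xattr):

```python
# Sketch of the DHT layout check: each subvolume owns an inclusive
# [start, stop] slice of the 32-bit hash space. Uncovered gaps count
# as holes, double-covered regions as overlaps. Ranges are examples.

def layout_anomalies(ranges):
    """ranges: list of (start, stop) hash ranges, one per subvolume."""
    holes = overlaps = 0
    expected = 0  # next hash value that should be covered
    for start, stop in sorted(ranges):
        if start > expected:
            holes += 1      # gap before this range begins
        elif start < expected:
            overlaps += 1   # this range re-covers earlier hashes
        expected = max(expected, stop + 1)
    if expected <= 0xFFFFFFFF:
        holes += 1          # tail of the hash space is uncovered
    return holes, overlaps

# A layout that stops short of the end of the hash space shows up as
# "holes=1 overlaps=0", matching the nfs.log lines above:
print(layout_anomalies([(0x00000000, 0x7FFFFFFF)]))  # (1, 0)
print(layout_anomalies([(0x00000000, 0xFFFFFFFF)]))  # (0, 0)
```

In practice such layout holes are usually repaired with `gluster volume rebalance <volname> fix-layout start` rather than by hand, though that alone does not explain the stale gfid lookups in the log.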

The weirdest thing is that which files and directories work changes from time to time.
Any ideas?

Thanks!

_______________________________________________
Gluster-users mailing list
Gluster-users at gluster.org
http://supercolony.gluster.org/mailman/listinfo/gluster-users



