[Gluster-users] nfs access denied

Carlos Capriotti capriotti.carlos at gmail.com
Tue Apr 1 15:11:35 UTC 2014


Wannes:

There is a known issue involving ext4, the Linux kernel, and Gluster. It is a
long story, but in short: newer kernels made ext4 return 64-bit directory
offsets, which collide with the offset encoding Gluster's DHT layer uses, and
the bug won't be addressed any time soon.

If at all possible, follow Gluster's/RH's documentation and use XFS,
preparing the partition with an inode size of 512 bytes (-i size=512).

At least then you are running a fully supported configuration.
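For reference, brick preparation along those lines might look like the following sketch (the device path /dev/sdb1 and the mount point /bricks/brick1 are placeholders; adapt them to your setup):

```shell
# Format the brick with a 512-byte inode size, as the Gluster/Red Hat docs
# recommend (leaves room for Gluster's extended attributes on each inode).
# WARNING: this destroys any data on the device.
mkfs.xfs -i size=512 /dev/sdb1        # /dev/sdb1 is a placeholder

# Mount the brick and verify the inode size that was actually used.
mkdir -p /bricks/brick1
mount /dev/sdb1 /bricks/brick1
xfs_info /bricks/brick1 | grep isize   # should report isize=512
```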




On Tue, Apr 1, 2014 at 4:21 PM, VAN CAUSBROECK Wannes <
Wannes.VANCAUSBROECK at onprvp.fgov.be> wrote:

>  Hello Joe,
>
>
>
> Could it be that Gluster has issues with the 64-bit ext4 filesystem? As a
> test I'm moving one of the volumes to an XFS volume... I hope I'll be able to
> answer that question myself soon.
>
>
>
>
>
> *From:* Joe Julian [mailto:joe at julianfamily.org]
> *Sent:* Tuesday, April 1, 2014 16:14
> *To:* VAN CAUSBROECK Wannes; 'gluster-users at gluster.org'
>
> *Subject:* Re: [Gluster-users] nfs access denied
>
>
>
> [2014-03-31 12:44:18.941083] I [dht-layout.c:638:dht_layout_normalize]
> 0-caviar_data11-dht: found anomalies in /2011/201107/20110716/55. holes=1
> overlaps=0
>
>
> This tells us that part of the directory's hash layout is missing. My guess
> would be that the server you're NFS-mounting from cannot connect to one of
> the bricks.
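One way to check that guess from the command line (a sketch; the volume name caviar_data11 comes from the logs in this thread, while the host name and port shown are placeholders for whatever `gluster volume status` reports):

```shell
# List each brick's PID, TCP port and Online flag for the volume;
# any brick showing Online "N" is a likely culprit.
gluster volume status caviar_data11

# Confirm all peers are connected, as seen from the NFS server.
gluster peer status

# Then verify the NFS server can actually reach each brick's port
# (49152 is a placeholder; use the port reported by "volume status").
nc -zv brick-host 49152
```

If a brick turns out to be down, restarting the volume (`gluster volume start caviar_data11 force`) and then running `gluster volume rebalance caviar_data11 fix-layout start` is the commonly suggested way to repair holes in the layout.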
>
>  On March 31, 2014 6:58:53 AM PDT, VAN CAUSBROECK Wannes <
> Wannes.VANCAUSBROECK at onprvp.fgov.be> wrote:
>
> Hello all,
>
>
>
> I've already tried to post this, but I'm not sure it reached the mailing
> list.
>
>
>
> I have some issues with my NFS mounts. My setup is as follows:
>
> RHEL 6.4 with Gluster 3.4.2-1, running on a VM (4 cores, 8 GB RAM) attached
> to a SAN. All the bricks sit on a single 25 TB disk, formatted ext4 in
> 64-bit mode.
>
> On the Gluster side of things, everything works without issue. The
> trouble starts when I mount a volume over NFS.
>
> Many volumes work without issues, but others behave strangely. The
> volumes that misbehave generally contain many files (though that may be
> coincidental).
>
> The volumes in question mount without issues, but when I try to enter any
> subdirectory, sometimes it works and sometimes I get errors.
>
>
>
> On Windows with the NFS client: access denied
>
>
>
> In nfslog:
>
> [2014-03-31 13:57:58.771241] I [dht-layout.c:638:dht_layout_normalize]
> 0-caviar_data11-dht: found anomalies in
> <gfid:c8d94120-6851-46ea-9f28-c629a44b1015>. holes=1 overlaps=0
>
> [2014-03-31 13:57:58.771348] E
> [nfs3-helpers.c:3595:nfs3_fh_resolve_inode_lookup_cbk] 0-nfs-nfsv3: Lookup
> failed: <gfid:c8d94120-6851-46ea-9f28-c629a44b1015>: Invalid argument
>
> [2014-03-31 13:57:58.771380] E [nfs3.c:1380:nfs3_lookup_resume]
> 0-nfs-nfsv3: Unable to resolve FH: (192.168.148.46:984) caviar_data11 :
> c8d94120-6851-46ea-9f28-c629a44b1015
>
> [2014-03-31 13:57:58.771819] W [nfs3-helpers.c:3380:nfs3_log_common_res]
> 0-nfs-nfsv3: XID: 1ec28530, LOOKUP: NFS: 22(Invalid argument for
> operation), POSIX: 14(Bad address)
>
> [2014-03-31 13:57:58.798967] I [dht-layout.c:638:dht_layout_normalize]
> 0-caviar_data11-dht: found anomalies in
> <gfid:14972193-1039-4d7a-aed5-0d7e7eccf57b>. holes=1 overlaps=0
>
> [2014-03-31 13:57:58.799039] E
> [nfs3-helpers.c:3595:nfs3_fh_resolve_inode_lookup_cbk] 0-nfs-nfsv3: Lookup
> failed: <gfid:14972193-1039-4d7a-aed5-0d7e7eccf57b>: Invalid argument
>
> [2014-03-31 13:57:58.799056] E [nfs3.c:1380:nfs3_lookup_resume]
> 0-nfs-nfsv3: Unable to resolve FH: (192.168.148.46:984) caviar_data11 :
> 14972193-1039-4d7a-aed5-0d7e7eccf57b
>
> [2014-03-31 13:57:58.799088] W [nfs3-helpers.c:3380:nfs3_log_common_res]
> 0-nfs-nfsv3: XID: 1ec28531, LOOKUP: NFS: 22(Invalid argument for
> operation), POSIX: 14(Bad address)
>
> ....
>
>
>
>
>
> On Linux:
>
> [root at lpr-nas01 brick-xiv2]# ll /media/2011/201105/20110530/
>
> ls: /media/2011/201105/20110530/37: No such file or directory
>
> total 332
>
> ...
>
> drwxrwsr-x 2 nfsnobody 1003 4096 Jun  6  2011 32
>
> drwxrwsr-x 2 nfsnobody 1003 4096 Jun  6  2011 34
>
> drwxrwsr-x 2 nfsnobody 1003 4096 Jun  6  2011 35
>
> drwxrwsr-x 2 nfsnobody 1003 4096 Jun  6  2011 36
>
> drwxrwsr-x 2 nfsnobody 1003 4096 Jun  6  2011 37
>
> ...
>
>
>
> [root at lpr-nas01 brick-xiv2]# ll /media/2011/201105/20110530/37
>
> ls: /media/2011/201105/20110530/37/NN.0000073824357.00001.tif: No such
> file or directory
>
> ls: /media/2011/201105/20110530/37/NN.0000073824357.00003.tif: No such
> file or directory
>
> total 54
>
> -rwxrwxr-x 0 nfsnobody 1003  9340 Jun  6  2011 NN.0000073824357.00001.tif
>
> -rwxrwxr-x 1 nfsnobody 1003 35312 Jun  6  2011 NN.0000073824357.00002.tif
>
> -rwxrwxr-x 0 nfsnobody 1003  9340 Jun  6  2011 NN.0000073824357.00003.tif
>
>
>
>
>
> I see in the nfslog:
>
> ...
>
> [2014-03-31 12:44:18.941083] I [dht-layout.c:638:dht_layout_normalize]
> 0-caviar_data11-dht: found anomalies in /2011/201107/20110716/55. holes=1
> overlaps=0
>
> [2014-03-31 12:44:18.958078] I [dht-layout.c:638:dht_layout_normalize]
> 0-caviar_data11-dht: found anomalies in /2011/201107/20110716/30. holes=1
> overlaps=0
>
> [2014-03-31 12:44:18.959980] I [dht-layout.c:638:dht_layout_normalize]
> 0-caviar_data11-dht: found anomalies in /2011/201107/20110716/90. holes=1
> overlaps=0
>
> [2014-03-31 12:44:18.961094] E [dht-helper.c:429:dht_subvol_get_hashed]
> (-->/usr/lib64/glusterfs/3.4.2/xlator/debug/io-stats.so(io_stats_lookup+0x157)
> [0x7fd6a61282e7] (-->/usr/lib64/libglusterfs.so.0(default_lookup+0x6d)
> [0x3dfe01c03d]
> (-->/usr/lib64/glusterfs/3.4.2/xlator/cluster/distribute.so(dht_lookup+0xa7e)
> [0x7fd6a656af2e]))) 0-caviar_data11-dht: invalid argument: loc->parent
>
> [2014-03-31 12:44:18.961283] W
> [client-rpc-fops.c:2624:client3_3_lookup_cbk] 0-caviar_data11-client-0:
> remote operation failed: Invalid argument. Path:
> <gfid:00000000-0000-0000-0000-000000000000>
> (00000000-0000-0000-0000-000000000000)
>
> [2014-03-31 12:44:18.961319] E [acl3.c:334:acl3_getacl_resume] 0-nfs-ACL:
> Unable to resolve FH: (192.168.151.21:740) caviar_data11 :
> 00000000-0000-0000-0000-000000000000
>
> [2014-03-31 12:44:18.961338] E [acl3.c:342:acl3_getacl_resume] 0-nfs-ACL:
> unable to open_and_resume
>
> ...
>
>
>
> The weirdest thing is that which files and directories work and which
> don't changes from time to time.
>
> Any ideas?
>
>
>
> Thanks!
>
> ------------------------------
>
>
> Gluster-users mailing list
> Gluster-users at gluster.org
> http://supercolony.gluster.org/mailman/listinfo/gluster-users
>
>
> --
> Sent from my Android device with K-9 Mail. Please excuse my brevity.
>
>

