[Gluster-users] NFS ganesha client not showing files after crash
Alan Hartless
harty83 at gmail.com
Mon Jun 6 02:50:36 UTC 2016
Hi Jiffin,
Thanks! I have 3.7.11-ubuntu1~trusty1 installed and am using the NFSv4
mount protocol.
Doing a forced lookup lists the root directories, but it shows 0 files in
each.
Thanks!
Alan
On Fri, Jun 3, 2016 at 3:09 AM Jiffin Tony Thottan <jthottan at redhat.com>
wrote:
> Hi Alan,
>
> I will try to reproduce the issue with my setup and get back to you.
>
> Can you please mention the mount protocol and the gluster package
> version (3.7-?)? In case you can't find /var/log/ganesha.log (the
> default location on Fedora and CentOS), just check the system log
> messages and grep for ganesha.
>
> Also, can you try to perform a forced lookup on the directory using "ls
> <dirname>/* -ltr"?
>
> --
> Jiffin
>
>
> On 02/06/16 00:16, Alan Hartless wrote:
>
> Yes, I had a brick that I restored, so it had existing files. After the
> crash, gluster wouldn't let me re-add it because it said the path was
> already part of a volume. So I followed
> https://joejulian.name/blog/glusterfs-path-or-a-prefix-of-it-is-already-part-of-a-volume/ to
> reset it.
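> For the archives, the reset in that post boils down to clearing the
> volume markers on the brick root. A minimal sketch is below, run against
> a throwaway directory standing in for the brick; on a real brick you
> would run it as root with the volume stopped (the path in the comment is
> just this thread's example):

```shell
# Scratch directory standing in for the brick root, e.g. /gluster_volume/letsencrypt
BRICK="$(mktemp -d)"
mkdir -p "$BRICK/.glusterfs"               # gluster's internal metadata dir
# Remove the markers that make glusterd refuse to re-add the brick.
# (Guarded here only because the scratch dir never had these xattrs set.)
setfattr -x trusted.glusterfs.volume-id "$BRICK" 2>/dev/null || true
setfattr -x trusted.gfid "$BRICK" 2>/dev/null || true
rm -rf "$BRICK/.glusterfs"
[ -e "$BRICK/.glusterfs" ] || echo "brick markers cleared"
```

> (setfattr comes from the attr package; after clearing the markers you
> would restart glusterd and re-add or heal the brick as usual.)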
>
> Also correct: I can access all files through FUSE, but via ganesha NFSv4
> only the root directory and any directories/files that have since been
> created.
>
> Using a forced lookup on a specific file, I found that I can reach it
> and even edit it, but an ls or dir will not list it or any of its parent
> directories. Even after editing the file, it still does not appear in ls.
>
> I'm using gluster 3.7 and ganesha 2.3 from Gluster's Ubuntu repositories.
>
> I don't have a /var/log/ganesha.log, but I do have
> /var/log/ganesha-gfapi.log. I tailed it while restarting ganesha and got
> this for the specific volume:
>
> [2016-06-01 18:44:44.876385] I [MSGID: 114020] [client.c:2106:notify]
> 0-letsencrypt-client-0: parent translators are ready, attempting connect on
> transport
> [2016-06-01 18:44:44.876903] I [MSGID: 114020] [client.c:2106:notify]
> 0-letsencrypt-client-1: parent translators are ready, attempting connect on
> transport
> [2016-06-01 18:44:44.877193] I [rpc-clnt.c:1868:rpc_clnt_reconfig]
> 0-letsencrypt-client-0: changing port to 49154 (from 0)
> [2016-06-01 18:44:44.877837] I [MSGID: 114057]
> [client-handshake.c:1437:select_server_supported_programs]
> 0-letsencrypt-client-0: Using Program GlusterFS 3.3, Num (1298437), Version
> (330)
> [2016-06-01 18:44:44.878234] I [MSGID: 114046]
> [client-handshake.c:1213:client_setvolume_cbk] 0-letsencrypt-client-0:
> Connected to letsencrypt-client-0, attached to remote volume
> '/gluster_volume/letsencrypt'.
> [2016-06-01 18:44:44.878253] I [MSGID: 114047]
> [client-handshake.c:1224:client_setvolume_cbk] 0-letsencrypt-client-0:
> Server and Client lk-version numbers are not same, reopening the fds
> [2016-06-01 18:44:44.878338] I [MSGID: 108005]
> [afr-common.c:4007:afr_notify] 0-letsencrypt-replicate-0: Subvolume
> 'letsencrypt-client-0' came back up; going online.
> [2016-06-01 18:44:44.878390] I [MSGID: 114035]
> [client-handshake.c:193:client_set_lk_version_cbk] 0-letsencrypt-client-0:
> Server lk version = 1
> [2016-06-01 18:44:44.878505] I [rpc-clnt.c:1868:rpc_clnt_reconfig]
> 0-letsencrypt-client-1: changing port to 49154 (from 0)
> [2016-06-01 18:44:44.879568] I [MSGID: 114057]
> [client-handshake.c:1437:select_server_supported_programs]
> 0-letsencrypt-client-1: Using Program GlusterFS 3.3, Num (1298437), Version
> (330)
> [2016-06-01 18:44:44.880155] I [MSGID: 114046]
> [client-handshake.c:1213:client_setvolume_cbk] 0-letsencrypt-client-1:
> Connected to letsencrypt-client-1, attached to remote volume
> '/gluster_volume/letsencrypt'.
> [2016-06-01 18:44:44.880175] I [MSGID: 114047]
> [client-handshake.c:1224:client_setvolume_cbk] 0-letsencrypt-client-1:
> Server and Client lk-version numbers are not same, reopening the fds
> [2016-06-01 18:44:44.896801] I [MSGID: 114035]
> [client-handshake.c:193:client_set_lk_version_cbk] 0-letsencrypt-client-1:
> Server lk version = 1
> [2016-06-01 18:44:44.898290] I [MSGID: 108031]
> [afr-common.c:1900:afr_local_discovery_cbk] 0-letsencrypt-replicate-0:
> selecting local read_child letsencrypt-client-0
> [2016-06-01 18:44:44.898798] I [MSGID: 104041]
> [glfs-resolve.c:869:__glfs_active_subvol] 0-letsencrypt: switched to graph
> 676c7573-7465-7266-732d-6e6f64652d63 (0)
> [2016-06-01 18:44:45.913545] I [MSGID: 104045] [glfs-master.c:95:notify]
> 0-gfapi: New graph 676c7573-7465-7266-732d-6e6f64652d63 (0) coming up
>
> I also tailed it while accessing files through a mount point but nothing
> was logged.
>
> This is the ganesha config for the specific volume I'm testing with. I
> have others but they are the same except for export ID and the paths.
>
> EXPORT
> {
>     Export_Id = 3;
>     Path = "/letsencrypt";
>     Pseudo = "/letsencrypt";
>     FSAL {
>         name = GLUSTER;
>         hostname = "localhost";
>         volume = "letsencrypt";
>     }
>     Access_type = RW;
>     Squash = No_root_squash;
>     Disable_ACL = TRUE;
> }
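> For reference, a client mount against this export could look like the
> following hypothetical /etc/fstab line (the hostname and mount point are
> placeholders, not from this setup):

```
# Hypothetical: server1 and /mnt/letsencrypt are placeholders.
server1:/letsencrypt  /mnt/letsencrypt  nfs  vers=4,_netdev  0  0
```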
>
> Many thanks!
>
>
> On Sun, May 29, 2016 at 12:46 PM Jiffin Tony Thottan <jthottan at redhat.com>
> wrote:
>
>>
>>
>> On 28/05/16 08:07, Alan Hartless wrote:
>>
>> I had everything working well when I had a complete meltdown :-) Well,
>> I got all that sorted and everything back up and running, or so I
>> thought. Now NFS ganesha is not showing any existing files except the
>> root level of the brick; every subdirectory appears empty. New files or
>> directories do show up, and everything shows up when using the FUSE
>> client.
>>
>>
>> If I understand your issue correctly:
>> * You created a volume using a brick which contained pre-existing files
>> and directories.
>> * When you try to access the files via ganesha, they do not show up,
>> but with FUSE they are visible.
>>
>> Can you please try to perform a forced lookup on the directories/files
>> (ls <path to directory/file>) from the ganesha mount?
>> Also check the ganesha logs (/var/log/ganesha.log and
>> /var/log/ganesha-gfapi.log) for clues.
>> IMO a similar issue existed in an older version of ganesha (v2.1, I
>> guess). If possible, can you also share the ganesha configuration for
>> that volume?
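>> In case it helps, that forced lookup can be swept over a whole tree
>> with find + stat, since stat issues a LOOKUP for every path it touches.
>> The sketch below runs against a throwaway directory standing in for the
>> ganesha mount point (substitute the real mount path):

```shell
# Throwaway directory standing in for the NFS-Ganesha mount point,
# e.g. /mnt/letsencrypt in a real setup:
MNT="$(mktemp -d)"
mkdir -p "$MNT/sub"; touch "$MNT/sub/file"
# stat() forces a LOOKUP on each entry, refreshing the client stack:
find "$MNT" -exec stat {} + >/dev/null && echo "lookup sweep done"
```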
>>
>> I've tried self-healing, editing files, etc., but the issue persists.
>> If I move the folders away and back, they show up. But I have a live
>> setup and can't afford the time to move GBs of data to a new location
>> and back. Is there anything I can do to trigger something so the files
>> show up in NFS again, without having to move directories?
>>
>> Thanks,
>> Alan
>>
>>
>> _______________________________________________
>> Gluster-users mailing list
>> Gluster-users at gluster.org
>> http://www.gluster.org/mailman/listinfo/gluster-users
>>
>>
>>
>