[Gluster-users] Glusterfs nfs mounts not showing directories

Strahil Nikolov hunter86_bg at yahoo.com
Tue Sep 7 05:28:49 UTC 2021


Do you see any issues in the logs?
Is it only with Ganesha?
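
If it helps, the first places I'd look (assuming the stock Ubuntu packages and
default log locations, so adjust the paths if yours differ) are roughly:

# less /var/log/ganesha/ganesha.log
# less /var/log/glusterfs/glusterd.log
# less /var/log/glusterfs/bricks/gfs-brick1-gv0.log    (brick log names mirror the brick path)

A quick FUSE mount on the same client would also show whether only the Ganesha
path is affected, e.g.:

# mkdir -p /mnt/fuse-test
# mount -t glusterfs yuzz:/gv0 /mnt/fuse-test
# ls -F /mnt/fuse-test
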
Best Regards,
Strahil Nikolov
 
 
On Mon, Sep 6, 2021 at 21:37, John Cholewa <jcholewa at gmail.com> wrote:

My distributed volume had an issue on Friday which required a reboot
of the primary node. After this, I'm having a strange issue: When I
have the volume mounted via ganesha-nfs, using either the primary node
itself or a random workstation on the network, I'm seeing files from
both bricks, but I'm not seeing any directories at all. It's just a
flat listing of the files. But I *can* list the contents of a directory if
I know it exists. Similarly, that will show the files (from both bricks)
in that directory, but it will show no subdirectories. Example:

$ ls -F /mnt
flintstone/

$ ls -F /mnt/flintstone
test test1 test2 test3

$ ls -F /mnt/flintstone/wilma
file1 file2 file3

I've tried restarting glusterd on both nodes and rebooting the other
node as well. Mount options in fstab are defaults,_netdev,nofail. I
tried temporarily disabling the firewall in case that was a
contributing factor.
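
For reference, those steps looked roughly like this (assuming the stock systemd
unit names and ufw, since this is Ubuntu; the firewall went straight back on
afterwards):

# systemctl restart glusterd           (on each node)
# umount /mnt && mount /mnt            (fstab options: defaults,_netdev,nofail)
# ufw disable                          (temporary, to rule out the firewall)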

This has been working pretty well for over two years, and it's
survived system updates and reboots on the nodes, and there hasn't
been a recent software update that would have triggered this. The data
itself appears to be fine. 'gluster peer status' on each node shows
that the other is connected.

What's a good way to further troubleshoot this or to tell gluster to
figure itself out?  Would "gluster volume reset" bring the
configuration to its original state without damaging the data in the
bricks?  Is there something I should look out for in the logs that
might give a clue?
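
For reference, the reset I have in mind is the plain form below; as far as I can
tell it only clears the "Options Reconfigured" list back to its defaults and
leaves the brick contents alone, but I'd like to confirm that before running it:

# gluster volume reset gv0                                 (all reconfigured options)
# gluster volume reset gv0 performance.parallel-readdir    (or a single option)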

Outputs:

# lsb_release -a
No LSB modules are available.
Distributor ID: Ubuntu
Description:    Ubuntu 18.04.4 LTS
Release:        18.04
Codename:       bionic


# gluster --version
glusterfs 7.5
Repository revision: git://git.gluster.org/glusterfs.git
Copyright (c) 2006-2016 Red Hat, Inc. <https://www.gluster.org/>
GlusterFS comes with ABSOLUTELY NO WARRANTY.
It is licensed to you under your choice of the GNU Lesser
General Public License, version 3 or any later version (LGPLv3
or later), or the GNU General Public License, version 2 (GPLv2),
in all cases as published by the Free Software Foundation.


# gluster volume status
Status of volume: gv0
Gluster process                              TCP Port  RDMA Port  Online  Pid
------------------------------------------------------------------------------
Brick yuzz:/gfs/brick1/gv0                   N/A       N/A        Y       2909
Brick wum:/gfs/brick1/gv0                    49152     0          Y       2885

Task Status of Volume gv0
------------------------------------------------------------------------------
There are no active volume tasks


# gluster volume info
Volume Name: gv0
Type: Distribute
Volume ID: dcfdeed9-8fe9-4047-b18a-1a908f003d7f
Status: Started
Snapshot Count: 0
Number of Bricks: 2
Transport-type: tcp
Bricks:
Brick1: yuzz:/gfs/brick1/gv0
Brick2: wum:/gfs/brick1/gv0
Options Reconfigured:
nfs.disable: on
transport.address-family: inet
features.cache-invalidation: on
cluster.readdir-optimize: off
performance.parallel-readdir: off
performance.cache-size: 8GB
network.inode-lru-limit: 1000000
performance.nfs.stat-prefetch: off


# gluster pool list
UUID                                    Hostname         State
4b84240e-e73a-46da-9271-72f6001a8e18    wum              Connected
7de76707-cd99-4916-9c6b-ac6f26bda373    localhost        Connected


Output of gluster get-state:
>>>>>
[Global]
MYUUID: 7de76707-cd99-4916-9c6b-ac6f26bda373
op-version: 31302

[Global options]

[Peers]
Peer1.primary_hostname: wum
Peer1.uuid: 4b84240e-e73a-46da-9271-72f6001a8e18
Peer1.state: Peer in Cluster
Peer1.connected: Connected
Peer1.othernames:

[Volumes]
Volume1.name: gv0
Volume1.id: dcfdeed9-8fe9-4047-b18a-1a908f003d7f
Volume1.type: Distribute
Volume1.transport_type: tcp
Volume1.status: Started
Volume1.brickcount: 2
Volume1.Brick1.path: yuzz:/gfs/brick1/gv0
Volume1.Brick1.hostname: yuzz
Volume1.Brick1.port: 0
Volume1.Brick1.rdma_port: 0
Volume1.Brick1.status: Started
Volume1.Brick1.spacefree: 72715274395648Bytes
Volume1.Brick1.spacetotal: 196003244277760Bytes
Volume1.Brick2.path: wum:/gfs/brick1/gv0
Volume1.Brick2.hostname: wum
Volume1.snap_count: 0
Volume1.stripe_count: 1
Volume1.replica_count: 1
Volume1.subvol_count: 2
Volume1.arbiter_count: 0
Volume1.disperse_count: 0
Volume1.redundancy_count: 0
Volume1.quorum_status: not_applicable
Volume1.snapd_svc.online_status: Offline
Volume1.snapd_svc.inited: True
Volume1.rebalance.id: 00000000-0000-0000-0000-000000000000
Volume1.rebalance.status: not_started
Volume1.rebalance.failures: 0
Volume1.rebalance.skipped: 0
Volume1.rebalance.lookedup: 0
Volume1.rebalance.files: 0
Volume1.rebalance.data: 0Bytes
Volume1.time_left: 0
Volume1.gsync_count: 0
Volume1.options.nfs.disable: on
Volume1.options.transport.address-family: inet
Volume1.options.features.cache-invalidation: on
Volume1.options.cluster.readdir-optimize: off
Volume1.options.performance.parallel-readdir: off
Volume1.options.performance.cache-size: 8GB
Volume1.options.network.inode-lru-limit: 1000000
Volume1.options.performance.nfs.stat-prefetch: off


[Services]
svc1.name: glustershd
svc1.online_status: Offline

svc2.name: nfs
svc2.online_status: Offline

svc3.name: bitd
svc3.online_status: Offline

svc4.name: scrub
svc4.online_status: Offline

svc5.name: quotad
svc5.online_status: Offline


[Misc]
Base port: 49152
Last allocated port: 49152
<<<<<