It sounds like a bug. Please open a GitHub issue for both NFS-Ganesha and Gluster, cross-referencing the two, so the maintainers can identify the problem.

Best Regards,
Strahil Nikolov

On Tue, Sep 7, 2021 at 16:27, John Cholewa <jcholewa@gmail.com> wrote:

Update to this: the problem was resolved when I explicitly mounted with
nfsvers=3. I may come back to this to see if there's a reason why it's
behaving like that with NFSv4, but I'll need to deal with the fallout
for a while first.

On Mon, Sep 6, 2021 at 2:37 PM John Cholewa <jcholewa@gmail.com> wrote:
>
> My distributed volume had an issue on Friday which required a reboot
> of the primary node. Since then I've had a strange problem: when the
> volume is mounted via NFS-Ganesha, from either the primary node itself
> or a random workstation on the network, I see files from both nodes
> but no directories at all; it's just a flat listing of the files.
> I *can* list the contents of a directory if I know it exists, and that
> listing shows the files (from both nodes) in that directory, but again
> no subdirectories. Example:
>
> $ ls -F /mnt
> flintstone/
>
> $ ls -F /mnt/flintstone
> test test1 test2 test3
>
> $ ls -F /mnt/flintstone/wilma
> file1 file2 file3
>
> I've tried restarting glusterd on both nodes and rebooting the other
> node as well. Mount options in fstab are defaults,_netdev,nofail. I
> also tried temporarily disabling the firewall in case that was a
> contributing factor.
>
> This setup has been working well for over two years and has survived
> system updates and reboots on both nodes, and there hasn't been a
> recent software update that would have triggered this. The data itself
> appears to be fine. 'gluster peer status' on each node shows that the
> other is connected.
>
> What's a good way to troubleshoot this further, or to tell Gluster to
> sort itself out? Would "gluster volume reset" bring the configuration
> back to its original state without damaging the data in the bricks?
> Is there something I should look out for in the logs that might give
> a clue?
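[For reference: as far as I know, "gluster volume reset" only clears
options that were reconfigured with "gluster volume set" back to their
defaults; it does not touch the data in the bricks. A minimal sketch,
assuming the volume name gv0 from the outputs below:

  # list the currently reconfigured options first
  gluster volume info gv0

  # reset a single option to its default
  gluster volume reset gv0 performance.parallel-readdir

  # or reset all reconfigured options at once
  gluster volume reset gv0

As for logs, the client, brick and glusterd logs normally live under
/var/log/glusterfs/ on each node, and NFS-Ganesha typically writes to
/var/log/ganesha/ganesha.log, though the exact path can vary by
distribution.]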
>
> Outputs:
>
> # lsb_release -a
> No LSB modules are available.
> Distributor ID: Ubuntu
> Description:    Ubuntu 18.04.4 LTS
> Release:        18.04
> Codename:       bionic
>
>
> # gluster --version
> glusterfs 7.5
> Repository revision: git://git.gluster.org/glusterfs.git
> Copyright (c) 2006-2016 Red Hat, Inc. <https://www.gluster.org/>
> GlusterFS comes with ABSOLUTELY NO WARRANTY.
> It is licensed to you under your choice of the GNU Lesser
> General Public License, version 3 or any later version (LGPLv3
> or later), or the GNU General Public License, version 2 (GPLv2),
> in all cases as published by the Free Software Foundation.
>
>
> # gluster volume status
> Status of volume: gv0
> Gluster process                             TCP Port  RDMA Port  Online  Pid
> ------------------------------------------------------------------------------
> Brick yuzz:/gfs/brick1/gv0                  N/A       N/A        Y       2909
> Brick wum:/gfs/brick1/gv0                   49152     0          Y       2885
>
> Task Status of Volume gv0
> ------------------------------------------------------------------------------
> There are no active volume tasks
>
>
> # gluster volume info
> Volume Name: gv0
> Type: Distribute
> Volume ID: dcfdeed9-8fe9-4047-b18a-1a908f003d7f
> Status: Started
> Snapshot Count: 0
> Number of Bricks: 2
> Transport-type: tcp
> Bricks:
> Brick1: yuzz:/gfs/brick1/gv0
> Brick2: wum:/gfs/brick1/gv0
> Options Reconfigured:
> nfs.disable: on
> transport.address-family: inet
> features.cache-invalidation: on
> cluster.readdir-optimize: off
> performance.parallel-readdir: off
> performance.cache-size: 8GB
> network.inode-lru-limit: 1000000
> performance.nfs.stat-prefetch: off
>
>
> # gluster pool list
> UUID                                    Hostname        State
> 4b84240e-e73a-46da-9271-72f6001a8e18    wum             Connected
> 7de76707-cd99-4916-9c6b-ac6f26bda373    localhost       Connected
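[For reference: the effective values of the options listed under
"Options Reconfigured" above, including the readdir-related ones, can
be confirmed on either node with "gluster volume get". A short sketch,
assuming the volume name gv0 from these outputs:

  # show the effective value of every volume option, defaults included
  gluster volume get gv0 all

  # or just the directory-listing related ones
  gluster volume get gv0 cluster.readdir-optimize
  gluster volume get gv0 performance.parallel-readdir
]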
clear="none">> Peer1.connected: Connected<br clear="none">> Peer1.othernames:<br clear="none">><br clear="none">> [Volumes]<br clear="none">> Volume1.name: gv0<br clear="none">> Volume1.id: dcfdeed9-8fe9-4047-b18a-1a908f003d7f<br clear="none">> Volume1.type: Distribute<br clear="none">> Volume1.transport_type: tcp<br clear="none">> Volume1.status: Started<br clear="none">> Volume1.brickcount: 2<br clear="none">> Volume1.Brick1.path: yuzz:/gfs/brick1/gv0<br clear="none">> Volume1.Brick1.hostname: yuzz<br clear="none">> Volume1.Brick1.port: 0<br clear="none">> Volume1.Brick1.rdma_port: 0<br clear="none">> Volume1.Brick1.status: Started<br clear="none">> Volume1.Brick1.spacefree: 72715274395648Bytes<br clear="none">> Volume1.Brick1.spacetotal: 196003244277760Bytes<br clear="none">> Volume1.Brick2.path: wum:/gfs/brick1/gv0<br clear="none">> Volume1.Brick2.hostname: wum<br clear="none">> Volume1.snap_count: 0<br clear="none">> Volume1.stripe_count: 1<br clear="none">> Volume1.replica_count: 1<br clear="none">> Volume1.subvol_count: 2<br clear="none">> Volume1.arbiter_count: 0<br clear="none">> Volume1.disperse_count: 0<br clear="none">> Volume1.redundancy_count: 0<br clear="none">> Volume1.quorum_status: not_applicable<br clear="none">> Volume1.snapd_svc.online_status: Offline<br clear="none">> Volume1.snapd_svc.inited: True<br clear="none">> Volume1.rebalance.id: 00000000-0000-0000-0000-000000000000<br clear="none">> Volume1.rebalance.status: not_started<br clear="none">> Volume1.rebalance.failures: 0<br clear="none">> Volume1.rebalance.skipped: 0<br clear="none">> Volume1.rebalance.lookedup: 0<br clear="none">> Volume1.rebalance.files: 0<br clear="none">> Volume1.rebalance.data: 0Bytes<br clear="none">> Volume1.time_left: 0<br clear="none">> Volume1.gsync_count: 0<br clear="none">> Volume1.options.nfs.disable: on<br clear="none">> Volume1.options.transport.address-family: inet<br clear="none">> Volume1.options.features.cache-invalidation: on<br clear="none">> Volume1.options.cluster.readdir-optimize: off<br clear="none">> Volume1.options.performance.parallel-readdir: off<br clear="none">> Volume1.options.performance.cache-size: 8GB<br clear="none">> Volume1.options.network.inode-lru-limit: 1000000<br clear="none">> Volume1.options.performance.nfs.stat-prefetch: off<br clear="none">><br clear="none">><br clear="none">> [Services]<br clear="none">> svc1.name: glustershd<br clear="none">> svc1.online_status: Offline<br clear="none">><br clear="none">> svc2.name: nfs<br clear="none">> svc2.online_status: Offline<br clear="none">><br clear="none">> svc3.name: bitd<br clear="none">> svc3.online_status: Offline<br clear="none">><br clear="none">> svc4.name: scrub<br clear="none">> svc4.online_status: Offline<br clear="none">><br clear="none">> svc5.name: quotad<br clear="none">> svc5.online_status: Offline<br clear="none">><br clear="none">><br clear="none">> [Misc]<br clear="none">> Base port: 49152<br clear="none">> Last allocated port: 49152<br clear="none">> <<<<<<br clear="none">________<br clear="none"><br clear="none"><br clear="none"><br clear="none">Community Meeting Calendar:<br clear="none"><br clear="none">Schedule -<br clear="none">Every 2nd and 4th Tuesday at 14:30 IST / 09:00 UTC<br clear="none">Bridge: <a shape="rect" href="https://meet.google.com/cpu-eiue-hvk" target="_blank">https://meet.google.com/cpu-eiue-hvk</a><br clear="none">Gluster-users mailing list<br clear="none"><a shape="rect" ymailto="mailto:Gluster-users@gluster.org" 
href="mailto:Gluster-users@gluster.org">Gluster-users@gluster.org</a><br clear="none"><a shape="rect" href="https://lists.gluster.org/mailman/listinfo/gluster-users" target="_blank">https://lists.gluster.org/mailman/listinfo/gluster-users</a><br clear="none"></div> </div> </blockquote></div>