[Bugs] [Bug 1187474] New: Disperse volume mounted through NFS doesn't list any files/directories
bugzilla at redhat.com
Fri Jan 30 07:00:27 UTC 2015
https://bugzilla.redhat.com/show_bug.cgi?id=1187474
Bug ID: 1187474
Summary: Disperse volume mounted through NFS doesn't list any
files/directories
Product: GlusterFS
Version: mainline
Component: disperse
Assignee: bugs at gluster.org
Reporter: byarlaga at redhat.com
CC: bugs at gluster.org, gluster-bugs at redhat.com
Created attachment 985858
--> https://bugzilla.redhat.com/attachment.cgi?id=985858&action=edit
volume statedumps and sos reports
Description of problem:
=======================
On a RHEL 6.6 client, the volume was mounted twice: once via FUSE and once via
NFS. All files and directories were created through the FUSE mount. Listing
them through the NFS mount shows nothing, but a file or directory is accessible
when its absolute path is given.
Version-Release number of selected component (if applicable):
=============================================================
glusterfs 3.7dev built on Jan 29 2015 01:05:44
Location from which the packages are used:
==========================================
http://download.gluster.org/pub/gluster/glusterfs/nightly/glusterfs/epel-6-x86_64/glusterfs-3.7dev-0.545.git88136b5.autobuild/
GlusterFS Cluster Information:
==============================
# Number of volumes : 1
# Volume Names : testvol
# Volume on which the particular issue is seen [ if applicable ]: testvol
# Type of volumes : Disperse
# Volume options if available :
Option Value
------ -----
cluster.lookup-unhashed on
cluster.min-free-disk 10%
cluster.min-free-inodes 5%
cluster.rebalance-stats off
cluster.subvols-per-directory (null)
cluster.readdir-optimize off
cluster.rsync-hash-regex (null)
cluster.extra-hash-regex (null)
cluster.dht-xattr-name trusted.glusterfs.dht
cluster.randomize-hash-range-by-gfid off
cluster.local-volume-name (null)
cluster.weighted-rebalance on
cluster.switch-pattern (null)
cluster.entry-change-log on
cluster.read-subvolume (null)
cluster.read-subvolume-index -1
cluster.read-hash-mode 1
cluster.background-self-heal-count 16
cluster.metadata-self-heal on
cluster.data-self-heal on
cluster.entry-self-heal on
cluster.self-heal-daemon on
cluster.self-heal-window-size 1
cluster.data-change-log on
cluster.metadata-change-log on
cluster.data-self-heal-algorithm (null)
cluster.eager-lock on
cluster.quorum-type none
cluster.quorum-count (null)
cluster.choose-local true
cluster.self-heal-readdir-size 1KB
cluster.post-op-delay-secs 1
cluster.ensure-durability on
cluster.stripe-block-size 128KB
cluster.stripe-coalesce true
diagnostics.latency-measurement off
diagnostics.dump-fd-stats off
diagnostics.count-fop-hits off
diagnostics.brick-log-level INFO
diagnostics.client-log-level INFO
diagnostics.brick-sys-log-level CRITICAL
diagnostics.client-sys-log-level CRITICAL
diagnostics.brick-logger (null)
diagnostics.client-logger (null)
diagnostics.brick-log-format (null)
diagnostics.client-log-format (null)
diagnostics.brick-log-buf-size 5
diagnostics.client-log-buf-size 5
diagnostics.brick-log-flush-timeout 120
diagnostics.client-log-flush-timeout 120
performance.cache-max-file-size 0
performance.cache-min-file-size 0
performance.cache-refresh-timeout 1
performance.cache-priority
performance.cache-size 32MB
performance.io-thread-count 16
performance.high-prio-threads 16
performance.normal-prio-threads 16
performance.low-prio-threads 16
performance.least-prio-threads 1
performance.enable-least-priority on
performance.least-rate-limit 0
performance.cache-size 128MB
performance.flush-behind on
performance.nfs.flush-behind on
performance.write-behind-window-size 1MB
performance.nfs.write-behind-window-size 1MB
performance.strict-o-direct off
performance.nfs.strict-o-direct off
performance.strict-write-ordering off
performance.nfs.strict-write-ordering off
performance.lazy-open yes
performance.read-after-open no
performance.read-ahead-page-count 4
performance.md-cache-timeout 1
features.encryption off
encryption.master-key (null)
encryption.data-key-size 256
encryption.block-size 4096
network.frame-timeout 1800
network.ping-timeout 42
network.tcp-window-size (null)
features.lock-heal off
features.grace-timeout 10
network.remote-dio disable
network.tcp-window-size (null)
network.inode-lru-limit 16384
auth.allow *
auth.reject (null)
transport.keepalive (null)
server.allow-insecure (null)
server.root-squash off
server.anonuid 65534
server.anongid 65534
server.statedump-path /var/run/gluster
server.outstanding-rpc-limit 64
features.lock-heal off
features.grace-timeout (null)
server.ssl (null)
auth.ssl-allow *
server.manage-gids off
client.send-gids on
server.gid-timeout 2
server.own-thread (null)
performance.write-behind on
performance.read-ahead on
performance.readdir-ahead off
performance.io-cache on
performance.quick-read on
performance.open-behind on
performance.stat-prefetch on
performance.client-io-threads off
performance.nfs.write-behind on
performance.nfs.read-ahead off
performance.nfs.io-cache off
performance.nfs.quick-read off
performance.nfs.stat-prefetch off
performance.nfs.io-threads off
performance.force-readdirp true
features.file-snapshot off
features.uss off
features.snapshot-directory .snaps
features.show-snapshot-directory off
network.compression off
network.compression.window-size -15
network.compression.mem-level 8
network.compression.min-size 0
network.compression.compression-level -1
network.compression.debug false
features.limit-usage (null)
features.quota-timeout 0
features.default-soft-limit 80%
features.soft-timeout 60
features.hard-timeout 5
features.alert-time 86400
features.quota-deem-statfs off
geo-replication.indexing off
geo-replication.indexing off
geo-replication.ignore-pid-check off
geo-replication.ignore-pid-check off
features.quota on
debug.trace off
debug.log-history no
debug.log-file no
debug.exclude-ops (null)
debug.include-ops (null)
debug.error-gen off
debug.error-failure (null)
debug.error-number (null)
debug.random-failure off
debug.error-fops (null)
nfs.enable-ino32 no
nfs.mem-factor 15
nfs.export-dirs on
nfs.export-volumes on
nfs.addr-namelookup off
nfs.dynamic-volumes off
nfs.register-with-portmap on
nfs.outstanding-rpc-limit 16
nfs.port 2049
nfs.rpc-auth-unix on
nfs.rpc-auth-null on
nfs.rpc-auth-allow all
nfs.rpc-auth-reject none
nfs.ports-insecure off
nfs.trusted-sync off
nfs.trusted-write off
nfs.volume-access read-write
nfs.export-dir
nfs.disable false
nfs.nlm on
nfs.acl on
nfs.mount-udp off
nfs.mount-rmtab /var/lib/glusterd/nfs/rmtab
nfs.rpc-statd /sbin/rpc.statd
nfs.server-aux-gids off
nfs.drc off
nfs.drc-size 0x20000
nfs.read-size (1 * 1048576ULL)
nfs.write-size (1 * 1048576ULL)
nfs.readdir-size (1 * 1048576ULL)
features.read-only off
features.worm off
storage.linux-aio off
storage.batch-fsync-mode reverse-fsync
storage.batch-fsync-delay-usec 0
storage.owner-uid -1
storage.owner-gid -1
storage.node-uuid-pathinfo off
storage.health-check-interval 30
storage.build-pgfid off
storage.bd-aio off
cluster.server-quorum-type off
cluster.server-quorum-ratio 0
changelog.changelog off
changelog.changelog-dir (null)
changelog.encoding ascii
changelog.rollover-time 15
changelog.fsync-interval 5
changelog.changelog-barrier-timeout 120
features.barrier disable
features.barrier-timeout 120
locks.trace disable
cluster.disperse-self-heal-daemon enable
# Output of gluster volume info
[root@dhcp37-183 ~]# gluster volume info
Volume Name: testvol
Type: Disperse
Volume ID: ad1a31fb-2e69-4d5d-9ae0-d057879b8fd5
Status: Started
Number of Bricks: 1 x (4 + 2) = 6
Transport-type: tcp
Bricks:
Brick1: dhcp37-120:/rhs/brick1/b1
Brick2: dhcp37-208:/rhs/brick1/b1
Brick3: dhcp37-178:/rhs/brick1/b1
Brick4: dhcp37-183:/rhs/brick1/b1
Brick5: dhcp37-120:/rhs/brick2/b2
Brick6: dhcp37-208:/rhs/brick2/b2
Options Reconfigured:
features.uss: off
features.quota: on
# Output of gluster volume status
[root@dhcp37-183 ~]# gluster volume status
Status of volume: testvol
Gluster process Port Online Pid
------------------------------------------------------------------------------
Brick dhcp37-120:/rhs/brick1/b1 49152 Y 4946
Brick dhcp37-208:/rhs/brick1/b1 49156 Y 30577
Brick dhcp37-178:/rhs/brick1/b1 49156 Y 30675
Brick dhcp37-183:/rhs/brick1/b1 49156 Y 30439
Brick dhcp37-120:/rhs/brick2/b2 49153 Y 4957
Brick dhcp37-208:/rhs/brick2/b2 49157 Y 30588
NFS Server on localhost 2049 Y 25102
Quota Daemon on localhost N/A Y 30515
NFS Server on 10.70.37.120 2049 Y 313
Quota Daemon on 10.70.37.120 N/A Y 5128
NFS Server on dhcp37-178 2049 Y 25290
Quota Daemon on dhcp37-178 N/A Y 30751
NFS Server on dhcp37-208 2049 Y 25329
Quota Daemon on dhcp37-208 N/A Y 30678
Task Status of Volume testvol
------------------------------------------------------------------------------
There are no active volume tasks
# statedump and sosreports will be attached.
How reproducible:
100%
Steps to Reproduce:
1. FUSE mount the volume and create some files and directories
2. NFS mount the same volume and try to list the created files with 'ls'
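The steps above can be sketched as the following commands, assuming a cluster
node `dhcp37-183` exporting the `testvol` volume from the report; the mount
points and file names are illustrative:

```shell
# Requires a running GlusterFS cluster exporting the disperse volume 'testvol';
# hostnames, mount points, and file names below are illustrative.

# 1. FUSE mount the volume and create some files and directories
mount -t glusterfs dhcp37-183:/testvol /mnt/fuse
mkdir /mnt/fuse/dir1
touch /mnt/fuse/file1 /mnt/fuse/dir1/file2

# 2. NFS mount the same volume (Gluster NFS serves NFSv3) and list it
mount -t nfs -o vers=3 dhcp37-183:/testvol /mnt/nfs
ls /mnt/nfs            # bug: lists nothing
stat /mnt/nfs/file1    # yet the absolute path resolves
```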
Actual results:
'ls' on the NFS mount lists no files or directories; entries are reachable
only when their absolute path is given.
Expected results:
Files should be displayed irrespective of the mount type.
Additional info: