[Bugs] [Bug 1447266] [snapshot cifs] ls on .snaps directory is throwing input/output error over cifs mount

bugzilla at redhat.com
Tue May 2 09:33:21 UTC 2017


https://bugzilla.redhat.com/show_bug.cgi?id=1447266



--- Comment #2 from Mohammed Rafi KC <rkavunga at redhat.com> ---
Description of problem:
ls on the .snaps directory throws an input/output error over a CIFS mount.
Listing works on a FUSE mount and on a Windows mount.

Version-Release number of selected component (if applicable):
mainline

How reproducible:
Always

Steps to Reproduce:
1. Enable USS and the VSS plugin (see the example commands after this list)
2. Mount the volume over cifs from a Samba-CTDB Gluster cluster
3. cd to the mount point
4. Run ll or ls on the .snaps directory over the cifs mount
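
For reference, the two volume options involved (both visible under "Options
Reconfigured" below) can be enabled with the gluster CLI; the volume name is
taken from this report:

  gluster volume set saturday-saturday features.uss enable
  gluster volume set saturday-saturday features.show-snapshot-directory enable

The VSS side is the shadow_copy2 vfs object configured in the smb.conf shown
below.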

Actual results:
ls: reading directory .snaps/: Input/output error

Expected results:
Should list out contents

Volume Name: saturday-saturday
Type: Distributed-Replicate
Volume ID: 4a24c34c-1144-4f07-9763-6e232c037a67
Status: Started
Snapshot Count: 0
Number of Bricks: 6 x 2 = 12
Transport-type: tcp
Options Reconfigured:
features.show-snapshot-directory: enable
features.uss: enable
transport.address-family: inet
nfs.disable: on
server.allow-insecure: on
performance.stat-prefetch: on
storage.batch-fsync-delay-usec: 0
features.cache-invalidation: on
features.cache-invalidation-timeout: 600
performance.cache-invalidation: on
performance.md-cache-timeout: 600
network.inode-lru-limit: 50000
performance.cache-samba-metadata: on
performance.parallel-readdir: on

smb.conf
---------

[gluster-saturday-saturday]
comment = For samba share of volume saturday-saturday
#vfs objects = glusterfs
vfs objects = shadow_copy2 glusterfs
glusterfs:volume = saturday-saturday
glusterfs:logfile = /var/log/samba/glusterfs-saturday-saturday.%M.log
glusterfs:loglevel = 9
shadow:snapdir = /.snaps
shadow:basedir = /
shadow:sort = desc
shadow:sscanf = false
#shadow:delimeter = _123john_cena_GMT
#shadow:snapprefix = [abc]
shadow:format = snap1_GMT-%Y.%m.%d-%H.%M.%S
path = /
read only = no
guest ok = yes

--- Additional comment from Anoop C S on 2017-04-18 07:45:40 EDT ---

I was able to reproduce the bug easily. The following entries were seen in the Samba logs:

[2017/04/12 15:42:25.377261,  5, pid=11231, effective(0, 0), real(0, 0)]
../source3/smbd/filename.c:644(unix_convert)
  unix_convert begin: name = .snaps/*, dirpath = .snaps, start = *
[2017/04/12 15:42:25.378556,  5, pid=11231, effective(0, 0), real(0, 0)]
../source3/smbd/filename.c:218(check_parent_exists)
  check_parent_exists: name = .snaps/*, dirpath = .snaps, start = *
[2017/04/12 15:42:25.378605, 10, pid=11231, effective(0, 0), real(0, 0)]
../source3/smbd/mangle_hash2.c:418(is_mangled)
  is_mangled * ?
[2017/04/12 15:42:25.378630, 10, pid=11231, effective(0, 0), real(0, 0)]
../source3/smbd/mangle_hash2.c:357(is_mangled_component)
  is_mangled_component * (len 1) ?
[2017/04/12 15:42:25.378653,  5, pid=11231, effective(0, 0), real(0, 0)]
../source3/smbd/filename.c:847(unix_convert)
  Wildcard *
[2017/04/12 15:42:25.378676, 10, pid=11231, effective(0, 0), real(0, 0),
class=vfs] ../source3/smbd/vfs.c:1199(check_reduced_name)
  check_reduced_name: check_reduced_name [.snaps/*] [/]
[2017/04/12 15:42:25.380977, 10, pid=11231, effective(0, 0), real(0, 0),
class=vfs] ../source3/smbd/vfs.c:1259(check_reduced_name)
  check_reduced_name realpath [.snaps/*] -> [/.snaps/*]
[2017/04/12 15:42:25.380999,  5, pid=11231, effective(0, 0), real(0, 0),
class=vfs] ../source3/smbd/vfs.c:1370(check_reduced_name)
  check_reduced_name: .snaps/* reduced to /.snaps/*
[2017/04/12 15:42:25.381010,  5, pid=11231, effective(0, 0), real(0, 0)]
../source3/smbd/trans2.c:2741(call_trans2findfirst)
  dir=.snaps, mask = *
[2017/04/12 15:42:25.381026,  5, pid=11231, effective(0, 0), real(0, 0)]
../source3/smbd/dir.c:474(dptr_create)
  dptr_create dir=.snaps
[2017/04/12 15:42:25.381499, 10, pid=11231, effective(0, 0), real(0, 0)]
../source3/smbd/open.c:100(smbd_check_access_rights)
  smbd_check_access_rights: root override on .snaps. Granting 0x1
[2017/04/12 15:42:25.383028,  4, pid=11231, effective(0, 0), real(0, 0),
class=vfs] ../source3/smbd/vfs.c:874(vfs_ChDir)
  vfs_ChDir to .snaps
[2017/04/12 15:42:25.383966,  1, pid=11231, effective(0, 0), real(0, 0),
class=vfs] ../source3/smbd/vfs.c:921(vfs_GetWd)
  vfs_GetWd: couldn't stat "." error No such file or directory (NFS problem ?)
[2017/04/12 15:42:25.383985,  4, pid=11231, effective(0, 0), real(0, 0),
class=vfs] ../source3/smbd/vfs.c:885(vfs_ChDir)
  vfs_ChDir got /.snaps
[2017/04/12 15:42:25.383998, 10, pid=11231, effective(0, 0), real(0, 0),
class=vfs] ../source3/smbd/vfs.c:1199(check_reduced_name)
  check_reduced_name: check_reduced_name [.] [/]
[2017/04/12 15:42:25.385272,  3, pid=11231, effective(0, 0), real(0, 0),
class=vfs] ../source3/smbd/vfs.c:1239(check_reduced_name)
  check_reduce_name: couldn't get realpath for .
(NT_STATUS_OBJECT_PATH_NOT_FOUND)
[2017/04/12 15:42:25.385286,  5, pid=11231, effective(0, 0), real(0, 0)]
../source3/smbd/filename.c:1248(check_name)
  check_name: name . failed with NT_STATUS_OBJECT_PATH_NOT_FOUND

Judging from the above log, it seems that the stat and realpath VFS calls made
to glusterfs after changing directory into .snaps failed. The corresponding
glusterfs client log entries:

[2017-04-12 10:12:25.379506] D [MSGID: 0]
[client-rpc-fops.c:2936:client3_3_lookup_cbk] 0-stack-trace: stack-address:
0x55b8761a29b0, xcube-snapd-client returned -1 error: No such file or directory
[No such file or directory]
[2017-04-12 10:12:25.379535] D [snapview-client.c:289:gf_svc_lookup_cbk]
0-xcube-snapview-client: Lookup failed on snapview graph with error No such
file or directory
[2017-04-12 10:12:25.379554] D [MSGID: 0]
[snapview-client.c:329:gf_svc_lookup_cbk] 0-stack-trace: stack-address:
0x55b8761a29b0, xcube-snapview-client returned -1 error: No such file or
directory [No such file or directory]
[2017-04-12 10:12:25.379581] D [MSGID: 0] [io-stats.c:2213:io_stats_lookup_cbk]
0-stack-trace: stack-address: 0x55b8761a29b0, xcube returned -1 error: No such
file or directory [No such file or directory]

I have attached a simple gfapi reproducer. We need to see why those calls are
failing on the .snaps directory.
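
For reference, a minimal sketch of what such a gfapi reproducer could look
like, mirroring the stat/realpath/readdir sequence from the Samba log above.
This is not the attached reproducer itself; the host, volume name and log
path below are placeholders:

#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <errno.h>
#include <dirent.h>
#include <sys/stat.h>
#include <glusterfs/api/glfs.h>

int main(int argc, char *argv[])
{
    const char *host = argc > 1 ? argv[1] : "localhost";
    const char *volname = argc > 2 ? argv[2] : "saturday-saturday";
    struct dirent *entry;
    struct stat st;
    glfs_fd_t *fd;
    glfs_t *fs;
    char *rp;

    fs = glfs_new(volname);
    if (!fs)
        return 1;
    glfs_set_volfile_server(fs, "tcp", host, 24007);
    glfs_set_logging(fs, "/tmp/snaps-repro.log", 7);
    if (glfs_init(fs)) {
        fprintf(stderr, "glfs_init: %s\n", strerror(errno));
        return 1;
    }

    /* stat on the entry point: roughly what vfs_GetWd() ends up doing
       after Samba chdir()s into .snaps */
    if (glfs_stat(fs, "/.snaps", &st))
        fprintf(stderr, "stat(/.snaps): %s\n", strerror(errno));

    /* realpath on the entry point, as in check_reduced_name */
    rp = glfs_realpath(fs, "/.snaps", NULL);
    if (!rp)
        fprintf(stderr, "realpath(/.snaps): %s\n", strerror(errno));
    free(rp);

    /* list the entry point, as the FIND_FIRST on .snaps/* would */
    fd = glfs_opendir(fs, "/.snaps");
    if (!fd) {
        fprintf(stderr, "opendir(/.snaps): %s\n", strerror(errno));
    } else {
        while ((entry = glfs_readdir(fd)) != NULL)
            printf("%s\n", entry->d_name);
        glfs_closedir(fd);
    }

    glfs_fini(fs);
    return 0;
}

Build with e.g. "gcc snaps-repro.c -o snaps-repro -lgfapi".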

--- Additional comment from Anoop C S on 2017-04-18 07:46 EDT ---



--- Additional comment from Mohammed Rafi KC on 2017-04-20 08:01:18 EDT ---

RCA:

Currently, the snapview server does not handle the dentry names "." and "..".
Lookups therefore fail for the dentries pointing to the entry point (i.e.
.snaps) or to the snapshot names.
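
To illustrate, a rough sketch of the kind of special-casing that is missing;
all identifiers below are hypothetical stand-ins, not the actual
snapview-server code:

#include <string.h>

/* Hypothetical stand-in for a snapview-server inode. */
typedef struct svs_node {
    struct svs_node *parent;    /* NULL at the entry point (.snaps) */
} svs_node_t;

/* Placeholder for the normal path that looks up a snapshot by name via
   snapd; stubbed out here. */
static svs_node_t *svs_lookup_snapshot(svs_node_t *dir, const char *name)
{
    (void)dir; (void)name;
    return NULL;
}

/* The missing special case: "." and ".." have to be resolved locally
   instead of being forwarded as literal dentry names, which is what
   produces the ENOENT seen in the client log (and, in turn, the EIO on
   ls). */
static svs_node_t *svs_resolve(svs_node_t *dir, const char *name)
{
    if (strcmp(name, ".") == 0)
        return dir;                             /* the directory itself */
    if (strcmp(name, "..") == 0)
        return dir->parent ? dir->parent : dir; /* one level up */
    return svs_lookup_snapshot(dir, name);      /* a real snapshot name */
}

int main(void)
{
    svs_node_t root = { 0 };
    svs_node_t snaps = { .parent = &root };
    return svs_resolve(&snaps, "..") == &root ? 0 : 1;
}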

