[Bugs] [Bug 1260003] New: Data Tiering:Regression:NFS crashed due to dht readdirp after attach tier
bugzilla at redhat.com
Fri Sep 4 06:59:51 UTC 2015
https://bugzilla.redhat.com/show_bug.cgi?id=1260003
Bug ID: 1260003
Summary: Data Tiering:Regression:NFS crashed due to dht
readdirp after attach tier
Product: GlusterFS
Version: 3.7.4
Component: tiering
Assignee: bugs at gluster.org
Reporter: nchilaka at redhat.com
QA Contact: bugs at gluster.org
CC: bugs at gluster.org
Description of problem:
===========================
I had an existing volume mounted over NFS.
While I was doing I/O (creating files), I attached a tier to check whether
I/O would go to the hot tier after the attach. It did not, so I raised a
separate bug for that.
After the attach-tier completed, I created more files to see whether at
least these new files would go to the hot tier, but they did not either.
To check whether a lookup would make writes go to the hot tier, I opened a
second connection to the mount point and issued an "ls".
This crashed the NFS process.
[2015-09-04 11:10:00.830391] E [nfs3.c:341:__nfs3_get_volume_id]
(-->/usr/lib64/glusterfs/3.7.4/xlator/nfs/server.so(nfs3_getattr_reply+0x29)
[0x7f498efaa9e9]
-->/usr/lib64/glusterfs/3.7.4/xlator/nfs/server.so(nfs3_request_xlator_deviceid+0x78)
[0x7f498efa93c8]
-->/usr/lib64/glusterfs/3.7.4/xlator/nfs/server.so(__nfs3_get_volume_id+0xae)
[0x7f498efa930e] ) 0-nfs-nfsv3: invalid argument: xl [Invalid argument]
pending frames:
frame : type(0) op(0)
frame : type(0) op(0)
patchset: git://git.gluster.com/glusterfs.git
signal received: 11
time of crash:
2015-09-04 11:10:42
configuration details:
argp 1
backtrace 1
dlfcn 1
libpthread 1
llistxattr 1
setfsid 1
spinlock 1
epoll.h 1
xattr.h 1
st_atim.tv_nsec 1
package-string: glusterfs 3.7.4
/usr/lib64/libglusterfs.so.0(_gf_msg_backtrace_nomem+0xb6)[0x33db025936]
/usr/lib64/libglusterfs.so.0(gf_print_trace+0x32f)[0x33db04549f]
/lib64/libc.so.6[0x340e8326a0]
/usr/lib64/glusterfs/3.7.4/xlator/cluster/distribute.so(dht_layout_search+0x19)[0x7f498f893419]
/usr/lib64/glusterfs/3.7.4/xlator/cluster/distribute.so(dht_readdirp_cbk+0x4b1)[0x7f498f8c16f1]
/usr/lib64/glusterfs/3.7.4/xlator/protocol/client.so(client3_3_readdirp_cbk+0x1a0)[0x7f498fb11830]
/usr/lib64/libgfrpc.so.0(rpc_clnt_handle_reply+0xa5)[0x33db80f4a5]
/usr/lib64/libgfrpc.so.0(rpc_clnt_notify+0x1a1)[0x33db8109d1]
/usr/lib64/libgfrpc.so.0(rpc_transport_notify+0x28)[0x33db80bb28]
/usr/lib64/glusterfs/3.7.4/rpc-transport/socket.so(+0xabd5)[0x7f4990958bd5]
/usr/lib64/glusterfs/3.7.4/rpc-transport/socket.so(+0xc7bd)[0x7f499095a7bd]
/usr/lib64/libglusterfs.so.0[0x33db08b0a0]
/lib64/libpthread.so.0[0x340ec07a51]
/lib64/libc.so.6(clone+0x6d)[0x340e8e89ad]
Version-Release number of selected component (if applicable):
============================================================
[root at nag-manual-node1 glusterfs]# gluster --version
glusterfs 3.7.4 built on Sep 2 2015 18:06:07
Repository revision: git://git.gluster.com/glusterfs.git
Copyright (c) 2006-2011 Gluster Inc. <http://www.gluster.com>
GlusterFS comes with ABSOLUTELY NO WARRANTY.
You may redistribute copies of GlusterFS under the terms of the GNU General
Public License.
[root at nag-manual-node1 glusterfs]# rpm -qa|grep gluster
glusterfs-libs-3.7.4-0.16.git9f27ef9.el6.x86_64
glusterfs-api-3.7.4-0.16.git9f27ef9.el6.x86_64
glusterfs-client-xlators-3.7.4-0.16.git9f27ef9.el6.x86_64
glusterfs-fuse-3.7.4-0.16.git9f27ef9.el6.x86_64
glusterfs-cli-3.7.4-0.16.git9f27ef9.el6.x86_64
glusterfs-3.7.4-0.16.git9f27ef9.el6.x86_64
glusterfs-server-3.7.4-0.16.git9f27ef9.el6.x86_64
Steps to Reproduce:
=====================
1. Create a regular volume.
2. Mount the volume over NFS and start I/O; while the I/O is in progress,
attach a tier.
3. After the attach-tier completes, open a second connection from the same
client and issue an "ls".
4. The "ls" triggers the crash.
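The steps above can be sketched as a command sequence. This is only an illustrative repro script: the volume name, brick paths, and host names below are hypothetical, and the exact attach-tier syntax may differ across 3.7.x builds.

```shell
# 1. Create and start a regular (distributed) volume -- names are hypothetical.
gluster volume create testvol server1:/bricks/b1 server2:/bricks/b2
gluster volume start testvol

# 2. Mount it over NFS and start creating files in the background.
mount -t nfs -o vers=3 server1:/testvol /mnt/testvol
for i in $(seq 1 1000); do
    dd if=/dev/zero of=/mnt/testvol/file.$i bs=1M count=1
done &

# While the I/O loop runs, attach a hot tier.
gluster volume attach-tier testvol server1:/bricks/hot1 server2:/bricks/hot2

# 3. After attach-tier completes, open a second connection to the same volume.
mkdir -p /mnt/testvol2
mount -t nfs -o vers=3 server1:/testvol /mnt/testvol2

# 4. The readdirp issued by this "ls" is what crashed the NFS process.
ls /mnt/testvol2
```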
Due to this, I will be moving the ON_QA bug 1259081 (I/O failure on
attaching tier) back to failed.