[Bugs] [Bug 1227204] New: glusterfsd: bricks crash while executing ls on nfs-ganesha vers=3
bugzilla at redhat.com
Tue Jun 2 07:22:37 UTC 2015
https://bugzilla.redhat.com/show_bug.cgi?id=1227204
Bug ID: 1227204
Summary: glusterfsd: bricks crash while executing ls on
nfs-ganesha vers=3
Product: GlusterFS
Version: mainline
Component: upcall
Keywords: Triaged
Severity: high
Assignee: bugs at gluster.org
Reporter: skoduri at redhat.com
CC: ansubram at redhat.com, bugs at gluster.org,
gluster-bugs at redhat.com, kkeithle at redhat.com,
mmadhusu at redhat.com, ndevos at redhat.com,
saujain at redhat.com, skoduri at redhat.com
Depends On: 1221941
+++ This bug was initially created as a clone of Bug #1221941 +++
Description of problem:
Seen coredumps for several brick processes of the same volume while executing
ls on the mount point. The volume was mounted via nfs-ganesha with vers=3.
Version-Release number of selected component (if applicable):
glusterfs-3.7.0beta2-0.0.el6.x86_64
nfs-ganesha-2.2.0-0.el6.x86_64
How reproducible:
seen only once
Steps to Reproduce:
1. create a 6x2 volume, start it
2. bring up nfs-ganesha after completing the pre-requisites
3. disable ACLs (Disable_ACL = True in the export) and bring nfs-ganesha up again so the change takes effect
4. mount the volume with vers=3
5. execute ls on the mount point (a rough command sketch follows these steps)
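For reference, roughly the command sequence the steps above correspond to. This is a hedged sketch only: node names, brick paths and <ganesha-vip> are placeholders, and the nfs-ganesha pre-requisites (shared storage volume, ganesha-ha.conf, enabling the HA cluster) are assumed to be in place already.
# hedged reproduction sketch; node names, brick paths and <ganesha-vip> are placeholders
gluster volume create vol2 replica 2 \
    node1:/rhs/brick1/d1r1 node2:/rhs/brick1/d1r2 \
    node3:/rhs/brick1/d2r1 node4:/rhs/brick1/d2r2
    # ...plus the remaining eight bricks for the full 6x2 layout
gluster volume start vol2
# export through nfs-ganesha; Disable_ACL = True is set in
# /etc/ganesha/exports/export.vol2.conf (see the export block further down)
gluster volume set vol2 ganesha.enable on
# NFSv3 mount and the step that hangs and crashes the bricks
mount -t nfs -o vers=3 <ganesha-vip>:/vol2 /mnt/nfs-test
time ls /mnt/nfs-test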
Actual results:
result of step 5:
[root@rhsauto010 ~]# time ls /mnt/nfs-test
dir dir1 fstest_f017b1f6b87412d79e9052d0a289ce23 rhsauto010.test
real 144m12.193s
user 0m0.003s
sys 0m0.023s
(gdb) bt
#0 0x00007fcb200605bd in __gf_free (free_ptr=0x7fcabc0036a0) at mem-pool.c:312
#1 0x00007fcb0fbe1dc7 in upcall_reaper_thread (data=0x7fcb100127a0) at
upcall-internal.c:426
#2 0x0000003890c079d1 in start_thread () from /lib64/libpthread.so.0
#3 0x00000038908e88fd in clone () from /lib64/libc.so.6
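For anyone looking at the remaining brick cores, a hedged sketch of how to pull the same information out of a core file; the core path and the debuginfo package name are assumptions and may differ on the test machines.
# hypothetical inspection session; /core.<pid> is a placeholder
debuginfo-install glusterfs      # needs yum-utils; gives file:line info
gdb /usr/sbin/glusterfsd /core.<pid>
(gdb) bt                         # full backtrace, as pasted above
(gdb) frame 1                    # upcall_reaper_thread at upcall-internal.c:426
(gdb) info locals                # state of the entry the reaper thread was freeing
(gdb) frame 0
(gdb) print free_ptr             # 0x7fcabc0036a0, the pointer handed to __gf_free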
Expected results:
ls should not take this long, and glusterfsd dumping core is unexpected; both
problems need to be rectified.
Additional info:
--- Additional comment from Saurabh on 2015-05-15 06:03:35 EDT ---
[root@nfs3 ~]# gluster volume status
Status of volume: gluster_shared_storage
Gluster process TCP Port RDMA Port Online Pid
------------------------------------------------------------------------------
Brick 10.70.37.148:/rhs/brick1/d1r1-share 49156 0 Y 3549
Brick 10.70.37.77:/rhs/brick1/d1r2-share 49155 0 Y 3329
Brick 10.70.37.76:/rhs/brick1/d2r1-share 49155 0 Y 3081
Brick 10.70.37.69:/rhs/brick1/d2r2-share 49155 0 Y 3346
Brick 10.70.37.148:/rhs/brick1/d3r1-share 49157 0 Y 3566
Brick 10.70.37.77:/rhs/brick1/d3r2-share 49156 0 Y 3346
Brick 10.70.37.76:/rhs/brick1/d4r1-share 49156 0 Y 3098
Brick 10.70.37.69:/rhs/brick1/d4r2-share 49156 0 Y 3363
Brick 10.70.37.148:/rhs/brick1/d5r1-share 49158 0 Y 3583
Brick 10.70.37.77:/rhs/brick1/d5r2-share 49157 0 Y 3363
Brick 10.70.37.76:/rhs/brick1/d6r1-share 49157 0 Y 3115
Brick 10.70.37.69:/rhs/brick1/d6r2-share 49157 0 Y 3380
Self-heal Daemon on localhost N/A N/A Y 28389
Self-heal Daemon on 10.70.37.148 N/A N/A Y 22717
Self-heal Daemon on 10.70.37.77 N/A N/A Y 4784
Self-heal Daemon on 10.70.37.76 N/A N/A Y 25893
Task Status of Volume gluster_shared_storage
------------------------------------------------------------------------------
There are no active volume tasks
Status of volume: vol2
Gluster process TCP Port RDMA Port Online Pid
------------------------------------------------------------------------------
Brick 10.70.37.148:/rhs/brick1/d1r1 49153 0 Y 22219
Brick 10.70.37.77:/rhs/brick1/d1r2 49152 0 Y 4321
Brick 10.70.37.76:/rhs/brick1/d2r1 N/A N/A N 25654
Brick 10.70.37.69:/rhs/brick1/d2r2 49152 0 Y 27914
Brick 10.70.37.148:/rhs/brick1/d3r1 49154 0 Y 18842
Brick 10.70.37.77:/rhs/brick1/d3r2 49153 0 Y 4343
Brick 10.70.37.76:/rhs/brick1/d4r1 N/A N/A N 25856
Brick 10.70.37.69:/rhs/brick1/d4r2 N/A N/A N 27934
Brick 10.70.37.148:/rhs/brick1/d5r1 49155 0 Y 22237
Brick 10.70.37.77:/rhs/brick1/d5r2 49154 0 Y 4361
Brick 10.70.37.76:/rhs/brick1/d6r1 N/A N/A N 25874
Brick 10.70.37.69:/rhs/brick1/d6r2 N/A N/A N 27952
Self-heal Daemon on localhost N/A N/A Y 28389
Self-heal Daemon on 10.70.37.77 N/A N/A Y 4784
Self-heal Daemon on 10.70.37.148 N/A N/A Y 22717
Self-heal Daemon on 10.70.37.76 N/A N/A Y 25893
Task Status of Volume vol2
------------------------------------------------------------------------------
There are no active volume tasks
cat /etc/ganesha/exports/export.vol2.conf
# WARNING : Using Gluster CLI will overwrite manual
# changes made to this file. To avoid it, edit the
# file, copy it over to all the NFS-Ganesha nodes
# and run ganesha-ha.sh --refresh-config.
EXPORT{
    Export_Id = 2;
    Path = "/vol2";
    FSAL {
        name = GLUSTER;
        hostname = "localhost";
        volume = "vol2";
    }
    Access_type = RW;
    Squash = "No_root_squash";
    Pseudo = "/vol2";
    Protocols = "3", "4";
    Transports = "UDP", "TCP";
    SecType = "sys";
    Disable_ACL = True;
}
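After hand-editing the export block (per the warning at the top of the file), the change can be propagated and checked with something along these lines; the ganesha-ha.sh location and arguments depend on the installation and are an assumption here.
# propagate the edited export to all NFS-Ganesha nodes and reload it
ganesha-ha.sh --refresh-config
# verify the volume is still exported over NFSv3
showmount -e localhost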
--- Additional comment from Saurabh on 2015-05-15 06:08:11 EDT ---
--- Additional comment from Saurabh on 2015-05-15 06:10:39 EDT ---
--- Additional comment from Saurabh on 2015-05-15 06:13:06 EDT ---
Referenced Bugs:
https://bugzilla.redhat.com/show_bug.cgi?id=1221941
[Bug 1221941] glusterfsd: bricks crash while executing ls on nfs-ganesha
vers=3