[Bugs] [Bug 1422375] New: glusterd crashes when a volume is stopped

bugzilla at redhat.com bugzilla at redhat.com
Wed Feb 15 07:07:01 UTC 2017


https://bugzilla.redhat.com/show_bug.cgi?id=1422375

            Bug ID: 1422375
           Summary: glusterd crashes when a volume is stopped
           Product: GlusterFS
           Version: 3.10
         Component: glusterd
          Severity: high
          Assignee: bugs at gluster.org
          Reporter: kramdoss at redhat.com
                CC: bugs at gluster.org



Description of problem:
Stopping a distributed-replicated (dist-rep) gluster volume crashes glusterd on the
node from which the 'gluster volume stop' command is run. This behavior was observed
with containerized gluster; it has not yet been tried on a standalone gluster cluster.

[root@dhcp47-31 /]# gdb -f /usr/sbin/glusterd /var/log/glusterfs/core.157
GNU gdb (GDB) Red Hat Enterprise Linux 7.6.1-94.el7
Copyright (C) 2013 Free Software Foundation, Inc.
License GPLv3+: GNU GPL version 3 or later <http://gnu.org/licenses/gpl.html>
This is free software: you are free to change and redistribute it.
There is NO WARRANTY, to the extent permitted by law.  Type "show copying"
and "show warranty" for details.
This GDB was configured as "x86_64-redhat-linux-gnu".
For bug reporting instructions, please see:
<http://www.gnu.org/software/gdb/bugs/>...
Reading symbols from /usr/sbin/glusterfsd...Reading symbols from
/usr/lib/debug/usr/sbin/glusterfsd.debug...done.
done.

warning: core file may not match specified executable file.
[New LWP 162]
[New LWP 398]
[New LWP 359]
[New LWP 360]
[New LWP 161]
[New LWP 157]
[New LWP 158]
[New LWP 160]
[New LWP 159]
[Thread debugging using libthread_db enabled]
Using host libthread_db library "/lib64/libthread_db.so.1".
Core was generated by `/usr/sbin/glusterd -p /var/run/glusterd.pid --log-level INFO'.
Program terminated with signal 6, Aborted.
#0  0x00007fb449a5f1d7 in raise () from /lib64/libc.so.6
Missing separate debuginfos, use: debuginfo-install
bzip2-libs-1.0.6-13.el7.x86_64
device-mapper-event-libs-1.02.135-1.el7_3.2.x86_64
device-mapper-libs-1.02.135-1.el7_3.2.x86_64 elfutils-libelf-0.166-2.el7.x86_64
elfutils-libs-0.166-2.el7.x86_64 glibc-2.17-157.el7_3.1.x86_64
keyutils-libs-1.5.8-3.el7.x86_64 krb5-libs-1.14.1-27.el7_3.x86_64
libattr-2.4.46-12.el7.x86_64 libblkid-2.23.2-33.el7.x86_64
libcap-2.22-8.el7.x86_64 libcom_err-1.42.9-9.el7.x86_64
libgcc-4.8.5-11.el7.x86_64 libselinux-2.5-6.el7.x86_64
libsepol-2.5-6.el7.x86_64 libuuid-2.23.2-33.el7.x86_64
libxml2-2.9.1-6.el7_2.3.x86_64 lvm2-libs-2.02.166-1.el7_3.2.x86_64
openssl-libs-1.0.1e-60.el7.x86_64 pcre-8.32-15.el7_2.1.x86_64
systemd-libs-219-30.el7_3.6.x86_64 userspace-rcu-0.7.16-1.el7.x86_64
xz-libs-5.2.2-1.el7.x86_64 zlib-1.2.7-17.el7.x86_64
(gdb) bt
#0  0x00007fb449a5f1d7 in raise () from /lib64/libc.so.6
#1  0x00007fb449a608c8 in abort () from /lib64/libc.so.6
#2  0x00007fb449a9ef07 in __libc_message () from /lib64/libc.so.6
#3  0x00007fb449b39047 in __fortify_fail () from /lib64/libc.so.6
#4  0x00007fb449b37200 in __chk_fail () from /lib64/libc.so.6
#5  0x00007fb449b3691b in __vsnprintf_chk () from /lib64/libc.so.6
#6  0x00007fb449b36838 in __snprintf_chk () from /lib64/libc.so.6
#7  0x00007fb4463cee04 in snprintf (__fmt=0x7fb4464cb818 "%s/run/%s-%s.pid", __n=4096,
    __s=0x7fb438408330 "\200\204@8\264\177") at /usr/include/bits/stdio2.h:64
#8  glusterd_bricks_select_stop_volume (dict=dict@entry=0x7fb43018fe50,
    op_errstr=op_errstr@entry=0x7fb43840a930,
    selected=selected@entry=0x7fb43840a870) at glusterd-op-sm.c:6182
#9  0x00007fb4463dc916 in glusterd_op_bricks_select (op=op@entry=GD_OP_STOP_VOLUME,
    dict=dict@entry=0x7fb43018fe50, op_errstr=op_errstr@entry=0x7fb43840a930,
    selected=selected@entry=0x7fb43840a870,
    rsp_dict=rsp_dict@entry=0x7fb43018d830) at glusterd-op-sm.c:7645
#10 0x00007fb4464792af in gd_brick_op_phase (op=GD_OP_STOP_VOLUME,
    op_ctx=op_ctx@entry=0x7fb43c001450, req_dict=0x7fb43018fe50,
    op_errstr=op_errstr@entry=0x7fb43840a930) at glusterd-syncop.c:1685
#11 0x00007fb446479d33 in gd_sync_task_begin (op_ctx=op_ctx@entry=0x7fb43c001450,
    req=req@entry=0x7fb438001710) at glusterd-syncop.c:1937
#12 0x00007fb44647a030 in glusterd_op_begin_synctask (req=req@entry=0x7fb438001710,
    op=op@entry=GD_OP_STOP_VOLUME, dict=0x7fb43c001450) at glusterd-syncop.c:2006
#13 0x00007fb44646147f in __glusterd_handle_cli_stop_volume (req=req@entry=0x7fb438001710)
    at glusterd-volume-ops.c:628
#14 0x00007fb4463c0fde in glusterd_big_locked_handler (req=0x7fb438001710,
    actor_fn=0x7fb446461280 <__glusterd_handle_cli_stop_volume>) at glusterd-handler.c:81
#15 0x00007fb44b3b26d0 in synctask_wrap (old_task=<optimized out>) at syncop.c:375
#16 0x00007fb449a70cf0 in ?? () from /lib64/libc.so.6
#17 0x0000000000000000 in ?? ()


Version-Release number of selected component (if applicable):
rpm -qa | grep 'gluster'
glusterfs-resource-agents-3.10.0rc-0.0.el7.centos.noarch
glusterfs-events-3.10.0rc-0.0.el7.centos.x86_64
glusterfs-debuginfo-3.10.0rc-0.0.el7.centos.x86_64
glusterfs-3.10.0rc-0.0.el7.centos.x86_64
glusterfs-api-3.10.0rc-0.0.el7.centos.x86_64
python2-gluster-3.10.0rc-0.0.el7.centos.x86_64
glusterfs-fuse-3.10.0rc-0.0.el7.centos.x86_64
glusterfs-server-3.10.0rc-0.0.el7.centos.x86_64
glusterfs-devel-3.10.0rc-0.0.el7.centos.x86_64
glusterfs-api-devel-3.10.0rc-0.0.el7.centos.x86_64
glusterfs-geo-replication-3.10.0rc-0.0.el7.centos.x86_64
glusterfs-libs-3.10.0rc-0.0.el7.centos.x86_64
glusterfs-client-xlators-3.10.0rc-0.0.el7.centos.x86_64
glusterfs-extra-xlators-3.10.0rc-0.0.el7.centos.x86_64
glusterfs-cli-3.10.0rc-0.0.el7.centos.x86_64


How reproducible:
Always

Steps to Reproduce:
1. Create a 3-node containerized gluster cluster.
2. Create a 2x3 (distributed-replicated) volume.
3. Start the volume.
4. Stop the volume.
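
The steps above might look like the following on the CLI (hostnames and brick
paths are assumptions; a 2x3 volume is two distribute subvolumes of replica 3,
i.e. six bricks):

```shell
# Hypothetical hosts/brick paths -- adjust to the actual cluster.
gluster volume create testvol replica 3 \
  host1:/bricks/b1 host2:/bricks/b1 host3:/bricks/b1 \
  host1:/bricks/b2 host2:/bricks/b2 host3:/bricks/b2
gluster volume start testvol
gluster volume stop testvol   # answer 'y' at the prompt; glusterd crashes here
```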

Actual results:
glusterd crashes with SIGABRT on the node from which 'gluster volume stop' is run.

Expected results:
The volume should stop cleanly and glusterd should not crash.

Additional info:
sosreports will be attached shortly.

-- 
You are receiving this mail because:
You are on the CC list for the bug.
You are the assignee for the bug.


More information about the Bugs mailing list