[Bugs] [Bug 1212063] New: [Geo-replication] cli crashed and core dump was observed while running gluster volume geo-replication vol0 status command
bugzilla at redhat.com
Wed Apr 15 13:32:43 UTC 2015
https://bugzilla.redhat.com/show_bug.cgi?id=1212063
Bug ID: 1212063
Summary: [Geo-replication] cli crashed and core dump was
observed while running gluster volume geo-replication
vol0 status command
Product: GlusterFS
Version: mainline
Component: geo-replication
Severity: urgent
Assignee: bugs at gluster.org
Reporter: ashah at redhat.com
CC: bugs at gluster.org, gluster-bugs at redhat.com
Description of problem:
While running the "gluster volume geo-replication vol0 status" command, the CLI
crashed with a segmentation fault and a core dump was generated.
Version-Release number of selected component (if applicable):
[root at localhost core]# rpm -qa | grep glusterfs
glusterfs-fuse-3.7dev-0.994.gitf522001.el6.x86_64
glusterfs-geo-replication-3.7dev-0.994.gitf522001.el6.x86_64
samba-glusterfs-3.6.509-169.4.el6rhs.x86_64
glusterfs-cli-3.7dev-0.994.gitf522001.el6.x86_64
glusterfs-api-3.7dev-0.994.gitf522001.el6.x86_64
glusterfs-server-3.7dev-0.994.gitf522001.el6.x86_64
glusterfs-rdma-3.7dev-0.994.gitf522001.el6.x86_64
glusterfs-libs-3.7dev-0.994.gitf522001.el6.x86_64
glusterfs-3.7dev-0.994.gitf522001.el6.x86_64
glusterfs-debuginfo-3.7dev-0.994.gitf522001.el6.x86_64
How reproducible:
1/1
Steps to Reproduce:
1. Run "gluster volume geo-replication vol0 status"; the CLI crashes immediately.
Actual results:
The gluster CLI crashed with signal 11 (segmentation fault) and dumped core.
Expected results:
The CLI should handle the command gracefully (for example, by printing a usage
error) instead of crashing.
Additional info:
Core was generated by `gluster volume geo-replication vol0 status'.
Program terminated with signal 11, Segmentation fault.
#0 strtail (str=0x0, pattern=0x44314f "co") at common-utils.c:1913
1913 for (i = 0; str[i] == pattern[i] && str[i]; i++);
Missing separate debuginfos, use: debuginfo-install
glibc-2.12-1.149.el6_6.4.x86_64 libuuid-2.17.2-12.18.el6.x86_64
libxml2-2.7.6-17.el6_6.1.x86_64 ncurses-libs-5.7-3.20090208.el6.x86_64
openssl-1.0.1e-30.el6_6.4.x86_64 readline-6.0-4.el6.x86_64
zlib-1.2.3-29.el6.x86_64
(gdb) bt
#0 strtail (str=0x0, pattern=0x44314f "co") at common-utils.c:1913
#1 0x000000000040a8bf in parse_cmdline (argc=<value optimized out>,
argv=<value optimized out>, state=0x7fff22945ac0) at cli.c:415
#2 0x000000000040abb0 in main (argc=5, argv=0x7fff22945cb8) at cli.c:707
(gdb) p str
$1 = 0x0
(gdb) q
================================================================================
[root at localhost core]# gluster v status vol0
Status of volume: vol0
Gluster process TCP Port RDMA Port Online Pid
------------------------------------------------------------------------------
Brick 10.70.47.143:/rhs/brick1/b1 49152 0 Y 1764
Brick 10.70.47.145:/rhs/brick1/b2 49152 0 Y 1977
Brick 10.70.47.150:/rhs/brick1/b3 49152 0 Y 2663
Brick 10.70.47.151:/rhs/brick1/b4 49152 0 Y 2596
Brick 10.70.47.143:/rhs/brick2/b5 49153 0 Y 1765
Brick 10.70.47.145:/rhs/brick2/b6 49153 0 Y 1988
Brick 10.70.47.150:/rhs/brick2/b7 49153 0 Y 2680
Brick 10.70.47.151:/rhs/brick2/b8 49153 0 Y 2613
Brick 10.70.47.143:/rhs/brick3/b9 49154 0 Y 1781
Brick 10.70.47.145:/rhs/brick3/10 49154 0 Y 1994
Brick 10.70.47.150:/rhs/brick3/b11 49154 0 Y 2697
Brick 10.70.47.151:/rhs/brick3/b12 49154 0 Y 2630
Snapshot Daemon on localhost 49156 0 Y 1793
NFS Server on localhost 2049 0 Y 1738
Self-heal Daemon on localhost N/A N/A Y 1749
Quota Daemon on localhost N/A N/A N N/A
Snapshot Daemon on 10.70.47.150 49155 0 Y 8443
NFS Server on 10.70.47.150 2049 0 Y 8451
Self-heal Daemon on 10.70.47.150 N/A N/A Y 2969
Quota Daemon on 10.70.47.150 N/A N/A Y 8402
Snapshot Daemon on 10.70.47.151 49155 0 Y 8223
NFS Server on 10.70.47.151 2049 0 Y 8239
Self-heal Daemon on 10.70.47.151 N/A N/A Y 2717
Quota Daemon on 10.70.47.151 N/A N/A Y 8184
Snapshot Daemon on 10.70.47.145 49156 0 Y 2000
NFS Server on 10.70.47.145 2049 0 Y 1952
Self-heal Daemon on 10.70.47.145 N/A N/A Y 1965
Quota Daemon on 10.70.47.145 N/A N/A N N/A
Task Status of Volume vol0
------------------------------------------------------------------------------
There are no active volume tasks
========================================================
[root at localhost core]# gluster v info vol0
Volume Name: vol0
Type: Distributed-Replicate
Volume ID: fc0f1280-821d-4990-a05a-00ccc9474b44
Status: Started
Number of Bricks: 6 x 2 = 12
Transport-type: tcp
Bricks:
Brick1: 10.70.47.143:/rhs/brick1/b1
Brick2: 10.70.47.145:/rhs/brick1/b2
Brick3: 10.70.47.150:/rhs/brick1/b3
Brick4: 10.70.47.151:/rhs/brick1/b4
Brick5: 10.70.47.143:/rhs/brick2/b5
Brick6: 10.70.47.145:/rhs/brick2/b6
Brick7: 10.70.47.150:/rhs/brick2/b7
Brick8: 10.70.47.151:/rhs/brick2/b8
Brick9: 10.70.47.143:/rhs/brick3/b9
Brick10: 10.70.47.145:/rhs/brick3/10
Brick11: 10.70.47.150:/rhs/brick3/b11
Brick12: 10.70.47.151:/rhs/brick3/b12
Options Reconfigured:
features.barrier: disable
features.quota: on
features.quota-deem-statfs: on
features.uss: enable
--
You are receiving this mail because:
You are on the CC list for the bug.
You are the assignee for the bug.