[Bugs] [Bug 1219894] [georep]: Creating geo-rep session kills all the brick process
bugzilla at redhat.com
Fri May 8 15:41:38 UTC 2015
https://bugzilla.redhat.com/show_bug.cgi?id=1219894
Kotresh HR <khiremat at redhat.com> changed:
What            |Removed                    |Added
----------------------------------------------------------------------------
Status          |NEW                        |ASSIGNED
Assignee        |bugs at gluster.org           |khiremat at redhat.com
QA Contact      |                           |rhinduja at redhat.com
--- Comment #1 from Kotresh HR <khiremat at redhat.com> ---
+++ This bug was initially created as a clone of Bug #1219823 +++
Description of problem:
=======================
With the latest nightly build "glusterfs-3.7.0beta1-0.69.git1a32479.el6.x86_64",
creating a geo-rep session crashes all the brick processes, with the following
logs in the brick logs:
pending frames:
frame : type(0) op(0)
patchset: git://git.gluster.com/glusterfs.git
signal received: 6
time of crash:
2015-05-08 11:15:04
configuration details:
argp 1
backtrace 1
dlfcn 1
libpthread 1
llistxattr 1
setfsid 1
spinlock 1
epoll.h 1
xattr.h 1
st_atim.tv_nsec 1
package-string: glusterfs 3.7.0beta1
/usr/lib64/libglusterfs.so.0(_gf_msg_backtrace_nomem+0xb6)[0x7f693cbc4576]
/usr/lib64/libglusterfs.so.0(gf_print_trace+0x33f)[0x7f693cbe2eaf]
/lib64/libc.so.6[0x3683a326a0]
/lib64/libc.so.6(gsignal+0x35)[0x3683a32625]
/lib64/libc.so.6(abort+0x175)[0x3683a33e05]
/lib64/libc.so.6[0x3683a70537]
/lib64/libc.so.6(__fortify_fail+0x37)[0x3683b02697]
/lib64/libc.so.6[0x3683b00580]
/lib64/libc.so.6[0x3683affc7b]
/lib64/libc.so.6(__snprintf_chk+0x7a)[0x3683affb4a]
/usr/lib64/glusterfs/3.7.0beta1/xlator/features/changelog.so(htime_create+0x16d)[0x7f6931133ecd]
/usr/lib64/glusterfs/3.7.0beta1/xlator/features/changelog.so(reconfigure+0x486)[0x7f693112aa46]
/usr/lib64/libglusterfs.so.0(+0x75cfa)[0x7f693cc15cfa]
/usr/lib64/libglusterfs.so.0(+0x75c8c)[0x7f693cc15c8c]
/usr/lib64/libglusterfs.so.0(+0x75c8c)[0x7f693cc15c8c]
/usr/lib64/libglusterfs.so.0(+0x75c8c)[0x7f693cc15c8c]
/usr/lib64/libglusterfs.so.0(+0x75c8c)[0x7f693cc15c8c]
/usr/lib64/libglusterfs.so.0(+0x75c8c)[0x7f693cc15c8c]
/usr/lib64/libglusterfs.so.0(+0x75c8c)[0x7f693cc15c8c]
/usr/lib64/libglusterfs.so.0(+0x75c8c)[0x7f693cc15c8c]
/usr/lib64/libglusterfs.so.0(+0x75c8c)[0x7f693cc15c8c]
/usr/lib64/libglusterfs.so.0(+0x75c8c)[0x7f693cc15c8c]
/usr/lib64/libglusterfs.so.0(+0x75c8c)[0x7f693cc15c8c]
/usr/lib64/libglusterfs.so.0(+0x75c8c)[0x7f693cc15c8c]
/usr/lib64/libglusterfs.so.0(+0x75c8c)[0x7f693cc15c8c]
/usr/lib64/libglusterfs.so.0(+0x75c8c)[0x7f693cc15c8c]
/usr/lib64/libglusterfs.so.0(glusterfs_volfile_reconfigure+0x1a2)[0x7f693cc024a2]
/usr/sbin/glusterfsd(mgmt_getspec_cbk+0x2f3)[0x40d0b3]
/usr/lib64/libgfrpc.so.0(rpc_clnt_handle_reply+0xa5)[0x7f693c994d75]
/usr/lib64/libgfrpc.so.0(rpc_clnt_notify+0x142)[0x7f693c996212]
/usr/lib64/libgfrpc.so.0(rpc_transport_notify+0x28)[0x7f693c9918e8]
/usr/lib64/glusterfs/3.7.0beta1/rpc-transport/socket.so(+0x9bcd)[0x7f69329d1bcd]
/usr/lib64/glusterfs/3.7.0beta1/rpc-transport/socket.so(+0xb6fd)[0x7f69329d36fd]
/usr/lib64/libglusterfs.so.0(+0x807c0)[0x7f693cc207c0]
/lib64/libpthread.so.0[0x3683e079d1]
/lib64/libc.so.6(clone+0x6d)[0x3683ae89dd]
---------
No core is found.
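Reading the trace: the abort comes from glibc's fortify machinery
(__snprintf_chk -> __fortify_fail) inside htime_create() of changelog.so,
while the brick is applying a new volfile (glusterfs_volfile_reconfigure ->
reconfigure). That pattern suggests a fortified snprintf() was given a size
argument larger than its real destination buffer, which makes glibc abort the
brick with SIGABRT (signal 6 above). Below is a minimal standalone sketch of
that failure mode, assuming this mechanism; it is not GlusterFS code, and the
buffer size and path are made up:

/* Sketch of the suspected failure mode, not GlusterFS code.
 * Build with fortification enabled, e.g.:
 *   gcc -O2 -D_FORTIFY_SOURCE=2 fortify_demo.c -o fortify_demo
 * The fortify headers route snprintf() through __snprintf_chk(); when the
 * size argument exceeds the real size of the destination buffer, glibc
 * calls __fortify_fail() and the process dies with SIGABRT, matching the
 * frames in the backtrace above. */
#include <stdio.h>

int main(void)
{
    char htime_path[32];                  /* hypothetical small destination  */
    const char *brick = "/rhs/brick1/b1"; /* brick path from the reproducer  */

    /* BUG: claims 256 bytes are available although the buffer holds 32;
     * the fortified check detects the mismatch and aborts at runtime.
     * The path format below is illustrative only. */
    snprintf(htime_path, 256, "%s/.glusterfs/changelogs/htime/HTIME", brick);

    return 0;
}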
Version-Release number of selected component (if applicable):
=============================================================
glusterfs-3.7.0beta1-0.69.git1a32479.el6.x86_64
How reproducible:
=================
always
Steps to Reproduce:
===================
[root@georep1 ~]# gluster peer probe 10.70.46.97
peer probe: success.
[root@georep1 ~]# sleep 2
[root@georep1 ~]# gluster peer probe 10.70.46.93
peer probe: success.
[root@georep1 ~]#
[root@georep1 ~]#
[root@georep1 ~]# # Master volume creation
[root@georep1 ~]#
[root@georep1 ~]# gluster volume create master replica 3 10.70.46.96:/rhs/brick1/b1 10.70.46.97:/rhs/brick1/b1 10.70.46.93:/rhs/brick1/b1 10.70.46.96:/rhs/brick2/b2 10.70.46.97:/rhs/brick2/b2 10.70.46.93:/rhs/brick2/b2
volume create: master: success: please start the volume to access data
[root@georep1 ~]#
[root@georep1 ~]# gluster volume start master
volume start: master: success
[root@georep1 ~]# gluster system:: execute gsec_create
Common secret pub file present at /var/lib/glusterd/geo-replication/common_secret.pem.pub
[root@georep1 ~]# gluster v status
Status of volume: master
Gluster process                              TCP Port  RDMA Port  Online  Pid
------------------------------------------------------------------------------
Brick 10.70.46.96:/rhs/brick1/b1             49152     0          Y       24030
Brick 10.70.46.97:/rhs/brick1/b1             49152     0          Y       9656
Brick 10.70.46.93:/rhs/brick1/b1             49152     0          Y       11957
Brick 10.70.46.96:/rhs/brick2/b2             49153     0          Y       24047
Brick 10.70.46.97:/rhs/brick2/b2             49153     0          Y       9673
Brick 10.70.46.93:/rhs/brick2/b2             49153     0          Y       11974
NFS Server on localhost                      2049      0          Y       24067
Self-heal Daemon on localhost                N/A       N/A        Y       24074
NFS Server on 10.70.46.97                    2049      0          Y       9692
Self-heal Daemon on 10.70.46.97              N/A       N/A        Y       9701
NFS Server on 10.70.46.93                    2049      0          Y       11994
Self-heal Daemon on 10.70.46.93              N/A       N/A        Y       12001

Task Status of Volume master
------------------------------------------------------------------------------
There are no active volume tasks
[root@georep1 ~]# gluster volume geo-replication master 10.70.46.154::slave create push-pem
Creating geo-replication session between master & 10.70.46.154::slave has been successful
[root@georep1 ~]# gluster v status
Status of volume: master
Gluster process                              TCP Port  RDMA Port  Online  Pid
------------------------------------------------------------------------------
Brick 10.70.46.96:/rhs/brick1/b1             N/A       N/A        N       24030
Brick 10.70.46.97:/rhs/brick1/b1             49152     0          Y       9656
Brick 10.70.46.93:/rhs/brick1/b1             49152     0          Y       11957
Brick 10.70.46.96:/rhs/brick2/b2             N/A       N/A        N       24047
Brick 10.70.46.97:/rhs/brick2/b2             N/A       N/A        N       9673
Brick 10.70.46.93:/rhs/brick2/b2             49153     0          Y       11974
NFS Server on localhost                      2049      0          Y       24401
Self-heal Daemon on localhost                N/A       N/A        Y       24412
NFS Server on 10.70.46.93                    2049      0          Y       12205
Self-heal Daemon on 10.70.46.93              N/A       N/A        Y       12214
NFS Server on 10.70.46.97                    2049      0          Y       9904
Self-heal Daemon on 10.70.46.97              N/A       N/A        Y       9912

Task Status of Volume master
------------------------------------------------------------------------------
There are no active volume tasks
[root@georep1 ~]# gluster v status
Status of volume: master
Gluster process                              TCP Port  RDMA Port  Online  Pid
------------------------------------------------------------------------------
Brick 10.70.46.96:/rhs/brick1/b1             N/A       N/A        N       24030
Brick 10.70.46.97:/rhs/brick1/b1             N/A       N/A        N       9656
Brick 10.70.46.93:/rhs/brick1/b1             N/A       N/A        N       11957
Brick 10.70.46.96:/rhs/brick2/b2             N/A       N/A        N       24047
Brick 10.70.46.97:/rhs/brick2/b2             N/A       N/A        N       9673
Brick 10.70.46.93:/rhs/brick2/b2             N/A       N/A        N       11974
NFS Server on localhost                      2049      0          Y       24401
Self-heal Daemon on localhost                N/A       N/A        Y       24412
NFS Server on 10.70.46.93                    2049      0          Y       12205
Self-heal Daemon on 10.70.46.93              N/A       N/A        Y       12214
NFS Server on 10.70.46.97                    2049      0          Y       9904
Self-heal Daemon on 10.70.46.97              N/A       N/A        Y       9912

Task Status of Volume master
------------------------------------------------------------------------------
There are no active volume tasks
[root@georep1 ~]#
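For context on why every brick eventually goes down: "create push-pem" turns
on the changelog-related volume options that geo-replication needs, so
glusterd pushes an updated volfile to each brick. Every brick then runs the
changelog xlator's reconfigure() (the mgmt_getspec_cbk ->
glusterfs_volfile_reconfigure frames in the trace), reaches htime_create(),
and aborts, which is why the bricks flip to Online=N one after another in the
status output above. Below is a hypothetical defensive sketch of the snprintf
usage that avoids the fortify abort; it is illustrative only, not the actual
GlusterFS fix, and the function and path names are made up:

/* Always pass the true size of the destination and check the return value,
 * so __snprintf_chk() has nothing to trip over and truncation is reported
 * instead of crashing the process. */
#include <stdio.h>
#include <limits.h>

static int build_htime_path(char *dst, size_t dst_size, const char *brick_path)
{
    int n = snprintf(dst, dst_size,
                     "%s/.glusterfs/changelogs/htime/HTIME", brick_path);

    /* n < 0: output error; n >= dst_size: the path did not fit. */
    return (n < 0 || (size_t)n >= dst_size) ? -1 : 0;
}

int main(void)
{
    char path[PATH_MAX];

    if (build_htime_path(path, sizeof(path), "/rhs/brick1/b1") == 0)
        puts(path);
    return 0;
}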