[Bugs] [Bug 1230525] New: glusterd: glusterd crashes if you run rebalance and vol status commands in parallel.

bugzilla at redhat.com
Thu Jun 11 06:47:18 UTC 2015


https://bugzilla.redhat.com/show_bug.cgi?id=1230525

            Bug ID: 1230525
           Summary: glusterd: glusterd crashes if you run rebalance and
                    vol status commands in parallel.
           Product: Red Hat Gluster Storage
           Version: 3.1
         Component: glusterfs-server
          Keywords: Triaged
          Severity: medium
          Priority: medium
          Assignee: rhs-bugs at redhat.com
          Reporter: anekkunt at redhat.com
        QA Contact: storage-qa-internal at redhat.com
                CC: amukherj at redhat.com, bugs at gluster.org,
                    gluster-bugs at redhat.com, nlevinki at redhat.com,
                    vbellur at redhat.com
        Depends On: 1229139
            Blocks: 1230523



+++ This bug was initially created as a clone of Bug #1229139 +++

Description of problem:
glusterd crashes if you run the rebalance and vol status commands in parallel
(glusterfs compiled in debug mode).


Version-Release number of selected component (if applicable):


How reproducible:
Most of the time


Steps to Reproduce:
1. Compile glusterfs in debug mode (./configure --enable-debug).
2. gluster peer probe 46.101.184.191
3. gluster volume create livebackup replica 2 transport tcp \
     46.101.160.245:/opt/gluster_brick1 46.101.184.191:/opt/gluster_brick2 force
4. gluster volume start livebackup
5. gluster volume add-brick livebackup 46.101.160.245:/opt/gluster_brick2 \
     46.101.184.191:/opt/gluster_brick1 force

gluster volume info

Volume Name: livebackup
Type: Distributed-Replicate
Volume ID: 55cf62a0-099f-4a5e-ae4a-0ddec29239b4
Status: Started
Number of Bricks: 2 x 2 = 4
Transport-type: tcp
Bricks:
Brick1: 46.101.160.245:/opt/gluster_brick1
Brick2: 46.101.184.191:/opt/gluster_brick2
Brick3: 46.101.160.245:/opt/gluster_brick2
Brick4: 46.101.184.191:/opt/gluster_brick1
Options Reconfigured:
performance.readdir-ahead: on

mount -t glusterfs localhost:/livebackup /mnt

cp /var/log/* /mnt

gluster volume rebalance livebackup start

On node 2:
gluster volume status

Actual results:
glusterd crashes with a failed assertion in
glusterd_volume_rebalance_use_rsp_dict (see the backtrace below).

Expected results:
glusterd should not crash.



(gdb) bt
#0  0x0000003c000348c7 in raise () from /lib64/libc.so.6
#1  0x0000003c0003652a in abort () from /lib64/libc.so.6
#2  0x0000003c0002d46d in __assert_fail_base () from /lib64/libc.so.6
#3  0x0000003c0002d522 in __assert_fail () from /lib64/libc.so.6
#4  0x00007fc09938d0d5 in glusterd_volume_rebalance_use_rsp_dict (aggr=0x0, rsp_dict=0x7fc08800b68c) at glusterd-utils.c:7776
#5  0x00007fc0993969b4 in __glusterd_commit_op_cbk (req=0x7fc08800f1cc, iov=0x7fc08800f20c, count=1, myframe=0x7fc08800f0b4) at glusterd-rpc-ops.c:1333
#6  0x00007fc099393cee in glusterd_big_locked_cbk (req=0x7fc08800f1cc, iov=0x7fc08800f20c, count=1, myframe=0x7fc08800f0b4, fn=0x7fc099396419 <__glusterd_commit_op_cbk>) at glusterd-rpc-ops.c:207
#7  0x00007fc099396a9a in glusterd_commit_op_cbk (req=0x7fc08800f1cc, iov=0x7fc08800f20c, count=1, myframe=0x7fc08800f0b4) at glusterd-rpc-ops.c:1371
#8  0x00007fc0a2ebdc1b in rpc_clnt_handle_reply (clnt=0xaf58b0, pollin=0x7fc08800a7a0) at rpc-clnt.c:761
#9  0x00007fc0a2ebe010 in rpc_clnt_notify (trans=0xaf5d20, mydata=0xaf58e0, event=RPC_TRANSPORT_MSG_RECEIVED, data=0x7fc08800a7a0) at rpc-clnt.c:889
#10 0x00007fc0a2eba69a in rpc_transport_notify (this=0xaf5d20, event=RPC_TRANSPORT_MSG_RECEIVED, data=0x7fc08800a7a0) at rpc-transport.c:538
#11 0x00007fc097df912c in socket_event_poll_in (this=0xaf5d20) at socket.c:2285
#12 0x00007fc097df95d8 in socket_event_handler (fd=12, idx=2, data=0xaf5d20, poll_in=1, poll_out=0, poll_err=0) at socket.c:2398
#13 0x00007fc0a3168146 in event_dispatch_epoll_handler (event_pool=0xa77ca0, event=0x7fc096dbcea0) at event-epoll.c:567
#14 0x00007fc0a3168499 in event_dispatch_epoll_worker (data=0xa82140) at event-epoll.c:669
#15 0x0000003c0040752a in start_thread () from /lib64/libpthread.so.0
#16 0x0000003c0010079d in clone () from /lib64/libc.so.6
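
Why aggr is 0x0 in frame #4: glusterd of this vintage resolves the in-flight
operation context from state shared across transactions, so when the vol
status transaction started on node 2 overlaps with the rebalance commit, the
commit reply callback can pick up a context the other transaction has already
replaced or cleared. The debug build matters because with --enable-debug
GF_ASSERT compiles down to a real assert(3), turning the NULL into an abort
rather than a logged error. A minimal sketch of the failure pattern, with
hypothetical names (not the actual glusterd source):

/* Minimal sketch of the race, with hypothetical names -- not the actual
 * glusterd source.  Two concurrent transactions share one op context;
 * whichever finishes first clears it under the other's feet. */
#include <assert.h>
#include <stddef.h>

typedef struct dict dict_t;

static dict_t *global_op_ctx;    /* shared by ALL in-flight transactions */

/* vol status transaction finishing on another thread */
static void txn_cleanup(void)
{
        global_op_ctx = NULL;    /* rebalance's reply has not arrived yet */
}

/* commit-op reply callback for the rebalance transaction */
static int commit_op_cbk(dict_t *rsp_dict)
{
        dict_t *aggr = global_op_ctx;   /* wrong/cleared context */

        assert(rsp_dict != NULL);
        assert(aggr != NULL);    /* fires: aggr == 0x0, glusterd aborts */
        /* ... aggregate rsp_dict into aggr ... */
        return 0;
}

In a non-debug build the same NULL would only be logged, which is consistent
with the crash being reported against a --enable-debug build.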

--- Additional comment from Anand Avati on 2015-06-08 05:08:05 EDT ---

REVIEW: http://review.gluster.org/11120 (glusterd: Get the local txn_info based
on trans_id in op_sm call backs.) posted (#1) for review on master by Anand
Nekkunti (anekkunt at redhat.com)

--- Additional comment from Anand Avati on 2015-06-08 05:09:58 EDT ---

REVIEW: http://review.gluster.org/11120 (glusterd: Get the local txn_info based
on trans_id in op_sm call backs.) posted (#2) for review on master by Anand
Nekkunti (anekkunt at redhat.com)

--- Additional comment from Anand Avati on 2015-06-08 05:11:28 EDT ---

REVIEW: http://review.gluster.org/11120 (glusterd: Get the local txn_info based
on trans_id in op_sm call backs.) posted (#3) for review on master by Anand
Nekkunti (anekkunt at redhat.com)

--- Additional comment from Anand Avati on 2015-06-08 09:41:32 EDT ---

REVIEW: http://review.gluster.org/11120 (glusterd: Get the local txn_info based
on trans_id in op_sm call backs.) posted (#4) for review on master by Anand
Nekkunti (anekkunt at redhat.com)

--- Additional comment from Anand Avati on 2015-06-09 08:46:15 EDT ---

REVIEW: http://review.gluster.org/11120 (glusterd: Get the local txn_info based
on trans_id in op_sm call backs.) posted (#5) for review on master by Anand
Nekkunti (anekkunt at redhat.com)

--- Additional comment from Anand Avati on 2015-06-10 15:05:39 EDT ---

REVIEW: http://review.gluster.org/11120 (glusterd: Get the local txn_info based
on trans_id in op_sm call backs.) posted (#7) for review on master by Anand
Nekkunti (anekkunt at redhat.com)
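
The review above (patch sets #1 through #7 of http://review.gluster.org/11120)
is titled "Get the local txn_info based on trans_id in op_sm call backs",
i.e. the callback should resolve its transaction context by the transaction
id carried with the reply instead of relying on shared state. A hedged sketch
of that direction, again with illustrative names only (the real patch is in
the review link):

/* Hypothetical sketch of the fix direction named in the review subject:
 * resolve txn_info from the reply's transaction id instead of trusting
 * shared state.  Illustrative names, not the actual glusterd API. */
#include <stddef.h>
#include <string.h>

typedef struct { unsigned char bytes[16]; } uuid16_t;

typedef struct txn_info {
        uuid16_t         txn_id;
        struct dict     *op_ctx;         /* per-transaction aggregate */
        struct txn_info *next;
} txn_info_t;

static txn_info_t *txn_list;             /* all in-flight transactions */

static txn_info_t *txn_lookup(const uuid16_t *id)
{
        for (txn_info_t *t = txn_list; t != NULL; t = t->next)
                if (memcmp(&t->txn_id, id, sizeof(*id)) == 0)
                        return t;
        return NULL;
}

/* commit-op callback: pick the context that belongs to THIS reply */
static int commit_op_cbk(const uuid16_t *trans_id, struct dict *rsp_dict)
{
        txn_info_t *txn = txn_lookup(trans_id);

        if (txn == NULL || rsp_dict == NULL)
                return -1;               /* fail the op, don't abort */
        /* ... aggregate rsp_dict into txn->op_ctx ... */
        return 0;
}

Keying the lookup by trans_id makes each transaction's state independent, so
a vol status finishing first can no longer invalidate the context the
rebalance commit callback is about to aggregate into.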


Referenced Bugs:

https://bugzilla.redhat.com/show_bug.cgi?id=1229139
[Bug 1229139] glusterd: glusterd crashes if you run rebalance and vol
status commands in parallel.
https://bugzilla.redhat.com/show_bug.cgi?id=1230523
[Bug 1230523] glusterd: glusterd crashes if you run rebalance and vol
status commands in parallel.