[Gluster-devel] gluster client crash

Anand Avati avati at zresearch.com
Mon Dec 17 18:01:43 UTC 2007


Karl,
Is it possible for us to log in and inspect the core with gdb? Just having
the core alone is not sufficient (we need the binary environment). Your help
will be very much appreciated.
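
If logging in is not possible, a full backtrace from your end would be the
next best thing. A minimal gdb session for that (paths are illustrative;
point gdb at the exact glusterfs binary that produced the core):

  $ gdb /usr/local/sbin/glusterfs /path/to/core
  (gdb) bt full
  (gdb) thread apply all bt

'bt full' also prints each frame's local variables, which helps when a
frame shows up only as "?? ()".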

avati

2007/12/17, Karl Bernard <karl at vomba.com>:
>
>  Here's the backtrace:
>
> First Crash:
>
> Program terminated with signal 11, Segmentation fault.
> #0  0x00000000 in ?? ()
> (gdb) bt
> #0  0x00000000 in ?? ()
> #1  0x0032cc0c in ra_close_cbk (frame=0x83c97e4, cookie=0xb4b0c9f0,
> this=0x0, op_ret=0, op_errno=0) at read-ahead.c:256
> #2  0x00196c1d in wb_ffr_cbk (frame=0xb4b0c9f0, cookie=0xb4b0ca28,
> this=0x838e6d0, op_ret=0, op_errno=0) at write-behind.c:693
> #3  0x00262dab in iot_close_cbk (frame=0xb4b0ca28, cookie=0xb4ea8788,
> this=0x838e688, op_ret=0, op_errno=0) at io-threads.c:174
> #4  0x00118b46 in afr_close_cbk (frame=0xb4ea8788, cookie=0x9c5c470,
> this=0x838e2c0, op_ret=0, op_errno=0) at afr.c:3221
> #5  0x0013367d in client_close_cbk (frame=0x9c5c470, args=0x9c5ba60) at
> client-protocol.c:3436
> #6  0x001374c4 in notify (this=0x838dbb8, event=2, data=0x83c7c68) at
> client-protocol.c:4568
> #7  0x006c9717 in transport_notify (this=0x83c97e4, event=138188772) at
> transport.c:154
> #8  0x006ca473 in sys_epoll_iteration (ctx=0xbfebc014) at epoll.c:54
> #9  0x006c984c in poll_iteration (ctx=0xbfebc014) at transport.c:302
> #10 0x0804a494 in main (argc=8, argv=0xbfebc114) at glusterfs.c:400
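>
> Note that frame #0 is at address 0x00000000: the process called through a
> NULL function pointer, so there is no symbol for the faulting "function".
> A minimal sketch of that failure mode (plain C, not the GlusterFS source)
> reproduces exactly this kind of frame:
>
> /* null_call.c - calling through a NULL function pointer faults at
>  * address 0, which gdb reports as "#0  0x00000000 in ?? ()". */
> typedef int (*cbk_t)(int op_ret);
>
> int main(void)
> {
>     cbk_t cbk = 0;   /* e.g. a callback field that was never filled in */
>     return cbk(0);   /* jump to address 0 -> SIGSEGV */
> }
>
> So frame #1 suggests that ra_close_cbk at read-ahead.c:256 invoked a
> callback pointer that was still NULL.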
>
> From the log:
> ---------
> got signal (11), printing backtrace
> ---------
> [0xcc5420]
> /usr/local/lib/glusterfs/1.3.8/xlator/performance/write-behind.so[0x196c1d]
> /usr/local/lib/glusterfs/1.3.8/xlator/performance/io-threads.so[0x262dab]
>
> /usr/local/lib/glusterfs/1.3.8/xlator/cluster/afr.so(afr_close_cbk+0x1d6)[0x118b46]
> /usr/local/lib/glusterfs/1.3.8/xlator/protocol/client.so[0x13367d]
>
> /usr/local/lib/glusterfs/1.3.8/xlator/protocol/client.so(notify+0xa84)[0x1374c4]
> /usr/local/lib/libglusterfs.so.0(transport_notify+0x37)[0x6c9717]
> /usr/local/lib/libglusterfs.so.0(sys_epoll_iteration+0xf3)[0x6ca473]
> /usr/local/lib/libglusterfs.so.0(poll_iteration+0x7c)[0x6c984c]
> [glusterfs](main+0x424)[0x804a494]
> /lib/libc.so.6(__libc_start_main+0xdc)[0xa49dec]
> [glusterfs][0x8049fe1]
>
>
> ------------------------------------
> Second crash:
> #0  0x0000003ff806ea75 in _int_malloc () from /lib64/libc.so.6
> #1  0x0000003ff80706cd in malloc () from /lib64/libc.so.6
> #2  0x00002aaaaaacbfe2 in gf_block_unserialize_transport
> (trans=0x15f1eed0, max_block_size=268435456) at protocol.c:344
> #3  0x00002aaaaaf00544 in notify (this=0x15f1bb30, event=<value optimized
> out>, data=0x15f1eed0) at client-protocol.c:4877
> #4  0x00002aaaaaaccdf5 in sys_epoll_iteration (ctx=0x7fff26e3a940) at
> epoll.c:54
> #5  0x00002aaaaaacc305 in poll_iteration (ctx=0x7fff26e3a940) at
> transport.c:302
> #6  0x000000000040307d in main (argc=8, argv=0x7fff26e3aae8) at
> glusterfs.c:400
>
> From the log:
> ---------
> got signal (11), printing backtrace
> ---------
> /lib64/libc.so.6[0x3ff8030070]
> /lib64/libc.so.6[0x3ff806ea75]
> /lib64/libc.so.6(__libc_malloc+0x7d)[0x3ff80706cd]
>
> /usr/local/lib/libglusterfs.so.0(gf_block_unserialize_transport+0x3d2)[0x2aaaaaacbfe2]
>
> /usr/local/lib/glusterfs/1.3.8/xlator/protocol/client.so(notify+0x244)[0x2aaaaaf00544]
> /usr/local/lib/libglusterfs.so.0(sys_epoll_iteration+0xd5)[0x2aaaaaaccdf5]
> /usr/local/lib/libglusterfs.so.0(poll_iteration+0x75)[0x2aaaaaacc305]
> [glusterfs](main+0x38d)[0x40307d]
> /lib64/libc.so.6(__libc_start_main+0xf4)[0x3ff801d8a4]
> [glusterfs][0x402c59]
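>
> This second trace dies inside _int_malloc() (the max_block_size of
> 268435456 in frame #2 is 256MB). A crash inside malloc itself usually
> means the heap was corrupted earlier, by a buffer overrun or double free,
> and malloc only trips over the damage later. If the crash is reproducible,
> running the client under valgrind may catch the corrupting write at its
> source; a sketch, with an illustrative command line:
>
>   valgrind --tool=memcheck glusterfs -N -f /etc/glusterfs/client.vol /mnt/glusterfs
>
> (-N keeps glusterfs in the foreground so valgrind can follow it.)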
>
>
> >
> > Anand Avati wrote:
> >
> > Karl,
> >  Can you get a backtrace from the core dump with gdb, please? That
> > would help a lot.
> >
> > avati
> >
> > 2007/12/17, Karl Bernard <karl at vomba.com>:
> > >
> > >
> > > The client crashed, if this can be helpful:
> > >
> > > 2007-12-15 17:03:59 W [client-protocol.c:289:client_protocol_xfer]
> > > sxx01: attempting to pipeline request type(0) op(34) with handshake
> > >
> > > ---------
> > > got signal (11), printing backtrace
> > > ---------
> > > [0xcc5420]
> > > /usr/local/lib/glusterfs/1.3.8/xlator/performance/write-behind.so[0x196c1d]
> > > /usr/local/lib/glusterfs/1.3.8/xlator/performance/io-threads.so[0x262dab]
> > >
> > > /usr/local/lib/glusterfs/1.3.8/xlator/cluster/afr.so(afr_close_cbk+0x1d6)[0x118b46]
> > > /usr/local/lib/glusterfs/1.3.8/xlator/protocol/client.so[0x13367d]
> > >
> > > /usr/local/lib/glusterfs/1.3.8/xlator/protocol/client.so(notify+0xa84)[0x1374c4]
> > > /usr/local/lib/libglusterfs.so.0(transport_notify+0x37)[0x6c9717]
> > > /usr/local/lib/libglusterfs.so.0(sys_epoll_iteration+0xf3)[0x6ca473]
> > > /usr/local/lib/libglusterfs.so.0(poll_iteration+0x7c)[0x6c984c]
> > > [glusterfs](main+0x424)[0x804a494]
> > > /lib/libc.so.6(__libc_start_main+0xdc)[0xa49dec]
> > > [glusterfs][0x8049fe1]
> > >
> > >
> > > glusterfs 1.3.8
> > > installed from tla, last patch:
> > > 2007-12-03 22:29:15 GMT Anand V. Avati <avati at 80x25.org>  patch-594
> > >
> > > Config client:
> > > ----------------------------------------------------------
> > > volume sxx01
> > > type protocol/client
> > > option transport-type tcp/client
> > > option remote-host sxx01b
> > > option remote-subvolume brick
> > > end-volume
> > >
> > > volume sxx02
> > > type protocol/client
> > > option transport-type tcp/client
> > > option remote-host sxx02b
> > > option remote-subvolume brick
> > > end-volume
> > >
> > > volume afr1-2
> > >   type cluster/afr
> > >   subvolumes sxx01 sxx02
> > > end-volume
> > >
> > > volume iot
> > > type performance/io-threads
> > > subvolumes afr1-2
> > > option thread-count 8
> > > end-volume
> > >
> > > ## Add writebehind feature
> > > volume writebehind
> > >   type performance/write-behind
> > >   option aggregate-size 128kB
> > >   subvolumes iot
> > > end-volume
> > >
> > > ## Add readahead feature
> > > volume readahead
> > >   type performance/read-ahead
> > >   option page-size 256kB
> > >   option page-count 16       # cache per file = (page-count x page-size)
> > >   subvolumes writebehind
> > > end-volume
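> > > # cache per file with the values above: 16 x 256kB = 4MB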
> > >
> > > ------------------------------------------------------
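> > > (Read bottom-up, the client stack is readahead -> writebehind -> iot ->
> > > afr1-2 -> sxx01/sxx02; that matches the callback order in the logged
> > > backtrace above: client -> afr_close_cbk -> io-threads -> write-behind.)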
> > >
> > > Config Server:
> > > volume brick-posix
> > >         type storage/posix
> > >         option directory /data/glusterfs/dataspace
> > > end-volume
> > >
> > > volume brick-ns
> > >         type storage/posix
> > >         option directory /data/glusterfs/namespace
> > > end-volume
> > >
> > > volume brick
> > >   type performance/io-threads
> > >   option thread-count 2
> > >   option cache-size 32MB
> > >   subvolumes brick-posix
> > > end-volume
> > >
> > > volume server
> > >         type protocol/server
> > >         option transport-type tcp/server
> > >         subvolumes brick brick-ns
> > >         option auth.ip.brick.allow 172.16.93.*
> > >         option auth.ip.brick-ns.allow 172.16.93.*
> > > end-volume
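> > >
> > > (The trailing * in the auth.ip allow patterns acts as a wildcard, so any
> > > host in 172.16.93.0/24 may connect to brick and brick-ns.)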
> > >
> > > ------------------------------------
> > >
> > > The client was most likely checking for the existence of a file or
> > > writing a new file to the servers.
> > >
> >
> >
> >
>
>


-- 
If I traveled to the end of the rainbow
As Dame Fortune did intend,
Murphy would be there to tell me
The pot's at the other end.


