[Gluster-devel] glusterfs client segfault in patch-308

Anand Avati avati at zresearch.com
Thu Jul 19 20:22:04 UTC 2007


Rhesa,
  please update to the latest patchset on TLA; some bugs that could lead to
the state you are seeing have been fixed. Please let us know whether the fix
works for you.

thanks,
avati

2007/7/13, Rhesa Rozendaal <gluster at rhesa.com>:
>
> client log:
>
> 2007-07-12 21:57:43 C [common-utils.c:208:gf_print_trace] debug-backtrace: Got signal (11), printing backtrace
> 2007-07-12 21:57:43 C [common-utils.c:210:gf_print_trace] debug-backtrace: /usr/local/lib/libglusterfs.so.0(gf_print_trace+0x26) [0xd53bca]
> 2007-07-12 21:57:43 C [common-utils.c:210:gf_print_trace] debug-backtrace: /lib/tls/libc.so.6 [0x5dd898]
> 2007-07-12 21:57:43 C [common-utils.c:210:gf_print_trace] debug-backtrace: /usr/local/lib/libglusterfs.so.0(dict_get+0x11) [0xd4e885]
> 2007-07-12 21:57:43 C [common-utils.c:210:gf_print_trace] debug-backtrace: /usr/local/lib/glusterfs/1.3.0-pre5.2/xlator/cluster/unify.so(unify_flush+0x4b) [0x117a4b]
> 2007-07-12 21:57:43 C [common-utils.c:210:gf_print_trace] debug-backtrace: /usr/local/lib/glusterfs/1.3.0-pre5.2/xlator/performance/io-threads.so [0xa1807a]
> 2007-07-12 21:57:43 C [common-utils.c:210:gf_print_trace] debug-backtrace: /usr/local/lib/libglusterfs.so.0(call_resume+0x3b8) [0xd588ec]
> 2007-07-12 21:57:43 C [common-utils.c:210:gf_print_trace] debug-backtrace: /usr/local/lib/glusterfs/1.3.0-pre5.2/xlator/performance/io-threads.so [0xa1922a]
> 2007-07-12 21:57:43 C [common-utils.c:210:gf_print_trace] debug-backtrace: /lib/tls/libpthread.so.0 [0x735371]
> 2007-07-12 21:57:43 C [common-utils.c:210:gf_print_trace] debug-backtrace: /lib/tls/libc.so.6(__clone+0x5e) [0x67dffe]
>
> bt:
> This GDB was configured as "i386-redhat-linux-gnu"...
> Using host libthread_db library "/lib/tls/libthread_db.so.1".
>
> Core was generated by `[glusterfs]
> [snip]
> (gdb) bt
> #0  0x0061ed58 in strcmp () from /lib/tls/libc.so.6
> #1  0x00d4e577 in _dict_lookup (this=Variable "this" is not available.) at ../../../libglusterfs/src/dict.c:125
> #2  0x00d4e885 in dict_get (this=0xa563d98, key=0x9ded328 "client02") at ../../../libglusterfs/src/dict.c:185
> #3  0x00117a4b in unify_flush (frame=0xb5606d28, this=0x9ded848, fd=0xa8a6540) at ../../../../../xlators/cluster/unify/src/unify.c:2778
> #4  0x00a1807a in iot_flush_wrapper (frame=0xb59a3648, this=0xeeeeeeee, fd=0xa8a6540) at ../../../../../xlators/performance/io-threads/src/io-threads.c:319
> #5  0x00d588ec in call_resume (stub=0xb597f1d8) at ../../../libglusterfs/src/call-stub.c:1812
> #6  0x00a1922a in iot_worker (arg=0x9dff1e0) at ../../../../../xlators/performance/io-threads/src/io-threads.c:1012
> #7  0x00735371 in start_thread () from /lib/tls/libpthread.so.0
> #8  0x0067dffe in clone () from /lib/tls/libc.so.6
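>
> Frame #4 shows this=0xeeeeeeee, which looks like a poisoned (already
> freed) pointer, so by the time the io-threads worker resumes the call,
> the objects unify_flush() hands to dict_get() have likely been
> destroyed. Below is a minimal sketch in C of the kind of guard that
> turns such a lookup into a clean NULL return instead of a crash inside
> strcmp(); this is not the actual GlusterFS code or fix, and the struct
> layout is assumed for illustration only:
>
> #include <string.h>
>
> typedef struct _data_pair {
>         char              *key;
>         void              *value;
>         struct _data_pair *next;
> } data_pair_t;
>
> typedef struct _dict {
>         data_pair_t *members_list;
> } dict_t;
>
> /* hypothetical guarded lookup: reject a NULL dict or key and skip
>    members with a NULL key, rather than letting strcmp() dereference
>    garbage */
> static void *
> dict_get_guarded (dict_t *this, const char *key)
> {
>         data_pair_t *pair;
>
>         if (this == NULL || key == NULL)
>                 return NULL;
>
>         for (pair = this->members_list; pair; pair = pair->next) {
>                 if (pair->key && strcmp (pair->key, key) == 0)
>                         return pair->value;
>         }
>         return NULL;
> }
>
> Note that such a guard would only mask the underlying use-after-free;
> the real fix has to keep the fd context and dict alive until the
> resumed call completes, which is presumably what the updated patchset
> addresses.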
>
> client spec:
> volume ns
>    type protocol/client
>    option transport-type tcp/client
>    option remote-host nfs-deb-03
>    option remote-subvolume ns
> end-volume
>
> volume client01
>    type protocol/client
>    option transport-type tcp/client
>    option remote-host nfs-deb-03
>    option remote-subvolume brick01
> end-volume
>
> volume client02
>    type protocol/client
>    option transport-type tcp/client
>    option remote-host nfs-deb-03
>    option remote-subvolume brick02
> end-volume
>
> volume client03
>    type protocol/client
>    option transport-type tcp/client
>    option remote-host nfs-deb-03
>    option remote-subvolume brick03
> end-volume
>
> volume client31
>    type protocol/client
>    option transport-type tcp/client
>    option remote-host nfs-deb-03
>    option remote-subvolume brick31
> end-volume
>
> volume export
>    type cluster/unify
>    subvolumes client01 client02 client03 client31
>    option namespace ns
>    option scheduler alu
>    option alu.limits.min-free-disk 1GB
>    option alu.order disk-usage:read-usage:write-usage:open-files-usage:disk-speed-usage
> end-volume
>
> volume iothreads
>    type performance/io-threads
>    option thread-count 4
>    option cache-size 16MB
>    subvolumes export
> end-volume
>
> volume readahead
>    type performance/read-ahead
>    option page-size 4096
>    option page-count 16
>    subvolumes iothreads
> end-volume
>
> volume writeback
>    type performance/write-behind
>    option aggregate-size 131072
>    option flush-behind on
>    subvolumes readahead
> end-volume
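>
> The spec stacks the translators bottom-up: the protocol/client volumes
> plus the ns namespace feed cluster/unify, which is then wrapped by
> io-threads, read-ahead, and write-behind, so the mount uses the topmost
> volume ("writeback"). For reference, a hypothetical sketch of the
> matching server-side spec on nfs-deb-03, in the same volume-spec
> format; the export directories and auth settings are assumptions, not
> taken from this report:
>
> volume brick01
>    type storage/posix
>    option directory /export/brick01
> end-volume
>
> # brick02, brick03, brick31 and ns would be defined the same way,
> # each backed by its own directory, with matching auth.ip lines below
>
> volume server
>    type protocol/server
>    option transport-type tcp/server
>    option auth.ip.brick01.allow *
>    option auth.ip.ns.allow *
>    subvolumes brick01 brick02 brick03 brick31 ns
> end-volume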
>
>
>
> _______________________________________________
> Gluster-devel mailing list
> Gluster-devel at nongnu.org
> http://lists.nongnu.org/mailman/listinfo/gluster-devel
>



-- 
Anand V. Avati


