[Gluster-users] glusterfsd crashed

Anand Avati avati at gluster.com
Thu Apr 2 11:09:33 UTC 2009


This bug is being fixed in the repository. Until the next release is
available, you can disable flush-behind in the server spec file so that
the bug is not triggered.
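
For example, with the volume names taken from the spec file quoted
below, each write-behind volume would look something like this (the
same change applies to media-medium-wb; since 'off' is the default,
removing the option line altogether also works):

volume media-small-wb
   type performance/write-behind
   option flush-behind off         # workaround: disable flush-behind (off is the default)
   subvolumes media-small-ioc
end-volume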

Avati

On Thu, Apr 2, 2009 at 2:41 PM, Greg <greg at easyflirt.com> wrote:
> Shehjar Tikoo wrote:
>>
>> Hi Greg
>>
>> Can you try running a test with the same config but without the
>> io-threads translator? It has undergone some changes lately that might
>> be the cause here. However, a cursory look at the code suggests something
>> else. Still, no harm in eliminating one potential reason.
>
> Hi Shehjar,
>
> I've tried; same crash on both servers:
>
> pending frames:
>
> patchset: 4e5c297d7c3480d0d3ab1c0c2a184c6a4fb801ef
> signal received: 11
> configuration details:argp 1
> backtrace 1
> bdb->cursor->get 1
> db.h 1
> dlfcn 1
> fdatasync 1
> libpthread 1
> llistxattr 1
> setfsid 1
> spinlock 1
> epoll.h 1
> xattr.h 1
> st_atim.tv_nsec 1
> package-string: glusterfs 2.0.0rc7
> /lib/libc.so.6[0x7f5dd9794f60]
> /usr/lib/glusterfs/2.0.0rc7/xlator/features/posix-locks.so(pl_inode_get+0x7a)[0x7f5dd934ac8a]
> /usr/lib/glusterfs/2.0.0rc7/xlator/features/posix-locks.so(pl_flush+0x29)[0x7f5dd934c339]
> /usr/lib/libglusterfs.so.0(default_flush+0xaa)[0x7f5dd9eebeba]
> /usr/lib/glusterfs/2.0.0rc7/xlator/performance/write-behind.so(wb_flush+0x268)[0x7f5dd8f3b088]
> /usr/lib/glusterfs/2.0.0rc7/xlator/performance/read-ahead.so(ra_flush+0xe0)[0x7f5dd8d30b30]
> /usr/lib/glusterfs/2.0.0rc7/xlator/protocol/server.so(server_release+0xf9)[0x7f5dd8b1ae79]
> /usr/lib/glusterfs/2.0.0rc7/xlator/protocol/server.so(protocol_server_pollin+0xa6)[0x7f5dd8b157d6]
> /usr/lib/glusterfs/2.0.0rc7/xlator/protocol/server.so(notify+0x38)[0x7f5dd8b15818]
> /usr/lib/glusterfs/2.0.0rc7/transport/socket.so(socket_event_handler+0xe0)[0x7f5dd8908b80]
> /usr/lib/libglusterfs.so.0[0x7f5dd9efe1ef]
> /usr/sbin/glusterfsd(main+0xa81)[0x403a21]
> /lib/libc.so.6(__libc_start_main+0xe6)[0x7f5dd97811a6]
> /usr/sbin/glusterfsd[0x402519]
> ---------
> ================================================================================
> Version      : glusterfs 2.0.0rc7 built on Apr  1 2009 15:21:00
> TLA Revision : 4e5c297d7c3480d0d3ab1c0c2a184c6a4fb801ef
> Starting Time: 2009-04-02 11:10:03
> Command line : /usr/sbin/glusterfsd -p /var/run/glusterfsd.pid -f /etc/glusterfs/glusterfsd.vol
> PID          : 16858
> System name  : Linux
> Nodename     : filer-04
> Kernel Release : 2.6.26-1-amd64
> Hardware Identifier: x86_64
>
> Given volfile:
> +------------------------------------------------------------------------------+
>  1: # file: /etc/glusterfs/glusterfsd.vol
>  2:
>  3: #
>  4: # Volumes
>  5: #
>  6: volume media-small
>  7:    type storage/posix
>  8:    option directory /var/local/glusterfs/media_small
>  9: end-volume
> 10:
> 11: volume media-medium
> 12:    type storage/posix
> 13:    option directory /var/local/glusterfs/media_medium
> 14: end-volume
> 15:
> 16: # Lock posix
> 17: volume media-small-locks
> 18:    type features/posix-locks
> 19:    option mandatory-locks on
> 20:    subvolumes media-small
> 21: #  subvolumes trash # enable this if you need trash can support (NOTE: not present in 1.3.0-pre5+ releases)
> 22: end-volume
> 23:
> 24: volume media-medium-locks
> 25:    type features/posix-locks
> 26:    option mandatory-locks on
> 27:    subvolumes media-medium
> 28: #  subvolumes trash # enable this if you need trash can support (NOTE: not present in 1.3.0-pre5+ releases)
> 29: end-volume
> 30:
> 31:
> 32: #
> 33: # Performance
> 34: #
> 35: #volume media-small-iot
> 36: #  type performance/io-threads
> 37: #  subvolumes media-small-locks
> 38: #  option thread-count 4 # default value is 1
> 39: #end-volume
> 40:
> 41: volume media-small-ioc
> 42:    type performance/io-cache
> 43:    option cache-size 128MB         # default is 32MB
> 44:    option page-size 128KB          # default is 128KB
> 45: #  subvolumes media-small-iot
> 46:    subvolumes media-small-locks
> 47: end-volume
> 48:
> 49: volume media-small-wb
> 50:    type performance/write-behind
> 51:    option flush-behind on          # default is off
> 52:    subvolumes media-small-ioc
> 53: end-volume
> 54:
> 55: volume media-small-ra
> 56:    type performance/read-ahead
> 57:    subvolumes media-small-wb
> 58:    option page-size 256KB          # default is 256KB
> 59:    option page-count 4             # default is 2 - cache per file = (page-count x page-size)
> 60:    option force-atime-update no    # default is 'no'
> 61: end-volume
> 62:
> 63:
> 64: #volume media-medium-iot
> 65: #  type performance/io-threads
> 66: #  subvolumes media-medium-locks
> 67: #  option thread-count 4 # default value is 1
> 68: #end-volume
> 69:
> 70: volume media-medium-ioc
> 71:    type performance/io-cache
> 72:    option cache-size 128MB         # default is 32MB
> 73:    option page-size 128KB          # default is 128KB
> 74: #  subvolumes media-medium-iot
> 75:    subvolumes media-medium-locks
> 76: end-volume
> 77:
> 78: volume media-medium-wb
> 79:    type performance/write-behind
> 80:    option flush-behind on          # default is off
> 81:    subvolumes media-medium-ioc
> 82: end-volume
> 83:
> 84: volume media-medium-ra
> 85:    type performance/read-ahead
> 86:    subvolumes media-medium-wb
> 87:    option page-size 256KB          # default is 256KB
> 88:    option page-count 4             # default is 2 - cache per file = (page-count x page-size)
> 89:    option force-atime-update no    # default is 'no'
> 90: end-volume
> 91:
> 92:
> 93:
> 94:
> 95: #
> 96: # Server
> 97: #
> 98: volume server
> 99:    type protocol/server
> 100:    option transport-type tcp/server
> 101:    option auth.addr.media-small-ra.allow 10.0.*.*
> 102:    option auth.addr.media-medium-ra.allow 10.0.*.*
> 103:    # Autoconfiguration, e.g.:
> 104:    # glusterfs -l /tmp/glusterfs.log --server=filer-04 ./Cache
> 105:    option client-volume-filename /etc/glusterfs/glusterfs.vol
> 106:    subvolumes media-small-ra media-medium-ra # exported volumes
> 107: end-volume
> 108:
>
> +------------------------------------------------------------------------------+
> 2009-04-02 11:10:03 N [glusterfsd.c:1134:main] glusterfs: Successfully started
>
> --
> Greg
>
>
> _______________________________________________
> Gluster-users mailing list
> Gluster-users at gluster.org
> http://zresearch.com/cgi-bin/mailman/listinfo/gluster-users
>



