[Gluster-users] core

Amar (ಅಮರ್ ತುಂಬಳ್ಳಿ) amarts at gmail.com
Sun Feb 22 08:57:33 UTC 2009


Hi Nathan,
 Thanks for the report. The fix will be available in the next release;
Avati has already committed it to the repository.
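
For anyone hitting the same crash: the backtrace below points at fd_ctx_get
being called during a FLUSH on an fd whose per-translator context was never
set, with the result then dereferenced unchecked. Here is a minimal,
self-contained C sketch of that failure mode and the guard that avoids it.
All names are illustrative only, not the actual GlusterFS API or the actual
fix in the repository:

    /* Hypothetical sketch of the crash pattern in the backtrace below:
     * looking up per-fd context and using it without checking that the
     * lookup succeeded. Not GlusterFS code; names are made up. */
    #include <stdio.h>

    struct fd {
        void *ctx;   /* per-translator context; may legitimately be NULL */
    };

    /* Returns the context pointer, or NULL if none was ever set. */
    static void *fd_ctx_lookup(struct fd *fd)
    {
        return fd ? fd->ctx : NULL;
    }

    static int flush_cached_xattrs(struct fd *fd)
    {
        void *ctx = fd_ctx_lookup(fd);
        if (ctx == NULL) {
            /* Without this guard, dereferencing ctx is a segfault
             * (signal 11), the class of crash in the FLUSH frame. */
            fprintf(stderr, "no xattr cache context; nothing to flush\n");
            return -1;
        }
        /* ... flush the cached extended attributes via ctx ... */
        return 0;
    }

    int main(void)
    {
        struct fd bare = { .ctx = NULL };  /* fd with no context set */
        flush_cached_xattrs(&bare);        /* returns instead of crashing */
        return 0;
    }

The committed fix may well work differently; the sketch only shows why a
missing NULL check on a context lookup dies with signal 11 in a FLUSH frame.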

Regards,
Amar

2009/2/21 Nathan Stratton <nathan at robotics.net>

>
> http://share.robotics.net/core.6006
>
> [root at xen0 /]#
> ================================================================================
> Version      : glusterfs 2.0.0tla built on Feb 21 2009 12:11:37
> TLA Revision : glusterfs--mainline--3.0--patch-928
> Starting Time: 2009-02-21 19:57:20
> Command line : glusterfsd
> PID          : 6096
> System name  : Linux
> Nodename     : xen0.nyc.blinkmind.net
> Kernel Release : 2.6.18-92.1.22.el5xen
> Hardware Identifier: x86_64
>
> Given volfile:
>
> +------------------------------------------------------------------------------+
>  1: volume sdb2
>  2:  type storage/posix
>  3:  option directory /sdb2
>  4: end-volume
>  5:
>  6: volume sdb3
>  7:  type storage/posix
>  8:  option directory /sdb3
>  9: end-volume
>  10:
>  11: volume ns
>  12:  type storage/posix
>  13:  option directory /export-ns/
>  14: end-volume
>  15:
>  16: volume sdb2-locks
>  17:   type features/posix-locks
>  18:   subvolumes sdb2
>  19: end-volume
>  20:
>  21: volume sdb3-locks
>  22:   type features/posix-locks
>  23:   subvolumes sdb3
>  24: end-volume
>  25:
>  26: volume brick-ns
>  27:   type features/posix-locks
>  28:   subvolumes ns
>  29: end-volume
>  30:
>  31: volume sdb2-iothreads
>  32:  type performance/io-threads
>  33:  subvolumes sdb2-locks
>  34: end-volume
>  35:
>  36: volume sdb3-iothreads
>  37:  type performance/io-threads
>  38:  subvolumes sdb3-locks
>  39: end-volume
>  40:
>  41: volume sdb2-writebehind
>  42:   type performance/write-behind
>  43:   subvolumes sdb2-iothreads
>  44: end-volume
>  45:
>  46: volume sdb3-writebehind
>  47:   type performance/write-behind
>  48:   subvolumes sdb3-iothreads
>  49: end-volume
>  50:
>  51: volume sdb2-readahead
>  52:   type performance/read-ahead
>  53:   option page-size 1MB
>  54:   option page-count 2
>  55:   subvolumes sdb2-writebehind
>  56: end-volume
>  57:
>  58: volume sdb3-readahead
>  59:   type performance/read-ahead
>  60:   option page-size 1MB
>  61:   option page-count 2
>  62:   subvolumes sdb3-writebehind
>  63: end-volume
>  64:
>  65: # Server settings
>  66: volume server
>  67:  type protocol/server
>  68:  option transport-type ib-verbs/server
>  69:  subvolumes sdb2-readahead sdb3-readahead brick-ns
>  70:  option auth.addr.sdb2-readahead.allow *
>  71:  option auth.addr.sdb3-readahead.allow *
>  72:  option auth.addr.brick-ns.allow *
>  73: end-volume
>
>
> +------------------------------------------------------------------------------+
> 2009-02-21 19:57:20 N [glusterfsd.c:1118:main] glusterfs: Successfully
> started
> 2009-02-21 20:01:23 E [ib-verbs.c:979:__tcp_rwv] server: readv failed
> (Connection reset by peer)
> 2009-02-21 20:01:23 E [ib-verbs.c:213:__ib_verbs_disconnect]
> transport/ib-verbs: shutdown () - error: Transport endpoint is not connected
> 2009-02-21 20:01:23 N [server-protocol.c:7934:notify] server:
> 172.16.0.220:1023 disconnected
> 2009-02-21 20:01:23 E [ib-verbs.c:979:__tcp_rwv] server: readv failed
> (Connection reset by peer)
> 2009-02-21 20:01:23 E [ib-verbs.c:213:__ib_verbs_disconnect]
> transport/ib-verbs: shutdown () - error: Transport endpoint is not connected
> 2009-02-21 20:01:23 N [server-protocol.c:7934:notify] server:
> 172.16.0.220:1022 disconnected
> 2009-02-21 20:01:35 N [server-protocol.c:7181:mop_setvolume] server:
> accepted client from 172.16.0.220:1010
> 2009-02-21 20:01:35 N [server-protocol.c:7181:mop_setvolume] server:
> accepted client from 172.16.0.220:1011
> 2009-02-21 20:01:35 N [server-protocol.c:7181:mop_setvolume] server:
> accepted client from 172.16.0.220:1016
> 2009-02-21 20:01:35 N [server-protocol.c:7181:mop_setvolume] server:
> accepted client from 172.16.0.220:1017
> 2009-02-21 20:01:35 N [server-protocol.c:7181:mop_setvolume] server:
> accepted client from 172.16.0.220:1022
> 2009-02-21 20:01:35 N [server-protocol.c:7181:mop_setvolume] server:
> accepted client from 172.16.0.220:1023
> pending frames:
> frame : type(1) op(FLUSH)
>
> patchset: glusterfs--mainline--3.0--patch-928
> signal received: 11
> configuration details:argp 1
> backtrace 1
> dlfcn 1
> fdatasync 1
> libpthread 1
> llistxattr 1
> setfsid 1
> spinlock 1
> epoll.h 1
> xattr.h 1
> st_atim.tv_nsec 1
> package-string: glusterfs 2.0.0tla
> /lib64/libc.so.6[0x2b0d6e54f1b0]
> /usr/local/lib/libglusterfs.so.0(fd_ctx_get+0x24)[0x2b0d6dee10a4]
> /usr/local/lib/glusterfs/2.0.0tla/xlator/storage/posix.so[0x2b0d6e88216e]
> /usr/local/lib/glusterfs/2.0.0tla/xlator/storage/posix.so[0x2b0d6e88228d]
>
> /usr/local/lib/glusterfs/2.0.0tla/xlator/storage/posix.so(posix_xattr_cache_flush_all+0x5b)[0x2b0d6e8822fb]
>
> /usr/local/lib/glusterfs/2.0.0tla/xlator/storage/posix.so(posix_flush+0x61)[0x2b0d6e879251]
>
> /usr/local/lib/glusterfs/2.0.0tla/xlator/features/posix-locks.so(pl_flush+0x1b3)[0x2b0d6ea8b763]
>
> /usr/local/lib/glusterfs/2.0.0tla/xlator/performance/io-threads.so[0x2b0d6ec91a8e]
> /usr/local/lib/libglusterfs.so.0(call_resume+0x42f)[0x2b0d6ded9e5f]
>
> /usr/local/lib/glusterfs/2.0.0tla/xlator/performance/io-threads.so[0x2b0d6ec90e8d]
> /lib64/libpthread.so.0[0x2b0d6e30a2f7]
> /lib64/libc.so.6(clone+0x6d)[0x2b0d6e5f0e3d]
> ---------
>
>
> Nathan Stratton                                CTO, BlinkMind, Inc.
> nathan at robotics.net                         nathan at blinkmind.com
> http://www.robotics.net                        http://www.blinkmind.com
>


-- 
Amar Tumballi
Gluster/GlusterFS Hacker
[bulde on #gluster/irc.gnu.org]
http://www.zresearch.com - Commoditizing Super Storage!