[Gluster-users] Transport endpoint not connected
Anand Avati
anand.avati at gmail.com
Wed Apr 28 16:53:27 UTC 2010
Joe,
Do you have access to the core dump from the crash? If you do,
please post the output of 'thread apply all bt full' within gdb on the
core.
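Roughly, that would look something like this (a sketch only; the core
file path below is a placeholder, adjust it to wherever cores get
written on your system, and the binary needs debug symbols for a useful
backtrace):

  # load the crashed binary along with its core dump
  gdb /usr/local/sbin/glusterfs /path/to/core

  # then, at the (gdb) prompt, dump full backtraces for every thread
  (gdb) thread apply all bt full
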
Thanks,
Avati
On Wed, Apr 28, 2010 at 2:26 PM, Joe Warren-Meeks
<joe at encoretickets.co.uk> wrote:
> Hey guys,
>
> Any clues or pointers with this problem? It's occurring every 6 hours or
> so. Is there anything else I can do to help debug it?
>
> Kind regards
>
> -- joe.
>
>
>> -----Original Message-----
>> From: gluster-users-bounces at gluster.org [mailto:gluster-users-bounces at gluster.org] On Behalf Of Joe Warren-Meeks
>> Sent: 26 April 2010 12:31
>> To: Vijay Bellur
>> Cc: gluster-users at gluster.org
>> Subject: Re: [Gluster-users] Transport endpoint not connected
>>
>> Here is the relevant crash section:
>>
>> patchset: v3.0.4
>> signal received: 11
>> time of crash: 2010-04-23 21:40:40
>> configuration details:
>> argp 1
>> backtrace 1
>> dlfcn 1
>> fdatasync 1
>> libpthread 1
>> llistxattr 1
>> setfsid 1
>> spinlock 1
>> epoll.h 1
>> xattr.h 1
>> st_atim.tv_nsec 1
>> package-string: glusterfs 3.0.4
>> /lib/libc.so.6[0x7ffd0d809100]
>> /usr/local/lib/glusterfs/3.0.4/xlator/performance/read-ahead.so(ra_fstat+0x82)[0x7ffd0c968d22]
>> /usr/local/lib/libglusterfs.so.0(default_fstat+0xcb)[0x7ffd0df7411b]
>> /usr/local/lib/glusterfs/3.0.4/xlator/performance/quick-read.so(qr_fstat+0x113)[0x7ffd0c5570a3]
>> /usr/local/lib/glusterfs/3.0.4/xlator/performance/write-behind.so(wb_fstat_helper+0xcb)[0x7ffd0c346adb]
>> /usr/local/lib/libglusterfs.so.0(call_resume+0x390)[0x7ffd0df7cf60]
>> /usr/local/lib/glusterfs/3.0.4/xlator/performance/write-behind.so(wb_resume_other_requests+0x58)[0x7ffd0c349938]
>> /usr/local/lib/glusterfs/3.0.4/xlator/performance/write-behind.so(wb_process_queue+0xe1)[0x7ffd0c348251]
>> /usr/local/lib/glusterfs/3.0.4/xlator/performance/write-behind.so(wb_fstat+0x20a)[0x7ffd0c34a87a]
>> /usr/local/lib/libglusterfs.so.0(default_fstat+0xcb)[0x7ffd0df7411b]
>> /usr/local/lib/glusterfs/3.0.4/xlator/mount/fuse.so[0x7ffd0bf23a36]
>> /usr/local/lib/glusterfs/3.0.4/xlator/mount/fuse.so[0x7ffd0bf246b6]
>> /lib/libpthread.so.0[0x7ffd0db3f3f7]
>> /lib/libc.so.6(clone+0x6d)[0x7ffd0d8aeb4d]
>>
>> And the startup section:
>>
>> ---------
>>
>> =========================================================================
>> Version : glusterfs 3.0.4 built on Apr 19 2010 16:37:50
>> git: v3.0.4
>> Starting Time: 2010-04-26 10:00:59
>> Command line : /usr/local/sbin/glusterfs --log-level=NORMAL --volfile=/etc/glusterfs/repstore1-tcp.vol /data/import
>> PID : 5910
>> System name : Linux
>> Nodename : w2
>> Kernel Release : 2.6.24-27-server
>> Hardware Identifier: x86_64
>>
>> Given volfile:
>>
>> +-----------------------------------------------------------------------------+
>> 1: ## file auto generated by /usr/local/bin/glusterfs-volgen (mount.vol)
>> 2: # Cmd line:
>> 3: # $ /usr/local/bin/glusterfs-volgen --name repstore1 --raid 1 10.10.130.11:/data/export 10.10.130.12:/data/export
>> 4:
>> 5: # RAID 1
>> 6: # TRANSPORT-TYPE tcp
>> 7: volume 10.10.130.12-1
>> 8: type protocol/client
>> 9: option transport-type tcp
>> 10: option remote-host 10.10.130.12
>> 11: option transport.socket.nodelay on
>> 12: option transport.remote-port 6996
>> 13: option remote-subvolume brick1
>> 14: end-volume
>> 15:
>> 16: volume 10.10.130.11-1
>> 17: type protocol/client
>> 18: option transport-type tcp
>> 19: option remote-host 10.10.130.11
>> 20: option transport.socket.nodelay on
>> 21: option transport.remote-port 6996
>> 22: option remote-subvolume brick1
>> 23: end-volume
>> 24:
>> 25: volume mirror-0
>> 26: type cluster/replicate
>> 27: subvolumes 10.10.130.11-1 10.10.130.12-1
>> 28: end-volume
>> 29:
>> 30: volume readahead
>> 31: type performance/read-ahead
>> 32: option page-count 4
>> 33: subvolumes mirror-0
>> 34: end-volume
>> 35:
>> 36: volume iocache
>> 37: type performance/io-cache
>> 38: option cache-size `echo $(( $(grep 'MemTotal' /proc/meminfo | sed 's/[^0-9]//g') / 5120 ))`MB
>> 39: option cache-timeout 1
>> 40: subvolumes readahead
>> 41: end-volume
>> 42:
>> 43: volume quickread
>> 44: type performance/quick-read
>> 45: option cache-timeout 1
>> 46: option max-file-size 64kB
>> 47: subvolumes iocache
>> 48: end-volume
>> 49:
>> 50: volume writebehind
>> 51: type performance/write-behind
>> 52: option cache-size 4MB
>> 53: subvolumes quickread
>> 54: end-volume
>> 55:
>> 56: volume statprefetch
>> 57: type performance/stat-prefetch
>> 58: subvolumes writebehind
>> 59: end-volume
>> 60:
>>
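>> (A side note on line 38 above: the cache-size value is a shell
>> expression rather than a fixed number, and it works out to roughly one
>> fifth of system RAM. As a worked example, on a hypothetical box where
>> /proc/meminfo reports "MemTotal: 8388608 kB":
>>
>>   8388608 / 5120 = 1638   ->   option cache-size 1638MB
>>
>> i.e. 5120 = 1024 kB per MB times 5, so about a fifth of memory.)
>>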
>> > -----Original Message-----
>> > From: Vijay Bellur [mailto:vijay at gluster.com]
>> > Sent: 22 April 2010 18:40
>> > To: Joe Warren-Meeks
>> > Cc: gluster-users at gluster.org
>> > Subject: Re: [Gluster-users] Transport endpoint not connected
>> >
>> > Hi Joe,
>> >
>> > Can you please share the complete client log file?
>> >
>> > Thanks,
>> > Vijay
>> >
>> >
>> > Joe Warren-Meeks wrote:
>> > > Hey guys,
>> > >
>> > >
>> > >
>> > > I've recently implemented gluster to share web content read-write between two servers.
>> > >
>> > >
>> > >
>> > > Version : glusterfs 3.0.4 built on Apr 19 2010 16:37:50
>> > >
>> > > Fuse : 2.7.2-1ubuntu2.1
>> > >
>> > > Platform : ubuntu 8.04LTS
>> > >
>> > >
>> > >
>> > > I used the following command to generate my configs:
>> > >
>> > > /usr/local/bin/glusterfs-volgen --name repstore1 --raid 1
>> > > 10.10.130.11:/data/export 10.10.130.12:/data/export
>> > >
>> > >
>> > >
>> > > And mount them on each of the servers as so:
>> > >
>> > > /etc/fstab:
>> > >
>> > > /etc/glusterfs/repstore1-tcp.vol /data/import glusterfs defaults 0 0
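>> > >
>> > > (For what it's worth, mounting via that fstab entry amounts to
>> > > invoking the client directly. A sketch, reusing the same volfile and
>> > > mount point; the log-level flag is optional:)
>> > >
>> > >   /usr/local/sbin/glusterfs --log-level=NORMAL --volfile=/etc/glusterfs/repstore1-tcp.vol /data/import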
>> > >
>> > >
>> > >
>> > >
>> > >
>> > > Every 12 hours or so, one or other of the servers will lose the mount and error with:
>> > >
>> > > df: `/data/import': Transport endpoint is not connected
>> > >
>> > >
>> > >
>> > > And I get the following in my logfile:
>> > >
>> > > patchset: v3.0.4
>> > >
>> > > signal received: 11
>> > >
>> > > time of crash: 2010-04-22 11:41:10
>> > >
>> > > configuration details:
>> > >
>> > > argp 1
>> > >
>> > > backtrace 1
>> > >
>> > > dlfcn 1
>> > >
>> > > fdatasync 1
>> > >
>> > > libpthread 1
>> > >
>> > > llistxattr 1
>> > >
>> > > setfsid 1
>> > >
>> > > spinlock 1
>> > >
>> > > epoll.h 1
>> > >
>> > > xattr.h 1
>> > >
>> > > st_atim.tv_nsec 1
>> > >
>> > > package-string: glusterfs 3.0.4
>> > >
>> > > /lib/libc.so.6[0x7f2eca39a100]
>> > > /usr/local/lib/glusterfs/3.0.4/xlator/performance/read-ahead.so(ra_fstat+0x82)[0x7f2ec94f9d22]
>> > > /usr/local/lib/libglusterfs.so.0(default_fstat+0xcb)[0x7f2ecab0511b]
>> > > /usr/local/lib/glusterfs/3.0.4/xlator/performance/quick-read.so(qr_fstat+0x113)[0x7f2ec90e80a3]
>> > > /usr/local/lib/glusterfs/3.0.4/xlator/performance/write-behind.so(wb_fstat_helper+0xcb)[0x7f2ec8ed7adb]
>> > > /usr/local/lib/libglusterfs.so.0(call_resume+0x390)[0x7f2ecab0df60]
>> > > /usr/local/lib/glusterfs/3.0.4/xlator/performance/write-behind.so(wb_resume_other_requests+0x58)[0x7f2ec8eda938]
>> > > /usr/local/lib/glusterfs/3.0.4/xlator/performance/write-behind.so(wb_process_queue+0xe1)[0x7f2ec8ed9251]
>> > > /usr/local/lib/glusterfs/3.0.4/xlator/performance/write-behind.so(wb_fstat+0x20a)[0x7f2ec8edb87a]
>> > > /usr/local/lib/libglusterfs.so.0(default_fstat+0xcb)[0x7f2ecab0511b]
>> > > /usr/local/lib/glusterfs/3.0.4/xlator/mount/fuse.so[0x7f2ec8ab4a36]
>> > > /usr/local/lib/glusterfs/3.0.4/xlator/mount/fuse.so[0x7f2ec8ab56b6]
>> > > /lib/libpthread.so.0[0x7f2eca6d03f7]
>> > > /lib/libc.so.6(clone+0x6d)[0x7f2eca43fb4d]
>> > >
>> > >
>> > >
>> > >
>> > >
>> > > If I umount and remount, things work again, but it isn't ideal.
>> > >
>> > >
>> > >
>> > > Any clues, pointers, hints?
>> > >
>> > >
>> > >
>> > > Kind regards
>> > >
>> > >
>> > >
>> > > -- joe.
>> > >
>> > >
>> > >
>> > > Joe Warren-Meeks
>> > >
>> > > Director Of Systems Development
>> > >
>> > > ENCORE TICKETS LTD
>> > >
>> > > Encore House, 50-51 Bedford Row, London WC1R 4LR
>> > >
>> > > Direct line: +44 (0)20 7492 1506
>> > >
>> > > Reservations: +44 (0)20 7492 1500
>> > >
>> > > Fax: +44 (0)20 7831 4410
>> > >
>> > > Email: joe at encoretickets.co.uk
>> > > <mailto:joe at encoretickets.co.uk>
>> > >
>> > > web: www.encoretickets.co.uk
>> > > <http://www.encoretickets.co.uk/>
>> > >
>> > >
>> > >
>> > >
>> > >
>> > >
>> > >
>> > >
>> > >
>> > >
>> > >
>> > >
>> >
>>
>>
>
>
> _______________________________________________
> Gluster-users mailing list
> Gluster-users at gluster.org
> http://gluster.org/cgi-bin/mailman/listinfo/gluster-users
>