[Gluster-users] Glusterfs gives up with endpoint not connected
Daniel Müller
mueller at tropenklinik.de
Thu Mar 28 10:18:16 UTC 2013
Dear all,
Right out of the blue, glusterfs is not working reliably any more. Every now
and then it stops working, telling me "Endpoint not connected" and writing
core files:
[root@tuepdc /]# file core.15288
core.15288: ELF 64-bit LSB core file AMD x86-64, version 1 (SYSV),
SVR4-style, from 'glusterfs'
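To see where the client actually crashed, a backtrace can be pulled out of the
core file with gdb. This is only a minimal sketch; /usr/sbin/glusterfs is an
assumption and the binary path may differ on your install:

[root@tuepdc /]# gdb /usr/sbin/glusterfs core.15288
(gdb) bt full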
My Version:
[root@tuepdc /]# glusterfs --version
glusterfs 3.2.0 built on Apr 22 2011 18:35:40
Repository revision: v3.2.0
Copyright (c) 2006-2010 Gluster Inc. <http://www.gluster.com>
GlusterFS comes with ABSOLUTELY NO WARRANTY.
You may redistribute copies of GlusterFS under the terms of the GNU Affero
General Public License.
My /var/log/glusterfs/bricks/raid5hs-glusterfs-export.log shows:
[2013-03-28 10:47:07.243980] I [server.c:438:server_rpc_notify]
0-sambavol-server: disconnected connection from 192.168.130.199:1023
[2013-03-28 10:47:07.244000] I
[server-helpers.c:783:server_connection_destroy] 0-sambavol-server:
destroyed connection of
tuepdc.local-16600-2013/03/28-09:32:28:258428-sambavol-client-0
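When a glusterfs process dumps core it also writes a backtrace into its own
log file; grepping for it may already point at the failing function (a sketch,
assuming the client log shown further below):

[root@tuepdc /]# grep -A 20 "signal received" /var/log/glusterfs/mnt-glusterfs.log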
[root@tuepdc bricks]# gluster volume info
Volume Name: sambavol
Type: Replicate
Status: Started
Number of Bricks: 2
Transport-type: tcp
Bricks:
Brick1: 192.168.130.199:/raid5hs/glusterfs/export
Brick2: 192.168.130.200:/raid5hs/glusterfs/export
Options Reconfigured:
network.ping-timeout: 5
performance.quick-read: on
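One thing that stands out: network.ping-timeout is set to 5 seconds here,
while the GlusterFS default is 42. With such a short timeout a brief network
or disk stall is already enough for a client to declare a brick dead and
disconnect. If that turns out to be the trigger, the timeout can be raised
again (a sketch, using the volume name above):

[root@tuepdc /]# gluster volume set sambavol network.ping-timeout 42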
Gluster is running on ext3 on a RAID 5 array with hot spare on both hosts:
[root@tuepdc bricks]# mdadm --detail /dev/md0
/dev/md0:
Version : 0.90
Creation Time : Wed May 11 10:08:30 2011
Raid Level : raid5
Array Size : 1953519872 (1863.02 GiB 2000.40 GB)
Used Dev Size : 976759936 (931.51 GiB 1000.20 GB)
Raid Devices : 3
Total Devices : 4
Preferred Minor : 0
Persistence : Superblock is persistent
Update Time : Thu Mar 28 11:13:21 2013
State : clean
Active Devices : 3
Working Devices : 4
Failed Devices : 0
Spare Devices : 1
Layout : left-symmetric
Chunk Size : 64K
UUID : c484e093:018a2517:56e38f5e:1a216491
Events : 0.250
    Number   Major   Minor   RaidDevice   State
       0       8      49         0        active sync   /dev/sdd1
       1       8      65         1        active sync   /dev/sde1
       2       8      97         2        active sync   /dev/sdg1
       3       8      81         -        spare         /dev/sdf1
[root@tuepdc glusterfs]# tail -f mnt-glusterfs.log
[2013-03-28 10:57:40.882566] I [rpc-clnt.c:1531:rpc_clnt_reconfig]
0-sambavol-client-0: changing port to 24009 (from 0)
[2013-03-28 10:57:40.883636] I [rpc-clnt.c:1531:rpc_clnt_reconfig]
0-sambavol-client-1: changing port to 24009 (from 0)
[2013-03-28 10:57:44.806649] I
[client-handshake.c:1080:select_server_supported_programs]
0-sambavol-client-0: Using Program GlusterFS-3.1.0, Num (1298437), Version
(310)
[2013-03-28 10:57:44.806857] I [client-handshake.c:913:client_setvolume_cbk]
0-sambavol-client-0: Connected to 192.168.130.199:24009, attached to remote
volume '/raid5hs/glusterfs/export'.
[2013-03-28 10:57:44.806876] I [afr-common.c:2514:afr_notify]
0-sambavol-replicate-0: Subvolume 'sambavol-client-0' came back up; going
online.
[2013-03-28 10:57:44.811557] I [fuse-bridge.c:3316:fuse_graph_setup] 0-fuse:
switched to graph 0
[2013-03-28 10:57:44.811773] I [fuse-bridge.c:2897:fuse_init]
0-glusterfs-fuse: FUSE inited with protocol versions: glusterfs 7.13 kernel
7.10
[2013-03-28 10:57:44.812139] I [afr-common.c:836:afr_fresh_lookup_cbk]
0-sambavol-replicate-0: added root inode
[2013-03-28 10:57:44.812217] I
[client-handshake.c:1080:select_server_supported_programs]
0-sambavol-client-1: Using Program GlusterFS-3.1.0, Num (1298437), Version
(310)
[2013-03-28 10:57:44.812767] I [client-handshake.c:913:client_setvolume_cbk]
0-sambavol-client-1: Connected to 192.168.130.200:24009, attached to remote
volume '/raid5hs/glusterfs/export'.
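For completeness, peer and brick-port connectivity between the two nodes can
be checked like this (a sketch; host and port taken from the log above):

[root@tuepdc /]# gluster peer status
[root@tuepdc /]# telnet 192.168.130.200 24009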
How can I fix this issue?
Daniel
-----------------------------------------------
EDV Daniel Müller
Leitung EDV
Tropenklinik Paul-Lechler-Krankenhaus
Paul-Lechler-Str. 24
72076 Tübingen
Tel.: 07071/206-463, Fax: 07071/206-499
eMail: mueller at tropenklinik.de
Internet: www.tropenklinik.de
-----------------------------------------------