[Gluster-users] gluster writes millions of lines: WRITE => -1 (Transport endpoint is not connected)

Sergio Traldi sergio.traldi at pd.infn.it
Mon Oct 27 13:51:37 UTC 2014


Hi all,
I have one server running Red Hat 6 with this set of rpms:

[ ~]# rpm -qa | grep gluster | sort
glusterfs-3.5.2-1.el6.x86_64
glusterfs-api-3.5.2-1.el6.x86_64
glusterfs-cli-3.5.2-1.el6.x86_64
glusterfs-fuse-3.5.2-1.el6.x86_64
glusterfs-geo-replication-3.5.2-1.el6.x86_64
glusterfs-libs-3.5.2-1.el6.x86_64
glusterfs-server-3.5.2-1.el6.x86_64

I have a gluster volume with 1 server and 1 brick:

[ ~]# gluster volume info volume-nova-pp
Volume Name: volume-nova-pp
Type: Distribute
Volume ID: b5ec289b-9a54-4df1-9c21-52ca556aeead
Status: Started
Number of Bricks: 1
Transport-type: tcp
Bricks:
Brick1: 192.168.61.100:/brick-nova-pp/mpathc
Options Reconfigured:
storage.owner-gid: 162
storage.owner-uid: 162
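
For reference, the brick state on the server (brick PID, port, online flag) can be checked with the standard status command; this is just a sketch of the check, not output from the incident:

[ ~]# gluster volume status volume-nova-pp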

There are four clients attached to this volume, all with the same O.S. and 
the same gluster fuse rpm set:
[ ~]# rpm -qa | grep gluster | sort
glusterfs-3.5.0-2.el6.x86_64
glusterfs-api-3.5.0-2.el6.x86_64
glusterfs-fuse-3.5.0-2.el6.x86_64
glusterfs-libs-3.5.0-2.el6.x86_64
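
Judging by the log file name below, the clients mount the volume over FUSE 
on /var/lib/nova/instances, i.e. something like this (the mount point is 
inferred from the log name, not copied from the clients):

[ ~]# mount -t glusterfs 192.168.61.100:/volume-nova-pp /var/lib/nova/instances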

Last week (and the same happened two weeks ago) I found the disk almost 
full: the gluster log /var/log/glusterfs/var-lib-nova-instances.log had 
grown to 68GB. In the log this is where the problem started:

[2014-10-10 07:29:43.730792] W [socket.c:522:__socket_rwv] 0-glusterfs: 
readv on 192.168.61.100:24007 failed (No data available)
[2014-10-10 07:29:54.022608] E [socket.c:2161:socket_connect_finish] 
0-glusterfs: connection to 192.168.61.100:24007 failed (Connection refused)
[2014-10-10 07:30:05.271825] W 
[client-rpc-fops.c:866:client3_3_writev_cbk] 0-volume-nova-pp-client-0: 
remote operation failed: Input/output error
[2014-10-10 07:30:08.783145] W [fuse-bridge.c:2201:fuse_writev_cbk] 
0-glusterfs-fuse: 3661260: WRITE => -1 (Input/output error)
[2014-10-10 07:30:08.783368] W [fuse-bridge.c:2201:fuse_writev_cbk] 
0-glusterfs-fuse: 3661262: WRITE => -1 (Input/output error)
[2014-10-10 07:30:08.806553] W [fuse-bridge.c:2201:fuse_writev_cbk] 
0-glusterfs-fuse: 3661649: WRITE => -1 (Input/output error)
[2014-10-10 07:30:08.844415] W [fuse-bridge.c:2201:fuse_writev_cbk] 
0-glusterfs-fuse: 3662235: WRITE => -1 (Input/output error)

followed by a huge number of lines like these:

[2014-10-15 14:41:15.895105] W [fuse-bridge.c:2201:fuse_writev_cbk] 
0-glusterfs-fuse: 951700230: WRITE => -1 (Transport endpoint is not 
connected)
[2014-10-15 14:41:15.896205] W [fuse-bridge.c:2201:fuse_writev_cbk] 
0-glusterfs-fuse: 951700232: WRITE => -1 (Transport endpoint is not 
connected)

These "Transport endpoint is not connected" lines, each with a different 
request number, were written roughly every millisecond, so in about one 
minute roughly 1GB was written to the O.S. disk.
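
Next time it happens I will check from one of the clients whether the 
connection to the server is really down, e.g. (24007 is the glusterd port 
seen in the log above; the brick port can be read from "gluster volume 
status"):

[ ~]# ss -tn | grep 192.168.61.100
[ ~]# telnet 192.168.61.100 24007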

I searched for a solution but could not find anyone with the same problem.

I think there was a network problem, but why does gluster write millions 
of lines like this to the log?

[2014-10-15 14:41:15.895105] W [fuse-bridge.c:2201:fuse_writev_cbk] 
0-glusterfs-fuse: 951700230: WRITE => -1 (Transport endpoint is not 
connected)
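
As a stop-gap, to keep this log from filling the disk again, I am thinking 
of lowering the client log level and rotating the log by size. A sketch 
(diagnostics.client-log-level is a standard volume option; the logrotate 
file name is my own choice):

[ ~]# gluster volume set volume-nova-pp diagnostics.client-log-level ERROR

and on each client, e.g. in /etc/logrotate.d/glusterfs-client:

/var/log/glusterfs/var-lib-nova-instances.log {
     size 500M
     rotate 4
     compress
     missingok
     copytruncate
}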

Thanks in advance.
Cheers
Sergio

