[Gluster-users] iobuf/iobref error

Poornima Gurusiddaiah pgurusid at redhat.com
Tue Aug 23 04:38:03 UTC 2016


Hi, 

The error that you see in the log file is fixed as part of patch http://review.gluster.org/#/c/10206/ (released in 3.8.0). 
However, these errors are not responsible for the "Transport endpoint is not connected" issue. Can you check whether there are any other errors reported in the log? 
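
For example, something along these lines should surface any other errors or warnings in the client (FUSE mount) log, leaving out the known iobuf/iobref messages. The path below assumes the default log location for a mount on /gluster, so please adjust it for your setup: 

$ grep -E " [EW] \[" /var/log/glusterfs/gluster.log | grep -vE "invalid argument: iob(uf|ref)" | tail -n 100 

A crash of the client process would also show up near the end of that log, typically as a backtrace preceded by "pending frames:" and "signal received:". 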

Regards, 
Poornima 

----- Original Message -----

> From: "ngsflow" <ngsflow at hygenomics.com>
> To: "gluster-users" <gluster-users at gluster.org>
> Sent: Sunday, August 21, 2016 7:53:38 PM
> Subject: [Gluster-users] iobuf/iobref error

> Hi:

> I've been experiencing an intermittent issue with GlusterFS in a 30-node
> cluster which makes the mounted file system unavailable through the
> GlusterFS client.

> The symptom is:

> $ ls /gluster
> ls: cannot access /gluster: Transport endpoint is not connected

> The client log reports the following errors:

> [2016-08-09 23:25:36.012877] E [iobuf.c:759:iobuf_unref] (--> /usr/lib64/libglusterfs.so.0(_gf_log_callingfn+0x1e0)[0x371a220580] (--> /usr/lib64/glusterfs/3.6.7/xlator/performance/quick-read.so(qr_readv_cached+0xb7)[0x7ff57f318ea7] (--> /usr/lib64/glusterfs/3.6.7/xlator/performance/quick-read.so(qr_readv+0x62)[0x7ff57f3194c2] (--> /usr/lib64/libglusterfs.so.0(default_readv_resume+0x14d)[0x371a22a75d] (--> /usr/lib64/libglusterfs.so.0(call_resume+0x3d6)[0x371a2424b6] ))))) 0-iobuf: invalid argument: iobuf
> [2016-08-09 23:25:36.013192] E [iobuf.c:865:iobref_unref] (--> /usr/lib64/libglusterfs.so.0(_gf_log_callingfn+0x1e0)[0x371a220580] (--> /usr/lib64/glusterfs/3.6.7/xlator/performance/quick-read.so(qr_readv_cached+0xc1)[0x7ff57f318eb1] (--> /usr/lib64/glusterfs/3.6.7/xlator/performance/quick-read.so(qr_readv+0x62)[0x7ff57f3194c2] (--> /usr/lib64/libglusterfs.so.0(default_readv_resume+0x14d)[0x371a22a75d] (--> /usr/lib64/libglusterfs.so.0(call_resume+0x3d6)[0x371a2424b6] ))))) 0-iobuf: invalid argument: iobref

> It seems to me that this is an out-of-memory issue.

> Info: GlusterFS is configured as follows:

> performance.io-thread-count: 4
> performance.cache-max-file-size: 0
> performance.write-behind-window-size: 64MB
> performance.cache-size: 4GB
> cluster.consistent-metadata: on

> Each node in the cluster runs both the GlusterFS client and the server.

> Is there any way to ease the above issue by modifying the configuration,
> such as increasing cache-size or some other parameters?
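
> For example, I assume a change like that would be applied with something
> along these lines ("vol0" here is only a placeholder for our volume name):

> $ gluster volume set vol0 performance.cache-size 8GB
> $ gluster volume info vol0   # the changed value should appear under "Options Reconfigured"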

> thx.

> _______________________________________________
> Gluster-users mailing list
> Gluster-users at gluster.org
> http://www.gluster.org/mailman/listinfo/gluster-users

