[Gluster-users] Transport endpoint is not connected failures in

Nithya Balachandran nbalacha at redhat.com
Thu Mar 28 03:12:47 UTC 2019


On Wed, 27 Mar 2019 at 21:47, <brandon at thinkhuge.net> wrote:

> Hello Amar and list,
>
> I wanted to follow up to confirm that upgrading to 5.5 seems to have fixed
> the “Transport endpoint is not connected” failures for us.
>
> We did not have any of these failures during this past weekend's backup
> cycle.
>
> Thank you very much for fixing whatever the problem was.
>
> I also removed some volume config options. One or more of those settings
> were contributing to the slow directory listings.
>
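
(For anyone verifying the same fix on their own clients: the error is logged
by the FUSE client under /var/log/glusterfs/, in a file named after the mount
path. A minimal check, assuming a hypothetical mount point of /mnt/volbackups;
substitute your own:)

    # Mount path /mnt/volbackups maps to log file mnt-volbackups.log
    grep -c "Transport endpoint is not connected" /var/log/glusterfs/mnt-volbackups.log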

Hi Brandon,

Which options were removed?

Thanks,
Nithya
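
(For the mechanics behind the question: a reconfigured option is removed,
i.e. returned to its default, with "gluster volume reset". A minimal sketch
against the volume shown below; the option name here is only an example,
since the thread does not say which settings were actually dropped:)

    # Return one option to its default value
    gluster volume reset volbackups performance.client-io-threads

    # Or clear every reconfigured option at once
    gluster volume reset volbackups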

>
> Here is our current volume info.
>
> [root@lonbaknode3 ~]# gluster volume info
>
> Volume Name: volbackups
> Type: Distribute
> Volume ID: 32bf4fe9-5450-49f8-b6aa-05471d3bdffa
> Status: Started
> Snapshot Count: 0
> Number of Bricks: 8
> Transport-type: tcp
> Bricks:
> Brick1: lonbaknode3.domain.net:/lvbackups/brick
> Brick2: lonbaknode4.domain.net:/lvbackups/brick
> Brick3: lonbaknode5.domain.net:/lvbackups/brick
> Brick4: lonbaknode6.domain.net:/lvbackups/brick
> Brick5: lonbaknode7.domain.net:/lvbackups/brick
> Brick6: lonbaknode8.domain.net:/lvbackups/brick
> Brick7: lonbaknode9.domain.net:/lvbackups/brick
> Brick8: lonbaknode10.domain.net:/lvbackups/brick
> Options Reconfigured:
> performance.io-thread-count: 32
> performance.client-io-threads: on
> client.event-threads: 8
> diagnostics.brick-sys-log-level: WARNING
> diagnostics.brick-log-level: WARNING
> performance.cache-max-file-size: 2MB
> performance.cache-size: 256MB
> cluster.min-free-disk: 1%
> nfs.disable: on
> transport.address-family: inet
> server.event-threads: 8
> [root@lonbaknode3 ~]#
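
(Each line under "Options Reconfigured" corresponds to one "gluster volume
set" call, and effective values, defaults included, can be listed with
"gluster volume get". A sketch using names and values from the output above:)

    # Apply a single tuning option to the volume
    gluster volume set volbackups client.event-threads 8

    # List the effective value of every option, defaults included
    gluster volume get volbackups all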