Hello list,

We are having critical failures under load on CentOS 7 with glusterfs 5.3: our servers are losing their local mount point with the error "Transport endpoint is not connected".

Not sure if it is related, but the logs are full of the following message:

[2019-03-18 14:00:02.656876] E [MSGID: 101191] [event-epoll.c:671:event_dispatch_epoll_worker] 0-epoll: Failed to dispatch handler

We operate multiple separate glusterfs distributed clusters of about 6-8 nodes each. Our two biggest, separate, and most I/O-active clusters are both having the issue.

We use glusterfs as a unified file system for pureftpd backup services for a VPS product. We have a relatively small backup window over the weekend when all of our servers back up at the same time. When backups start early on Saturday, they cause a sustained, massive amount of FTP file-upload I/O for around 48 hours while all the compressed backup files are uploaded. For our London 8-node cluster, for example, that is currently about 45 TB of uploads in roughly 48 hours.

We also have some smaller issues with directory listings under this load, but the setup had been working for a couple of years since 3.x. Only since we updated recently have all servers randomly started losing their glusterfs mount with the "Transport endpoint is not connected" error.

Our glusterfs servers are all mostly the same, with small variations. They are mostly Supermicro machines with E3 CPUs, 16 GB RAM, and LSI RAID10 HDD arrays (with and without BBU). The arrays vary between 4 and 16 SATA3 drives per node depending on how old the server is. Firmware is kept up to date and we run the latest LSI-compiled driver. The newer 16-drive backup servers also have 2 x 1 Gbit LACP-teamed interfaces.
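For context, each node mounts the volume locally with the standard glusterfs fuse client. A representative mount command (host and path taken from the mount output further down; our actual mount options are whatever the defaults give us) would be roughly:

# illustrative only - fuse-mount the distributed volume on the node itself
mount -t glusterfs lonbaknode3.domain.net:/volbackups /home/volbackups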
[root@lonbaknode3 ~]# uname -r
3.10.0-957.5.1.el7.x86_64

[root@lonbaknode3 ~]# rpm -qa |grep gluster
centos-release-gluster5-1.0-1.el7.centos.noarch
glusterfs-libs-5.3-2.el7.x86_64
glusterfs-api-5.3-2.el7.x86_64
glusterfs-5.3-2.el7.x86_64
glusterfs-cli-5.3-2.el7.x86_64
glusterfs-client-xlators-5.3-2.el7.x86_64
glusterfs-server-5.3-2.el7.x86_64
glusterfs-fuse-5.3-2.el7.x86_64
[root@lonbaknode3 ~]#

[root@lonbaknode3 ~]# gluster volume info all

Volume Name: volbackups
Type: Distribute
Volume ID: 32bf4fe9-5450-49f8-b6aa-05471d3bdffa
Status: Started
Snapshot Count: 0
Number of Bricks: 8
Transport-type: tcp
Bricks:
Brick1: lonbaknode3.domain.net:/lvbackups/brick
Brick2: lonbaknode4.domain.net:/lvbackups/brick
Brick3: lonbaknode5.domain.net:/lvbackups/brick
Brick4: lonbaknode6.domain.net:/lvbackups/brick
Brick5: lonbaknode7.domain.net:/lvbackups/brick
Brick6: lonbaknode8.domain.net:/lvbackups/brick
Brick7: lonbaknode9.domain.net:/lvbackups/brick
Brick8: lonbaknode10.domain.net:/lvbackups/brick
Options Reconfigured:
transport.address-family: inet
nfs.disable: on
cluster.min-free-disk: 1%
performance.cache-size: 8GB
performance.cache-max-file-size: 128MB
diagnostics.brick-log-level: WARNING
diagnostics.brick-sys-log-level: WARNING
client.event-threads: 3
performance.client-io-threads: on
performance.io-thread-count: 24
network.inode-lru-limit: 1048576
performance.parallel-readdir: on
performance.cache-invalidation: on
performance.md-cache-timeout: 600
features.cache-invalidation: on
features.cache-invalidation-timeout: 600
[root@lonbaknode3 ~]#
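For reference, the non-default options listed under "Options Reconfigured" above were applied with plain "gluster volume set" commands; for example, setting and then verifying a single option (client.event-threads shown only as an illustration):

gluster volume set volbackups client.event-threads 3
gluster volume get volbackups client.event-threads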
Mount output shows the following:

lonbaknode3.domain.net:/volbackups on /home/volbackups type fuse.glusterfs (rw,relatime,user_id=0,group_id=0,default_permissions,allow_other,max_read=131072)

If you notice anything missing or otherwise bad in our volume or mount settings above, please let us know. We are still learning glusterfs. I tried searching for recommended performance settings, but it is not always clear which setting is most applicable or beneficial to our workload.

I have just found this post, which looks like the same issue:

https://lists.gluster.org/pipermail/gluster-users/2019-March/035958.html

We have not yet tried the suggestion of "performance.write-behind: off", but we will do so if that is recommended (the command I assume we would use is in the P.S. below).

Could someone knowledgeable advise anything for these issues?

If any more information is needed do let us know.
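P.S. If disabling write-behind is the recommended next step, I assume the command is simply the following, run once from any node; please correct me if there is more to it:

gluster volume set volbackups performance.write-behind off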