<div dir="ltr"><br><div class="gmail_extra"><br><div class="gmail_quote">On Fri, Aug 31, 2018 at 11:11 AM, Richard Neuboeck <span dir="ltr">&lt;<a href="mailto:hawk@tbi.univie.ac.at" target="_blank">hawk@tbi.univie.ac.at</a>&gt;</span> wrote:<br><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex"><span class="">On 08/31/2018 03:50 AM, Raghavendra Gowdappa wrote:<br>
> > +Mohit. +Milind
> >
> > @Mohit/Milind,
> >
> > Can you check logs and see whether you can find anything relevant?
>
> From glances at the system logs nothing out of the ordinary
> occurred. However I'll start another rsync and take a closer look.
> It will take a few days.
>
> >
> > On Thu, Aug 30, 2018 at 7:04 PM, Richard Neuboeck
> > <hawk@tbi.univie.ac.at> wrote:
> >
> >     Hi,
> >
> >     I'm attaching a shortened version since the whole is about 5.8GB of
> >     the client mount log. It includes the initial mount messages and the
> >     last two minutes of log entries.
> >
> >     It ends very anticlimactic without an obvious error. Is there
> >     anything specific I should be looking for?
> >
> >
> > Normally I look at the logs around the disconnect msgs to find out the
> > reason. But as you said, sometimes one can see just the disconnect msgs
> > without any reason. That normally points to a cause in the network
> > rather than a Glusterfs-initiated disconnect.
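
For reference, assuming the default log location and that the fuse mount log
is named after the mount point, something like this pulls the disconnect
messages out of the client log together with surrounding context:

grep -B 10 -A 10 "disconnect" /var/log/glusterfs/<mountpoint>.log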
>
> The rsync source is serving our homes currently so there are NFS
> connections 24/7. There don't seem to be any network related
> interruptions

Can you set diagnostics.client-log-level and diagnostics.brick-log-level to
TRACE and check the logs on both ends of the connection - client and brick?
To reduce the log size, I would suggest rotating the existing logs and
starting with fresh ones just before you begin, so that only the relevant
entries are captured. Also, can you take an strace of the client and the
brick process using:

strace -o <outputfile> -ff -v -p <pid>

Please attach both logs and the strace output. Let's trace through what the
syscalls on the socket return and then decide whether to inspect a tcpdump
or not. If you don't want to repeat the test again, please capture a tcpdump
too (on both ends of the connection) and send them to us.
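
For reference, the log levels can be raised with volume set (using your
volume name "home"; TRACE is very verbose, so revert them to INFO once the
run is done), and "gluster volume status home" lists the brick pids for
strace:

gluster volume set home diagnostics.client-log-level TRACE
gluster volume set home diagnostics.brick-log-level TRACE

For the capture, something along these lines should work - the interface
name and the client IP are placeholders to adjust to your setup:

tcpdump -i <interface> -s 0 -w /var/tmp/gluster-brick.pcap host <client-ip>
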
> - a co-worker would be here faster than I could check
> the logs if the connection to home would be broken ;-)
> Due to this problem the three gluster machines are reduced to testing
> only, so there is nothing else running.
<span class=""><br>
<br>
&gt; <br>
&gt;     Cheers<br>
&gt;     Richard<br>
&gt; <br>
&gt;     On 08/30/2018 02:40 PM, Raghavendra Gowdappa wrote:<br>
&gt;     &gt; Normally client logs will give a clue on why the disconnections are<br>
&gt;     &gt; happening (ping-timeout, wrong port etc). Can you look into client<br>
&gt;     &gt; logs to figure out what&#39;s happening? If you can&#39;t find anything, can<br>
&gt;     &gt; you send across client logs?<br>
&gt;     &gt; <br>
&gt;     &gt; On Wed, Aug 29, 2018 at 6:11 PM, Richard Neuboeck<br>
&gt;     &gt; &lt;<a href="mailto:hawk@tbi.univie.ac.at">hawk@tbi.univie.ac.at</a> &lt;mailto:<a href="mailto:hawk@tbi.univie.ac.at">hawk@tbi.univie.ac.at</a>&gt;<br>
</span>&gt;     &lt;mailto:<a href="mailto:hawk@tbi.univie.ac.at">hawk@tbi.univie.ac.at</a> &lt;mailto:<a href="mailto:hawk@tbi.univie.ac.at">hawk@tbi.univie.ac.at</a>&gt;<wbr>&gt;&gt;<br>
<div><div class="h5">&gt;     wrote:<br>
&gt;     &gt;<br>
&gt;     &gt;     Hi Gluster Community,<br>
&gt;     &gt;<br>
> >     >     I have problems with a glusterfs 'Transport endpoint not connected'
> >     >     connection abort during file transfers that I can replicate (all the
> >     >     time now) but not pinpoint as to why this is happening.
> >     >
> >     >     The volume is set up in replica 3 mode and accessed with the fuse
> >     >     gluster client. Both client and server are running CentOS and the
> >     >     supplied 3.12.11 version of gluster.
> >     >
> >     >     The connection abort happens at different times during rsync but
> >     >     occurs every time I try to sync all our files (1.1TB) to the empty
> >     >     volume.
> >     >
> >     >     Client and server side I don't find errors in the gluster log files.
> >     >     rsync logs the obvious transfer problem. The only log that shows
> >     >     anything related is the server brick log which states that the
> >     >     connection is shutting down:
> >     >
> >     >     [2018-08-18 22:40:35.502510] I [MSGID: 115036]
> >     >     [server.c:527:server_rpc_notify] 0-home-server: disconnecting
> >     >     connection from
> >     >     brax-110405-2018/08/16-08:36:28:575972-home-client-0-0-0
> >     >     [2018-08-18 22:40:35.502620] W
> >     >     [inodelk.c:499:pl_inodelk_log_cleanup] 0-home-server: releasing lock
> >     >     on eaeb0398-fefd-486d-84a7-f13744d1cf10 held by
> >     >     {client=0x7f83ec0b3ce0, pid=110423 lk-owner=d0fd5ffb427f0000}
> >     >     [2018-08-18 22:40:35.502692] W
> >     >     [entrylk.c:864:pl_entrylk_log_cleanup] 0-home-server: releasing lock
> >     >     on faa93f7b-6c46-4251-b2b2-abcd2f2613e1 held by
> >     >     {client=0x7f83ec0b3ce0, pid=110423 lk-owner=703dd4cc407f0000}
> >     >     [2018-08-18 22:40:35.502719] W
> >     >     [entrylk.c:864:pl_entrylk_log_cleanup] 0-home-server: releasing lock
> >     >     on faa93f7b-6c46-4251-b2b2-abcd2f2613e1 held by
> >     >     {client=0x7f83ec0b3ce0, pid=110423 lk-owner=703dd4cc407f0000}
> >     >     [2018-08-18 22:40:35.505950] I [MSGID: 101055]
> >     >     [client_t.c:443:gf_client_unref] 0-home-server: Shutting down
> >     >     connection brax-110405-2018/08/16-08:36:28:575972-home-client-0-0-0
> >     >
> >     >     Since I'm running another replica 3 setup for oVirt for a long time
> >     >     now which is completely stable I thought I made a mistake setting
> >     >     different options at first. However even when I reset those options
> >     >     I'm able to reproduce the connection problem.
> >     >
> >     >     The unoptimized volume setup looks like this:
> >     >
> >     >     Volume Name: home
> >     >     Type: Replicate
> >     >     Volume ID: c92fa4cc-4a26-41ff-8c70-1dd07f733ac8
> >     >     Status: Started
> >     >     Snapshot Count: 0
> >     >     Number of Bricks: 1 x 3 = 3
> >     >     Transport-type: tcp
> >     >     Bricks:
> >     >     Brick1: sphere-four:/srv/gluster_home/brick
> >     >     Brick2: sphere-five:/srv/gluster_home/brick
> >     >     Brick3: sphere-six:/srv/gluster_home/brick
> >     >     Options Reconfigured:
> >     >     nfs.disable: on
> >     >     transport.address-family: inet
> >     >     cluster.quorum-type: auto
> >     >     cluster.server-quorum-type: server
> >     >     cluster.server-quorum-ratio: 50%
> >     >
> >     >
> >     >     The following additional options were used before:
> >     >
> >     >     performance.cache-size: 5GB
> >     >     client.event-threads: 4
> >     >     server.event-threads: 4
> >     >     cluster.lookup-optimize: on
> >     >     features.cache-invalidation: on
> >     >     performance.stat-prefetch: on
> >     >     performance.cache-invalidation: on
> >     >     network.inode-lru-limit: 50000
> >     >     features.cache-invalidation-timeout: 600
> >     >     performance.md-cache-timeout: 600
> >     >     performance.parallel-readdir: on
> >     >
> >     >
> >     >     In this case the gluster servers and also the client is using a
> >     >     bonded network device running in adaptive load balancing mode.
> >     >
> >     >     I've tried using the debug option for the client mount. But except
> >     >     for a ~0.5TB log file I didn't get information that seems
> >     >     helpful to me.
> >     >
> >     >     Transferring just a couple of GB works without problems.
> >     >
> >     >     It may very well be that I'm already blind to the obvious but after
> >     >     many long running tests I can't find the crux in the setup.
> >     >
> >     >     Does anyone have an idea as how to approach this problem in a way
> >     >     that sheds some useful information?
> >     >
> >     >     Any help is highly appreciated!
> >     >     Cheers
> >     >     Richard
> >     >
> >     >     --
> >     >     /dev/null
> >     >
> >     >
> >     >
> >     >
> >     >     _______________________________________________
> >     >     Gluster-users mailing list
> >     >     Gluster-users@gluster.org
> >     >     https://lists.gluster.org/mailman/listinfo/gluster-users
> >     >
> >     >
> >
> >
> >     --
> >     /dev/null
> >
> >
>
>
> --
> /dev/null
>