<div dir="ltr"><div><div>There is a patch on this [1]. Reviews from wider audience will be helpful, before we merge the patch.<br><br><a href="https://review.gluster.org/#/c/16731/">https://review.gluster.org/#/c/16731/</a><br><br></div>regards,<br></div>Raghavendra<br><br></div><div class="gmail_extra"><br><div class="gmail_quote">On Wed, Jan 11, 2017 at 4:19 PM, Milind Changire <span dir="ltr">&lt;<a href="mailto:mchangir@redhat.com" target="_blank">mchangir@redhat.com</a>&gt;</span> wrote:<br><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex">+gluster-users<span class="HOEnZb"><font color="#888888"><br>

Milind

On 01/11/2017 03:21 PM, Milind Changire wrote:
The management connection uses network.ping-timeout to time out and
retry the connection to a different server if the existing connection
end-point is unreachable from the client.
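(For example, with the long-standing default network.ping-timeout of
42 seconds, a client whose pings to a server go unanswered for 42
seconds declares that connection dead and retries elsewhere; the value
in effect can be checked with 'gluster volume get <VOLNAME>
network.ping-timeout'.)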
Due to the nature of the parameters involved in the TCP/IP network
stack, it becomes imperative to control the other network connections
using the socket level tunables below (see the sketch after this
list):
* SO_KEEPALIVE
* TCP_KEEPIDLE
* TCP_KEEPINTVL
* TCP_KEEPCNT
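For concreteness, here is a minimal sketch of applying these four
tunables to a connected TCP socket with plain setsockopt(2) on Linux.
This is not the actual gluster transport code, and the values are
illustrative rather than suggested defaults:

    #include <stdio.h>
    #include <sys/socket.h>
    #include <netinet/in.h>
    #include <netinet/tcp.h>

    /* Enable keepalive probing on a connected TCP socket.
     * Illustrative values: first probe after 20s of idleness, then a
     * probe every 2s, giving up after 9 unanswered probes. */
    static int set_keepalive(int sock)
    {
        int on = 1, idle = 20, intvl = 2, cnt = 9;

        if (setsockopt(sock, SOL_SOCKET, SO_KEEPALIVE, &on, sizeof(on)) ||
            setsockopt(sock, IPPROTO_TCP, TCP_KEEPIDLE, &idle, sizeof(idle)) ||
            setsockopt(sock, IPPROTO_TCP, TCP_KEEPINTVL, &intvl, sizeof(intvl)) ||
            setsockopt(sock, IPPROTO_TCP, TCP_KEEPCNT, &cnt, sizeof(cnt))) {
            perror("setsockopt");
            return -1;
        }
        return 0;
    }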

So, I'd like to decouple network.ping-timeout and
transport.tcp-user-timeout, since they are tunables for different
aspects of the gluster application: network.ping-timeout monitors
brick/node level responsiveness, while transport.tcp-user-timeout is
one of the attributes used to manage the state of the socket.
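(For reference, transport.tcp-user-timeout corresponds to the
TCP_USER_TIMEOUT socket option described in tcp(7); a minimal sketch,
assuming a kernel and headers recent enough to provide it:)

    #include <stdio.h>
    #include <sys/socket.h>
    #include <netinet/in.h>
    #include <netinet/tcp.h>

    /* TCP_USER_TIMEOUT: maximum time, in milliseconds, that
     * transmitted data may remain unacknowledged before the kernel
     * forcibly closes the connection. */
    static int set_user_timeout(int sock, unsigned int timeout_ms)
    {
        if (setsockopt(sock, IPPROTO_TCP, TCP_USER_TIMEOUT,
                       &timeout_ms, sizeof(timeout_ms)) != 0) {
            perror("setsockopt(TCP_USER_TIMEOUT)");
            return -1;
        }
        return 0;
    }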

That said, we could do away with network.ping-timeout altogether and
stick with transport.tcp-user-timeout for all types of sockets; it
becomes increasingly difficult to work with different tunables across
gluster.

I believe there have not been many cases in which the community has
found the existing defaults for socket timeouts unusable. So we could
stick with the system defaults and add the following socket level
tunables, making them open for configuration (a usage example follows
the list):
* client.tcp-user-timeout
     which sets transport.tcp-user-timeout
* client.keepalive-time
     which sets transport.socket.keepalive-time
* client.keepalive-interval
     which sets transport.socket.keepalive-interval
* client.keepalive-count
     which sets transport.socket.keepalive-count
* server.tcp-user-timeout
     which sets transport.tcp-user-timeout
* server.keepalive-time
     which sets transport.socket.keepalive-time
* server.keepalive-interval
     which sets transport.socket.keepalive-interval
* server.keepalive-count
     which sets transport.socket.keepalive-count
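If these land as proposed, setting them would presumably follow the
usual volume-set flow, along the lines of (option names as proposed
above, values purely illustrative):

    gluster volume set <VOLNAME> client.tcp-user-timeout 30
    gluster volume set <VOLNAME> server.keepalive-time 20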

However, these settings would affect all sockets in gluster. In cases
where aggressive timeouts are needed, the community can find gluster
options which have a 1:1 mapping with the socket level options
documented in tcp(7).
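As a rough worked example with illustrative numbers: keepalive-time=20s,
keepalive-interval=2s and keepalive-count=9 would tear down an idle
connection to a silently dead peer after about 20 + 2 * 9 = 38 seconds.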

Please share your thoughts about the risks or effectiveness of the
decoupling.

_______________________________________________
Gluster-devel mailing list
Gluster-devel@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-devel

-- 
Raghavendra G