[Bugs] [Bug 1790208] When the network of the second server is disconnected, applications on the first hang for duration of ping.timeout

bugzilla at redhat.com bugzilla at redhat.com
Mon Jan 13 11:31:28 UTC 2020


https://bugzilla.redhat.com/show_bug.cgi?id=1790208

Sanju <srakonde at redhat.com> changed:

           What    |Removed                     |Added
----------------------------------------------------------------------------
             Status|NEW                         |CLOSED
         Resolution|---                         |NOTABUG
        Last Closed|                            |2020-01-13 11:31:28



--- Comment #3 from Sanju <srakonde at redhat.com> ---
Hi,

This is the expected behaviour.

A comment below from Raghavendra G explains it:
"
The maximum latency of a single fop from the application/kernel during a single
ungraceful shutdown (hard reboot, ethernet cable pull, hard power down, etc.) of a
hyperconverged node (one which hosts both a brick and a client of the same volume)
depends on the following things:

1. Time required for the client to fail the operations pending on the rebooted
brick. These operations can include lock and non-lock operations like
(f)inodelk, write, lookup, (f)stat, etc. Since this requires the client to
identify the unresponsive/dead brick, it is bounded by (2 * network.ping-timeout).

2. Time required for the client to acquire a lock on a healthy brick (as clients
can be doing transactions in afr). Note that the lock request could conflict with
a lock already granted to the dead client on the rebooted node. So, the lock
request from the healthy client to a healthy brick cannot proceed until the stale
lock from the dead client is cleaned up. This means the healthy brick needs to
identify that the client is dead. A brick can identify that a client connected to
it is dead using a combination of the tcp-user-timeout and keepalive tunings on
the brick/server. There are quite a few scenarios in this case:
   2a. The healthy brick never writes a response to the dead client. In this case
the tcp-keepalive tunings on the server ((server.keepalive-time +
server.keepalive-interval * server.keepalive-count) seconds after the last
communication with the dead client) bound the maximum time required for the brick
to clean up stale locks from the dead client. server.tcp-user-timeout has no role
in this case.
   2b. The healthy brick writes a response (perhaps to one of the requests the
dead client sent before it died) to the socket. Note that writing a response to
the socket doesn't necessarily mean the dead client read it.
         2b.i The healthy brick tries to write a response after the keepalive
timer has expired since its last communication with the dead client (in reality
it can't, as keepalive timer expiry would close the connection). In this case,
since the keepalive timer has already closed the connection, the maximum time for
the brick to identify the dead client is bounded by the server.keepalive tunings.
         2b.ii The healthy brick writes a response to the socket immediately
after the last communication with the dead client (i.e., the last acked
communication with the dead client). In this case the healthy brick terminates
the connection to the dead client within server.tcp-user-timeout seconds of the
last successful communication with the dead client.
         2b.iii The healthy brick writes a response before the keepalive timer
has expired since its last communication with the dead client (the case explained
by comment #140), i.e., the response is written after keepalive is triggered but
before it expires. In this case, the tcp-keepalive timer is stopped and the
tcp-user-timeout timer is started. So the healthy brick can identify the dead
client at a maximum of (server.tcp-user-timeout + server.keepalive) seconds after
the last communication with the dead client.

Note that 1 and 2 can happen serially based on different transactions done by
afr.

So the worst-case/maximum latency of a fop from the application is bounded by (2 *
network.ping-timeout + server.tcp-user-timeout + (server.keepalive-time +
server.keepalive-interval * server.keepalive-count)).
"

Since this is expected behaviour, closing it as not a bug.
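
For reference, the server-side detection windows described in 2a/2b above
correspond to the standard Linux TCP keepalive and user-timeout socket options.
Below is a minimal sketch of what such tunings look like at the socket level
(illustrative only, assuming Linux; this is not GlusterFS source code, and the
helper name and numeric values are made up for the example):

    import socket

    def apply_keepalive_tunings(sock: socket.socket,
                                keepalive_time: int = 20,
                                keepalive_interval: int = 2,
                                keepalive_count: int = 9,
                                user_timeout_ms: int = 42_000) -> None:
        """Apply TCP keepalive and user-timeout options analogous to the
        server.keepalive-* and server.tcp-user-timeout tunings (Linux only;
        the defaults here are assumed example values)."""
        sock.setsockopt(socket.SOL_SOCKET, socket.SO_KEEPALIVE, 1)
        # Idle seconds before the first keepalive probe is sent.
        sock.setsockopt(socket.IPPROTO_TCP, socket.TCP_KEEPIDLE, keepalive_time)
        # Seconds between successive probes.
        sock.setsockopt(socket.IPPROTO_TCP, socket.TCP_KEEPINTVL,
                        keepalive_interval)
        # Unanswered probes before the connection is declared dead (scenario 2a).
        sock.setsockopt(socket.IPPROTO_TCP, socket.TCP_KEEPCNT, keepalive_count)
        # Max time (ms) transmitted data may remain unacknowledged before the
        # kernel closes the connection (scenarios 2b.ii/2b.iii).
        sock.setsockopt(socket.IPPROTO_TCP, socket.TCP_USER_TIMEOUT,
                        user_timeout_ms)

    s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    apply_keepalive_tunings(s)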

-- 
You are receiving this mail because:
You are on the CC list for the bug.

