[Bugs] [Bug 1363722] New: nfs client I/O stuck post IP failover

bugzilla at redhat.com bugzilla at redhat.com
Wed Aug 3 12:25:03 UTC 2016


https://bugzilla.redhat.com/show_bug.cgi?id=1363722

            Bug ID: 1363722
           Summary: nfs client I/O stuck post IP failover
           Product: GlusterFS
           Version: 3.8.2
         Component: common-ha
          Keywords: Triaged, ZStream
          Severity: medium
          Assignee: bugs at gluster.org
          Reporter: skoduri at redhat.com
                CC: akhakhar at redhat.com, bugs at gluster.org,
                    jthottan at redhat.com, kkeithle at redhat.com,
                    mzywusko at redhat.com, ndevos at redhat.com,
                    nlevinki at redhat.com, rnalakka at redhat.com,
                    sankarshan at redhat.com, skoduri at redhat.com,
                    storage-qa-internal at redhat.com
        Depends On: 1303037, 1354439, 1302545
            Blocks: 1278336, 1330218



+++ This bug was initially created as a clone of Bug #1354439 +++

+++ This bug was initially created as a clone of Bug #1278336 +++

Description of problem:

While testing nfs-ganesha HA IP failover/failback cases, we have noticed that
the client I/O sometimes gets stuck.

Version-Release number of selected component (if applicable):
RHGS 3.1

How reproducible:
Not always


Actual results:

Client I/O gets stuck

Expected results:

Client I/O should resume post IP failover.

Additional info:
I am attaching a packet trace taken from the client side. I see many TCP
retransmissions post failover; this needs to be debugged.
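
For reference, such a trace can be captured on the client with something like
the following (the interface name and output file are illustrative):

tcpdump -i eth0 -s 0 host VIP and port 2049 -w client_failover.pcap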

--- Additional comment from Red Hat Bugzilla Rules Engine on 2015-11-05
05:03:57 EST ---

This bug is automatically being proposed for the current z-stream release of
Red Hat Gluster Storage 3 by setting the release flag 'rhgs-3.1.z' to '?'.

If this bug should be proposed for a different release, please manually change
the proposed release flag.

--- Additional comment from Soumya Koduri on 2015-11-05 05:12:58 EST ---

The I/O does resume post failback, though. I shall attach packet traces for
both cases.

--- Additional comment from Soumya Koduri on 2015-11-05 05:13:23 EST ---

Setup details: 
dhcp3-238.gsslab.pnq.redhat.com, dhcp3-234.gsslab.pnq.redhat.com (root/root123)

--- Additional comment from Soumya Koduri on 2015-11-06 02:43:56 EST ---

I have root-caused the problem and can now consistently reproduce this issue.
The problem occurs during a second consecutive failover of the VIP to the same
node -

say
* server1 has VIP1, server2 has VIP2
* the client is connected to VIP1/server1
* server1 goes down; VIP1 moves to server2
* the client is now connected to VIP1/server2
* server1 comes back online; VIP1 moves back to server1
* now suppose server1 goes down again; VIP1 is failed over back to server2.

This is when the client I/O gets stuck. The issue is that the TCP connection
is now being reset by server2 during the VIP failback. I am still finding out
how/where to fix this; I shall update the bug.

--- Additional comment from Soumya Koduri on 2015-11-06 02:47:52 EST ---

The workaround for this issue is to restart the nfs-ganesha server on
server2; that resets the TCP connections.
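
On a systemd-based setup this amounts to, e.g. (assuming the standard unit
name):

systemctl restart nfs-ganesha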

--- Additional comment from Soumya Koduri on 2015-11-06 04:42:47 EST ---

A correction to my comment #4 above: this issue seems to happen after a couple
of failovers and failbacks to the same node. A couple of times I have seen the
node which has taken over the VIP send PSH,ACK or SYN,ACK packets when the
client tries to re-establish the TCP connection, but after a couple of
failover scenarios that no longer happens.

--- Additional comment from Soumya Koduri on 2015-11-10 05:34:34 EST ---

I have posted questions to a few technical mailing lists to understand this
TCP behaviour. Meanwhile, as suggested by Niels, I tried out the pacemaker
portblock resource agent, which tickles a few invalid TCP packets from the
server; this forces the client to reset its connection and thus allows I/O to
continue.

We now need to check how to plug this new resource agent into the existing
scripts.

Meanwhile, as a workaround, whenever the client seems to be stuck post
failover, create the below resource on the server machine hosting the VIP -

pcs resource create ganesha_portblock ocf:heartbeat:portblock protocol=tcp \
    portno=2049 action=unblock ip=VIP reset_local_on_unblock_stop=on \
    tickle_dir=/run/gluster/shared_storage/tickle_dir/

Once the I/O resumes, delete it -

pcs resource delete ganesha_portblock
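
The state of the resource can be verified in between with:

pcs status resources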

--- Additional comment from Soumya Koduri on 2015-11-19 00:30:35 EST ---

We are checking internally with networking experts on this peculiar TCP
behaviour.

mail thread:
http://post-office.corp.redhat.com/archives/tech-list/2015-November/msg00173.html

As mentioned in https://bugzilla.redhat.com/show_bug.cgi?id=369991#c16, this
seems to be a well-known issue with repetitive failovers of NFS servers in a
cluster. CTDB uses TCP tickle ACKs as a workaround to overcome this issue. As
mentioned in the above note, we shall try to use pacemaker portblock to
achieve similar behaviour.
Note: this resource agent is not yet packaged in RHEL downstream, so it may
take some time to package it separately. We shall discuss the same with the
Cluster-suite team and update.

--- Additional comment from Niels de Vos on 2016-01-27 06:44:56 EST ---

Soumya, please open a bug against the resource-agents package to get portblock
included.

--- Additional comment from Soumya Koduri on 2016-01-28 01:42:40 EST ---

Done. I have opened bug 1302545.

--- Additional comment from Jiffin on 2016-03-07 04:22:49 EST ---

The fix for https://bugzilla.redhat.com/show_bug.cgi?id=1302545 has been merged.

--- Additional comment from Vijay Bellur on 2016-07-11 06:28:01 EDT ---

REVIEW: http://review.gluster.org/14878 (commn-HA: Add portblock resource
agents to tickle packets post failover(/back)) posted (#3) for review on master
by soumya k (skoduri at redhat.com)

--- Additional comment from Vijay Bellur on 2016-07-12 03:14:54 EDT ---

REVIEW: http://review.gluster.org/14878 (commn-HA: Add portblock resource
agents to tickle packets post failover(/back)) posted (#4) for review on master
by soumya k (skoduri at redhat.com)

--- Additional comment from Vijay Bellur on 2016-07-18 05:51:47 EDT ---

REVIEW: http://review.gluster.org/14878 (commn-HA: Add portblock RA to tickle
packets post failover(/back)) posted (#5) for review on master by soumya k
(skoduri at redhat.com)

--- Additional comment from Vijay Bellur on 2016-07-31 05:48:48 EDT ---

REVIEW: http://review.gluster.org/14878 (commn-HA: Add portblock RA to tickle
packets post failover(/back)) posted (#6) for review on master by soumya k
(skoduri at redhat.com)

--- Additional comment from Soumya Koduri on 2016-08-02 02:21 EDT ---

Script to continuously generate I/O on a v3 mount point.

--- Additional comment from Soumya Koduri on 2016-08-02 02:21 EDT ---

Script to do failovers and failback in a loop (for about 100 iterations)
between two servers.

--- Additional comment from Soumya Koduri on 2016-08-02 02:23 EDT ---

Test results with fix.

--- Additional comment from Soumya Koduri on 2016-08-02 02:53 EDT ---

Test results without fix applied.

--- Additional comment from Soumya Koduri on 2016-08-02 02:54:20 EDT ---

To verify the portblock RA introduced, the below tests were performed.

A 2-node nfs-ganesha HA setup is used.

On the client machine:
The attached 'continuous.sh' script is run; it continuously generates I/O on a
v3 mount (since grace does not affect v3 clients) of VIPA, which is configured
on one of the servers.
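
The attached script itself is not inlined in this mail; a minimal sketch of
the idea (mount point and sizes are illustrative) is:

#!/bin/bash
# Sketch of continuous.sh - print a timestamp and write some data on each
# pass, so that a gap between timestamps indicates stuck I/O.
MNT=/mnt/ganesha            # illustrative NFSv3 mount of VIPA
while true; do
    date
    dd if=/dev/zero of=$MNT/testfile bs=1M count=10 conv=fsync 2>/dev/null
done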

portblock_test.sh -
This script triggers failover & failback between the two nodes for about 100
iterations. After the VIP has successfully failed over/failed back, there is a
sleep of 10 sec so that the I/O can continue for some time.
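
Again only a rough sketch (node name, VIP and the exact failover trigger are
illustrative; the attached script is the authoritative version):

#!/bin/bash
# Sketch of portblock_test.sh - move the VIP away from and back to one node
# in a loop, logging each step with a timestamp.
NODE=server1
VIP=10.70.43.7
for i in $(seq 1 100); do
    echo "Starting Failover from $VIP $(date) - Loop$i"
    pcs cluster standby $NODE      # force the VIP off this node
    sleep 10                       # give the client I/O time to continue
    echo "Starting Failback to $VIP $(date) - Loop$i"
    pcs cluster unstandby $NODE    # bring the node back; the VIP fails back
    sleep 10
done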

That means that if there is no I/O generated between two iterations, the I/O
has gotten stuck.

As can be seen from the attached test results (test_results_withoutfix), the
I/O got stuck between a few iterations without the fix:

Tue Aug 2 11:28:11 IST 2016
43.7 Tue Aug  2 11:27:27 IST 2016 - Loop4
Starting Failover from 10.70.43.7 Tue Aug  2 11:27:38 IST 2016 - Loop5
Completed Failover from 10.70.43.7 Tue Aug  2 11:28:00 IST 2016 - Loop5
Starting Failback to 10.70.43.7 Tue Aug  2 11:28:10 IST 2016 - Loop6
Tue Aug 2 11:28:11 IST 2016


But that was not the case with the fix applied (test_results_withfix).

--- Additional comment from Vijay Bellur on 2016-08-03 07:43:42 EDT ---

REVIEW: http://review.gluster.org/14878 (commn-HA: Add portblock RA to tickle
packets post failover(/back)) posted (#7) for review on master by soumya k
(skoduri at redhat.com)

--- Additional comment from Vijay Bellur on 2016-08-03 08:02:14 EDT ---

COMMIT: http://review.gluster.org/14878 committed in master by Niels de Vos
(ndevos at redhat.com) 
------
commit ea6a1ebe931e49464eb17205b94f5c87765cf696
Author: Soumya Koduri <skoduri at redhat.com>
Date:   Fri Jul 8 12:30:25 2016 +0530

    commn-HA: Add portblock RA to tickle packets post failover(/back)

    Portblock resource-agents are used to send tickle ACKs so as to
    reset the outstanding TCP connections. This can be used to reduce
    the time taken by the NFS clients to reconnect post IP
    failover/failback.

    Two new resource agents (nfs_block and nfs_unblock) of type
    ocf:portblock with action block & unblock are created for each
    Virtual-IP (cluster_ip-1). These resource agents along with cluster_ip-1
    RA are grouped in the order of block->IP->unblock and also the entire
    group maintains same colocation rules so that they reside on the same
    node at any given point of time.

    The contents of tickle_dir are of the following format -
    * A file is created for each of the VIPs used in the ganesha cluster.
    * Each of those files contain entries about clients connected
      as below:
    SourceIP:port_num       DestinationIP:port_num

    Hence when one server fails over, connections of the clients connected
    to other VIPs are not affected.

    Note: During testing I observed that tickle ACKs are sent during
    failback but not during failover, though I/O successfully
    resumed post failover.

    Also added a dependency on portblock RA for glusterfs-ganesha package
    as it may not be available (as part of resource-agents package) in
    all the distributions.

    Change-Id: Icad6169449535f210d9abe302c2a6971a0a96d6f
    BUG: 1354439
    Signed-off-by: Soumya Koduri <skoduri at redhat.com>
    Reviewed-on: http://review.gluster.org/14878
    NetBSD-regression: NetBSD Build System <jenkins at build.gluster.org>
    CentOS-regression: Gluster Build System <jenkins at build.gluster.org>
    Reviewed-by: Kaleb KEITHLEY <kkeithle at redhat.com>
    Smoke: Gluster Build System <jenkins at build.gluster.org>
    Reviewed-by: Niels de Vos <ndevos at redhat.com>
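
For illustration, the block->IP->unblock grouping described above corresponds
roughly to pcs commands of the following shape (simplified; the exact
invocations are in the patch at http://review.gluster.org/14878):

pcs resource create nfs_block ocf:heartbeat:portblock protocol=tcp \
    portno=2049 action=block ip=VIP --group vip-group
pcs resource create cluster_ip-1 ocf:heartbeat:IPaddr ip=VIP \
    --group vip-group --after nfs_block
pcs resource create nfs_unblock ocf:heartbeat:portblock protocol=tcp \
    portno=2049 action=unblock ip=VIP reset_local_on_unblock_stop=on \
    tickle_dir=/run/gluster/shared_storage/tickle_dir/ \
    --group vip-group --after cluster_ip-1

A tickle file for a VIP would then contain lines such as (addresses
illustrative):

10.70.44.100:776        10.70.43.7:2049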


Referenced Bugs:

https://bugzilla.redhat.com/show_bug.cgi?id=1278336
[Bug 1278336] nfs client I/O stuck post IP failover
https://bugzilla.redhat.com/show_bug.cgi?id=1302545
[Bug 1302545] Package pacemaker portblock resource-agent
https://bugzilla.redhat.com/show_bug.cgi?id=1303037
[Bug 1303037] portblock resource-agent
https://bugzilla.redhat.com/show_bug.cgi?id=1330218
[Bug 1330218] Shutting down I/O serving node, takes 15-20 mins for IO to
resume from failed over node.
https://bugzilla.redhat.com/show_bug.cgi?id=1354439
[Bug 1354439] nfs client I/O stuck post IP failover