[Bugs] [Bug 1222409] nfs-ganesha: HA failover happens but I/O does not move ahead when volume has two mounts and I/O going on both mounts

bugzilla at redhat.com bugzilla at redhat.com
Tue May 19 04:16:43 UTC 2015


https://bugzilla.redhat.com/show_bug.cgi?id=1222409



--- Comment #2 from Saurabh <saujain at redhat.com> ---
Well, I re-ran this test with a slight modification. Earlier I had two
different directories and was running iozone in each of them separately, from
different mounts, as "iozone -a". The modification is that I now give each
iozone run a different file name as an option.
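The modified run can be sketched roughly as below. The mount-point paths and file names are assumptions for illustration; they are not taken from the report.

```shell
# Hedged sketch: run iozone from two client mounts of the same volume,
# giving each run its own test file via -f so the two workloads no
# longer collide on a shared path.
iozone_cmd() {
    # $1: mount point, $2: per-client test file name.
    # Emits the command line rather than executing it, since iozone
    # and the NFS mounts only exist on the test clients.
    echo "iozone -a -f $1/$2"
}

MNT1=/mnt/ganesha-vip1   # assumed mount via the first virtual IP
MNT2=/mnt/ganesha-vip2   # assumed mount via the second virtual IP

# On the real clients the two runs would be started in parallel
# (append '&' to each and 'wait' for both).
iozone_cmd "$MNT1" iozone.client1
iozone_cmd "$MNT2" iozone.client2
```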

The result with this modification is that iozone finished on both
mount points, but the nfs-ganesha process died on its own on two other nodes,
which is also a cause for worry.

The present pcs status:

[root at nfs1 ~]# pcs status
Cluster name: ganesha-ha-360
Last updated: Tue May 19 09:39:54 2015
Last change: Mon May 18 20:29:02 2015
Stack: cman
Current DC: nfs1 - partition with quorum
Version: 1.1.11-97629de
4 Nodes configured
19 Resources configured


Online: [ nfs1 nfs2 nfs3 nfs4 ]

Full list of resources:

 Clone Set: nfs-mon-clone [nfs-mon]
     Started: [ nfs1 nfs2 nfs3 nfs4 ]
 Clone Set: nfs-grace-clone [nfs-grace]
     Started: [ nfs1 nfs2 nfs3 nfs4 ]
 nfs1-cluster_ip-1    (ocf::heartbeat:IPaddr):    Started nfs2 
 nfs1-trigger_ip-1    (ocf::heartbeat:Dummy):    Started nfs2 
 nfs2-cluster_ip-1    (ocf::heartbeat:IPaddr):    Started nfs2 
 nfs2-trigger_ip-1    (ocf::heartbeat:Dummy):    Started nfs2 
 nfs3-cluster_ip-1    (ocf::heartbeat:IPaddr):    Started nfs2 
 nfs3-trigger_ip-1    (ocf::heartbeat:Dummy):    Started nfs2 
 nfs4-cluster_ip-1    (ocf::heartbeat:IPaddr):    Started nfs2 
 nfs4-trigger_ip-1    (ocf::heartbeat:Dummy):    Started nfs2 
 nfs1-dead_ip-1    (ocf::heartbeat:Dummy):    Started nfs1 
 nfs4-dead_ip-1    (ocf::heartbeat:Dummy):    Started nfs4 
 nfs3-dead_ip-1    (ocf::heartbeat:Dummy):    Started nfs3 


nfs-ganesha process status on the four nodes:
nfs1

stopped manually with "kill -s TERM <pid>"
---
nfs2
root     20620     1  0 May18 ?        00:00:50 /usr/bin/ganesha.nfsd -L
/var/log/ganesha.log -f /etc/ganesha/ganesha.conf -N NIV_EVENT -p
/var/run/ganesha.nfsd.pid
---
nfs3
nfs-ganesha process died on its own
---
nfs4
nfs-ganesha process died on its own
---
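The per-node check above can be sketched as follows. On the real cluster each check would run over ssh against nfs1–nfs4; here pgrep runs locally, purely as an illustration (an assumption, since the verifying machine is not a cluster node).

```shell
# Hedged sketch: report whether the ganesha.nfsd daemon is alive on a
# node. On the cluster, the body would be wrapped in:
#   ssh "$1" pgrep -a ganesha.nfsd
check_node() {
    # $1: node name (nfs1..nfs4 in the pcs output above).
    if pgrep -x ganesha.nfsd >/dev/null 2>&1; then
        echo "$1: ganesha.nfsd running"
    else
        echo "$1: ganesha.nfsd NOT running"
    fi
}

for node in nfs1 nfs2 nfs3 nfs4; do
    check_node "$node"
done
```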

-- 
You are receiving this mail because:
You are the assignee for the bug.

