[Bugs] [Bug 1233333] glusterfs-resource-agents - volume - doesn't stop all processes

bugzilla at redhat.com
Tue Jun 30 13:10:49 UTC 2015


https://bugzilla.redhat.com/show_bug.cgi?id=1233333

JohnJerome <jeromep3000 at gmail.com> changed:

           What    |Removed                     |Added
----------------------------------------------------------------------------
              Flags|needinfo?(jeromep3000 at gmail.com)|



--- Comment #2 from JohnJerome <jeromep3000 at gmail.com> ---
Here is the test, with 'gluster volume status' output at each step:
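
For reference, the three Pacemaker resources used below could have been
defined along these lines (a hypothetical sketch, not taken from this
report; the device, mount point and RA parameters are assumptions):

# Hypothetical setup; resource names match the ones used in this test
pcs resource create gluster_d ocf:glusterfs:glusterd
pcs resource create gluster_volume ocf:glusterfs:volume volname=gv0
pcs resource create gluster_fs ocf:heartbeat:Filesystem \
    device="localhost:/gv0" directory="/mnt/gv0" fstype="glusterfs"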


1) The Pacemaker resources FS, Volume and Daemon are started:

[root@centos71-2 ~]# gluster volume status
Status of volume: gv0
Gluster process                             TCP Port  RDMA Port  Online  Pid
------------------------------------------------------------------------------
Brick centos71-2:/export/sdb1/brick         49152     0          Y       5633
Brick centos71-3:/export/sdb1/brick         49152     0          Y       49180
NFS Server on localhost                     2049      0          Y       5618
Self-heal Daemon on localhost               N/A       N/A        Y       5626
NFS Server on centos71-3                    2049      0          Y       49169
Self-heal Daemon on centos71-3              N/A       N/A        Y       49179

Task Status of Volume gv0
------------------------------------------------------------------------------
There are no active volume tasks



2) We stop the FS resource:

[root@centos71-2 ~]# pcs resource disable gluster_fs

[root@centos71-2 ~]# gluster volume status
Status of volume: gv0
Gluster process                             TCP Port  RDMA Port  Online  Pid
------------------------------------------------------------------------------
Brick centos71-2:/export/sdb1/brick         49152     0          Y       5633
Brick centos71-3:/export/sdb1/brick         49152     0          Y       49180
NFS Server on localhost                     2049      0          Y       5618
Self-heal Daemon on localhost               N/A       N/A        Y       5626
NFS Server on centos71-3                    2049      0          Y       49169
Self-heal Daemon on centos71-3              N/A       N/A        Y       49179

Task Status of Volume gv0
------------------------------------------------------------------------------
There are no active volume tasks



3) We stop the volume resource. The bricks go offline, but the NFS server and
self-heal daemon keep running:

[root@centos71-2 ~]# pcs resource disable gluster_volume

[root@centos71-2 ~]# gluster volume status
Status of volume: gv0
Gluster process                             TCP Port  RDMA Port  Online  Pid
------------------------------------------------------------------------------
Brick centos71-2:/export/sdb1/brick         N/A       N/A        N       N/A
Brick centos71-3:/export/sdb1/brick         N/A       N/A        N       N/A
NFS Server on localhost                     2049      0          Y       5618
Self-heal Daemon on localhost               N/A       N/A        Y       5626
NFS Server on centos71-3                    2049      0          Y       49169
Self-heal Daemon on centos71-3              N/A       N/A        Y       49179

Task Status of Volume gv0
------------------------------------------------------------------------------
There are no active volume tasks



4) We stop the daemon resource:

[root@centos71-2 ~]# pcs resource disable gluster_d

[root@centos71-2 ~]# gluster volume status
Connection failed. Please check if gluster daemon is operational.



5) We check which gluster processes are left:

[root@centos71-2 ~]# ps -edf|grep -i glusterfs
root      5618     1  0 14:47 ?        00:00:00 /usr/sbin/glusterfs -s
localhost --volfile-id gluster/nfs -p /var/lib/glusterd/nfs/run/nfs.pid -l
/var/log/glusterfs/nfs.log -S
/var/run/gluster/912c297362be7dc78c27d4b7703d516e.socket
root      5626     1  0 14:47 ?        00:00:00 /usr/sbin/glusterfs -s
localhost --volfile-id gluster/glustershd -p
/var/lib/glusterd/glustershd/run/glustershd.pid -l
/var/log/glusterfs/glustershd.log -S
/var/run/gluster/7c2c0bb36dfa97e1b4561df9663f5591.socket --xlator-option
*replicate*.node-uuid=5bfcd8ce-2935-4ea5-b73f-8a217b869fa2
root      6212  1458  0 14:50 pts/0    00:00:00 grep --color=auto -i glusterfs
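
As a manual workaround, the leftover daemons can be killed via their
pidfiles (a sketch; the paths are the ones shown in the ps output above):

# Manual cleanup of the processes left behind after the resources are disabled
kill "$(cat /var/lib/glusterd/nfs/run/nfs.pid)"
kill "$(cat /var/lib/glusterd/glustershd/run/glustershd.pid)"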





I think the stop action that 'pcs resource disable' triggers in the RA
'ocf:glusterfs:volume' should behave the same way as the command
'gluster volume stop gv0', and not just kill the main brick process.
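
A minimal sketch of what the stop action could run instead (assuming the
volume name is available to the RA as OCF_RESKEY_volname; '--mode=script'
suppresses the interactive confirmation prompt):

volume_stop() {
    # Stop the whole volume, bricks and auxiliary daemons included,
    # instead of killing only the local brick process
    gluster --mode=script volume stop "$OCF_RESKEY_volname"
    # then re-check 'gluster volume status' and return $OCF_SUCCESS
}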
