[Bugs] [Bug 1233333] New: glusterfs-resource-agents - volume - doesn't stop all processes

bugzilla at redhat.com bugzilla at redhat.com
Thu Jun 18 16:19:02 UTC 2015


https://bugzilla.redhat.com/show_bug.cgi?id=1233333

            Bug ID: 1233333
           Summary: glusterfs-resource-agents - volume - doesn't stop all
                    processes
           Product: GlusterFS
           Version: 3.7.1
         Component: unclassified
          Severity: medium
          Assignee: bugs at gluster.org
          Reporter: jeromep3000 at gmail.com
                CC: bugs at gluster.org, gluster-bugs at redhat.com



Description of problem:
With Pacemaker/Corosync/pcs/glusterfs
When a resource using the RA 'ocf:glusterfs:volume' is enabled, three
processes are created. But when the resource is disabled, only one of those
processes is killed.

Version-Release number of selected component (if applicable):
glusterfs-resource-agents-3.7.1-1.el7.noarch.rpm

How reproducible:
Every time

Steps to Reproduce:
1. Create the resource (all prerequisites are met, i.e. the cluster is
operational, the FS has been tested without Pacemaker, and the glusterd
resource is created):
pcs resource create gluster_volume ocf:glusterfs:volume volname='gv0' op
monitor interval=60s

2. Enable
pcs resource enable gluster_volume

3. Verify processes
# ps -edf|grep -i glusterfs
root     24939     1  0 17:55 ?        00:00:00 /usr/sbin/glusterfsd -s
centos71-2 --volfile-id gv0.centos71-2.export-sdb1-brick -p
/var/lib/glusterd/vols/gv0/run/centos71-2-export-sdb1-brick.pid -S
/var/run/gluster/33545c44468ba9f9288b2ebb4c6a1bba.socket --brick-name
/export/sdb1/brick -l /var/log/glusterfs/bricks/export-sdb1-brick.log
--xlator-option *-posix.glusterd-uuid=5bfcd8ce-2935-4ea5-b73f-8a217b869fa2
--brick-port 49152 --xlator-option gv0-server.listen-port=49152
root     24958     1  0 17:55 ?        00:00:00 /usr/sbin/glusterfs -s
localhost --volfile-id gluster/nfs -p /var/lib/glusterd/nfs/run/nfs.pid -l
/var/log/glusterfs/nfs.log -S
/var/run/gluster/912c297362be7dc78c27d4b7703d516e.socket
root     24965     1  0 17:55 ?        00:00:00 /usr/sbin/glusterfs -s
localhost --volfile-id gluster/glustershd -p
/var/lib/glusterd/glustershd/run/glustershd.pid -l
/var/log/glusterfs/glustershd.log -S
/var/run/gluster/7c2c0bb36dfa97e1b4561df9663f5591.socket --xlator-option
*replicate*.node-uuid=5bfcd8ce-2935-4ea5-b73f-8a217b869fa2

4. Disable
pcs resource disable gluster_volume

5. Verify processes
# ps -edf|grep -i glusterfs

Actual results:
root     24958     1  0 17:55 ?        00:00:00 /usr/sbin/glusterfs -s
localhost --volfile-id gluster/nfs -p /var/lib/glusterd/nfs/run/nfs.pid -l
/var/log/glusterfs/nfs.log -S
/var/run/gluster/912c297362be7dc78c27d4b7703d516e.socket
root     24965     1  0 17:55 ?        00:00:00 /usr/sbin/glusterfs -s
localhost --volfile-id gluster/glustershd -p
/var/lib/glusterd/glustershd/run/glustershd.pid -l
/var/log/glusterfs/glustershd.log -S
/var/run/gluster/7c2c0bb36dfa97e1b4561df9663f5591.socket --xlator-option
*replicate*.node-uuid=5bfcd8ce-2935-4ea5-b73f-8a217b869fa2

Expected results:
No glusterfs processes

Additional info:
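A possible manual workaround until the stop action is fixed (a sketch only,
not part of the resource agent, assuming the command lines shown in the ps
output above) is to kill the leftover daemons by matching their --volfile-id:

```shell
#!/bin/sh
# Hedged workaround sketch: after `pcs resource disable gluster_volume`,
# kill the surviving NFS server and self-heal daemon by matching their
# --volfile-id on the full command line (`pkill -f`). `|| true` tolerates
# the daemons already being gone, so the script is safe to re-run.
for volfile in gluster/nfs gluster/glustershd; do
    pkill -f "volfile-id $volfile" || true
done
```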
