[Bugs] [Bug 1434448] New: Brick Multiplexing: Volume status still shows the PID even after killing the process
bugzilla at redhat.com
Tue Mar 21 14:11:18 UTC 2017
https://bugzilla.redhat.com/show_bug.cgi?id=1434448
Bug ID: 1434448
Summary: Brick Multiplexing: Volume status still shows the PID
even after killing the process
Product: GlusterFS
Version: 3.10
Component: core
Severity: medium
Assignee: bugs at gluster.org
Reporter: nchilaka at redhat.com
CC: bugs at gluster.org
Description of problem:
==================
After enabling brick multiplexing, I killed the brick process (which is shared
by all bricks of all volumes on that node) on one of the nodes.
The process does get killed, and all bricks then report their Online status as
N and their port numbers as N/A.
However, the status output still shows the old PID of the killed process.
This PID should also be shown as N/A.
[root@dhcp35-215 bricks]# gluster v status|grep 215
(before killing the brick process; grep'ing only for bricks on this local node;
columns are TCP Port, RDMA Port, Online, Pid)
Brick 10.70.35.215:/rhs/brick3/cross3 49152 0 Y 13072
Brick 10.70.35.215:/rhs/brick4/cross3 49152 0 Y 13072
Brick 10.70.35.215:/rhs/brick1/ecvol 49152 0 Y 13072
Brick 10.70.35.215:/rhs/brick2/ecvol 49152 0 Y 13072
Brick 10.70.35.215:/rhs/brick3/ecvol 49152 0 Y 13072
Brick 10.70.35.215:/rhs/brick4/ecvol 49152 0 Y 13072
Brick 10.70.35.215:/rhs/brick1/ecx 49152 0 Y 13072
Brick 10.70.35.215:/rhs/brick2/ecx 49152 0 Y 13072
Brick 10.70.35.215:/rhs/brick3/ecx 49152 0 Y 13072
Brick 10.70.35.215:/rhs/brick4/ecx 49152 0 Y 13072
Brick 10.70.35.215:/rhs/brick3/rep2 49152 0 Y 13072
Brick 10.70.35.215:/rhs/brick4/rep2 49152 0 Y 13072
Brick 10.70.35.215:/rhs/brick3/rep3 49152 0 Y 13072
Brick 10.70.35.215:/rhs/brick4/rep3 49152 0 Y 13072
[root@dhcp35-215 bricks]# kill -9 13072
[root@dhcp35-215 bricks]# gluster v status|grep 215
(after killing the brick process)
Brick 10.70.35.215:/rhs/brick3/cross3 N/A N/A N 13072
Brick 10.70.35.215:/rhs/brick4/cross3 N/A N/A N 13072
Brick 10.70.35.215:/rhs/brick1/ecvol N/A N/A N 13072
Brick 10.70.35.215:/rhs/brick2/ecvol N/A N/A N 13072
Brick 10.70.35.215:/rhs/brick3/ecvol N/A N/A N 13072
Brick 10.70.35.215:/rhs/brick4/ecvol N/A N/A N 13072
Brick 10.70.35.215:/rhs/brick1/ecx N/A N/A N 13072
Brick 10.70.35.215:/rhs/brick2/ecx N/A N/A N 13072
Brick 10.70.35.215:/rhs/brick3/ecx N/A N/A N 13072
Brick 10.70.35.215:/rhs/brick4/ecx N/A N/A N 13072
Brick 10.70.35.215:/rhs/brick3/rep2 N/A N/A N 13072
Brick 10.70.35.215:/rhs/brick4/rep2 N/A N/A N 13072
Brick 10.70.35.215:/rhs/brick3/rep3 N/A N/A N 13072
Brick 10.70.35.215:/rhs/brick4/rep3 N/A N/A N 13072
[root@dhcp35-215 bricks]# ps -ef|grep 13072
root 2258 21234 0 19:35 pts/0 00:00:00 grep --color=auto 13072
[root@dhcp35-215 bricks]#
Version-Release number of selected component (if applicable):
============
glusterfs-libs-3.10.0-1.el7.x86_64
glusterfs-api-3.10.0-1.el7.x86_64
glusterfs-rdma-3.10.0-1.el7.x86_64
glusterfs-3.10.0-1.el7.x86_64
python2-gluster-3.10.0-1.el7.x86_64
glusterfs-fuse-3.10.0-1.el7.x86_64
glusterfs-server-3.10.0-1.el7.x86_64
glusterfs-geo-replication-3.10.0-1.el7.x86_64
glusterfs-extra-xlators-3.10.0-1.el7.x86_64
glusterfs-client-xlators-3.10.0-1.el7.x86_64
glusterfs-cli-3.10.0-1.el7.x86_64
How reproducible:
=======
Always
Steps to Reproduce:
1. Enable the brick multiplexing feature.
2. Create one or more volumes and start them.
3. Note that all bricks hosted on the same node share the same PID.
4. Select a node and kill that PID.
5. Issue gluster volume status.
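The steps above as a shell transcript (the node names, volume name, and brick
paths are illustrative, not taken from the setup in this report):

```shell
# 1. Enable brick multiplexing (a cluster-wide option, set on "all")
gluster volume set all cluster.brick-multiplexing on

# 2. Create one or more volumes and start them (paths are examples)
gluster volume create testvol replica 3 \
    node1:/rhs/brick1/testvol node2:/rhs/brick1/testvol node3:/rhs/brick1/testvol
gluster volume start testvol

# 3. All bricks on a node now share a single glusterfsd PID
gluster volume status

# 4. On one node, kill that shared brick process
kill -9 <pid-from-status>

# 5. Check status again: Online shows N, but Pid still shows the dead PID
gluster volume status
```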
Actual results:
====
Volume status still shows the old PID against each brick even though the
process has been killed.
Expected results:
================
The Pid column should show N/A.
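A stale row like the ones above can be flagged mechanically: Online is N while
the Pid column still holds a number. A minimal shell sketch (the check_stale
name and the awk field positions are assumptions based on the grep'd
five-column output shown above):

```shell
# check_stale: read `gluster volume status` output on stdin and print
# bricks whose Online column is N but whose Pid column is still numeric
check_stale() {
    awk '/^Brick/ && $(NF-1) == "N" && $NF ~ /^[0-9]+$/ {
        print "stale PID " $NF " for " $2
    }'
}

# Example on a line captured from this report; in real use:
#   gluster volume status | check_stale
printf 'Brick 10.70.35.215:/rhs/brick3/cross3 N/A N/A N 13072\n' | check_stale
# prints: stale PID 13072 for 10.70.35.215:/rhs/brick3/cross3
```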
--
You are receiving this mail because:
You are on the CC list for the bug.
You are the assignee for the bug.