[Bugs] [Bug 1427461] New: Bricks take up new ports upon volume restart after add-brick op with brick mux enabled
bugzilla at redhat.com
Tue Feb 28 09:32:23 UTC 2017
https://bugzilla.redhat.com/show_bug.cgi?id=1427461
Bug ID: 1427461
Summary: Bricks take up new ports upon volume restart after
add-brick op with brick mux enabled
Product: GlusterFS
Version: 3.10
Component: glusterd
Keywords: Triaged
Severity: medium
Priority: medium
Assignee: bugs at gluster.org
Reporter: sbairagy at redhat.com
CC: amukherj at redhat.com, bugs at gluster.org,
jdarcy at redhat.com, sbairagy at redhat.com
Depends On: 1421590
+++ This bug was initially created as a clone of Bug #1421590 +++
Description of problem:
Version-Release number of selected component (if applicable):
How reproducible:
Steps to Reproduce and actual results:
The steps below use a single-node cluster, but the issue can be reproduced on a
multi-node cluster too.
1. Enable brick multiplexing
2. Create a volume with one brick
3. Start the volume and check volume status. The brick will be using port 49152
4. Add a brick to the volume and check volume status. Both bricks use 49152
5. Stop the volume and then start it.
6. Check volume status. Both bricks now use 49153.
7. If you restart the volume again and check the status, the bricks will now
use 49154. For every restart, the bricks take up the next port.
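The steps above can be sketched as a CLI session. This is a hypothetical
single-node reproduction; the volume name, brick paths, and hostname are
placeholders, and it assumes glusterd is running and the brick directories
exist:

```shell
# Enable brick multiplexing cluster-wide
gluster volume set all cluster.brick-multiplex on

# Create and start a one-brick volume
gluster volume create testvol $(hostname):/bricks/b1 force
gluster volume start testvol
gluster volume status testvol    # brick listens on 49152

# Add a second brick; with mux on it shares the same process/port
gluster volume add-brick testvol $(hostname):/bricks/b2 force
gluster volume status testvol    # both bricks on 49152

# Restart the volume
gluster volume stop testvol
gluster volume start testvol
gluster volume status testvol    # bug: both bricks now on 49153
```

Each subsequent stop/start cycle moves the bricks to the next port
(49154, 49155, ...), which is the leak this bug tracks.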
Expected results:
Upon restart, the bricks should reuse the ports they were already using
instead of taking up new ones.
--- Additional comment from Atin Mukherjee on 2017-02-13 03:48:38 EST ---
Samikshan - just to double check, is this issue not seen if brick mux is
disabled?
--- Additional comment from Samikshan Bairagya on 2017-02-13 04:20:47 EST ---
(In reply to Atin Mukherjee from comment #1)
> Samikshan - just to double check, is this issue not seen if brick mux is
> disabled?
No. I tested this with brick mux disabled. This issue wasn't seen.
--- Additional comment from Jeff Darcy on 2017-02-13 09:53:07 EST ---
We're likely to encounter many of these "grey area" bugs which are not
addressed by any existing requirements or tests. Since fixing them is already
likely to become a bottleneck, and manual testing is likely to make that even
worse, it would be very helpful if other developers could provide the missing
tests. Any suggestions for how best to do that?
--- Additional comment from Worker Ant on 2017-02-20 08:14:38 EST ---
REVIEW: https://review.gluster.org/16689 (core: Clean up pmap registry up
correctly on volume/brick stop) posted (#1) for review on master by Samikshan
Bairagya (samikshan at gmail.com)
--- Additional comment from Worker Ant on 2017-02-20 09:24:55 EST ---
REVIEW: https://review.gluster.org/16689 (core: Clean up pmap registry up
correctly on volume/brick stop) posted (#2) for review on master by Samikshan
Bairagya (samikshan at gmail.com)
--- Additional comment from Worker Ant on 2017-02-21 09:48:07 EST ---
REVIEW: https://review.gluster.org/16689 (core: Clean up pmap registry up
correctly on volume/brick stop) posted (#3) for review on master by Samikshan
Bairagya (samikshan at gmail.com)
--- Additional comment from Worker Ant on 2017-02-27 17:59:07 EST ---
COMMIT: https://review.gluster.org/16689 committed in master by Jeff Darcy
(jdarcy at redhat.com)
------
commit 1e3538baab7abc29ac329c78182b62558da56d98
Author: Samikshan Bairagya <samikshan at gmail.com>
Date: Mon Feb 20 18:35:01 2017 +0530
core: Clean up pmap registry up correctly on volume/brick stop
This commit changes the following:
1. In glusterfs_handle_terminate, send out individual pmap signout
requests to glusterd for every brick.
2. Add another parameter to glusterfs_mgmt_pmap_signout function to
pass the brickname that needs to be removed from the pmap registry.
3. Make sure pmap_registry_search doesn't break out from the loop
iterating over the list of bricks per port if the first brick entry
corresponding to a port is whitespaced out.
4. Make sure the pmap registry entries are removed for other
daemons like snapd.
Change-Id: I69949874435b02699e5708dab811777ccb297174
BUG: 1421590
Signed-off-by: Samikshan Bairagya <samikshan at gmail.com>
Reviewed-on: https://review.gluster.org/16689
Smoke: Gluster Build System <jenkins at build.gluster.org>
NetBSD-regression: NetBSD Build System <jenkins at build.gluster.org>
CentOS-regression: Gluster Build System <jenkins at build.gluster.org>
Reviewed-by: Gaurav Yadav <gyadav at redhat.com>
Reviewed-by: Jeff Darcy <jdarcy at redhat.com>
Referenced Bugs:
https://bugzilla.redhat.com/show_bug.cgi?id=1421590
[Bug 1421590] Bricks take up new ports upon volume restart after add-brick
op with brick mux enabled