[Bugs] [Bug 1472417] New: No clear method to multiplex all bricks to one process (glusterfsd) with cluster.max-bricks-per-process option
bugzilla at redhat.com
Tue Jul 18 16:35:41 UTC 2017
https://bugzilla.redhat.com/show_bug.cgi?id=1472417
Bug ID: 1472417
Summary: No clear method to multiplex all bricks to one
process(glusterfsd) with
cluster.max-bricks-per-process option
Product: GlusterFS
Version: mainline
Component: glusterd
Severity: high
Assignee: bugs at gluster.org
Reporter: sbairagy at redhat.com
CC: amukherj at redhat.com, bmekala at redhat.com,
bugs at gluster.org, nchilaka at redhat.com,
rhs-bugs at redhat.com, storage-qa-internal at redhat.com,
vbellur at redhat.com
Depends On: 1472289
+++ This bug was initially created as a clone of Bug #1472289 +++
Description of problem:
========================
With the new option "cluster.max-bricks-per-process" we can now set a limit on
the number of bricks multiplexed into one glusterfsd pid.
Once I set the value to some n (where n > 1), every n bricks multiplex to one
pid, the next n to the next pid, and so on.
However, if at a later point, for whatever reason (one reason being that I
have scaled down my number of volumes), I want all the bricks to be muxed to
only one glusterfsd, there is no straightforward way to do it.
Following are the problems:
1) By default the value is 1; however, in effect it means max (i.e. all bricks
run on only one glusterfsd).
2) Once set to some value n where n > 1, we cannot later revert to a setting
where all bricks mux to only one glusterfsd, because:
a) setting cluster.max-bricks-per-process=1 results in all bricks spawning
new glusterfsd processes (breaking brick mux);
b) setting it to zero has the same effect as 1.
Version-Release number of selected component (if applicable):
====================
3.8.4-34
Steps to Reproduce:
1. Create 10 volumes; do not start them.
2. Enable brick multiplexing.
3. Start all 10 volumes.
4. All bricks attach to the same glusterfsd process.
5. Now set cluster.max-bricks-per-process to 5.
6. Create another 10 volumes and start them.
7. The bricks of the first 5 new volumes attach to a new glusterfsd and the
remaining 5 to the next one.
8. Now, if I want all bricks to run on the same glusterfsd again, I cannot
revert, as setting cluster.max-bricks-per-process to 1 or 0 breaks the brick
mux feature.
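The reproduction steps above can be sketched as a shell session. This is a sketch only: it assumes a running GlusterFS 3.8.x cluster with brick multiplexing support, and the volume names (vol1..vol20), the host name server1, and the brick paths are illustrative, not taken from the report.

```shell
# 1. Create 10 volumes without starting them (names/paths illustrative)
for i in $(seq 1 10); do
    gluster volume create vol$i server1:/bricks/vol$i force
done

# 2. Enable brick multiplexing cluster-wide
gluster volume set all cluster.brick-multiplex on

# 3-4. Start all 10 volumes; all bricks attach to a single glusterfsd
for i in $(seq 1 10); do gluster volume start vol$i; done

# 5. Cap the number of bricks per glusterfsd at 5
gluster volume set all cluster.max-bricks-per-process 5

# 6-7. Create and start 10 more volumes; their bricks now spread
# across new glusterfsd processes, 5 bricks per process
for i in $(seq 11 20); do
    gluster volume create vol$i server1:/bricks/vol$i force
    gluster volume start vol$i
done

# 8. Attempting to revert: both of these break multiplexing
# (new glusterfsd per brick) instead of packing all bricks
# back into one process
gluster volume set all cluster.max-bricks-per-process 1
gluster volume set all cluster.max-bricks-per-process 0
```

Since these commands require a live cluster, they are meant as a reading aid for the steps, not a standalone script.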
Actual results:
If I want all bricks to run on the same glusterfsd, I cannot revert, as
setting cluster.max-bricks-per-process to 1 or 0 breaks the brick mux feature.
Expected results:
==================
Define an integer value (0 or 1) that makes all bricks run on the same
glusterfsd.
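The expected revert could then look like the following sketch. The option names are real, but treating 0 as "no per-process limit" is the behaviour requested here (and proposed in the comments), not the behaviour of the affected build, and the volume name is illustrative.

```shell
# Proposed: 0 (the default) means "no per-process limit", so with
# brick multiplexing enabled all bricks pack into one glusterfsd
gluster volume set all cluster.max-bricks-per-process 0

# Bricks are re-attached only after the volumes are restarted
gluster volume stop vol1
gluster volume start vol1
```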
Additional info:
--- Additional comment from Atin Mukherjee on 2017-07-18 07:56:10 EDT ---
We have a plan to make the default 0 instead of 1, which would ensure that
once we fall back to the default with brick mux enabled, all the bricks get
attached to a single process. However, we would need to ensure that volumes
are restarted for this to take effect.
@Samikshan - can you please send an upstream patch?
--- Additional comment from nchilaka on 2017-07-18 08:00:24 EDT ---
(In reply to Atin Mukherjee from comment #1)
> We have a plan to make the default 0 instead of 1, which would ensure that
> once we fall back to the default with brick mux enabled, all the bricks get
> attached to a single process. However, we would need to ensure that volumes
> are restarted for this to take effect.
>
> @Samikshan - can you please send an upstream patch?
Completely fine with the restart requirement.
One more question: if we map 0 to the default brick mux behaviour, what about
1? It has no importance; it rather breaks the brick mux feature.
It may be better to have both 0 and 1 map to the default brick mux behaviour.
--- Additional comment from Atin Mukherjee on 2017-07-18 09:33:39 EDT ---
Having both 0 and 1 as the default value doesn't make any sense to me. At
best, we could have 0 as the default and have the CLI disallow configuring
this option with the value 1. Does that make sense?
--- Additional comment from nchilaka on 2017-07-18 09:38:26 EDT ---
(In reply to Atin Mukherjee from comment #3)
> Having both 0 and 1 as the default value doesn't make any sense to me. At
> best, we could have 0 as the default and have the CLI disallow configuring
> this option with the value 1. Does that make sense?
makes sense
Referenced Bugs:
https://bugzilla.redhat.com/show_bug.cgi?id=1472289
[Bug 1472289] No clear method to multiplex all bricks to one
process(glusterfsd) with cluster.max-bricks-per-process option
--
You are receiving this mail because:
You are on the CC list for the bug.
You are the assignee for the bug.
More information about the Bugs
mailing list