[Bugs] [Bug 1508283] stale brick processes getting created and volume status shows brick as down (pkill glusterfsd glusterfs , glusterd restart)

bugzilla at redhat.com bugzilla at redhat.com
Wed Nov 1 03:58:23 UTC 2017


https://bugzilla.redhat.com/show_bug.cgi?id=1508283



--- Comment #1 from Atin Mukherjee <amukherj at redhat.com> ---
On a 3-node cluster with brick multiplexing enabled and 12 1 X 3 volumes,
restarting all the gluster processes leaves some of the bricks showing as
offline in volume status.

Steps to Reproduce:
1. brick mux enabled, max bricks per process set to 3
2. had about 12 volumes, about 10 were 1x3 and 2 were 2x2 => 17 bricks
per node in total
3. did a pkill glusterfsd, glusterfs and a service glusterd stop
4. did a service glusterd start


Actual results:
==============
found about 11-18 glusterfsd processes running (different tries gave different
numbers), while only 7 were supposed to be created

volume status also shows some of the bricks as offline

however, there is no I/O impact


We would hit this in the upgrade path.

--- Additional comment from Worker Ant on 2017-10-26 05:19:43 EDT ---

REVIEW: https://review.gluster.org/18577 (glusterd: fix brick restart
parallelism) posted (#1) for review on master by Atin Mukherjee
(amukherj at redhat.com)

--- Additional comment from Worker Ant on 2017-10-26 09:12:47 EDT ---

REVIEW: https://review.gluster.org/18577 (glusterd: fix brick restart
parallelism) posted (#2) for review on master by Atin Mukherjee
(amukherj at redhat.com)

--- Additional comment from Worker Ant on 2017-10-30 05:17:39 EDT ---

REVIEW: https://review.gluster.org/18577 (glusterd: fix brick restart
parallelism) posted (#3) for review on master by Atin Mukherjee
(amukherj at redhat.com)

--- Additional comment from Worker Ant on 2017-10-31 23:42:08 EDT ---

COMMIT: https://review.gluster.org/18577 committed in master by  

------------- glusterd: fix brick restart parallelism

glusterd's brick restart logic is not always sequential, as there are
at least three different ways in which bricks are restarted:
1. through friend-sm and glusterd_spawn_daemons ()
2. through friend-sm and handling of the volume quorum action
3. through friend handshaking when there is a mismatch in quorum on
friend import.

In a brick multiplexing setup, glusterd ended up trying to spawn the
same brick process a couple of times because, within a fraction of a
millisecond, two threads hit glusterd_brick_start (), and glusterd had
no way to reject either of them since the brick start criteria were met
in both cases.

As a solution, this is controlled with two different guards: the first
is a boolean called start_triggered, which indicates that a brick start
has been triggered and remains true until the brick dies or is killed;
the second is a mutex lock to ensure that, for a particular brick, we
don't end up entering glusterd_brick_start () more than once at the
same point in time.

Change-Id: I292f1e58d6971e111725e1baea1fe98b890b43e2
BUG: 1506513
Signed-off-by: Atin Mukherjee <amukherj at redhat.com>
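
Below is a minimal sketch of the two guards described in the commit message
above: a start_triggered boolean plus a per-brick mutex around the body of
glusterd_brick_start (). It is illustrative only; the struct and identifiers
(brick_info_t, restart_mutex, spawn_brick_process, brick_start, brick_stopped)
are assumptions made for the example and are not the actual glusterd data
structures or function names.

    #include <pthread.h>
    #include <stdbool.h>

    /* Simplified, hypothetical brick descriptor; field names are
     * illustrative, not the real glusterd brickinfo layout. */
    typedef struct brick_info {
        bool            start_triggered; /* a start has already been triggered */
        pthread_mutex_t restart_mutex;   /* serializes concurrent start attempts */
    } brick_info_t;

    /* Stand-in for the actual spawn logic inside glusterd_brick_start (). */
    static int
    spawn_brick_process (brick_info_t *brick)
    {
        (void) brick;
        return 0;
    }

    /* Guarded entry point: whichever restart path (friend-sm,
     * glusterd_spawn_daemons (), quorum handling) gets here first wins;
     * later callers see start_triggered == true and return without
     * respawning the same brick. */
    int
    brick_start (brick_info_t *brick)
    {
        int ret = 0;

        pthread_mutex_lock (&brick->restart_mutex);
        if (!brick->start_triggered) {
            ret = spawn_brick_process (brick);
            if (ret == 0)
                brick->start_triggered = true;
        }
        pthread_mutex_unlock (&brick->restart_mutex);

        return ret;
    }

    /* When the brick dies or is killed, the flag is cleared so that a
     * future restart is allowed again. */
    void
    brick_stopped (brick_info_t *brick)
    {
        pthread_mutex_lock (&brick->restart_mutex);
        brick->start_triggered = false;
        pthread_mutex_unlock (&brick->restart_mutex);
    }

Note that both guards are per brick, so concurrent starts of different bricks
can still proceed in parallel; only duplicate starts of the same brick are
rejected.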

-- 
You are receiving this mail because:
You are on the CC list for the bug.
You are the assignee for the bug.

