[Bugs] [Bug 1454865] [Brick Multiplexing] heal info shows the status of the bricks as "Transport endpoint is not connected" though bricks are up

bugzilla at redhat.com bugzilla at redhat.com
Tue May 23 15:27:26 UTC 2017


https://bugzilla.redhat.com/show_bug.cgi?id=1454865

Atin Mukherjee <amukherj at redhat.com> changed:

           What    |Removed                     |Added
----------------------------------------------------------------------------
           Keywords|Reopened                    |
             Blocks|1260779                     |1448833
         Depends On|1448833                     |
           Assignee|bugs at gluster.org            |amukherj at redhat.com



--- Comment #1 from Atin Mukherjee <amukherj at redhat.com> ---
Description of problem:
=======================
The heal info command output shows the status of the bricks as "Transport
endpoint is not connected" even though the bricks are up and running.

Version-Release number of selected component (if applicable):
mainline

How reproducible:
=================
always

Steps to Reproduce:
===================
1) Create a Distributed-Replicate volume and enable brick multiplexing.
2) Start the volume and FUSE mount it on a client.
3) Set cluster.self-heal-daemon to off.
4) Create 10 directories on the mount point.
5) Kill one brick of one of the replica sets in the volume and modify the
permissions of all directories.
6) Start the volume with the force option.
7) Kill the other brick in the same replica set and modify the permissions of
the directories again.
8) Start the volume with the force option, then examine the output of the
`gluster volume heal <vol-name> info` command on the server.

Actual results:
===============
The heal info command output shows the status of the bricks as "Transport
endpoint is not connected" even though the bricks are up and running.


RCA:

When we stop the volume, GlusterD actually sends two terminate requests to the
brick process: one during the brick op phase and another during the commit
phase. Without multiplexing this caused no problem, because the process was
going to stop anyway. With multiplexing, however, each terminate request is
just a detach, so the detach is executed twice. Those two requests can run at
the same time, and if that happens we may delete the graph entry twice, since
we are not taking any lock around the link modification of the graph in
glusterfs_handle_detach.

As a result the linked list can be moved twice, which ends up deleting an
independent brick.
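
To make the race concrete, here is a minimal, self-contained C sketch. The
struct and function names below are hypothetical illustrations, not the actual
graph code in glusterfs_handle_detach: without the lock, two concurrent detach
requests for the same brick can both locate the same link slot before either
updates it, each one then advances the slot once, and the second assignment
splices out whatever unrelated brick follows. Holding the lock across the
lookup and the unlink turns the duplicate request into a harmless no-op.

/*
 * Minimal sketch of the double-detach race described above.  The
 * types and names here are hypothetical illustrations, not the real
 * GlusterFS graph structures used in glusterfs_handle_detach.
 */
#include <pthread.h>
#include <stdio.h>
#include <string.h>

struct child {                 /* one multiplexed brick in the graph */
    char          name[64];
    struct child *next;
};

static pthread_mutex_t graph_lock = PTHREAD_MUTEX_INITIALIZER;

/*
 * Unlink the brick called `name` from the singly linked child list.
 *
 * Without graph_lock, two detach requests for the same brick can both
 * find the same link slot (*slot) before either of them updates it;
 * each request then advances the slot once, so the first assignment
 * removes the intended brick and the second removes whatever brick
 * now occupies that slot, i.e. an independent one.  Holding the lock
 * across the lookup and the unlink makes the second request fail the
 * lookup and return without touching the list.
 */
static int
detach_child(struct child **head, const char *name)
{
    struct child **slot;
    int            found = 0;

    pthread_mutex_lock(&graph_lock);
    for (slot = head; *slot; slot = &(*slot)->next) {
        if (strcmp((*slot)->name, name) == 0) {
            *slot = (*slot)->next;    /* unlink exactly once */
            found = 1;
            break;
        }
    }
    pthread_mutex_unlock(&graph_lock);

    return found ? 0 : -1;
}

int
main(void)
{
    struct child  b3 = { "brick-3", NULL };
    struct child  b2 = { "brick-2", &b3 };
    struct child  b1 = { "brick-1", &b2 };
    struct child *head = &b1;
    struct child *c;

    detach_child(&head, "brick-2");   /* first (legitimate) detach     */
    detach_child(&head, "brick-2");   /* duplicate request is a no-op  */

    for (c = head; c; c = c->next)
        printf("%s\n", c->name);      /* prints brick-1, brick-3 */

    return 0;
}

Either serializing the link modification as above or making sure GlusterD
sends only a single terminate request per brick would close the window; the
sketch only illustrates the former.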


Referenced Bugs:

https://bugzilla.redhat.com/show_bug.cgi?id=1260779
[Bug 1260779] Value of `replica.split-brain-status' attribute of a
directory in metadata split-brain in a dist-rep volume reads that it is not
in split-brain
https://bugzilla.redhat.com/show_bug.cgi?id=1448833
[Bug 1448833] [Brick Multiplexing] heal info shows the status of the bricks
as  "Transport endpoint is not connected" though bricks are up
-- 
You are receiving this mail because:
You are on the CC list for the bug.
You are the assignee for the bug.

