[Bugs] [Bug 1698131] multiple glusterfsd processes being launched for the same brick, causing transport endpoint not connected

bugzilla at redhat.com bugzilla at redhat.com
Mon Apr 29 03:28:32 UTC 2019


https://bugzilla.redhat.com/show_bug.cgi?id=1698131

Atin Mukherjee <amukherj at redhat.com> changed:

           What    |Removed                     |Added
----------------------------------------------------------------------------
             Status|ASSIGNED                    |CLOSED
         Resolution|---                         |CURRENTRELEASE
        Last Closed|                            |2019-04-29 03:28:32



--- Comment #5 from Atin Mukherjee <amukherj at redhat.com> ---
From glusterfs/glusterd.log-20190407 I can see the following:

[2019-04-02 22:03:45.520037] I [glusterd-utils.c:6301:glusterd_brick_start]
0-management: starting a fresh brick process for brick /v0/bricks/gv0           
[2019-04-02 22:03:45.522039] I [rpc-clnt.c:1000:rpc_clnt_connection_init]
0-management: setting frame-timeout to 600
[2019-04-02 22:03:45.586328] C [MSGID: 106003]
[glusterd-server-quorum.c:348:glusterd_do_volume_quorum_action] 0-management:
Server quorum regained for volume gvOvirt. Starting local bricks.
[2019-04-02 22:03:45.586480] I [glusterd-utils.c:6214:glusterd_brick_start]
0-management: discovered already-running brick /v0/gbOvirt/b0
[2019-04-02 22:03:45.586495] I [MSGID: 106142]
[glusterd-pmap.c:290:pmap_registry_bind] 0-pmap: adding brick /v0/gbOvirt/b0 on
port 49157 
[2019-04-02 22:03:45.586519] I [rpc-clnt.c:1000:rpc_clnt_connection_init]
0-management: setting frame-timeout to 600
[2019-04-02 22:03:45.662116] E [MSGID: 101012]
[common-utils.c:4075:gf_is_service_running] 0-: Unable to read pidfile:
/var/run/gluster/vols/gv0/boneyard-san-v0-bricks-gv0.pid
[2019-04-02 22:03:45.662164] I [glusterd-utils.c:6301:glusterd_brick_start]
0-management: starting a fresh brick process for brick /v0/bricks/gv0

This indicates that we attempted to start two processes for the same brick.
However, this was with glusterfs-5.5, which doesn't have the fix mentioned in
comment 2.
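For illustration only (this is not the glusterd source, just a minimal C
sketch under my own assumptions): the sequence in the log is consistent with a
pidfile-based liveness check that treats an unreadable pidfile as "brick not
running". If two code paths (for example a quorum-regained handler and a
regular brick-start path) run the check before the first brick process has
written its pidfile, both conclude the brick is down and both spawn a
glusterfsd for the same brick. The names brick_is_running and spawn_brick
below are hypothetical; only the pidfile path is taken from the log above.

/* Hypothetical sketch -- not glusterd code. Shows how a pidfile read
 * failure can lead to a duplicate brick process being spawned. */
#include <signal.h>
#include <stdio.h>
#include <unistd.h>

#define PIDFILE "/var/run/gluster/vols/gv0/boneyard-san-v0-bricks-gv0.pid"

/* Returns 1 if the pid recorded in the pidfile is alive, 0 otherwise.
 * If the pidfile cannot be read (e.g. the brick process has not written
 * it yet), this reports "not running". */
static int brick_is_running(const char *pidfile)
{
    FILE *fp = fopen(pidfile, "r");
    long pid = 0;

    if (!fp)
        return 0;                       /* unreadable pidfile == "not running" */
    if (fscanf(fp, "%ld", &pid) != 1)
        pid = 0;
    fclose(fp);

    if (pid <= 0)
        return 0;
    return kill((pid_t)pid, 0) == 0;    /* probe whether the pid is alive */
}

/* Placeholder for the real brick start logic (fork()/exec() of glusterfsd). */
static void spawn_brick(const char *brick_path)
{
    printf("starting a fresh brick process for brick %s\n", brick_path);
}

int main(void)
{
    /* Two callers check back to back before the first brick has written
     * its pidfile: both see "not running" and both start a brick, which is
     * the duplicate-process symptom seen in glusterd.log. */
    if (!brick_is_running(PIDFILE))
        spawn_brick("/v0/bricks/gv0");

    if (!brick_is_running(PIDFILE))
        spawn_brick("/v0/bricks/gv0");

    return 0;
}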

After this cluster was upgraded to 6.0, I don't see such an event. So this is
already fixed and I am closing the bug.

-- 
You are receiving this mail because:
You are on the CC list for the bug.
