[Gluster-users] Duplicated brick processes after restart of glusterd
amukherj at redhat.com
Fri Jun 14 17:03:05 UTC 2019
Please see https://bugzilla.redhat.com/show_bug.cgi?id=1696147, which is
fixed in 5.6. Although it's a race, I believe you're hitting this. The
title of the bug refers to the shd + brick multiplexing combination, but
it's applicable to regular bricks too.
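If you are unsure whether your installed release predates the fix, a quick version comparison can tell you. A minimal sketch, assuming a POSIX shell with GNU `sort -V`; on a live node the version string would come from `glusterfsd --version`, here it is passed in as an argument:

```shell
# Return success (0) when the given glusterfs version is older than 5.6,
# the release that carries the fix for bug 1696147.
needs_upgrade() {
  oldest=$(printf '%s\n5.6\n' "$1" | sort -V | head -n1)
  [ "$oldest" = "$1" ] && [ "$1" != "5.6" ]
}

needs_upgrade 5.5 && echo "affected: upgrade to 5.6 or later"
```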
On Fri, Jun 14, 2019 at 2:07 PM David Spisla <spisla80 at gmail.com> wrote:
> Dear Gluster Community,
> This morning I had an interesting observation. On my 2-node Gluster v5.5
> system with 3 Replica 1 volumes (volume1, volume2, test) I had duplicated
> brick processes (see the output of ps aux in the attached file
> duplicate_bricks.txt) for each of the volumes. Additionally, there is a
> fs-ss volume which I use instead of gluster_shared_storage, but this
> volume was not affected.
> After doing some research I found a hint in glusterd.log. It seems that
> after a restart glusterd couldn't find the pid files for the freshly
> created brick processes and created new brick processes. One can see in
> the brick logs that for each of the volumes two brick processes were
> created one right after the other.
> Result: two brick processes for each of the volumes volume1 and volume2,
> and "gluster vol status" shows that the pid number was mapped to the wrong
> port number for hydmedia and impax.
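The duplication described above can be spotted mechanically by grouping the glusterfsd command lines by their `--brick-name` argument. A minimal sketch; the brick paths below are hypothetical, and on a live node the input would come from something like `ps -eo pid,args | grep '[g]lusterfsd'`:

```shell
# Print every brick path served by more than one glusterfsd process;
# healthy output is empty.
find_dup_bricks() {
  awk '{ for (i = 1; i <= NF; i++)
           if ($i == "--brick-name") count[$(i+1)]++ }
       END { for (p in count) if (count[p] > 1) print p }'
}

# Example with hypothetical ps output: /bricks/volume1 is duplicated.
printf '%s\n' \
  '1234 /usr/sbin/glusterfsd --brick-name /bricks/volume1' \
  '5678 /usr/sbin/glusterfsd --brick-name /bricks/volume1' \
  '9012 /usr/sbin/glusterfsd --brick-name /bricks/volume2' |
  find_dup_bricks
```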
> But aside from that, the volumes were working correctly. I resolved the
> issue with a workaround: kill all brick processes and restart glusterd.
> After that everything is fine.
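The workaround above can be sketched as the following dry run (assumption: systemd manages glusterd). The `run` wrapper only prints each command; drop it to execute for real, one node at a time so the other replica keeps serving:

```shell
# Dry-run sketch of the workaround: kill all brick processes, then let a
# restarted glusterd re-spawn exactly one process per brick.
run() { echo "+ $*"; }

run pkill glusterfsd            # kill every brick process on this node
run systemctl restart glusterd  # glusterd re-spawns the bricks
run gluster volume status       # verify a single pid/port pair per brick
```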
> Is this a bug in glusterd? You can find all relevant information attached.
> David Spisla
> Gluster-users mailing list
> Gluster-users at gluster.org