[Gluster-users] Duplicated brick processes after restart of glusterd
David Spisla
spisla80 at gmail.com
Mon Jun 17 06:50:27 UTC 2019
Hello Atin,
thank you for the clarification.
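In case it helps anyone else who hits this on v5.x before upgrading to
5.6: the workaround boiled down to something like the following (a rough
sketch for a systemd-based node; killing the brick processes makes the
bricks on that node briefly unavailable):

  gluster volume status        # note the duplicated / mismatched brick PIDs
  pkill glusterfsd             # kill all brick processes on this node
  systemctl restart glusterd   # glusterd spawns a single process per brick again
  gluster volume status        # verify one PID per brick and the correct ports

After that, each brick had exactly one process again and the PID/port
mapping shown by "gluster volume status" was correct.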
On Fri, Jun 14, 2019 at 7:03 PM Atin Mukherjee <amukherj at redhat.com> wrote:
> Please see https://bugzilla.redhat.com/show_bug.cgi?id=1696147 which is
> fixed in 5.6. Although it's a race, I believe you're hitting this. The
> title of the bug refers to the shd + brick multiplexing combination, but
> it's applicable to bricks too.
>
> On Fri, Jun 14, 2019 at 2:07 PM David Spisla <spisla80 at gmail.com> wrote:
>
>> Dear Gluster Community,
>>
>> this morning I made an interesting observation. On my 2-node Gluster v5.5
>> system with 3 Replica 1 volumes (volume1, volume2, test) I had duplicated
>> brick processes for each of the volumes (see the output of ps aux in the
>> attached file duplicate_bricks.txt). Additionally there is an fs-ss volume
>> which I use instead of gluster_shared_storage, but this volume was not
>> affected.
>>
>> After doing some research I found a hint in glusterd.log. It seems that
>> after a restart glusterd couldn't find the pid files of the freshly
>> created brick processes and therefore spawned new ones. One can see in the
>> brick logs that for each of these volumes two brick processes were started
>> one right after the other.
>>
>> Result: two brick processes for each of the volumes volume1, volume2 and
>> test.
>> "gluster volume status" shows that the PID was mapped to the wrong port
>> number for hydmedia and impax.
>>
>> But besides that, the volumes were working correctly. I resolved the issue
>> with a workaround: kill all brick processes and restart glusterd. After
>> that everything is fine.
>>
>> Is this a bug in glusterd? You can find all relevant information
>> attached below.
>>
>> Regards
>> David Spisla