[Bugs] [Bug 1698131] multiple glusterfsd processes being launched for the same brick, causing transport endpoint not connected

bugzilla at redhat.com
Sat Apr 13 01:58:38 UTC 2019


https://bugzilla.redhat.com/show_bug.cgi?id=1698131

Darrell <budic at onholyground.com> changed:

           What    |Removed                     |Added
----------------------------------------------------------------------------
              Flags|needinfo?(budic at onholyground.com) |



--- Comment #3 from Darrell <budic at onholyground.com> ---
While things were in the state I described above, peer status was normal, as it
is now:
[root at boneyard telsin]# gluster peer status
Number of Peers: 2

Hostname: ossuary-san
Uuid: 0ecbf953-681b-448f-9746-d1c1fe7a0978
State: Peer in Cluster (Connected)
Other names:
10.50.3.12

Hostname: necropolis-san
Uuid: 5d082bda-bb00-48d4-9f51-ea0995066c6f
State: Peer in Cluster (Connected)
Other names:
10.50.3.10

There's a 'gluster vol status gvOvirt' in the original ticket from the time
there were multiple glusterfsd processes running. Everything is normal at the
moment, so I can't capture another one while the unusual behavior is
happening. Right now it looks like:

[root at boneyard telsin]# gluster vol status
Status of volume: gv0
Gluster process                             TCP Port  RDMA Port  Online  Pid
------------------------------------------------------------------------------
Brick necropolis-san:/v0/bricks/gv0         49154     0          Y       10425
Brick boneyard-san:/v0/bricks/gv0           49152     0          Y       8504 
Brick ossuary-san:/v0/bricks/gv0            49152     0          Y       13563
Self-heal Daemon on localhost               N/A       N/A        Y       22864
Self-heal Daemon on ossuary-san             N/A       N/A        Y       5815 
Self-heal Daemon on necropolis-san          N/A       N/A        Y       13859

Task Status of Volume gv0
------------------------------------------------------------------------------
There are no active volume tasks

Status of volume: gvOvirt
Gluster process                             TCP Port  RDMA Port  Online  Pid
------------------------------------------------------------------------------
Brick boneyard-san:/v0/gbOvirt/b0           49153     0          Y       9108 
Brick necropolis-san:/v0/gbOvirt/b0         49155     0          Y       10510
Brick ossuary-san:/v0/gbOvirt/b0            49153     0          Y       13577
Self-heal Daemon on localhost               N/A       N/A        Y       22864
Self-heal Daemon on ossuary-san             N/A       N/A        Y       5815 
Self-heal Daemon on necropolis-san          N/A       N/A        Y       13859

Task Status of Volume gvOvirt
------------------------------------------------------------------------------
There are no active volume tasks
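
If the duplicate processes come back, here's the quick check I'll run to
confirm it (a rough sketch; it assumes glusterd starts each brick with a
--brick-name argument on the glusterfsd command line):

# count glusterfsd processes per brick path; any count above 1 means
# more than one brick process was launched for the same brick
ps -C glusterfsd -o args= \
  | grep -oE -- '--brick-name [^ ]+' \
  | sort | uniq -c | awk '$1 > 1'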

Also of note, the problem appears to have corrupted my oVirt Hosted Engine VM.

Full logs are attached; I hope they help. Sorry about some of the large files:
for some reason this system wasn't rotating them properly until I did some
cleanup.
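
For anyone hitting the same log growth, a minimal logrotate rule along these
lines should keep things in check (a sketch, not what ships with the packages;
it assumes the default /var/log/glusterfs layout, and copytruncate is chosen
so the daemons don't need a HUP):

# drop-in logrotate rule for gluster daemon and brick logs
cat > /etc/logrotate.d/glusterfs-local <<'EOF'
/var/log/glusterfs/*.log /var/log/glusterfs/bricks/*.log {
    weekly
    rotate 4
    compress
    missingok
    notifempty
    copytruncate
}
EOF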

I can take this cluster to 6.1 as soon as it appears in testing, or leave it a
bit longer and try restarting some volumes or rebooting to see if I can
recreate the issue, if that would help.
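
If I do try to recreate it, the attempt would look roughly like this (a
sketch: the volume name is taken from the status output above, and as far as
I know --mode=script just suppresses the CLI's confirmation prompt):

# bounce one volume, then watch for stray brick processes
gluster --mode=script volume stop gvOvirt
gluster --mode=script volume start gvOvirt
# or restart glusterd on one node instead:
# systemctl restart glusterd
# then watch for a while; a count above 1 would mean duplicates
watch -n 5 "ps -C glusterfsd -o args= | grep -c 'brick-name /v0/gbOvirt/b0'"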
