[Gluster-users] [Errno 107] Transport endpoint is not connected

Olaf Buitelaar olaf.buitelaar at gmail.com
Thu Jan 30 16:47:21 UTC 2020


>
> Hi Strahil,
>

In fact I'm running multiple bricks per host, around 12 per host.
Nonetheless, the feature doesn't really seem to work for me, since it
starts a separate glusterfsd process for each brick anyway; after a reboot
or a restart of glusterd I even see multiple glusterfsd processes for the
same brick. It's probably because of this line, "allowing compatible bricks
to use the same process and port", but I'm not sure about that, as it would
mean my bricks aren't compatible with the multiplexing feature. Also, I'm
not sure whether 12 is considered "many", but I can always test again with
multiplexing off if that's considered better in this case. Completely
stopping and starting the volumes isn't really an option, though, so
ideally restarting the bricks one-by-one would be sufficient.
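
For reference, this is roughly how I check whether multiplexing is
actually in effect (just a sketch; "myvol" is a placeholder volume name):

    # Check whether brick multiplexing is enabled cluster-wide
    gluster volume get all cluster.brick-multiplex

    # Count running brick processes; with multiplexing working, this should
    # be far lower than the total number of bricks on the host
    pgrep -c glusterfsd

    # Compare PIDs/ports per brick; multiplexed bricks share PID and port
    gluster volume status myvol

    # Restart any offline brick processes without stopping the volume
    gluster volume start myvol force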

Since I encounter qcow corruptions every so often, I moved away from having
multiple smaller images in a RAID0: if I have to deal with corruption, it's
easier to keep the setup as simple as possible, and having a RAID0 layer to
rebuild on top only complicates the recovery. Also, live migration doesn't
really work for these VMs anyway, since they have so much RAM and we don't
have enough bandwidth (only 10Gbit) to transfer it fast enough. It's
actually faster to shut the VM down and start it on another node.
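
As a back-of-envelope illustration (the 512 GB figure is only an example,
and this ignores re-copying pages the guest dirties during the transfer):

    # Minimum RAM transfer time over a 10 Gbit/s link
    RAM_GB=512
    echo "$(( RAM_GB * 8 / 10 )) seconds minimum"   # ~410s, i.e. ~7 minutes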

OK, running just one other VM alongside the HostedEngine never caused me
any trouble, except that with the recent restore all VMs on this domain had
to be down, since it wanted to detach the domain. I can't remember whether
that was always the case with prior restores (those were pre-ansible
deploys).
Searching through https://lists.ovirt.org/ for the discussion you're
referring to seems no easy feat... so many threads about the
HostedEngine... if you know a more specific pointer, that would be great.

Thanks Olaf
