<div dir="ltr">Are you using OpenSUSE by chance, or in a similar situation? <a href="https://github.com/gluster/glusterfs/issues/2648">https://github.com/gluster/glusterfs/issues/2648</a><div><br></div><div>In my case, it was switching from Gluster repo builds to the main update repo builds, which had different options.<br clear="all"><div><div dir="ltr" class="gmail_signature" data-smartmail="gmail_signature"><div dir="ltr"><div><div dir="ltr"><div><div dir="ltr"><div><div dir="ltr"><div><div dir="ltr"><div><div dir="ltr"><br>Sincerely,<br>Artem<br><br>--<br>Founder, <a href="http://www.androidpolice.com" target="_blank">Android Police</a>, <a href="http://www.apkmirror.com/" style="font-size:12.8px" target="_blank">APK Mirror</a><span style="font-size:12.8px">, Illogical Robot LLC</span></div><div dir="ltr"><a href="http://beerpla.net/" target="_blank">beerpla.net</a> | <a href="http://twitter.com/ArtemR" target="_blank">@ArtemR</a><br></div></div></div></div></div></div></div></div></div></div></div></div></div><br></div></div><br><div class="gmail_quote"><div dir="ltr" class="gmail_attr">On Tue, Sep 21, 2021 at 8:00 AM Strahil Nikolov <<a href="mailto:hunter86_bg@yahoo.com">hunter86_bg@yahoo.com</a>> wrote:<br></div><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex">Another option that comes to my mind is to use dnsmasq locally (/etc/resolv.conf pointing to it) as a caching layer and thus you will be able to survive a DNS issue . This is how we run our whole infra as we solely rely on FQDNs.<br>
<br>
Of course, it has its own drawbacks, so it should be considered carefully.<br>
<br>
P.S. If you decide to go that way, don't forget to put 127.0.0.1 as the first resolver and the "upstream" DNS servers in the second and third positions. That prevents DNS issues when dnsmasq is being restarted.<br>
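For example, a minimal setup could look roughly like this (the 10.0.0.x<br>
upstream addresses are placeholders for your real DNS servers):<br>
<br>
# /etc/resolv.conf<br>
nameserver 127.0.0.1<br>
nameserver 10.0.0.53<br>
nameserver 10.0.0.54<br>
<br>
# /etc/dnsmasq.conf - a pure caching forwarder, no local zones<br>
# don't read /etc/resolv.conf, so dnsmasq never loops back to 127.0.0.1<br>
no-resolv<br>
# forward queries to the real upstream resolvers instead<br>
server=10.0.0.53<br>
server=10.0.0.54<br>
# keep a generous cache<br>
cache-size=10000<br>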
<br>
Best Regards,<br>
Strahil Nikolov<br>
<br>
<br>
<br>
<br>
<br>
<br>
On Tuesday, 21 September 2021 at 17:51:25 GMT+3, Erik Jacobson <<a href="mailto:erik.jacobson@hpe.com" target="_blank">erik.jacobson@hpe.com</a>> wrote: <br>
<br>
<br>
<br>
<br>
<br>
There is a discussion in -devel as well. I came at this just thinking<br>
"an update should work" and took a quick look at the release notes for<br>
9.0 and 9.3. Come to think of it, I didn't read the Gluster 8 release<br>
notes, so maybe that's why I missed this. We were at 7.9 and I read the<br>
9.0 and 9.3 notes.<br>
<br>
We can't really disable IPV6 100% here. Well, we could today, but we'd<br>
have to open it up again in a couple of months. Our main head node<br>
already needs to talk to some IPV6-only systems while also talking to<br>
IPV4 ones. These leaders (gluster servers) will need to speak at least<br>
minimal IPV6 very soon: some controllers that these 'leader' nodes need<br>
to talk to are starting to appear, and they are IPV6-only.<br>
<br>
It sounds like what you wrote is true though: if there is any IPV6<br>
around, that function assumes IPV6 is what you want. A couple of private<br>
replies (thank you!!) also mentioned this.<br>
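Here is roughly what I think is happening, boiled down to a standalone<br>
test outside of gluster (family 10 in the log is AF_INET6 on Linux, and<br>
the address below is just one of our peer IPs):<br>
<br>
/* Not gluster code - a sketch of the failing getaddrinfo() pattern. */<br>
#include <stdio.h><br>
#include <string.h><br>
#include <sys/socket.h><br>
#include <netdb.h><br>
<br>
int main(void)<br>
{<br>
    struct addrinfo hints, *res = NULL;<br>
<br>
    memset(&hints, 0, sizeof(hints));<br>
    hints.ai_family   = AF_INET6;   /* family=10, as in the glusterd log */<br>
    hints.ai_socktype = SOCK_STREAM;<br>
<br>
    /* An IPv4 literal with an AF_INET6-only hint fails on glibc with<br>
       "Address family for hostname not supported". */<br>
    int ret = getaddrinfo("172.23.0.16", NULL, &hints, &res);<br>
    if (ret != 0)<br>
        printf("getaddrinfo: %s\n", gai_strerror(ret));<br>
    else<br>
        freeaddrinfo(res);<br>
    return 0;<br>
}<br>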
<br>
Maybe we'll have to make a more formal version of the patch rather than<br>
just force-setting IPV4 (for our internal use) later on.<br>
<br>
Basically, I am in the once-a-year window where I can update gluster and<br>
get complete testing to be sure we don't have regressions, so we'll keep<br>
moving forward with 9.3 with the ipv4 hack in place for now.<br>
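For what it's worth, if the 9.3 build honors it, forcing IPV4 through<br>
configuration might be a cleaner long-term route than carrying a patch.<br>
I have not verified that it covers every code path, but something like<br>
this is what I would try first (VOLNAME is a placeholder):<br>
<br>
# /etc/glusterfs/glusterd.vol, inside the "volume management" block,<br>
# on every gluster server, followed by a glusterd restart:<br>
option transport.address-family inet<br>
<br>
# plus the matching per-volume setting:<br>
gluster volume set VOLNAME transport.address-family inet<br>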
<br>
This helps me get the context. Thank you for this note!<br>
<br>
Erik<br>
<br>
On Tue, Sep 21, 2021 at 02:44:36PM +0000, Strahil Nikolov wrote:<br>
> As gf_resolve_ip6 fails, I guess you can disable IPv6 on the host (if you are<br>
> not using the protocol) and check whether that works around the problem till<br>
> it's solved.<br>
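> On most Linux distributions, something like this does it at runtime<br>
> (persist it via /etc/sysctl.d/ or whatever your OS expects):<br>
> <br>
> # turn IPv6 off on all current and future interfaces<br>
> sysctl -w net.ipv6.conf.all.disable_ipv6=1<br>
> sysctl -w net.ipv6.conf.default.disable_ipv6=1<br>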
> <br>
> For RH you can check <a href="https://access.redhat.com/solutions/8709" rel="noreferrer" target="_blank">https://access.redhat.com/solutions/8709</a> (use RH dev<br>
> subscription to read it, or ping me directly and I will try to summarize it for<br>
> your OS version).<br>
> <br>
> <br>
> Best Regards,<br>
> Strahil Nikolov<br>
> <br>
> <br>
> On Mon, Sep 20, 2021 at 19:35, Erik Jacobson<br>
> <<a href="mailto:erik.jacobson@hpe.com" target="_blank">erik.jacobson@hpe.com</a>> wrote:<br>
> I missed the other important log snip:<br>
> <br>
> The message "E [MSGID: 101075] [common-utils.c:520:gf_resolve_ip6]<br>
> 0-resolver: error in getaddrinfo [{family=10}, {ret=Address family for<br>
> hostname not supported}]" repeated 620 times between [2021-09-20<br>
> 15:49:23.720633 +0000] and [2021-09-20 15:50:41.731542 +0000]<br>
> <br>
> So I will dig into the code some here.<br>
> <br>
> <br>
> On Mon, Sep 20, 2021 at 10:59:30AM -0500, Erik Jacobson wrote:<br>
> > Hello all! I hope you are well.<br>
> ><br>
> > We are starting a new software release cycle and I am trying to find a<br>
> > way to upgrade customers from our build of gluster 7.9 to our build of<br>
> > gluster 9.3<br>
> ><br>
> > When we deploy gluster, we forcibly remove all references to any host<br>
> > names and use only IP addresses. This is because, if for any reason a<br>
> > DNS server is unreachable, even if the peer files have both IPs and DNS<br>
> > names, glusterd becomes unable to reach its peers properly. We can't<br>
> > really rely on /etc/hosts either, because customers take artistic license<br>
> > with their /etc/hosts files and don't realize the problems that can cause.<br>
> ><br>
> > So our deployed peer files look something like this:<br>
> ><br>
> > uuid=46a4b506-029d-4750-acfb-894501a88977<br>
> > state=3<br>
> > hostname1=172.23.0.16<br>
> ><br>
> > That is, with full intention, we avoid host names.<br>
> ><br>
> > When we upgrade to gluster 9.3, we fall over with these errors, gluster<br>
> > becomes partitioned, and the updated gluster servers can't reach<br>
> > anybody:<br>
> ><br>
> > [2021-09-20 15:50:41.731543 +0000] E<br>
> [name.c:265:af_inet_client_get_remote_sockaddr] 0-management: DNS<br>
> resolution failed on host 172.23.0.16<br>
> ><br>
> ><br>
> > As you can see, we have defined everything using IPs on purpose, but in<br>
> > 9.3 it appears this method fails. Are there any suggestions short of<br>
> > putting real host names in the peer files?<br>
> ><br>
> ><br>
> ><br>
> > FYI<br>
> ><br>
> > This supercomputer will be using gluster for part of its system<br>
> > management. It is how we deploy the Image Objects (squashfs images),<br>
> > which are hosted on NFS today and served by gluster leader nodes, and it<br>
> > is also where we store system logs, console logs, and other data.<br>
> ><br>
> > <a href="https://www.olcf.ornl.gov/frontier/" rel="noreferrer" target="_blank">https://www.olcf.ornl.gov/frontier/</a><br>
> ><br>
> ><br>
> > Erik<br>
> <br>
> <br>
________<br>
<br>
<br>
<br>
Community Meeting Calendar:<br>
<br>
Schedule -<br>
Every 2nd and 4th Tuesday at 14:30 IST / 09:00 UTC<br>
Bridge: <a href="https://meet.google.com/cpu-eiue-hvk" rel="noreferrer" target="_blank">https://meet.google.com/cpu-eiue-hvk</a><br>
Gluster-users mailing list<br>
<a href="mailto:Gluster-users@gluster.org" target="_blank">Gluster-users@gluster.org</a><br>
<a href="https://lists.gluster.org/mailman/listinfo/gluster-users" rel="noreferrer" target="_blank">https://lists.gluster.org/mailman/listinfo/gluster-users</a><br>
</blockquote></div>