[Bugs] [Bug 1609799] New: IPv6 setup broken after updating to 4.1

bugzilla at redhat.com bugzilla at redhat.com
Mon Jul 30 13:33:02 UTC 2018


https://bugzilla.redhat.com/show_bug.cgi?id=1609799

            Bug ID: 1609799
           Summary: IPv6 setup broken after updating to 4.1
           Product: GlusterFS
           Version: 4.1
         Component: transport
          Severity: high
          Assignee: bugs at gluster.org
          Reporter: kompastver at gmail.com
                CC: bugs at gluster.org



Description of problem:

After updating existing cluster from 3.10 to 4.1 our setup stopped working.
Our cluster worked in IPv6-only environment.
And as I can see in tcpdump glusterfs 4.1 is trying to get only A (without
AAAA) record for the other cluster member.

Version-Release number of selected component (if applicable):
glusterfs 4.1.1

How reproducible:
Setup glusterfs 4.1 in IPv6-only environment.


Actual results:
Cluster doesn't work because it doesn't see other nodes

Expected results:
Cluster works

Additional info:
In /var/log/glusterfs/glusterd.log:
[2018-07-30 13:20:05.088216] E [name.c:267:af_inet_client_get_remote_sockaddr]
0-management: DNS resolution failed on host srv1.prod                           
[2018-07-30 13:20:05.088216] E [name.c:267:af_inet_client_get_remote_sockaddr]
0-management: DNS resolution failed on host srv1.prod

~ # gluster pool list
UUID                                    Hostname                State
997fa0f6-c8d0-4207-8cef-95f25d1b9634    srv1.prod               Disconnected
c0a17e44-ea23-491f-805e-495cbd09bdf8    localhost               Connected


~ # gluster volume info test-volume

Volume Name: test-volume
Type: Distribute
Volume ID: 0a0be90a-5dd0-4d8d-98bc-0a2d9cfaf9f1
Status: Started
Snapshot Count: 0
Number of Bricks: 2
Transport-type: tcp
Bricks:
Brick1: srv1.prod:/gl
Brick2: srv2.prod:/gl2
Options Reconfigured:
transport.address-family: inet6
nfs.disable: on

-- 
You are receiving this mail because:
You are on the CC list for the bug.
You are the assignee for the bug.


More information about the Bugs mailing list