[Bugs] [Bug 1739320] New: The result (hostname) of getnameinfo for all bricks (IPv6 addresses) is the same, while the hosts are not.

bugzilla at redhat.com bugzilla at redhat.com
Fri Aug 9 03:29:40 UTC 2019


https://bugzilla.redhat.com/show_bug.cgi?id=1739320

            Bug ID: 1739320
           Summary: The result (hostname) of getnameinfo for all bricks
                    (IPv6 addresses) is the same, while the hosts are not.
           Product: GlusterFS
           Version: 6
          Hardware: All
                OS: Linux
            Status: NEW
         Component: glusterd
          Severity: urgent
          Assignee: bugs at gluster.org
          Reporter: amgad.saleh at nokia.com
                CC: bugs at gluster.org
  Target Milestone: ---
    Classification: Community



Description of problem:

Creating a volume over IPv6 fails with an error claiming that multiple bricks
are on the same hostname, even though they are not.

The result (hostname) of getnameinfo is the same for all bricks (IPv6
addresses), while the hosts are actually different.

Version-Release number of selected component (if applicable):
6.3-1 and 6.4-1

How reproducible:


Steps to Reproduce:
1. Create a volume with replica 3 using the command:
gluster --mode=script volume create vol_b6b4f444031cb86c969f3fc744f2e999
replica 3  2001:db8:1234::10:/root/test/a 2001:db8:1234::5:/root/test/a 
2001:db8:1234::14:/root/test/a
2. The command fails with an error that all bricks are on the same hostname.
3. Check the addresses with nslookup, which shows the opposite: the IPs
belong to different hostnames.

Actual results:
===============
# gluster --mode=script volume create vol_b6b4f444031cb86c969f3fc744f2e999
replica 3  2001:db8:1234::10:/root/test/a 2001:db8:1234::5:/root/test/a 
2001:db8:1234::14:/root/test/a
volume create: vol_b6b4f444031cb86c969f3fc744f2e999: failed: Multiple bricks of
a replicate volume are present on the same server. This setup is not optimal.
Bricks should be on different nodes to have best fault tolerant configuration.
Use 'force' at the end of the command if you want to override this behavior.

# nslookup 2001:db8:1234::10
Server:         2001:db8:1234::5
Address:        2001:db8:1234::5#53

0.1.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.4.3.2.1.8.b.d.0.1.0.0.2.ip6.arpa       
name = roger-1812-we-01.

# nslookup 2001:db8:1234::5
Server:         2001:db8:1234::5
Address:        2001:db8:1234::5#53

5.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.4.3.2.1.8.b.d.0.1.0.0.2.ip6.arpa       
name = roger-1903-we-01.

# nslookup 2001:db8:1234::14
Server:         2001:db8:1234::5
Address:        2001:db8:1234::5#53

4.1.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.4.3.2.1.8.b.d.0.1.0.0.2.ip6.arpa       
name = roger-1812-cwes-01.


Expected results:
Volume creation should succeed.

Additional info:

Here are the code snippets from glusterfs 6.4 that exhibit the problem.
For some reason, the result (hostname) of getnameinfo is the same for all
brick (IPv6) addresses, while actually they are not.
================
xlators/mgmt/glusterd/src/glusterd-volume-ops.c

gf_ai_compare_t
glusterd_compare_addrinfo(struct addrinfo *first, struct addrinfo *next)
{
    int ret = -1;
    struct addrinfo *tmp1 = NULL;
    struct addrinfo *tmp2 = NULL;
    char firstip[NI_MAXHOST] = {
        0,
    };
    char nextip[NI_MAXHOST] = {
        0,
    };

    for (tmp1 = first; tmp1 != NULL; tmp1 = tmp1->ai_next) {
        ret = getnameinfo(tmp1->ai_addr, tmp1->ai_addrlen, firstip, NI_MAXHOST,
                          NULL, 0, NI_NUMERICHOST);
        if (ret)
            return GF_AI_COMPARE_ERROR;
        for (tmp2 = next; tmp2 != NULL; tmp2 = tmp2->ai_next) {
            ret = getnameinfo(tmp2->ai_addr, tmp2->ai_addrlen, nextip,
                              NI_MAXHOST, NULL, 0, NI_NUMERICHOST);
            if (ret)
                return GF_AI_COMPARE_ERROR;
            if (!strcmp(firstip, nextip)) {
                return GF_AI_COMPARE_MATCH;
            }
        }
    }
    return GF_AI_COMPARE_NO_MATCH;
}
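
One observation worth noting: glusterd_compare_addrinfo() passes
NI_NUMERICHOST, so getnameinfo should return the numeric address string
rather than a DNS name, and two distinct IPv6 literals should never compare
equal here. A minimal standalone sketch (not glusterfs code; the
numeric_host() helper is my own, for illustration) that mirrors the same
getaddrinfo/getnameinfo round trip on two of the brick addresses:

```c
/* Sketch (not glusterfs code): check what getnameinfo(NI_NUMERICHOST)
 * returns for two distinct IPv6 literals, mirroring the comparison
 * glusterd_compare_addrinfo() performs. */
#include <assert.h>
#include <netdb.h>
#include <stdio.h>
#include <string.h>
#include <sys/socket.h>

/* Hypothetical helper: resolve an IPv6 literal and convert it back to
 * its numeric string form via getnameinfo(). Returns 0 on success. */
static int numeric_host(const char *node, char *buf, size_t len)
{
    struct addrinfo hints = {0};
    struct addrinfo *res = NULL;
    int ret;

    hints.ai_family = AF_INET6;
    hints.ai_flags = AI_NUMERICHOST;   /* literal only, no DNS lookup */

    ret = getaddrinfo(node, NULL, &hints, &res);
    if (ret)
        return ret;
    ret = getnameinfo(res->ai_addr, res->ai_addrlen, buf, len,
                      NULL, 0, NI_NUMERICHOST);
    freeaddrinfo(res);
    return ret;
}

int main(void)
{
    char a[NI_MAXHOST];
    char b[NI_MAXHOST];

    assert(numeric_host("2001:db8:1234::10", a, sizeof(a)) == 0);
    assert(numeric_host("2001:db8:1234::5", b, sizeof(b)) == 0);

    /* Distinct addresses must yield distinct numeric strings; if they
     * compare equal, the problem is upstream of this comparison. */
    assert(strcmp(a, b) != 0);
    printf("%s vs %s\n", a, b);
    return 0;
}
```

If this sketch passes on the affected hosts, the identical strings seen by
glusterd would have to come from the addrinfo lists handed to the
comparison, not from getnameinfo itself.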

...
            if (GF_AI_COMPARE_MATCH == ret)
                goto found_bad_brick_order;

...

found_bad_brick_order:
    gf_msg(this->name, GF_LOG_INFO, 0, GD_MSG_BAD_BRKORDER,
           "Bad brick order found");
    if (type == GF_CLUSTER_TYPE_DISPERSE) {
        snprintf(err_str, sizeof(found_string), found_string, "disperse");
    } else {
        snprintf(err_str, sizeof(found_string), found_string, "replicate");
    }
...
   const char found_string[2048] =
        "Multiple bricks of a %s "
        "volume are present on the same server. This "
        "setup is not optimal. Bricks should be on "
        "different nodes to have best fault tolerant "
        "configuration. Use 'force' at the end of the "
        "command if you want to override this "
        "behavior. ";

-- 
You are receiving this mail because:
You are on the CC list for the bug.
You are the assignee for the bug.
