[Bugs] [Bug 1695436] New: geo-rep session creation fails with IPV6

bugzilla at redhat.com bugzilla at redhat.com
Wed Apr 3 05:56:54 UTC 2019


https://bugzilla.redhat.com/show_bug.cgi?id=1695436

            Bug ID: 1695436
           Summary: geo-rep session creation fails with IPV6
           Product: GlusterFS
           Version: 6
          Hardware: x86_64
                OS: Linux
            Status: NEW
         Component: geo-replication
          Severity: high
          Priority: high
          Assignee: bugs at gluster.org
          Reporter: avishwan at redhat.com
                CC: amukherj at redhat.com, avishwan at redhat.com,
                    bugs at gluster.org, csaba at redhat.com,
                    khiremat at redhat.com, rhs-bugs at redhat.com,
                    sankarshan at redhat.com, sasundar at redhat.com,
                    storage-qa-internal at redhat.com
        Depends On: 1688833
            Blocks: 1688231, 1688239
  Target Milestone: ---
    Classification: Community



+++ This bug was initially created as a clone of Bug #1688833 +++

+++ This bug was initially created as a clone of Bug #1688231 +++

Description of problem:
-----------------------
This issue is seen with the RHHI-V use case: VM images are stored on Gluster
volumes and geo-replicated to a secondary site for disaster recovery (DR).

When IPv6 is used, an additional mount option is required:
--xlator-option=transport.address-family=inet6. But when geo-rep checks the
available space on the slave with gverify.sh, this mount option is not passed,
so mounting either the master or the slave volume fails.
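
For illustration, a manual mount over IPv6 needs the same option that
gverify.sh omits (the mount point below is only an example, not from the
original report):

    # mount the slave volume over IPv6 by passing the address-family option
    mkdir -p /mnt/slave-check
    glusterfs --xlator-option=transport.address-family=inet6 \
              --volfile-server slave.lab.eng.blr.redhat.com \
              --volfile-id slave /mnt/slave-check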

Version-Release number of selected component (if applicable):
--------------------------------------------------------------
RHGS 3.4.4 ( glusterfs-3.12.2-47 )

How reproducible:
-----------------
Always

Steps to Reproduce:
-------------------
1. Create a geo-rep session from the master to the slave, with IPv6 addressing in use

Actual results:
--------------
Creation of the geo-rep session fails at the gverify.sh check

Expected results:
-----------------
Creation of the geo-rep session should succeed

Additional info:

--- Additional comment from SATHEESARAN on 2019-03-13 11:49:02 UTC ---

[root@ ~]# cat /etc/hosts
127.0.0.1   localhost localhost.localdomain localhost4 localhost4.localdomain4
::1         localhost localhost.localdomain localhost6 localhost6.localdomain6
2620:52:0:4624:5054:ff:fee9:57f8 master.lab.eng.blr.redhat.com 
2620:52:0:4624:5054:ff:fe6d:d816 slave.lab.eng.blr.redhat.com 

[root@ ~]# gluster volume info

Volume Name: master
Type: Distribute
Volume ID: 9cf0224f-d827-4028-8a45-37f7bfaf1c78
Status: Started
Snapshot Count: 0
Number of Bricks: 1
Transport-type: tcp
Bricks:
Brick1: master.lab.eng.blr.redhat.com:/gluster/brick1/master
Options Reconfigured:
performance.client-io-threads: on
server.event-threads: 4
client.event-threads: 4
user.cifs: off
features.shard: on
network.remote-dio: enable
performance.low-prio-threads: 32
performance.io-cache: off
performance.read-ahead: off
performance.quick-read: off
transport.address-family: inet6
nfs.disable: on
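
For context, transport.address-family appears under "Options Reconfigured",
which suggests it was enabled on the volume roughly as follows; treat this as
a sketch of the setup, not the reporter's exact steps (glusterd itself also
has to be switched to IPv6 via /etc/glusterfs/glusterd.vol):

    gluster volume set master transport.address-family inet6
    # and on every node, in /etc/glusterfs/glusterd.vol:
    #   option transport.address-family inet6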

[root@localhost ~]# gluster volume geo-replication master
slave.lab.eng.blr.redhat.com::slave create push-pem
Unable to mount and fetch slave volume details. Please check the log:
/var/log/glusterfs/geo-replication/gverify-slavemnt.log
geo-replication command failed


Snip from gverify-slavemnt.log
<snip>
[2019-03-13 11:46:28.746494] I [MSGID: 100030] [glusterfsd.c:2646:main]
0-glusterfs: Started running glusterfs version 3.12.2 (args: glusterfs
--xlator-option=*dht.lookup-unhashed=off --volfile-server
slave.lab.eng.blr.redhat.com --volfile-id slave -l
/var/log/glusterfs/geo-replication/gverify-slavemnt.log /tmp/gverify.sh.y1TCoY)
[2019-03-13 11:46:28.750595] W [MSGID: 101002] [options.c:995:xl_opt_validate]
0-glusterfs: option 'address-family' is deprecated, preferred is
'transport.address-family', continuing with correction
[2019-03-13 11:46:28.753702] E [MSGID: 101075]
[common-utils.c:482:gf_resolve_ip6] 0-resolver: getaddrinfo failed (family:2)
(Name or service not known)
[2019-03-13 11:46:28.753725] E [name.c:267:af_inet_client_get_remote_sockaddr]
0-glusterfs: DNS resolution failed on host slave.lab.eng.blr.redhat.com
[2019-03-13 11:46:28.753953] I [glusterfsd-mgmt.c:2337:mgmt_rpc_notify]
0-glusterfsd-mgmt: disconnected from remote-host: slave.lab.eng.blr.redhat.com
[2019-03-13 11:46:28.753980] I [glusterfsd-mgmt.c:2358:mgmt_rpc_notify]
0-glusterfsd-mgmt: Exhausted all volfile servers
[2019-03-13 11:46:28.753998] I [MSGID: 101190]
[event-epoll.c:676:event_dispatch_epoll_worker] 0-epoll: Started thread with
index 0
[2019-03-13 11:46:28.754073] I [MSGID: 101190]
[event-epoll.c:676:event_dispatch_epoll_worker] 0-epoll: Started thread with
index 1
[2019-03-13 11:46:28.754154] W [glusterfsd.c:1462:cleanup_and_exit]
(-->/lib64/libgfrpc.so.0(rpc_clnt_notify+0xab) [0x7fc39d379bab]
-->glusterfs(+0x11fcd) [0x56427db95fcd] -->glusterfs(cleanup_and_exit+0x6b)
[0x56427db8eb2b] ) 0-: received signum (1), shutting down
[2019-03-13 11:46:28.754197] I [fuse-bridge.c:6611:fini] 0-fuse: Unmounting
'/tmp/gverify.sh.y1TCoY'.
[2019-03-13 11:46:28.760213] I [fuse-bridge.c:6616:fini] 0-fuse: Closing fuse
connection to '/tmp/gverify.sh.y1TCoY'.
</snip>
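
The "family:2" in the getaddrinfo error is AF_INET, i.e. the auxiliary mount
is trying to resolve the slave host over IPv4 even though the host only has an
IPv6 entry. A quick way to confirm the name resolves only over IPv6 (these
commands are an illustration, not part of the original report):

    getent ahostsv4 slave.lab.eng.blr.redhat.com   # returns nothing
    getent ahostsv6 slave.lab.eng.blr.redhat.com   # returns 2620:52:0:4624:...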

--- Additional comment from Worker Ant on 2019-03-14 14:51:56 UTC ---

REVIEW: https://review.gluster.org/22363 (WIP geo-rep: IPv6 support) posted
(#1) for review on master by Aravinda VK

--- Additional comment from Worker Ant on 2019-03-15 14:59:56 UTC ---

REVIEW: https://review.gluster.org/22363 (geo-rep: IPv6 support) merged (#3) on
master by Aravinda VK
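
Conceptually, the fix is to propagate the configured address family into the
mounts done by gverify.sh; a rough sketch (not the literal diff from review
22363, and the variable names here are hypothetical) is that the verification
mount gains the extra xlator option whenever IPv6 is in use:

    # sketch only: pass the address family through to the verification mount
    glusterfs --xlator-option="*dht.lookup-unhashed=off" \
              --xlator-option=transport.address-family=inet6 \
              --volfile-server "$slave_host" --volfile-id "$slave_vol" \
              -l "$log_file" "$mount_point"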


Referenced Bugs:

https://bugzilla.redhat.com/show_bug.cgi?id=1688231
[Bug 1688231] geo-rep session creation fails with IPV6
https://bugzilla.redhat.com/show_bug.cgi?id=1688239
[Bug 1688239] geo-rep session creation fails with IPV6
https://bugzilla.redhat.com/show_bug.cgi?id=1688833
[Bug 1688833] geo-rep session creation fails with IPV6