[Bugs] [Bug 1328706] New: [geo-rep]: geo status shows $MASTER Nodes always with hostname even if volume is configured with IP
bugzilla at redhat.com
Wed Apr 20 06:22:07 UTC 2016
https://bugzilla.redhat.com/show_bug.cgi?id=1328706
Bug ID: 1328706
Summary: [geo-rep]: geo status shows $MASTER Nodes always with
hostname even if volume is configured with IP
Product: GlusterFS
Version: 3.7.11
Component: geo-replication
Keywords: ZStream
Severity: high
Assignee: bugs at gluster.org
Reporter: avishwan at redhat.com
CC: bugs at gluster.org, chrisw at redhat.com, csaba at redhat.com,
nlevinki at redhat.com, rhinduja at redhat.com,
rhs-bugs at redhat.com, storage-qa-internal at redhat.com
Depends On: 1327552, 1327553
+++ This bug was initially created as a clone of Bug #1327553 +++
+++ This bug was initially created as a clone of Bug #1327552 +++
Description of problem:
=======================
Currently, geo-replication status always returns the hostname, whereas volume
info returns the IP or hostname depending on how the volume was configured.
[root at dhcp37-182 ~]# gluster volume info master
Volume Name: master
Type: Distributed-Replicate
Volume ID: 3ac902da-449b-4731-b950-e8d6a88f861e
Status: Started
Number of Bricks: 6 x 2 = 12
Transport-type: tcp
Bricks:
Brick1: 10.70.37.182:/bricks/brick0/master_brick0
Brick2: 10.70.37.90:/bricks/brick0/master_brick1
Brick3: 10.70.37.102:/bricks/brick0/master_brick2
Brick4: 10.70.37.104:/bricks/brick0/master_brick3
Brick5: 10.70.37.170:/bricks/brick0/master_brick4
Brick6: 10.70.37.169:/bricks/brick0/master_brick5
Brick7: 10.70.37.182:/bricks/brick1/master_brick6
Brick8: 10.70.37.90:/bricks/brick1/master_brick7
Brick9: 10.70.37.102:/bricks/brick1/master_brick8
Brick10: 10.70.37.104:/bricks/brick1/master_brick9
Brick11: 10.70.37.170:/bricks/brick1/master_brick10
Brick12: 10.70.37.169:/bricks/brick1/master_brick11
Options Reconfigured:
changelog.changelog: on
geo-replication.ignore-pid-check: on
geo-replication.indexing: on
features.quota-deem-statfs: on
features.inode-quota: on
features.quota: on
performance.readdir-ahead: on
cluster.enable-shared-storage: enable
[root at dhcp37-182 ~]# gluster v geo status
MASTER NODE                          MASTER VOL    MASTER BRICK                     SLAVE USER    SLAVE                        SLAVE NODE      STATUS     CRAWL STATUS       LAST_SYNCED
---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
dhcp37-182.lab.eng.blr.redhat.com    master        /bricks/brick0/master_brick0     root          ssh://10.70.37.122::slave    10.70.37.144    Active     Changelog Crawl    2016-04-15 09:42:42
dhcp37-182.lab.eng.blr.redhat.com    master        /bricks/brick1/master_brick6     root          ssh://10.70.37.122::slave    10.70.37.144    Active     Changelog Crawl    2016-04-15 09:42:41
dhcp37-102.lab.eng.blr.redhat.com    master        /bricks/brick0/master_brick2     root          ssh://10.70.37.122::slave    10.70.37.218    Passive    N/A                N/A
dhcp37-102.lab.eng.blr.redhat.com    master        /bricks/brick1/master_brick8     root          ssh://10.70.37.122::slave    10.70.37.218    Passive    N/A                N/A
dhcp37-104.lab.eng.blr.redhat.com    master        /bricks/brick0/master_brick3     root          ssh://10.70.37.122::slave    10.70.37.175    Active     Changelog Crawl    2016-04-15 09:42:42
dhcp37-104.lab.eng.blr.redhat.com    master        /bricks/brick1/master_brick9     root          ssh://10.70.37.122::slave    10.70.37.175    Active     Changelog Crawl    2016-04-15 09:42:41
dhcp37-169.lab.eng.blr.redhat.com    master        /bricks/brick0/master_brick5     root          ssh://10.70.37.122::slave    10.70.37.122    Active     Changelog Crawl    2016-04-15 09:42:41
dhcp37-169.lab.eng.blr.redhat.com    master        /bricks/brick1/master_brick11    root          ssh://10.70.37.122::slave    10.70.37.122    Active     Changelog Crawl    2016-04-15 09:42:40
dhcp37-90.lab.eng.blr.redhat.com     master        /bricks/brick0/master_brick1     root          ssh://10.70.37.122::slave    10.70.37.217    Passive    N/A                N/A
dhcp37-90.lab.eng.blr.redhat.com     master        /bricks/brick1/master_brick7     root          ssh://10.70.37.122::slave    10.70.37.217    Passive    N/A                N/A
dhcp37-170.lab.eng.blr.redhat.com    master        /bricks/brick0/master_brick4     root          ssh://10.70.37.122::slave    10.70.37.123    Passive    N/A                N/A
dhcp37-170.lab.eng.blr.redhat.com    master        /bricks/brick1/master_brick10    root          ssh://10.70.37.122::slave    10.70.37.123    Passive    N/A                N/A
[root at dhcp37-182 ~]#
Applications like the scheduler script (schedule_georep.py), which compare
output from different gluster CLI commands (such as volume info and geo-rep
status), therefore report all bricks as offline.
[ WARN] Geo-rep workers Faulty/Offline, Faulty: [] Offline:
['10.70.37.182:/bricks/brick0/master_brick0',
'10.70.37.90:/bricks/brick0/master_brick1',
'10.70.37.102:/bricks/brick0/master_brick2',
'10.70.37.104:/bricks/brick0/master_brick3',
'10.70.37.170:/bricks/brick0/master_brick4',
'10.70.37.169:/bricks/brick0/master_brick5',
'10.70.37.182:/bricks/brick1/master_brick6',
'10.70.37.90:/bricks/brick1/master_brick7',
'10.70.37.102:/bricks/brick1/master_brick8',
'10.70.37.104:/bricks/brick1/master_brick9',
'10.70.37.170:/bricks/brick1/master_brick10',
'10.70.37.169:/bricks/brick1/master_brick11']
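A minimal sketch of why every brick is reported Offline: `gluster volume
info` lists bricks by the IP used at volume-create time, while geo-rep status
rows carry resolved hostnames, so a naive set difference (which is effectively
what schedule_georep.py computes) never finds a match. The two-brick sample
below is abbreviated from the outputs above; the actual matching logic in
schedule_georep.py may differ in detail.

```python
# Bricks as reported by `gluster volume info` (keyed by IP, as the
# volume was created with IPs).
volinfo_bricks = {
    "10.70.37.182:/bricks/brick0/master_brick0",
    "10.70.37.90:/bricks/brick0/master_brick1",
}

# The same bricks as reported by geo-rep status (keyed by resolved hostname).
georep_status_bricks = {
    "dhcp37-182.lab.eng.blr.redhat.com:/bricks/brick0/master_brick0",
    "dhcp37-90.lab.eng.blr.redhat.com:/bricks/brick0/master_brick1",
}

# A brick is flagged Offline when it appears in volinfo but has no
# matching geo-rep status row. Because the host strings never match,
# the difference contains every brick.
offline = volinfo_bricks - georep_status_bricks
print(sorted(offline))  # every volinfo brick appears "Offline"
```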
Version-Release number of selected component (if applicable):
==============================================================
glusterfs-3.7.9-1.el7rhgs.x86_64
How reproducible:
=================
1/1
Steps to Reproduce:
===================
1. Configure volume using ip
2. Configure geo-replication between master and slave
3. Check geo-replication status and volume info
Actual results:
===============
Volume info shows the IP, while geo-replication status shows the hostname for
master nodes.
Expected results:
=================
Geo-replication status should show the master nodes the same way the volume
was configured (IP or hostname).
--- Additional comment from Vijay Bellur on 2016-04-15 07:31:39 EDT ---
REVIEW: http://review.gluster.org/14005 (geo-rep: Fix hostname mismatch between
volinfo and geo-rep status) posted (#2) for review on master by Aravinda VK
(avishwan at redhat.com)
--- Additional comment from Vijay Bellur on 2016-04-19 02:20:28 EDT ---
REVIEW: http://review.gluster.org/14005 (geo-rep: Fix hostname mismatch between
volinfo and geo-rep status) posted (#3) for review on master by Aravinda VK
(avishwan at redhat.com)
--- Additional comment from Vijay Bellur on 2016-04-20 02:21:32 EDT ---
COMMIT: http://review.gluster.org/14005 committed in master by Aravinda VK
(avishwan at redhat.com)
------
commit bc89311aff62c78102ab6920077b6782ee99689a
Author: Aravinda VK <avishwan at redhat.com>
Date: Fri Apr 15 16:37:18 2016 +0530
geo-rep: Fix hostname mismatch between volinfo and geo-rep status
When a volume is created using IPs, Gluster volume info shows IP addresses,
but Geo-rep shows hostnames when available, making it difficult to map the
output of Volume Info to the Geo-rep status output.
The schedule Geo-rep script(c#13279) merges the output of Volume info and
Geo-rep status to find offline brick nodes. This script was failing because
the host info shown in Volinfo differed from the Geo-rep status output, so
the script showed all nodes as offline.
With this patch, Geo-rep gets host info from volinfo->bricks instead of the
resolved hostname. Geo-rep status now shows the same hostname/IP that was
used in Volume Create.
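The idea behind the patch can be illustrated as follows. This is a
hypothetical Python sketch, not the actual GlusterFS change; the
`status_host` helper and its signature are invented for illustration. For
the local node it picks the host string exactly as it appears in
volinfo->bricks (IP or hostname), falling back to the resolved hostname only
when no brick matches.

```python
def status_host(local_hostname, local_ips, volinfo_bricks):
    """Return the host string volinfo uses for this node.

    local_hostname: the node's resolved hostname
    local_ips: set of IP addresses belonging to this node
    volinfo_bricks: brick strings in "HOST:/path" form from volinfo
    """
    for brick in volinfo_bricks:
        host, _, _ = brick.partition(":")
        # Match whichever identifier volinfo actually recorded.
        if host == local_hostname or host in local_ips:
            return host
    return local_hostname  # fall back to the resolved hostname

bricks = ["10.70.37.182:/bricks/brick0/master_brick0"]
print(status_host("dhcp37-182.lab.eng.blr.redhat.com",
                  {"10.70.37.182"}, bricks))  # -> 10.70.37.182
```

With this, a status row for the node is keyed by the same string that volume
info prints, so a consumer comparing the two outputs sees matching bricks.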
BUG: 1327553
Change-Id: Ib8e56da29129aa19225504a891f9b870f269ab75
Signed-off-by: Aravinda VK <avishwan at redhat.com>
Reviewed-on: http://review.gluster.org/14005
NetBSD-regression: NetBSD Build System <jenkins at build.gluster.org>
CentOS-regression: Gluster Build System <jenkins at build.gluster.com>
Smoke: Gluster Build System <jenkins at build.gluster.com>
Reviewed-by: Saravanakumar Arumugam <sarumuga at redhat.com>
Reviewed-by: Kotresh HR <khiremat at redhat.com>
Referenced Bugs:
https://bugzilla.redhat.com/show_bug.cgi?id=1327552
[Bug 1327552] [geo-rep]: geo status shows $MASTER Nodes always with
hostname even if volume is configured with IP
https://bugzilla.redhat.com/show_bug.cgi?id=1327553
[Bug 1327553] [geo-rep]: geo status shows $MASTER Nodes always with
hostname even if volume is configured with IP
--
You are receiving this mail because:
You are on the CC list for the bug.
You are the assignee for the bug.