[Bugs] [Bug 1234898] New: [geo-rep]: Feature fan-out fails with the use of meta volume config
bugzilla at redhat.com
Tue Jun 23 13:11:00 UTC 2015
https://bugzilla.redhat.com/show_bug.cgi?id=1234898
Bug ID: 1234898
Summary: [geo-rep]: Feature fan-out fails with the use of meta volume config
Product: GlusterFS
Version: 3.7.0
Component: geo-replication
Keywords: Regression
Severity: urgent
Assignee: bugs at gluster.org
Reporter: khiremat at redhat.com
CC: aavati at redhat.com, bugs at gluster.org, csaba at redhat.com,
gluster-bugs at redhat.com, khiremat at redhat.com,
nlevinki at redhat.com, rcyriac at redhat.com,
rhinduja at redhat.com, storage-qa-internal at redhat.com
Depends On: 1234419, 1234882
+++ This bug was initially created as a clone of Bug #1234882 +++
+++ This bug was initially created as a clone of Bug #1234419 +++
Description of problem:
=======================
When geo-rep sessions are created from one master volume to 2 slave volumes (fan-out), all
bricks of one of the slave sessions become PASSIVE. This happens only when the
use_meta_volume config is set to true.
Slave volumes: slave1 and slave2
Creating geo-rep sessions between the master volume and the slave volumes (slave1, slave2):
[root@georep1 scripts]# gluster volume geo-replication master 10.70.46.154::slave1 create push-pem force
Creating geo-replication session between master & 10.70.46.154::slave1 has been successful
[root@georep1 scripts]# gluster volume geo-replication master 10.70.46.154::slave2 create push-pem force
Creating geo-replication session between master & 10.70.46.154::slave2 has been successful
[root@georep1 scripts]#
Setting use_meta_volume to true for the slave1 and slave2 sessions:
[root@georep1 scripts]# gluster volume geo-replication master 10.70.46.154::slave1 config use_meta_volume true
geo-replication config updated successfully
[root@georep1 scripts]# gluster volume geo-replication master 10.70.46.154::slave2 config use_meta_volume true
geo-replication config updated successfully
[root@georep1 scripts]#
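To double-check that the option took effect on each session, the value can be read back
per session (read-back was not captured in this run; output omitted):
[root@georep1 scripts]# gluster volume geo-replication master 10.70.46.154::slave1 config use_meta_volume
[root@georep1 scripts]# gluster volume geo-replication master 10.70.46.154::slave2 config use_meta_volume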
Starting the geo-rep sessions for slave volumes slave1 and slave2:
[root@georep1 scripts]# gluster volume geo-replication master 10.70.46.154::slave1 start
Starting geo-replication session between master & 10.70.46.154::slave1 has been successful
[root@georep1 scripts]#
[root@georep1 scripts]# gluster volume geo-replication master 10.70.46.154::slave2 start
Starting geo-replication session between master & 10.70.46.154::slave2 has been successful
[root@georep1 scripts]#
Status:
=======
[root@georep1 scripts]# gluster volume geo-replication master 10.70.46.154::slave1 status

MASTER NODE    MASTER VOL    MASTER BRICK      SLAVE USER    SLAVE                   SLAVE NODE      STATUS     CRAWL STATUS       LAST_SYNCED
----------------------------------------------------------------------------------------------------------------------------------------------------
georep1        master        /rhs/brick1/b1    root          10.70.46.154::slave1    10.70.46.101    Active     Changelog Crawl    2015-06-23 00:46:12
georep1        master        /rhs/brick2/b2    root          10.70.46.154::slave1    10.70.46.101    Active     Changelog Crawl    2015-06-23 00:46:12
georep3        master        /rhs/brick1/b1    root          10.70.46.154::slave1    10.70.46.154    Passive    N/A                N/A
georep3        master        /rhs/brick2/b2    root          10.70.46.154::slave1    10.70.46.154    Passive    N/A                N/A
georep2        master        /rhs/brick1/b1    root          10.70.46.154::slave1    10.70.46.103    Passive    N/A                N/A
georep2        master        /rhs/brick2/b2    root          10.70.46.154::slave1    10.70.46.103    Passive    N/A                N/A
[root@georep1 scripts]#
[root@georep1 scripts]#
[root@georep1 scripts]# gluster volume geo-replication master 10.70.46.154::slave2 status

MASTER NODE    MASTER VOL    MASTER BRICK      SLAVE USER    SLAVE                   SLAVE NODE      STATUS     CRAWL STATUS    LAST_SYNCED
---------------------------------------------------------------------------------------------------------------------------------------------
georep1        master        /rhs/brick1/b1    root          10.70.46.154::slave2    10.70.46.101    Passive    N/A             N/A
georep1        master        /rhs/brick2/b2    root          10.70.46.154::slave2    10.70.46.101    Passive    N/A             N/A
georep3        master        /rhs/brick1/b1    root          10.70.46.154::slave2    10.70.46.154    Passive    N/A             N/A
georep3        master        /rhs/brick2/b2    root          10.70.46.154::slave2    10.70.46.154    Passive    N/A             N/A
georep2        master        /rhs/brick1/b1    root          10.70.46.154::slave2    10.70.46.103    Passive    N/A             N/A
georep2        master        /rhs/brick2/b2    root          10.70.46.154::slave2    10.70.46.103    Passive    N/A             N/A
[root@georep1 scripts]#
All bricks of the second slave session (slave2) are Passive, so data is never synced to
the slave2 volume.
Lock files on the meta volume brick:
[root@georep1 scripts]# ls /var/run/gluster/ss_brick/geo-rep/
6f023fd5-49a5-4af7-a68a-b7071a8b9ff0_subvol_1.lock
6f023fd5-49a5-4af7-a68a-b7071a8b9ff0_subvol_2.lock
[root@georep1 scripts]#
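Note that both lock file names are keyed only by the master volume UUID
(6f023fd5-..., matching the master volume info below) and the subvolume index; nothing in
the name identifies the slave session. Assuming the workers take their Active/Passive lock
on these files, the slave1 and slave2 sessions would contend for the same two locks, and
only one session per subvolume can win, which matches the all-Passive status of slave2.
A minimal sketch of the apparent naming (the derivation is an assumption, not taken from
the geo-rep code):
# Sketch: reconstruct the lock name a worker appears to use.
# Assumption: <master-volume-uuid>_subvol_<index>.lock, with no slave component.
MASTER_VOLID=$(gluster volume info master | awk '/^Volume ID:/ {print $3}')
SUBVOL_INDEX=1                                    # example: first replica subvolume
echo "${MASTER_VOLID}_subvol_${SUBVOL_INDEX}.lock"
# -> 6f023fd5-49a5-4af7-a68a-b7071a8b9ff0_subvol_1.lock
#    i.e. the same lock name for both the slave1 and the slave2 session.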
Version-Release number of selected component (if applicable):
==============================================================
How reproducible:
=================
1/1
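For reference, the steps above collapsed into a short reproducer script (hostnames and
volume names are the ones used in this report; adjust for the local setup):
#!/bin/bash
# Reproducer sketch for the fan-out + use_meta_volume issue described above.
set -e
SLAVE_HOST=10.70.46.154
for vol in slave1 slave2; do
    gluster volume geo-replication master ${SLAVE_HOST}::${vol} create push-pem force
    gluster volume geo-replication master ${SLAVE_HOST}::${vol} config use_meta_volume true
    gluster volume geo-replication master ${SLAVE_HOST}::${vol} start
done
# After the sessions settle, each slave session should show one Active worker per subvolume:
gluster volume geo-replication master ${SLAVE_HOST}::slave1 status
gluster volume geo-replication master ${SLAVE_HOST}::slave2 status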
Master:
=======
[root@georep1 scripts]# gluster volume info
Volume Name: gluster_shared_storage
Type: Replicate
Volume ID: 102b304d-494a-40cc-84e0-3eca89b3e559
Status: Started
Number of Bricks: 1 x 3 = 3
Transport-type: tcp
Bricks:
Brick1: 10.70.46.97:/var/run/gluster/ss_brick
Brick2: 10.70.46.93:/var/run/gluster/ss_brick
Brick3: 10.70.46.96:/var/run/gluster/ss_brick
Options Reconfigured:
performance.readdir-ahead: on
cluster.enable-shared-storage: enable
Volume Name: master
Type: Distributed-Replicate
Volume ID: 6f023fd5-49a5-4af7-a68a-b7071a8b9ff0
Status: Started
Number of Bricks: 2 x 3 = 6
Transport-type: tcp
Bricks:
Brick1: 10.70.46.96:/rhs/brick1/b1
Brick2: 10.70.46.97:/rhs/brick1/b1
Brick3: 10.70.46.93:/rhs/brick1/b1
Brick4: 10.70.46.96:/rhs/brick2/b2
Brick5: 10.70.46.97:/rhs/brick2/b2
Brick6: 10.70.46.93:/rhs/brick2/b2
Options Reconfigured:
changelog.changelog: on
geo-replication.ignore-pid-check: on
geo-replication.indexing: on
performance.readdir-ahead: on
cluster.enable-shared-storage: enable
[root@georep1 scripts]#
Slave:
======
[root@georep4 scripts]# gluster volume info
Volume Name: slave1
Type: Replicate
Volume ID: fc1e64c2-2028-4977-844a-678f4cc31351
Status: Started
Number of Bricks: 1 x 3 = 3
Transport-type: tcp
Bricks:
Brick1: 10.70.46.154:/rhs/brick1/b1
Brick2: 10.70.46.101:/rhs/brick1/b1
Brick3: 10.70.46.103:/rhs/brick1/b1
Options Reconfigured:
performance.readdir-ahead: on
Volume Name: slave2
Type: Replicate
Volume ID: 800f46c8-2708-48e5-9256-df8dbbdc5906
Status: Started
Number of Bricks: 1 x 3 = 3
Transport-type: tcp
Bricks:
Brick1: 10.70.46.154:/rhs/brick2/b2
Brick2: 10.70.46.101:/rhs/brick2/b2
Brick3: 10.70.46.103:/rhs/brick2/b2
Options Reconfigured:
performance.readdir-ahead: on
[root@georep4 scripts]#
Referenced Bugs:
https://bugzilla.redhat.com/show_bug.cgi?id=1234419
[Bug 1234419] [geo-rep]: Feature fan-out fails with the use of meta volume config
https://bugzilla.redhat.com/show_bug.cgi?id=1234882
[Bug 1234882] [geo-rep]: Feature fan-out fails with the use of meta volume config
--
You are receiving this mail because:
You are on the CC list for the bug.
You are the assignee for the bug.