[Bugs] [Bug 1287456] New: [geo-rep]: Recommended Shared volume use on geo-replication is broken

bugzilla at redhat.com bugzilla at redhat.com
Wed Dec 2 07:16:00 UTC 2015


https://bugzilla.redhat.com/show_bug.cgi?id=1287456

            Bug ID: 1287456
           Summary: [geo-rep]: Recommended Shared volume use on
                    geo-replication is broken
           Product: GlusterFS
           Version: 3.7.6
         Component: geo-replication
          Keywords: Regression, ZStream
          Severity: urgent
          Assignee: bugs at gluster.org
          Reporter: khiremat at redhat.com
                CC: bugs at gluster.org, byarlaga at redhat.com,
                    chrisw at redhat.com, csaba at redhat.com,
                    gluster-bugs at redhat.com, khiremat at redhat.com,
                    nlevinki at redhat.com, rhinduja at redhat.com,
                    storage-qa-internal at redhat.com
        Depends On: 1285295, 1285488



+++ This bug was initially created as a clone of Bug #1285488 +++

+++ This bug was initially created as a clone of Bug #1285295 +++

Description of problem:
=======================

Using the shared (meta) volume for geo-replication is recommended so that only one
worker per replica subvolume becomes ACTIVE and participates in syncing. With this
build, however, all the bricks in a subvolume become ACTIVE:

[root at dhcp37-165 ~]# gluster volume geo-replication master 10.70.37.99::slave status

MASTER NODE                          MASTER VOL    MASTER BRICK         SLAVE USER    SLAVE                 SLAVE NODE      STATUS    CRAWL STATUS       LAST_SYNCED
------------------------------------------------------------------------------------------------------------------------------------------------------------------
dhcp37-165.lab.eng.blr.redhat.com    master        /rhs/brick1/ct-b1    root          10.70.37.99::slave    10.70.37.99     Active    Changelog Crawl    2015-11-25 14:09:19
dhcp37-165.lab.eng.blr.redhat.com    master        /rhs/brick2/ct-b7    root          10.70.37.99::slave    10.70.37.99     Active    Changelog Crawl    2015-11-25 14:09:19
dhcp37-110.lab.eng.blr.redhat.com    master        /rhs/brick1/ct-b5    root          10.70.37.99::slave    10.70.37.112    Active    Changelog Crawl    2015-11-25 14:09:27
dhcp37-160.lab.eng.blr.redhat.com    master        /rhs/brick1/ct-b3    root          10.70.37.99::slave    10.70.37.162    Active    Changelog Crawl    2015-11-25 14:09:27
dhcp37-158.lab.eng.blr.redhat.com    master        /rhs/brick1/ct-b4    root          10.70.37.99::slave    10.70.37.87     Active    Changelog Crawl    2015-11-25 14:51:50
dhcp37-155.lab.eng.blr.redhat.com    master        /rhs/brick1/ct-b6    root          10.70.37.99::slave    10.70.37.88     Active    Changelog Crawl    2015-11-25 14:51:47
dhcp37-133.lab.eng.blr.redhat.com    master        /rhs/brick1/ct-b2    root          10.70.37.99::slave    10.70.37.199    Active    Changelog Crawl    2015-11-25 14:51:50
dhcp37-133.lab.eng.blr.redhat.com    master        /rhs/brick2/ct-b8    root          10.70.37.99::slave    10.70.37.199    Active    Changelog Crawl    2015-11-25 14:51:48
[root at dhcp37-165 ~]# 

[root at dhcp37-165 geo-rep]# ls
cbe0236c-db59-48eb-b3eb-2e436a505e11_32530124-055f-4dd8-a7cc-d8c8ebeb91bb_subvol_1.lock
cbe0236c-db59-48eb-b3eb-2e436a505e11_32530124-055f-4dd8-a7cc-d8c8ebeb91bb_subvol_2.lock
cbe0236c-db59-48eb-b3eb-2e436a505e11_32530124-055f-4dd8-a7cc-d8c8ebeb91bb_subvol_3.lock
cbe0236c-db59-48eb-b3eb-2e436a505e11_32530124-055f-4dd8-a7cc-d8c8ebeb91bb_subvol_4.lock
[root at dhcp37-165 geo-rep]# 
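When the meta volume is in use, each geo-rep worker is expected to contend for the
per-subvolume lock file listed above, and only the lock holder should go ACTIVE.
A minimal sketch of that arbitration (not gluster's actual code; it uses flock
where the real implementation may use fcntl record locks):

```python
# Sketch of per-subvolume lock arbitration: each worker tries a
# non-blocking exclusive lock on the subvolume's lock file; the
# winner goes ACTIVE, every other worker stays PASSIVE.
import fcntl
import os
import tempfile

def try_become_active(lock_path):
    """Return an open fd if we won the lock (ACTIVE), else None (PASSIVE)."""
    fd = os.open(lock_path, os.O_CREAT | os.O_RDWR, 0o600)
    try:
        fcntl.flock(fd, fcntl.LOCK_EX | fcntl.LOCK_NB)
        return fd          # lock held: this worker becomes ACTIVE
    except OSError:
        os.close(fd)
        return None        # lock already held elsewhere: stay PASSIVE

# Two workers racing for the same subvolume lock (hypothetical path;
# the real lock files live on the shared meta volume, as listed above).
lock_path = os.path.join(tempfile.gettempdir(), "subvol_1.lock")
winner = try_become_active(lock_path)
loser = try_become_active(lock_path)
print(winner is not None, loser is None)
```

The bug reported here is that this arbitration is not taking effect: every
worker ends up ACTIVE even though the lock files exist on the meta volume.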


[root at dhcp37-165 syncdaemon]# gluster volume geo-replication master
10.70.37.99::slave config use_meta_volume
true
[root at dhcp37-165 syncdaemon]# 


Version-Release number of selected component (if applicable):
=============================================================



How reproducible:
=================

1/1


Steps to Reproduce:
===================
1. Create Master and Slave Cluster
2. Create Master and Slave volume
3. Enable the cluster-wide shared storage volume: "gluster v set all
cluster.enable-shared-storage enable"
4. Mount Master volume and create some data 
5. Create Geo-Rep session between master and Slave volume 
6. Enable Meta volume
7. Start the Geo-Rep session between master and slave
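The steps above correspond roughly to the following commands (a sketch only:
hostnames, brick paths, and the replica layout are illustrative; the slave
address matches the session shown above):

```shell
# 1-2. Create master and slave volumes on their respective clusters
gluster volume create master replica 2 node1:/rhs/brick1/ct-b1 node2:/rhs/brick1/ct-b2
gluster volume start master
#      ...repeat on the slave cluster for the 'slave' volume...

# 3. Enable the cluster-wide shared storage (meta) volume
gluster volume set all cluster.enable-shared-storage enable

# 4. Mount the master volume and create some data
mount -t glusterfs node1:/master /mnt/master
cp -a /etc/hosts /mnt/master/

# 5. Create the geo-rep session between master and slave
gluster volume geo-replication master 10.70.37.99::slave create push-pem

# 6. Enable the meta volume for the session
gluster volume geo-replication master 10.70.37.99::slave config use_meta_volume true

# 7. Start the session
gluster volume geo-replication master 10.70.37.99::slave start
```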


Actual results:
===============

All bricks in a subvolume become ACTIVE

Expected results:
=================

Only 1 brick from each subvolume should become ACTIVE


Referenced Bugs:

https://bugzilla.redhat.com/show_bug.cgi?id=1285295
[Bug 1285295] [geo-rep]: Recommended Shared volume use on geo-replication
is broken in latest build
https://bugzilla.redhat.com/show_bug.cgi?id=1285488
[Bug 1285488] [geo-rep]: Recommended Shared volume use on geo-replication
is broken