[Bugs] [Bug 1577627] [Geo-rep]: Status in ACTIVE/Created state

bugzilla at redhat.com bugzilla at redhat.com
Mon May 14 05:17:22 UTC 2018


https://bugzilla.redhat.com/show_bug.cgi?id=1577627

Kotresh HR <khiremat at redhat.com> changed:

           What    |Removed                     |Added
----------------------------------------------------------------------------
                 CC|                            |khiremat at redhat.com



--- Comment #2 from Kotresh HR <khiremat at redhat.com> ---
Description of problem:
=======================
Geo-replication status was CREATED/ACTIVE as opposed to ACTIVE/PASSIVE.

The geo-replication session was started, and the following status was reported for the
session:
----------------------------------------------------------------------------------------------
[root@dhcp41-226 scripts]# gluster volume geo-replication master 10.70.41.160::slave status

MASTER NODE     MASTER VOL    MASTER BRICK      SLAVE USER    SLAVE                  SLAVE NODE      STATUS     CRAWL STATUS       LAST_SYNCED
-----------------------------------------------------------------------------------------------------------------------------------------------------
10.70.41.226    master        /rhs/brick3/b7    root          10.70.41.160::slave    N/A             Created    N/A                N/A
10.70.41.226    master        /rhs/brick1/b1    root          10.70.41.160::slave    N/A             Created    N/A                N/A
10.70.41.230    master        /rhs/brick2/b5    root          10.70.41.160::slave    N/A             Created    N/A                N/A
10.70.41.229    master        /rhs/brick2/b4    root          10.70.41.160::slave    N/A             Created    N/A                N/A
10.70.41.219    master        /rhs/brick2/b6    root          10.70.41.160::slave    N/A             Created    N/A                N/A
10.70.41.227    master        /rhs/brick3/b8    root          10.70.41.160::slave    N/A             Created    N/A                N/A
10.70.41.227    master        /rhs/brick1/b2    root          10.70.41.160::slave    N/A             Created    N/A                N/A
10.70.41.228    master        /rhs/brick3/b9    root          10.70.41.160::slave    10.70.41.160    Active     Changelog Crawl    2018-04-23 06:13:53
10.70.41.228    master        /rhs/brick1/b3    root          10.70.41.160::slave    10.70.42.79     Active     Changelog Crawl    2018-04-23 06:13:53




Version-Release number of selected component (if applicable):
============================================================



How reproducible:
=================
2/2

Steps to Reproduce:
===================
1. Create a master and a slave cluster of 6 nodes each
2. Create and start the master volume (tiered: cold tier 1x(4+2), hot tier 1x3)
3. Create and start the slave volume (tiered: cold tier 1x(4+2), hot tier 1x3)
4. Enable quota on the master volume
5. Enable shared storage on the master volume
6. Set up the geo-rep session between the master and slave volumes
7. Mount the master volume on a client
8. Create data from the master client (a rough sketch of the corresponding commands follows this list)
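For reference, a minimal sketch of the commands behind these steps. Node names, brick
paths, and the mount point are illustrative assumptions, not taken from the report; the
tier-attach syntax belongs to the (since deprecated) tiering feature and may differ by
release. Passwordless root SSH to the slave node is assumed for push-pem.

# Master volume: 1x(4+2) disperse cold tier plus a 1x3 replicated hot tier (illustrative bricks)
gluster volume create master disperse 6 redundancy 2 \
    node1:/rhs/brick1/b1 node2:/rhs/brick1/b2 node3:/rhs/brick1/b3 \
    node4:/rhs/brick1/b4 node5:/rhs/brick1/b5 node6:/rhs/brick1/b6
gluster volume start master
gluster volume tier master attach replica 3 \
    node1:/rhs/hot/h1 node2:/rhs/hot/h2 node3:/rhs/hot/h3
# (the slave volume is created the same way on the slave cluster)

# Quota and shared storage on the master side
gluster volume quota master enable
gluster volume set all cluster.enable-shared-storage enable

# Geo-rep session between master and slave, then start it and check status
gluster system:: execute gsec_create
gluster volume geo-replication master 10.70.41.160::slave create push-pem
gluster volume geo-replication master 10.70.41.160::slave start
gluster volume geo-replication master 10.70.41.160::slave status

# Mount the master volume on a client and create data
mount -t glusterfs node1:/master /mnt/master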

Actual results:
==============
gsyncd was down on 5 of the 6 master nodes.
Once started, the geo-rep status was Active/Created.
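A quick way to confirm which master nodes are missing gsyncd worker/monitor processes
(a hypothetical check, not part of the original report; assumes root SSH access to the
nodes listed in the status output above):

for h in 10.70.41.226 10.70.41.227 10.70.41.228 10.70.41.229 10.70.41.230 10.70.41.219; do
    echo "== $h =="
    ssh root@$h 'pgrep -af gsyncd || echo "no gsyncd processes"'
done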


Expected results:
=================
gsyncd should be up on all master nodes.
Once started, the geo-rep status should be Active/Passive.

-- 
You are receiving this mail because:
You are on the CC list for the bug.
You are the assignee for the bug.

