[Bugs] [Bug 1224100] New: [geo-rep]: Even after successful sync, the DATA counter did not reset to 0

bugzilla at redhat.com bugzilla at redhat.com
Fri May 22 08:28:55 UTC 2015


https://bugzilla.redhat.com/show_bug.cgi?id=1224100

            Bug ID: 1224100
           Summary: [geo-rep]: Even after successful sync, the DATA
                    counter did not reset to 0
           Product: GlusterFS
           Version: 3.7.0
         Component: geo-replication
          Severity: medium
          Assignee: bugs at gluster.org
          Reporter: avishwan at redhat.com
                CC: aavati at redhat.com, bugs at gluster.org, csaba at redhat.com,
                    gluster-bugs at redhat.com, nlevinki at redhat.com,
                    rhinduja at redhat.com, rhs-bugs at redhat.com,
                    storage-qa-internal at redhat.com
        Depends On: 1223695, 1224098
            Blocks: 1223636



+++ This bug was initially created as a clone of Bug #1224098 +++

+++ This bug was initially created as a clone of Bug #1223695 +++

Description of problem:
=======================

The purpose of the DATA counter in "status detail" is to report the number of data operations still pending sync. Once the sync completes successfully, the counter should reset to 0, which is not happening.

[root at georep1 scripts]# gluster volume geo-replication master 10.70.46.154::slave status detail

MASTER NODE    MASTER VOL    MASTER BRICK      SLAVE USER    SLAVE                  SLAVE NODE      STATUS     CRAWL STATUS       LAST_SYNCED            ENTRY    DATA    META    FAILURES    CHECKPOINT TIME        CHECKPOINT COMPLETED    CHECKPOINT COMPLETION TIME
---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
georep1        master        /rhs/brick1/b1    root          10.70.46.154::slave    10.70.46.101    Passive    N/A                N/A                    N/A      N/A     N/A     N/A         N/A                    N/A                     N/A
georep1        master        /rhs/brick2/b2    root          10.70.46.154::slave    10.70.46.101    Passive    N/A                N/A                    N/A      N/A     N/A     N/A         N/A                    N/A                     N/A
georep3        master        /rhs/brick1/b1    root          10.70.46.154::slave    10.70.46.154    Active     Changelog Crawl    2015-05-21 14:03:50    0        377     0       0           2015-05-21 14:32:54    No                      N/A
georep3        master        /rhs/brick2/b2    root          10.70.46.154::slave    10.70.46.154    Active     Changelog Crawl    2015-05-21 14:32:20    0        372     0       0           2015-05-21 14:32:54    No                      N/A
georep2        master        /rhs/brick1/b1    root          10.70.46.154::slave    10.70.46.103    Passive    N/A                N/A                    N/A      N/A     N/A     N/A         N/A                    N/A                     N/A
georep2        master        /rhs/brick2/b2    root          10.70.46.154::slave    10.70.46.103    Passive    N/A                N/A                    N/A      N/A     N/A     N/A         N/A                    N/A                     N/A
[root at georep1 scripts]# 
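
For context, here is a minimal sketch of the bookkeeping these counters imply (hypothetical Python, not gsyncd's actual code or API): pending work is counted during the changelog crawl and decremented after a successful sync, so a DATA value stuck at 377 behaves as if the decrement/reset step never fires for data operations.

class BrickSyncStatus:
    """Hypothetical per-brick counters as displayed by 'status detail'."""

    def __init__(self):
        self.pending = {"entry": 0, "data": 0, "meta": 0}

    def queued(self, kind, count=1):
        # The crawl found work that is not yet synced to the slave.
        self.pending[kind] += count

    def synced(self, kind, count):
        # A batch synced successfully; the pending counter drains back to 0.
        # The behaviour reported here looks as if this step is skipped
        # (or never persisted) for "data", while "entry" resets fine.
        self.pending[kind] = max(0, self.pending[kind] - count)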


Version-Release number of selected component (if applicable):
=============================================================
glusterfs-3.7.0


How reproducible:
=================
2/2


Steps to Reproduce:
===================
1. Create and start the master volume.
2. Create and start the slave volume.
3. Create and start the meta volume.
4. Create and start a geo-rep session between the master and slave volumes.
5. Mount the master and slave volumes.
6. Create files/directories on the master volume.
7. Execute the status detail command from a master node; the ENTRY and DATA
counters increase as files are created.
8. Let the sync complete.
9. Calculate checksums of the master and slave volumes to confirm that the
sync has completed.
10. Once the sync is complete, check the status detail again (an example
command sequence is sketched below).
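
For reference, the steps above map roughly to the following commands. Hostnames, brick paths, and the slave URL are taken from the output above; everything else (brick layout without replication, the meta volume setup) is illustrative, not the exact setup used:

[root at georep1 ~]# gluster volume create master georep1:/rhs/brick1/b1 georep1:/rhs/brick2/b2 georep2:/rhs/brick1/b1 georep2:/rhs/brick2/b2 georep3:/rhs/brick1/b1 georep3:/rhs/brick2/b2
[root at georep1 ~]# gluster volume start master
(create and start the "slave" volume on 10.70.46.154 and the meta volume the same way)
[root at georep1 ~]# gluster system:: execute gsec_create
[root at georep1 ~]# gluster volume geo-replication master 10.70.46.154::slave create push-pem
[root at georep1 ~]# gluster volume geo-replication master 10.70.46.154::slave config use_meta_volume true
[root at georep1 ~]# gluster volume geo-replication master 10.70.46.154::slave start
[root at georep1 ~]# mount -t glusterfs georep1:/master /mnt/master
[root at georep1 ~]# gluster volume geo-replication master 10.70.46.154::slave status detail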

Actual results:
===============

The ENTRY counter resets to 0, but the DATA counter still holds stale values such as 377 and 372.


Expected results:
=================

All the counters should reset to 0, indicating that nothing is pending sync.
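
A fix could be verified by polling "status detail" and asserting that the Active bricks' counters drain to 0 once the checksums match. A minimal sketch in Python; the token offsets assume an Active row in "Changelog Crawl" state, laid out exactly as in the output above, so this parser is an assumption, not a supported interface:

import subprocess

def pending_counters(master, slave_url):
    """Sum (ENTRY, DATA, META) across Active bricks of a geo-rep session."""
    out = subprocess.check_output(
        ["gluster", "volume", "geo-replication", master, slave_url,
         "status", "detail"], text=True)
    entry = data = meta = 0
    for line in out.splitlines():
        tok = line.split()
        if "Active" in tok and "Crawl" in tok:
            i = tok.index("Crawl") + 3  # skip the two LAST_SYNCED tokens
            entry += int(tok[i])
            data += int(tok[i + 1])
            meta += int(tok[i + 2])
    return entry, data, meta

# Expected after a completed sync: every counter back at 0.
assert pending_counters("master", "10.70.46.154::slave") == (0, 0, 0)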


Additional info:
=================

Arequal info for the master and slave volumes:


[root at wingo master]# /root/scripts/arequal-checksum -p /mnt/master

Entry counts
Regular files   : 519
Directories     : 140
Symbolic links  : 114
Other           : 0
Total           : 773

Metadata checksums
Regular files   : 47e250
Directories     : 3e9
Symbolic links  : 3e9
Other           : 3e9

Checksums
Regular files   : 4f4af7ac217c3da67e7270a056d2fba
Directories     : 356e0d5141064d2c
Symbolic links  : 7313722a0c5b0a7b
Other           : 0
Total           : ed0afdd694c554b
[root at wingo master]# 


[root at wingo slave]# /root/scripts/arequal-checksum -p /mnt/slave

Entry counts
Regular files   : 519
Directories     : 140
Symbolic links  : 114
Other           : 0
Total           : 773

Metadata checksums
Regular files   : 47e250
Directories     : 3e9
Symbolic links  : 3e9
Other           : 3e9

Checksums
Regular files   : 4f4af7ac217c3da67e7270a056d2fba
Directories     : 356e0d5141064d2c
Symbolic links  : 7313722a0c5b0a7b
Other           : 0
Total           : ed0afdd694c554b
[root at wingo slave]#


Referenced Bugs:

https://bugzilla.redhat.com/show_bug.cgi?id=1223636
[Bug 1223636] 3.1 QE Tracker
https://bugzilla.redhat.com/show_bug.cgi?id=1223695
[Bug 1223695] [geo-rep]: Even after successful sync, the DATA counter did
not reset to 0
https://bugzilla.redhat.com/show_bug.cgi?id=1224098
[Bug 1224098] [geo-rep]: Even after successful sync, the DATA counter did
not reset to 0