[Bugs] [Bug 1537602] Georeplication tests intermittently fail

bugzilla at redhat.com
Tue Jan 23 15:24:54 UTC 2018


https://bugzilla.redhat.com/show_bug.cgi?id=1537602

Shyamsundar <srangana at redhat.com> changed:

           What    |Removed                     |Added
----------------------------------------------------------------------------
                 CC|                            |srangana at redhat.com



--- Comment #2 from Shyamsundar <srangana at redhat.com> ---
The test fails in the following runs:

https://build.gluster.org/job/centos6-regression/8602/console
https://build.gluster.org/job/centos6-regression/8604/console
https://build.gluster.org/job/centos6-regression/8607/console
https://build.gluster.org/job/centos6-regression/8608/console
https://build.gluster.org/job/centos6-regression/8612/console

The failure is almost always in the checks that count how many nodes are in the
"Active" and "Passive" states:

06:58:02 not ok 22 Got "1" instead of "2", LINENUM:83
06:58:02 FAILED COMMAND: 2 check_status_num_rows Passive
AND/OR
06:58:02 not ok 37 Got "1" instead of "2", LINENUM:102
06:58:02 FAILED COMMAND: 2 check_status_num_rows Passive
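
For context, check_status_num_rows counts how many rows of the geo-rep status
output are in a given state, and the harness compares that count against the
expected value (2 here). A minimal sketch of such a check, assuming the session
names from this test ("master", "127.0.0.1::slave"); the real helper in the
test harness may differ:

function check_status_num_rows()
{
    local state=$1
    # Print the number of status rows whose state matches $state; the
    # caller compares this count against the expected value.
    gluster volume geo-replication master 127.0.0.1::slave status detail \
        | grep -cF "$state"
}

check_status_num_rows Passive    # the runs above got 1 instead of the expected 2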

On checking slave25 and rerunning this test (from a fresh clone of the sources,
etc.), it is noted that after the step at line 83 of the test the command
output looks as follows:


[root@slave25 ~]# gluster volume geo-replication master 127.0.0.1::slave status detail

MASTER NODE                  MASTER VOL    MASTER BRICK           SLAVE USER    SLAVE               SLAVE NODE                   STATUS             CRAWL STATUS       LAST_SYNCED            ENTRY    DATA    META    FAILURES    CHECKPOINT TIME    CHECKPOINT COMPLETED    CHECKPOINT COMPLETION TIME
--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
slave25.cloud.gluster.org    master        /d/backends/master1    root          127.0.0.1::slave    slave25.cloud.gluster.org    Active             Changelog Crawl    2018-01-23 14:17:38    7        0       0       0           N/A                N/A                     N/A
slave25.cloud.gluster.org    master        /d/backends/master2    root          127.0.0.1::slave    N/A                          Faulty             N/A                N/A                    N/A      N/A     N/A     N/A         N/A                N/A                     N/A
slave25.cloud.gluster.org    master        /d/backends/master3    root          127.0.0.1::slave    N/A                          Initializing...    N/A                N/A                    N/A      N/A     N/A     N/A         N/A                N/A                     N/A
slave25.cloud.gluster.org    master        /d/backends/master4    root          127.0.0.1::slave    slave25.cloud.gluster.org    Active             Changelog Crawl    2018-01-23 14:17:38    9        0       0       0           N/A                N/A                     N/A

The above never recovers, so this is not a timing issue per se.
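
To double-check that, the status can be polled for a few minutes to see
whether any worker ever leaves Faulty/Initializing. A rough sketch (volume
and slave names as in the output above; the interval and iteration count are
arbitrary):

for i in $(seq 1 30); do
    stuck=$(gluster volume geo-replication master 127.0.0.1::slave status detail \
            | grep -cE 'Faulty|Initializing')
    echo "poll $i: $stuck worker(s) still Faulty/Initializing"
    [ "$stuck" -eq 0 ] && break    # every worker reached Active/Passive
    sleep 10
done

On slave25 the count never drops, which is why simply waiting longer in the
test would not help.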

Can someone from the geo-rep team take a look at the logs from those runs to
determine what is going wrong and why the status is "Faulty" or "Initializing",
as that seems to be the start of the test failure?
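
In case it helps whoever picks this up, any worker tracebacks should show up
in the master-side geo-replication logs on the node. A rough pointer, assuming
default log locations (the exact session directory layout under
/var/log/glusterfs/geo-replication/ is an assumption):

# Look for the first worker error/traceback in the geo-rep logs
grep -riE 'faulty|traceback|error' /var/log/glusterfs/geo-replication/ | head -n 40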
