[Bugs] [Bug 1468186] [Geo-rep]: entry failed to sync to slave with ENOENT error

bugzilla at redhat.com bugzilla at redhat.com
Sat Aug 5 11:02:15 UTC 2017


https://bugzilla.redhat.com/show_bug.cgi?id=1468186

Rahul Hinduja <rhinduja at redhat.com> changed:

           What    |Removed                     |Added
----------------------------------------------------------------------------
             Status|ON_QA                       |VERIFIED



--- Comment #8 from Rahul Hinduja <rhinduja at redhat.com> ---
There were two consequences of this bug:

1. Entry errors in the logs
2. Data loss at the slave (either the whole directory or a few files from it
were missing)

I was able to reproduce this issue on the 3.2.0 (3.8.4-18) build using the
following steps:

1. touch dir1 => this is to find which subvolume the name hashes to
2. rm dir1
3. mkdir dir1 and create some files inside it (touch {1..99})
4. Let it sync to slave
5. Stop the geo-replication
6. Attach gdb to the mount pid and set a breakpoint at dht_rmdir_lock_cbk
7. continue
8. rm -rf dir1/
9. Kill the complete hashed subvolume (identified in step 1)
10. continue
11. Start volume with force (bring back bricks)
12. ls /mnt/dir1
13. Wait for dht heal
14. Write some more files into dir1/ (touch file{1..99})
15. Start the geo-replication
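Step 1 above relies on the fact that DHT maps each name to exactly one hashed
subvolume; step 9 then kills that same subvolume while the rmdir is paused in
gdb. As a rough illustration of the idea only (GlusterFS actually uses a
Davies-Meyer based elastic hash over per-directory layout ranges, not the
placeholder hash below, and the subvolume names here are hypothetical), a
name-to-subvolume lookup can be sketched as:

```python
import zlib

# Hypothetical subvolume names; a real volume would report its own bricks.
subvols = ["replicate-0", "replicate-1", "replicate-2"]

def hashed_subvol(name, subvols):
    """Map a file name to one subvolume by hashing the name.

    Simplified stand-in for DHT's layout lookup: GlusterFS hashes the
    name and picks the subvolume whose hash range contains the result.
    """
    h = zlib.crc32(name.encode())  # placeholder hash, not gluster's
    return subvols[h % len(subvols)]

# "touch dir1" then note which subvolume holds it; step 9 kills that
# subvolume while the rmdir transaction is held at dht_rmdir_lock_cbk.
print(hashed_subvol("dir1", subvols))
```

The key property the reproducer depends on is determinism: the same name
always lands on the same subvolume, so the brick killed in step 9 is exactly
the one that owned dir1.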

On 3.2.0_async builds, I tried the above use case twice, with the following
results:

1. In the first instance, some of the files were missing from the slave
2. In the second instance, the directory dir1 was missing from the slave and
entry failures were reported

I then tried the same case with build:
glusterfs-geo-replication-3.8.4-37.el7rhgs.x86_64

In both iterations, the files were properly synced to the slave without any
entry errors in the logs. Moving this bug to the verified state.


