[Bugs] [Bug 1378547] Asynchronous Unsplit-brain still causes Input/Output Error on system calls

bugzilla at redhat.com bugzilla at redhat.com
Wed Dec 14 06:17:57 UTC 2016


https://bugzilla.redhat.com/show_bug.cgi?id=1378547

Ravishankar N <ravishankar at redhat.com> changed:

           What    |Removed                              |Added
---------------------------------------------------------------------------------------------------
              Flags|needinfo?(ravishankar at redhat.com)  |needinfo?(simon.turcotte-langevin at ubisoft.com)



--- Comment #5 from Ravishankar N <ravishankar at redhat.com> ---
Hi Simon, thanks a lot for testing!

While the steps you described do cause a hang due to an infinite inode-refresh
loop, the values you set for the xattrs on the back end are not a valid
scenario. You have set them in such a way that each brick blames itself (i.e.
trusted.afr.gv0-client-0 on the 1st brick, trusted.afr.gv0-client-1 on the 2nd
brick, etc.). This is not possible in AFR-v2 (i.e. glusterfs-3.6 onwards),
where a brick's xattrs can only blame the other bricks when some I/O fails.
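
For reference, a valid on-disk split-brain state has each brick blaming only
the others. It would look roughly like this (a sketch only; the brick path and
file name are placeholders, and the 24-hex-digit value encodes the pending
data/metadata/entry counts as three 32-bit counters):

# On the 1st brick, only the other two clients appear in the afr xattrs:
getfattr -d -m . -e hex /bricks/brick1/file.txt
trusted.afr.gv0-client-1=0x000000010000000000000000
trusted.afr.gv0-client-2=0x000000010000000000000000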

You could retest by setting something like this:

(1)
1st brick: set trusted.afr.gv0-client-1 and trusted.afr.gv0-client-2
2nd brick: set trusted.afr.gv0-client-0 and trusted.afr.gv0-client-2
3rd brick: set trusted.afr.gv0-client-0 and trusted.afr.gv0-client-1.

Then things should work. 
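In case it helps, here is a rough sketch of the setfattr commands for (1). The
brick paths and the file name are placeholders for your setup, and the pending
counts are arbitrary non-zero values:

# On brick 1: blame only the other two bricks
setfattr -n trusted.afr.gv0-client-1 -v 0x000000010000000000000000 /bricks/brick1/file.txt
setfattr -n trusted.afr.gv0-client-2 -v 0x000000010000000000000000 /bricks/brick1/file.txt
# On brick 2:
setfattr -n trusted.afr.gv0-client-0 -v 0x000000010000000000000000 /bricks/brick2/file.txt
setfattr -n trusted.afr.gv0-client-2 -v 0x000000010000000000000000 /bricks/brick2/file.txt
# On brick 3:
setfattr -n trusted.afr.gv0-client-0 -v 0x000000010000000000000000 /bricks/brick3/file.txt
setfattr -n trusted.afr.gv0-client-1 -v 0x000000010000000000000000 /bricks/brick3/file.txt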

(2) Alternatively, you can bring bricks up and down while I/O is going on. But
for replica 3 it is difficult to cause split-brain with the up/down method (it
works fine for replica 2).
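
For replica 2, the up/down sequence is roughly the following (a sketch; the
volume name, mount point, file, and PIDs are placeholders, and it assumes no
heal runs on the file between the two steps):

# Optionally keep the self-heal daemon from healing in between:
gluster volume set gv0 cluster.self-heal-daemon off
# Kill the 1st brick (PID from 'gluster volume status gv0') and write:
kill -9 <brick1-pid>
echo foo >> /mnt/gv0/file.txt
# Bring the 1st brick back, kill the 2nd, and write again:
gluster volume start gv0 force
kill -9 <brick2-pid>
echo bar >> /mnt/gv0/file.txt
# Restart all bricks; each one now blames the other, i.e. split-brain:
gluster volume start gv0 force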

I'm leaving a needinfo for you to test and see if you find any issues with
approaches (1) or (2).


Also, if you are able to hit the state where each brick blames itself (like in
comment #4) without manually setting the xattrs, please raise a bug for it.

Thanks again,
Ravi


