[Gluster-devel] query about a split-brain problem found in glusterfs3.12.3
Ravishankar N
ravishankar at redhat.com
Thu Feb 8 03:55:50 UTC 2018
On 02/08/2018 07:16 AM, Zhou, Cynthia (NSB - CN/Hangzhou) wrote:
>
> Hi,
>
> Thanks for responding!
>
> If split-brain happening in this kind of test is expected behavior, how
> do we fix the split-brain situation?
>
If you are using replica 2, then there is no prevention. Once split-brains
occur, you can resolve them using
http://docs.gluster.org/en/latest/Troubleshooting/resolving-splitbrain/
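For a data or metadata split-brain on a file, the CLI has policy-based
commands; for example (the file path below is just a placeholder, given
relative to the volume root):

    gluster volume heal export split-brain latest-mtime <path-of-file>
    gluster volume heal export split-brain source-brick \
        sn-0.local:/mnt/bricks/export/brick <path-of-file>

An entry split-brain on a directory like your /testdir generally needs
the manual, brick-side steps described in that document.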
If you want to prevent split-brain, you would need to use a replica 3 or
an arbiter volume.
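For example, assuming the volume is still replica 2, it can be converted
to an arbiter volume by adding a third brick (host and path below are
placeholders):

    gluster volume add-brick export replica 3 arbiter 1 <host>:<brick-path>

The arbiter brick stores only file names and metadata, so it is cheap,
but it gives afr a quorum tie-breaker.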
Regards,
Ravi
>
> *From:* Ravishankar N [mailto:ravishankar at redhat.com]
> *Sent:* Thursday, February 08, 2018 12:12 AM
> *To:* Zhou, Cynthia (NSB - CN/Hangzhou)
> <cynthia.zhou at nokia-sbell.com>; Gluster-devel at gluster.org
> *Subject:* Re: query about a split-brain problem found in glusterfs3.12.3
>
> On 02/07/2018 10:39 AM, Zhou, Cynthia (NSB - CN/Hangzhou) wrote:
>
> Hi glusterfs expert:
>
> Good day.
>
> Lately, we hit a glusterfs split-brain problem in our environment on
> /mnt/export/testdir. We start 3 IOR processes (the IOR tool) from
> non-SN nodes, which create and remove files repeatedly in testdir.
> Then we reboot the SN nodes (sn-0 and sn-1) in sequence, and we see
> the following problem.
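> (The workload is roughly equivalent to several clients each running a
> loop like the following against the mount point:
>
>     while true; do
>         f=/mnt/export/testdir/f.$RANDOM
>         touch "$f" && rm -f "$f"
>     done
>
> in parallel.)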
>
> Do you have any comments on how this could happen, and how to fix it
> in this situation? Thanks!
>
>
> Is the problem that split-brain is happening? Is this a replica 2
> volume? If yes, then it looks like expected behavior.
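> You can confirm the replica count with, e.g.:
>
>     gluster volume info export
>
> and check the "Type" and "Number of Bricks" lines in the output.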
> Regards,
> Ravi
>
> gluster volume heal export info
> Brick sn-0.local:/mnt/bricks/export/brick
> Status: Connected
> Number of entries: 0
>
> Brick sn-1.local:/mnt/bricks/export/brick
> /testdir - Is in split-brain
>
> /testdir - Possibly undergoing heal
>
> Status: Connected
> Number of entries: 2
>
> wait for a while …
>
> gluster volume heal export info
> Brick sn-0.local:/mnt/bricks/export/brick
> Status: Connected
> Number of entries: 0
>
> Brick sn-1.local:/mnt/bricks/export/brick
> /testdir - Possibly undergoing heal
>
> /testdir - Possibly undergoing heal
>
> and finally:
>
> [root at sn-0:/root]
> # gluster v heal export info
> Brick sn-0.local:/mnt/bricks/export/brick
> Status: Connected
> Number of entries: 0
>
> Brick sn-1.local:/mnt/bricks/export/brick
> /testdir - Is in split-brain
>
> Status: Connected
> Number of entries: 1
>
> [root at sn-0:/root]
>
> # getfattr -m .* -d -e hex /mnt/bricks/export/brick/testdir
>
> getfattr: Removing leading '/' from absolute path names
>
> # file: mnt/bricks/export/brick/testdir
>
> trusted.gfid=0x5622cff893b3484dbdb6a20a0edb0e77
>
> trusted.glusterfs.dht=0x000000010000000000000000ffffffff
>
> [root at sn-1:/root]
>
> # getfattr -m .* -d -e hex /mnt/bricks/export/brick/testdir
>
> getfattr: Removing leading '/' from absolute path names
>
> # file: mnt/bricks/export/brick/testdir
>
> trusted.afr.dirty=0x000000000000000000000001
>
> trusted.afr.export-client-0=0x000000000000000000000038
>
> trusted.gfid=0x5622cff893b3484dbdb6a20a0edb0e77
>
> trusted.glusterfs.dht=0x000000010000000000000000ffffffff
>
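To interpret the xattrs above: each trusted.afr.* changelog xattr packs
three 32-bit big-endian counters (data, metadata, entry), so on sn-1

    trusted.afr.export-client-0 = 0x 00000000 00000000 00000038
                                     (data=0, metadata=0, entry=0x38 = 56)

i.e. sn-1's brick records 56 pending entry operations blaming client-0
(sn-0's brick), and trusted.afr.dirty=0x...01 marks an entry transaction
that was never fully unwound, while sn-0's brick carries no afr xattrs
on testdir at all.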