[Gluster-users] managing split brain in 3.3
samuel
samu60 at gmail.com
Mon Jun 25 11:04:46 UTC 2012
Hi all,
We've been using Gluster 3.2.x without much issue, and we were trying the next
version (3.3), compiled from source on an Ubuntu 12.04 server:
glusterfs 3.3.0 built on Jun 7 2012 11:19:51
Repository revision: git://git.gluster.com/glusterfs.git
Copyright (c) 2006-2011 Gluster Inc. <http://www.gluster.com>
GlusterFS comes with ABSOLUTELY NO WARRANTY.
You may redistribute copies of GlusterFS under the terms of the GNU General
Public License.
We're using a distributed-replicated architecture with 8 nodes in a
replica-2 configuration.
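For reference, a volume of that shape would typically have been created with
something like the following (the volume name "cloud" is taken from the logs
below; the server and brick paths are made up for illustration):

```shell
# replica 2 pairs up consecutive bricks, giving 4 distribute subvolumes
gluster volume create cloud replica 2 \
    server1:/export/brick1 server2:/export/brick1 \
    server3:/export/brick1 server4:/export/brick1 \
    server5:/export/brick1 server6:/export/brick1 \
    server7:/export/brick1 server8:/export/brick1
gluster volume start cloud
```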
On the client side we're using the GlusterFS native (FUSE) client to mount the
volume, and recently we found an issue with one file:
[2012-06-25 14:58:22.161036] W
[afr-self-heal-data.c:831:afr_lookup_select_read_child_by_txn_type]
0-cloud-replicate-2:$FILE: Possible split-brain
[2012-06-25 14:58:22.161098] W
[afr-common.c:1226:afr_detect_self_heal_by_lookup_status]
0-cloud-replicate-2: split brain detected during lookup of $FILE
[2012-06-25 14:58:22.161881] E
[afr-self-heal-common.c:2156:afr_self_heal_completion_cbk]
0-cloud-replicate-2: background data gfid self-heal failed on $FILE
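In case it helps anyone diagnose this, checking the AFR changelog xattrs on
both bricks should show whether each replica is blaming the other. Something
like the following, run on each of the two brick servers (the brick path
/export/brick1 is a placeholder for the actual brick directory):

```shell
# "$FILE" is the path of the affected file relative to the brick root.
getfattr -d -m . -e hex /export/brick1/$FILE
# Non-zero trusted.afr.cloud-client-* pending counters on BOTH bricks
# would indicate a genuine split-brain.
```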
I located the two bricks (servers) where the file was stored, and the file
was fine on both nodes, as expected. I tried deleting both the file and its
hard link on one node and triggering self-heal from the client; the file was
recreated on the node it had been deleted from, but it was still not
accessible from the client.
I followed the same procedure on the other node (deleting the file and its
hard link), launched self-heal, and the file is still not accessible.
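For what it's worth, the hard link I removed lives under the brick's
.glusterfs directory, at a path derived from the file's GFID: the first two
hex digits, then the next two, then the full GFID. A sketch of the procedure
I used, with a made-up GFID:

```shell
# Hypothetical GFID; the real one can be read from the trusted.gfid xattr:
#   getfattr -n trusted.gfid -e hex /export/brick1/$FILE
gfid="0c48f031-9d83-44a3-a06f-1e2b7b1b6c5d"

# The .glusterfs hard link path is <brick>/.glusterfs/<aa>/<bb>/<gfid>,
# where aa and bb are the first two pairs of hex digits of the GFID.
link=".glusterfs/${gfid:0:2}/${gfid:2:2}/${gfid}"
echo "$link"   # .glusterfs/0c/48/0c48f031-9d83-44a3-a06f-1e2b7b1b6c5d

# On the brick chosen as the stale copy, one would then remove both:
#   rm /export/brick1/$FILE
#   rm /export/brick1/$link
# and trigger a heal from a client (e.g. stat the file through the mount).
```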
Is there any guide or procedure for handling split-brain in 3.3?
Thanks in advance,
Samuel.