[Gluster-users] Split-brain

Joe Julian joe at julianfamily.org
Thu Feb 20 23:29:20 UTC 2014


On 02/20/2014 02:43 PM, William Kwan wrote:
> Hi all,
>
> Running glusterfs-3.4.2-1.el6.x86_64 on CentOS 6.5
>
> Some smart people screwed up the network connection on the nodes for I 
> don't know how long, and now my GlusterFS volume is in split-brain. I 
> googled and found different ways to clean this up, but I could use some 
> extra help.
>
> # gluster volume heal kvm1 info split-brain
> Gathering Heal info on volume kvm1 has been successful
>
> Brick mgmt1:/gluster/brick1
> Number of entries: 21
> at                    path on brick
> -----------------------------------
> 2014-02-20 22:33:41 
> /d058a735-0fca-430a-a3d7-cf0a77097e5d/images/714c56a8-db1d-42d5-bf76-869bd6c87eef/0ea0a280-4c2c-48ab-ad95-8cb48e6cf02b
> 2014-02-20 22:33:41 
> /d058a735-0fca-430a-a3d7-cf0a77097e5d/images/20b728b6-dd39-4d2e-a5c0-2dee22df6e95/a6a9b083-b04c-4ac8-86cb-ed4eb697c2c3
> 2014-02-20 22:33:41 /d058a735-0fca-430a-a3d7-cf0a77097e5d/dom_md/ids
> ... <truncated>
>
> Brick mgmt2:/gluster/brick1
> Number of entries: 28
> at                    path on brick
> -----------------------------------
> 2014-02-20 22:37:38 /d058a735-0fca-430a-a3d7-cf0a77097e5d/dom_md/ids
> 2014-02-20 22:37:38 
> /d058a735-0fca-430a-a3d7-cf0a77097e5d/images/714c56a8-db1d-42d5-bf76-869bd6c87eef/0ea0a280-4c2c-48ab-ad95-8cb48e6cf02b
> 2014-02-20 22:37:38 
> /d058a735-0fca-430a-a3d7-cf0a77097e5d/images/20b728b6-dd39-4d2e-a5c0-2dee22df6e95/a6a9b083-b04c-4ac8-86cb-ed4eb697c2c3
> 2014-02-20 22:27:38 /d058a735-0fca-430a-a3d7-cf0a77097e5d/dom_md/ids
> 2014-02-20 22:27:38 /d058a735-0fca-430a-a3d7-cf0
> ... <truncated>
>
>
> 1. What's the best way to fix this?
Here's the write-up I did about split-brain: 
http://joejulian.name/blog/fixing-split-brain-with-glusterfs-33/
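In outline (with <path-from-report> below standing in for one of the files 
your split-brain listing shows), the manual fix amounts to deciding which 
brick's copy to throw away and then, on that brick only, doing something 
like:

    # note the gfid of the bad copy first
    getfattr -n trusted.gfid -e hex /gluster/brick1/<path-from-report>

    # remove the bad copy itself
    rm /gluster/brick1/<path-from-report>

    # remove its hard link under .glusterfs -- the first two hex pairs
    # of the gfid give the two directory levels
    rm /gluster/brick1/.glusterfs/<aa>/<bb>/<full-gfid>

    # then let self-heal copy the good version back over
    gluster volume heal kvm1 full

The blog post walks through how to read the changelog xattrs and decide 
which copy is the bad one; treat the commands above as a rough sketch, not 
something to paste in blindly.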
>
> 2. gluster volume heal doesn't really fix this, right?
No, the nature of split-brain is such that there is no automated way to 
recover from it.
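You can at least confirm the split-brain state for yourself before deciding 
anything. Dumping the extended attributes on a brick is a read-only check 
(<path> again stands in for one of the files in the listing above):

    getfattr -m . -d -e hex /gluster/brick1/<path>

When both bricks show non-zero trusted.afr.kvm1-client-* counters, each 
blaming the other, that's what split-brain looks like on disk, and nothing 
automatic will pick a winner for you.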
>
> 3. I'm kind of shooting in the dark here, since I can't see the data 
> content. The volume is holding VM images. Would picking the latest 
> copies be good enough?
That does seem a reasonably safe assumption, especially if your VMs are 
cattle instead of kittens 
<http://etherealmind.com/cattle-vs-kittens-on-cloud-platforms-no-one-hears-the-kittens-dying/>.
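If "latest" is how you want to choose, you can compare the copies directly 
on the bricks (again read-only, with <path> as a placeholder):

    # run on mgmt1 and on mgmt2, then compare the Modify times
    stat /gluster/brick1/<path>

Just keep in mind that the newest mtime is a heuristic, not a guarantee 
that the image is internally consistent.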
