[Gluster-users] ... i was able to produce a split brain...

Ml Ml mliebherr99 at googlemail.com
Wed Jan 28 08:32:27 UTC 2015


Can anyone help me here please?

On Tue, Jan 27, 2015 at 7:09 PM, Ml Ml <mliebherr99 at googlemail.com> wrote:
> Hello List,
>
> I was able to produce a split brain:
>
> [root@ovirt-node03 splitmount]# gluster volume heal RaidVolB info
> Brick ovirt-node03.example.local:/raidvol/volb/brick/
> <gfid:1c15d0cb-1cca-4627-841c-395f7b712f73>
> Number of entries: 1
>
> Brick ovirt-node04.example.local:/raidvol/volb/brick/
> /1701d5ae-6a44-4374-8b29-61c699da870b/dom_md/ids
> Number of entries: 1
>
>
>
>
> I want to take the file from either node03 or node04; I really don't
> mind which. Can I just tell gluster that it should use one node as the
> "current" one?
>
> Like with DRBD: drbdadm connect --discard-my-data <resource>
>
> Is there a similar way with gluster?
>
>
>
> Thanks,
> Mario
>
> # rpm -qa | grep gluster
> ---------------------------------------------------
> glusterfs-fuse-3.6.2-1.el6.x86_64
> glusterfs-server-3.6.2-1.el6.x86_64
> glusterfs-libs-3.6.2-1.el6.x86_64
> glusterfs-3.6.2-1.el6.x86_64
> glusterfs-cli-3.6.2-1.el6.x86_64
> glusterfs-rdma-3.6.2-1.el6.x86_64
> vdsm-gluster-4.14.6-0.el6.noarch
> glusterfs-api-3.6.2-1.el6.x86_64
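[Editor's note: GlusterFS 3.6 has no CLI equivalent of DRBD's
--discard-my-data; the usual manual fix is to delete the stale copy (and
its hard link under .glusterfs) on the brick you want to discard, then
trigger a heal. A sketch, ASSUMING the gfid reported on node03 and the
path reported on node04 are the same file, and that node04's copy is the
one to keep; run it on node03, the brick being discarded:]

```shell
# Manual split-brain resolution sketch for GlusterFS 3.6.
# ASSUMPTIONS: node04's copy wins; gfid and path below (taken from the
# heal-info output above) refer to the same file. Run on node03.
BRICK=/raidvol/volb/brick
GFID=1c15d0cb-1cca-4627-841c-395f7b712f73
FILE=1701d5ae-6a44-4374-8b29-61c699da870b/dom_md/ids

# Every file on a brick has a hard link under .glusterfs, keyed by the
# first two byte-pairs of its gfid: .glusterfs/1c/15/<full-gfid>
LINK="$BRICK/.glusterfs/${GFID:0:2}/${GFID:2:2}/$GFID"
echo "removing $BRICK/$FILE and $LINK"
rm -f "$BRICK/$FILE" "$LINK"

# Then trigger self-heal (from any node); node04's copy is replicated back:
command -v gluster >/dev/null && gluster volume heal RaidVolB || true
```

[Afterwards, `gluster volume heal RaidVolB info` should show zero entries
on both bricks once the heal completes.]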

