[Gluster-users] DIRECT_IO_TEST in split brain

Anuradha Talur atalur at redhat.com
Thu Jun 11 05:49:56 UTC 2015



----- Original Message -----
> From: paf1 at email.cz
> To: "GLUSTER-USERS" <gluster-users at gluster.org>
> Sent: Wednesday, June 10, 2015 8:44:51 PM
> Subject: [Gluster-users] DIRECT_IO_TEST in split brain
> 
> hello,
> please, how can we eliminate this split-brain on
> - centos 7.1
> - glusterfs-3.7.1-1.el7.x86_64

The following link will help you understand various ways in which split-brain can be resolved.
https://github.com/gluster/glusterfs/blob/master/doc/features/heal-info-and-split-brain-resolution.md
Could you go through it and see if it helps?
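In particular, glusterfs 3.7 can resolve file split-brains directly from the CLI. A sketch of the relevant commands for this volume follows; the brick path is taken from the `volume status` output below, and the choice of cl1's brick as the source is purely an example -- you must decide which replica holds the good copy:

```shell
# List the entries that are in split-brain (matches the heal info below):
gluster volume heal R2 info split-brain

# Resolve by declaring one brick the source copy for this file.
# 10.10.1.67:/R2/R2 is used here only as an example source brick:
gluster volume heal R2 split-brain source-brick 10.10.1.67:/R2/R2 /__DIRECT_IO_TEST__

# Alternatively, keep whichever copy is larger:
gluster volume heal R2 split-brain bigger-file /__DIRECT_IO_TEST__

# Verify that the entry no longer shows up:
gluster volume heal R2 info
```

These commands only make sense against a live gluster cluster, so treat them as a template rather than a runnable script.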
> 
> # gluster volume heal R2 info
> Brick cl1:/R2/R2/
> /__DIRECT_IO_TEST__ - Is in split-brain
> 
> Number of entries: 1
> 
> Brick cl3:/R2/R2/
> /__DIRECT_IO_TEST__ - Is in split-brain
> 
> Number of entries: 1
> 
> ---------------------
> # gluster volume status R2
> Status of volume: R2
> Gluster process                              TCP Port  RDMA Port  Online  Pid
> ------------------------------------------------------------------------------
> Brick 10.10.1.67:/R2/R2                      49152     0          Y       3617
> Brick 10.10.1.82:/R2/R2                      49152     0          Y       3337
> NFS Server on localhost                      2049      0          Y       3605
> Self-heal Daemon on localhost                N/A       N/A        Y       3610
> NFS Server on 10.10.1.82                     2049      0          Y       3344
> Self-heal Daemon on 10.10.1.82               N/A       N/A        Y       3349
> NFS Server on 10.10.1.69                     2049      0          Y       3432
> Self-heal Daemon on 10.10.1.69               N/A       N/A        Y       3440
> 
> Task Status of Volume R2
> ------------------------------------------------------------------------------
> There are no active volume tasks
> 
> [root at cl1 ~]# gluster volume info R2
> 
> Volume Name: R2
> Type: Replicate
> Volume ID: 6c30118d-8f71-4593-9607-d0ded7401783
> Status: Started
> Number of Bricks: 1 x 2 = 2
> Transport-type: tcp
> Bricks:
> Brick1: 10.10.1.67:/R2/R2
> Brick2: 10.10.1.82:/R2/R2
> Options Reconfigured:
> cluster.quorum-count: 1
> storage.owner-gid: 36
> storage.owner-uid: 36
> cluster.server-quorum-type: none
> cluster.quorum-type: fixed
> network.remote-dio: enable
> cluster.eager-lock: enable
> performance.stat-prefetch: off
> performance.io-cache: off
> performance.read-ahead: off
> performance.quick-read: off
> cluster.self-heal-daemon: enable
> 
> regards, Pa.
> 
> 
> 
> _______________________________________________
> Gluster-users mailing list
> Gluster-users at gluster.org
> http://www.gluster.org/mailman/listinfo/gluster-users

-- 
Thanks,
Anuradha.
