[Gluster-users] question about gluster volume heal info split-brain

songxin songxin_1980 at 126.com
Tue Mar 22 09:37:48 UTC 2016


Hi,
I have a question about heal info split-brain.
I understand that a gfid mismatch is a kind of split-brain, and that the parent directory should then be shown as being in split-brain.
In my case, however, "gluster volume heal info split-brain" reports that no file is in split-brain, even though the same filename has a different gfid on the two bricks of a replicate volume.
Accessing the file returns an Input/output error.
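The mismatch described above can be checked directly by comparing the trusted.gfid xattr of the file on each brick. A minimal sketch, using the two gfid values captured in the logs below (on a live system you would take them from getfattr output on each node):

```shell
#!/bin/sh
# Compare the trusted.gfid xattr of the same file as seen on two bricks.
# The values below are the ones from this report (A brick vs B brick);
# on a live system, capture them with, e.g.:
#   getfattr -n trusted.gfid -e hex <brick>/public_html/cello/ior_files/nameroot.ior

gfid_a="trusted.gfid=0xc18f775d94de42879235d1331d85c860"  # from A brick
gfid_b="trusted.gfid=0x32145e0378864767989335f37c108409"  # from B brick

# Strip the "trusted.gfid=" key, keeping only the hex value.
a=${gfid_a#trusted.gfid=}
b=${gfid_b#trusted.gfid=}

if [ "$a" = "$b" ]; then
    echo "gfid match: $a"
else
    echo "gfid mismatch: $a vs $b"
fi
```

A mismatch here means the two bricks hold entirely different files under the same name, which is why the client gets EIO on access.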




precondition:
1. A node IP: 10.32.0.48
2. B node IP: 10.32.1.144
3. A brick: /opt/lvmdir/c2/brick on A node
4. B brick: /opt/lvmdir/c2/brick on B node


reproduce:
1. create a replicate volume using two bricks, A brick and B brick    (on A node)
2. start the volume                                                   (on A node)
3. mount the volume on mount point /mnt/c                             (on A node)
4. mount the volume on mount point /mnt/c                             (on B node)
5. access the mount point                                             (on A node and B node)
6. reboot B node
7. start glusterd                                                     (on B node)
8. remove B brick from the replicate volume                           (on A node)
9. peer detach 10.32.1.144                                            (on A node)
10. peer probe 10.32.1.144                                            (on A node)
11. add B brick back to the volume                                    (on A node)
12. after some time, go back to step 6
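For reference, the steps above roughly correspond to the following CLI commands. This is a sketch, not the exact commands run: the volume name c_glusterfs is inferred from the trusted.afr.c_glusterfs-client-* xattrs in the logs below, and the replica counts and force flags are assumptions.

```shell
gluster volume create c_glusterfs replica 2 \
    10.32.0.48:/opt/lvmdir/c2/brick 10.32.1.144:/opt/lvmdir/c2/brick
gluster volume start c_glusterfs
mount -t glusterfs 10.32.0.48:/c_glusterfs /mnt/c        # on A node
mount -t glusterfs 10.32.1.144:/c_glusterfs /mnt/c       # on B node
# ... after rebooting B node and restarting glusterd there:
gluster volume remove-brick c_glusterfs replica 1 \
    10.32.1.144:/opt/lvmdir/c2/brick force               # on A node
gluster peer detach 10.32.1.144                          # on A node
gluster peer probe 10.32.1.144                           # on A node
gluster volume add-brick c_glusterfs replica 2 \
    10.32.1.144:/opt/lvmdir/c2/brick force               # on A node
```

Note that remove-brick/add-brick on the same brick path without wiping the old brick contents (including the .glusterfs directory and xattrs) can reintroduce stale gfids, which may be relevant to the mismatch seen here.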






logs on A node:


stat: cannot stat '/mnt/c/public_html/cello/ior_files/nameroot.ior': Input/output error


getfattr -d -m . -e hex opt/lvmdir/c2/brick/public_html/cello/ior_files/nameroot.ior
# file: opt/lvmdir/c2/brick/public_html/cello/ior_files/nameroot.ior
trusted.afr.dirty=0x000000000000000000000000
trusted.bit-rot.version=0x000000000000000256e812da0007bf13
trusted.gfid=0xc18f775d94de42879235d1331d85c860


getfattr -d -m . -e hex opt/lvmdir/c2/brick/public_html/cello/ior_files
# file: opt/lvmdir/c2/brick/public_html/cello/ior_files
trusted.afr.c_glusterfs-client-1=0x000000000000000000000000
trusted.afr.c_glusterfs-client-207=0x000000000000000000000002
trusted.afr.c_glusterfs-client-209=0x000000000000000000000000
trusted.afr.c_glusterfs-client-215=0x000000000000000000000000
trusted.afr.c_glusterfs-client-39=0x000000000000000000000000
trusted.afr.c_glusterfs-client-47=0x000000000000000000000000
trusted.afr.c_glusterfs-client-49=0x000000000000000000000002
trusted.afr.c_glusterfs-client-51=0x000000000000000000000000
trusted.afr.dirty=0x000000000000000000000000
trusted.gfid=0xd9cd3be03fa44d1e8a8da8523535ef0a
trusted.glusterfs.dht=0x000000010000000000000000ffffffff




logs on B node:
stat: cannot stat '/mnt/c/public_html/cello/ior_files/nameroot.ior': Input/output error


getfattr -d -m . -e hex opt/lvmdir/c2/brick/public_html/cello/ior_files/nameroot.ior 
# file: opt/lvmdir/c2/brick/public_html/cello/ior_files/nameroot.ior
trusted.bit-rot.version=0x000000000000000256e813c50008b4e2
trusted.gfid=0x32145e0378864767989335f37c108409


getfattr -d -m . -e hex opt/lvmdir/c2/brick/public_html/cello/ior_files 
# file: opt/lvmdir/c2/brick/public_html/cello/ior_files
trusted.afr.c_glusterfs-client-112=0x000000000000000000000000
trusted.afr.c_glusterfs-client-116=0x000000000000000000000000
trusted.afr.c_glusterfs-client-128=0x000000000000000000000000
trusted.afr.c_glusterfs-client-130=0x000000000000000000000000
trusted.afr.c_glusterfs-client-150=0x000000000000000000000000
trusted.afr.c_glusterfs-client-164=0x000000000000000000000000
trusted.afr.c_glusterfs-client-166=0x000000000000000000000000
trusted.afr.c_glusterfs-client-194=0x000000000000000000000000
trusted.afr.c_glusterfs-client-196=0x000000000000000000000000
trusted.afr.c_glusterfs-client-200=0x000000000000000000000000
trusted.afr.c_glusterfs-client-224=0x000000000000000000000000
trusted.afr.c_glusterfs-client-26=0x000000000000000000000000
trusted.afr.c_glusterfs-client-36=0x000000000000000000000000
trusted.afr.c_glusterfs-client-38=0x000000000000000000000000
trusted.afr.c_glusterfs-client-40=0x000000000000000000000000
trusted.afr.c_glusterfs-client-50=0x000000000000000000000000
trusted.afr.c_glusterfs-client-54=0x000000000000000000000000
trusted.afr.c_glusterfs-client-58=0x000000000000000000000002
trusted.afr.c_glusterfs-client-64=0x000000000000000000000000
trusted.afr.c_glusterfs-client-66=0x000000000000000000000000
trusted.afr.c_glusterfs-client-70=0x000000000000000000000000
trusted.afr.c_glusterfs-client-76=0x000000000000000000000000
trusted.afr.c_glusterfs-client-84=0x000000000000000000000000
trusted.afr.c_glusterfs-client-90=0x000000000000000000000000
trusted.afr.c_glusterfs-client-98=0x000000000000000000000000
trusted.afr.dirty=0x000000000000000000000000
trusted.gfid=0xd9cd3be03fa44d1e8a8da8523535ef0a
trusted.glusterfs.dht=0x000000010000000000000000ffffffff


Thanks,
Xin