[Gluster-users] heal and heal full do not heal files, how to manually heal them?
Yandong Yao
yydzero at gmail.com
Sun Jan 19 04:25:40 UTC 2014
It looks like the hidden gfid hardlinks are inconsistent. How can I fix
this, and why does it happen?
*On machine 1:*
u1 at u1-virtual-machine:~$ sudo stat
/data/gv0/brick1/mysqldata/.glusterfs/0f/f1/0ff1a4e1-b14c-41d6-826b-e749a4e6ec7f
File:
‘/data/gv0/brick1/mysqldata/.glusterfs/0f/f1/0ff1a4e1-b14c-41d6-826b-e749a4e6ec7f’
Size: 79691776 Blocks: 155656 IO Block: 4096 regular file
Device: 811h/2065d Inode: 393231 Links: 2
Access: (0660/-rw-rw----) Uid: ( 999/ mysql) Gid: ( 1001/ mysql)
Access: 2014-01-18 23:09:47.567335000 +0800
Modify: 2014-01-18 23:11:48.690740114 +0800
Change: 2014-01-19 12:12:35.360826648 +0800
Birth: -
u1 at u1-virtual-machine:~$ sudo getfattr -e hex -m . -d
/data/gv0/brick1/mysqldata/.glusterfs/0f/f1/0ff1a4e1-b14c-41d6-826b-e749a4e6ec7f
getfattr: Removing leading '/' from absolute path names
# file:
data/gv0/brick1/mysqldata/.glusterfs/0f/f1/0ff1a4e1-b14c-41d6-826b-e749a4e6ec7f
trusted.afr.mysqldata-client-0=0x000000010000000000000000
trusted.afr.mysqldata-client-1=0x000000010000000000000000
trusted.gfid=0x0ff1a4e1b14c41d6826be749a4e6ec7f
*On machine 2:*
u1 at u2-virtual-machine:/var/lib/glusterd/glustershd$ sudo stat
/data/gv0/brick1/mysqldata/.glusterfs/0f/f1/0ff1a4e1-b14c-41d6-826b-e749a4e6ec7f
File:
‘/data/gv0/brick1/mysqldata/.glusterfs/0f/f1/0ff1a4e1-b14c-41d6-826b-e749a4e6ec7f’
Size: 79691776 Blocks: 155656 IO Block: 4096 regular file
Device: 811h/2065d Inode: 131087 Links: 2
Access: (0660/-rw-rw----) Uid: ( 999/ mysql) Gid: ( 1001/ mysql)
Access: 2014-01-19 11:38:49.305700766 +0800
Modify: 2014-01-18 23:11:48.356419703 +0800
Change: 2014-01-19 12:12:35.237641763 +0800
Birth: -
u1 at u2-virtual-machine:/var/lib/glusterd/glustershd$ sudo getfattr -e hex -m . -d
/data/gv0/brick1/mysqldata/.glusterfs/0f/f1/0ff1a4e1-b14c-41d6-826b-e749a4e6ec7f
getfattr: Removing leading '/' from absolute path names
# file:
data/gv0/brick1/mysqldata/.glusterfs/0f/f1/0ff1a4e1-b14c-41d6-826b-e749a4e6ec7f
trusted.afr.mysqldata-client-0=0x000000010000000000000000
trusted.afr.mysqldata-client-1=0x000000010000000000000000
trusted.gfid=0x0ff1a4e1b14c41d6826be749a4e6ec7f
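For anyone reading these values: as I understand the AFR changelog format, each
trusted.afr value packs three big-endian 32-bit counters for pending data,
metadata and entry operations. A throwaway bash sketch (decode_afr is just a
hypothetical helper name, not a gluster tool):

```shell
# Hypothetical helper: split a trusted.afr changelog value into its
# three 32-bit counters (pending data / metadata / entry operations).
decode_afr() {
  hex=${1#0x}                    # strip the 0x prefix
  data=$((16#${hex:0:8}))        # hex digits 1-8:   pending data ops
  meta=$((16#${hex:8:8}))        # hex digits 9-16:  pending metadata ops
  entry=$((16#${hex:16:8}))      # hex digits 17-24: pending entry ops
  echo "data=$data metadata=$meta entry=$entry"
}

decode_afr 0x000000010000000000000000   # value seen on both bricks
# -> data=1 metadata=0 entry=0
```

So each brick shows one pending data operation against both clients, i.e. each
copy blames the other, which matches a data split-brain.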
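On releases of this era the usual manual fix for a data split-brain, as far as
I know, is to decide which copy is good yourself, then remove the bad copy and
its .glusterfs hardlink from the other brick and trigger a heal. A sketch,
assuming the copy on one brick is the bad one (remove_bad_copy is a
hypothetical helper; substitute your own brick path and gfid):

```shell
# Hypothetical helper: remove a split-brain file from the brick holding
# the BAD copy. Gluster also keeps a hardlink to every file under
# .glusterfs/<first 2 gfid chars>/<next 2 chars>/<full gfid>, and both
# links must be removed or the stale copy can resurface.
remove_bad_copy() {
  brick=$1; relpath=$2; gfid=$3
  rm -f "$brick/$relpath"
  rm -f "$brick/.glusterfs/${gfid:0:2}/${gfid:2:2}/$gfid"
}

# Values taken from the output above; run on the bad brick ONLY:
remove_bad_copy /data/gv0/brick1/mysqldata ibdata1 \
    0ff1a4e1-b14c-41d6-826b-e749a4e6ec7f

# Then let self-heal re-replicate the surviving good copy:
# sudo gluster volume heal mysqldata full
```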
2014/1/18 Yandong Yao <yydzero at gmail.com>
> BTW: This is the output of volume info and status.
>
> u1 at u1-virtual-machine:~$ sudo gluster volume info
>
> Volume Name: mysqldata
> Type: Replicate
> Volume ID: 27e6161b-d2d0-4369-8ef0-acf18532af73
> Status: Started
> Number of Bricks: 1 x 2 = 2
> Transport-type: tcp
> Bricks:
> Brick1: 192.168.53.218:/data/gv0/brick1/mysqldata
> Brick2: 192.168.53.221:/data/gv0/brick1/mysqldata
> u1 at u1-virtual-machine:~$ sudo gluster volume status
> Status of volume: mysqldata
> Gluster process Port Online Pid
>
> ------------------------------------------------------------------------------
> Brick 192.168.53.218:/data/gv0/brick1/mysqldata 49154 Y 2071
> Brick 192.168.53.221:/data/gv0/brick1/mysqldata 49153 Y 2170
> NFS Server on localhost 2049 Y 2066
> Self-heal Daemon on localhost N/A Y 2076
> NFS Server on 192.168.53.221 2049 Y 2175
> Self-heal Daemon on 192.168.53.221 N/A Y 2180
>
> There are no active volume tasks
>
>
> 2014/1/18 Yandong Yao <yydzero at gmail.com>
>
>> Hi Guys,
>>
>> I am testing glusterfs and have configured a replicated volume (replica=2
>> on two virtual machines). After playing with the volume for a while,
>> 'heal volname info' reports inconsistent entries:
>>
>> u1 at u1-virtual-machine:~$ sudo gluster volume heal mysqldata info
>> Gathering Heal info on volume mysqldata has been successful
>>
>> Brick 192.168.53.218:/data/gv0/brick1/mysqldata
>> Number of entries: 1
>> <gfid:0ff1a4e1-b14c-41d6-826b-e749a4e6ec7f>
>>
>> Brick 192.168.53.221:/data/gv0/brick1/mysqldata
>> Number of entries: 1
>> /ibdata1
>>
>>
>> *1) What does this mean? Why is the entry the file path itself on one
>> host, but a gfid on the other?*
>>
>> *2) After a while (maybe 2 minutes), re-running heal info gives the
>> following output. What happened behind the scenes? Why did the entry
>> change from a gfid to a file path?*
>>
>> u1 at u1-virtual-machine:~$ sudo gluster volume heal mysqldata info
>> Gathering Heal info on volume mysqldata has been successful
>>
>> Brick 192.168.53.218:/data/gv0/brick1/mysqldata
>> Number of entries: 1
>> /ibdata1
>>
>> Brick 192.168.53.221:/data/gv0/brick1/mysqldata
>> Number of entries: 1
>> /ibdata1
>> u1 at u1-virtual-machine:~$ sudo gluster volume heal mysqldata info
>> split-brain
>> Gathering Heal info on volume mysqldata has been successful
>>
>> Brick 192.168.53.218:/data/gv0/brick1/mysqldata
>> Number of entries: 0
>>
>> Brick 192.168.53.221:/data/gv0/brick1/mysqldata
>> Number of entries: 0
>>
>> *3) I tried both heal and heal full, but neither seems to work; I still
>> get the output above. How can I heal this case manually? The getfattr
>> output follows.*
>>
>> u1 at u1-virtual-machine:~$ sudo getfattr -e hex -m . -d
>> /data/gv0/brick1/mysqldata/ibdata1
>> getfattr: Removing leading '/' from absolute path names
>> # file: data/gv0/brick1/mysqldata/ibdata1
>> trusted.afr.mysqldata-client-0=0x000000010000000000000000
>> trusted.afr.mysqldata-client-1=0x000000010000000000000000
>> trusted.gfid=0x0ff1a4e1b14c41d6826be749a4e6ec7f
>>
>>
>> Any comments are welcome, and thanks very much in advance!
>>
>> Regards,
>> Yandong
>>
>
>