[Gluster-users] heal and heal full do not heal files, how to manually heal them?
Yandong Yao
yydzero at gmail.com
Sat Jan 18 15:48:35 UTC 2014
BTW: This is the output of volume info and status.
u1 at u1-virtual-machine:~$ sudo gluster volume info
Volume Name: mysqldata
Type: Replicate
Volume ID: 27e6161b-d2d0-4369-8ef0-acf18532af73
Status: Started
Number of Bricks: 1 x 2 = 2
Transport-type: tcp
Bricks:
Brick1: 192.168.53.218:/data/gv0/brick1/mysqldata
Brick2: 192.168.53.221:/data/gv0/brick1/mysqldata
u1 at u1-virtual-machine:~$ sudo gluster volume status
Status of volume: mysqldata
Gluster process                                   Port    Online  Pid
------------------------------------------------------------------------------
Brick 192.168.53.218:/data/gv0/brick1/mysqldata   49154   Y       2071
Brick 192.168.53.221:/data/gv0/brick1/mysqldata   49153   Y       2170
NFS Server on localhost                           2049    Y       2066
Self-heal Daemon on localhost                     N/A     Y       2076
NFS Server on 192.168.53.221                      2049    Y       2175
Self-heal Daemon on 192.168.53.221                N/A     Y       2180
There are no active volume tasks
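
Both bricks and both self-heal daemons show Online=Y, so the processes
themselves look healthy. For completeness, these are the heal commands I
have been running; the two 'info' variants at the end are ones I believe
this release supports, so treat those as an assumption on my part:

sudo gluster volume heal mysqldata                   # heal
sudo gluster volume heal mysqldata full              # heal full
sudo gluster volume heal mysqldata info              # pending entries
sudo gluster volume heal mysqldata info healed       # recently healed (assumed available)
sudo gluster volume heal mysqldata info heal-failed  # failed heals (assumed available)
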
2014/1/18 Yandong Yao <yydzero at gmail.com>
> Hi Guys,
>
> I am testing GlusterFS and have configured a replicated volume (replica=2
> across two virtual machines). After playing with the volume for a while,
> inconsistent entries are reported by 'gluster volume heal <volname> info':
>
> u1 at u1-virtual-machine:~$ sudo gluster volume heal mysqldata info
> Gathering Heal info on volume mysqldata has been successful
>
> Brick 192.168.53.218:/data/gv0/brick1/mysqldata
> Number of entries: 1
> <gfid:0ff1a4e1-b14c-41d6-826b-e749a4e6ec7f>
>
> Brick 192.168.53.221:/data/gv0/brick1/mysqldata
> Number of entries: 1
> /ibdata1
>
>
> *1) What does this mean? Why is one entry the file path itself on one
> host, while the other entry is a gfid on the other host?*
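>
> (If I understand the brick layout correctly -- an assumption on my part --
> every file on a brick has a hard link under
> .glusterfs/<first 2 hex>/<next 2 hex>/<full gfid>, so a gfid entry can be
> mapped back to a regular path on that brick, e.g.:
>
> sudo find /data/gv0/brick1/mysqldata -samefile \
>     /data/gv0/brick1/mysqldata/.glusterfs/0f/f1/0ff1a4e1-b14c-41d6-826b-e749a4e6ec7f
>
> which should print both the .glusterfs link and the regular path sharing
> that inode, if the layout assumption holds.)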
>
> *2) After a while (maybe 2 minutes), I re-ran heal info and got the
> following output. What happened behind the scenes? Why did the entry
> change from a gfid to a file path?*
>
> u1 at u1-virtual-machine:~$ sudo gluster volume heal mysqldata info
> Gathering Heal info on volume mysqldata has been successful
>
> Brick 192.168.53.218:/data/gv0/brick1/mysqldata
> Number of entries: 1
> /ibdata1
>
> Brick 192.168.53.221:/data/gv0/brick1/mysqldata
> Number of entries: 1
> /ibdata1
> u1 at u1-virtual-machine:~$ sudo gluster volume heal mysqldata info split-brain
> Gathering Heal info on volume mysqldata has been successful
>
> Brick 192.168.53.218:/data/gv0/brick1/mysqldata
> Number of entries: 0
>
> Brick 192.168.53.221:/data/gv0/brick1/mysqldata
> Number of entries: 0
>
> *3) I tried both 'heal' and 'heal full', but neither seems to work; I
> still get the output above. How can I heal this case manually? The
> getfattr output is below.*
>
> u1 at u1-virtual-machine:~$ sudo getfattr -e hex -m . -d /data/gv0/brick1/mysqldata/ibdata1
> getfattr: Removing leading '/' from absolute path names
> # file: data/gv0/brick1/mysqldata/ibdata1
> trusted.afr.mysqldata-client-0=0x000000010000000000000000
> trusted.afr.mysqldata-client-1=0x000000010000000000000000
> trusted.gfid=0x0ff1a4e1b14c41d6826be749a4e6ec7f
>
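> If I decode the trusted.afr changelog values correctly (my assumption: the
> first 4 bytes count pending data operations, the next 4 metadata, and the
> last 4 entry operations), this copy of ibdata1 has one pending data
> operation recorded against both client-0 and client-1. If the other brick
> shows the same pattern, each copy is accusing the other, i.e. a data
> split-brain, even though 'heal info split-brain' prints no entries.
>
> The manual recovery I am considering, pieced together from list archives
> (so please correct me if it is wrong), is to pick one copy as good -- say
> the one on 192.168.53.218 -- and on the other brick remove the stale copy
> together with its .glusterfs hard link, then trigger a lookup from a
> client mount ('/mnt/mysqldata' below is just a placeholder for my mount
> point):
>
> # on 192.168.53.221, whose copy I would discard (my assumption)
> sudo rm /data/gv0/brick1/mysqldata/ibdata1
> sudo rm /data/gv0/brick1/mysqldata/.glusterfs/0f/f1/0ff1a4e1-b14c-41d6-826b-e749a4e6ec7f
> # from any client, stat the file so it is re-created from the good copy
> stat /mnt/mysqldata/ibdata1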
>
> Any comments are welcome, and thanks very much in advance!
>
> Regards,
> Yandong
>