[Gluster-users] Again about the healing process

Service Mail ciclopeblu at gmail.com
Mon Dec 26 09:46:46 UTC 2016


Hello,

I have a cluster of 3 nodes with some replica volumes. I restarted one of
the bricks while still writing to the others, so that I could test the
healing (self-heal is on). After a few hours the healing seems to have
stopped, but when I check the status, even many hours later, I still see
the following:

[root@brick-a ~]# gluster volume heal backup info
Brick brick-a:/pool/backup
<gfid:670563bf-76da-41bc-a4b8-4bf5e05d7a4e>
<gfid:562106de-2768-4c66-9d9a-3e5da19ec3f6>
<gfid:2084e791-b3f8-4234-989b-6f3dfa395e6b>
<gfid:981b9503-efc0-4617-864a-7ce0768524e5>
<gfid:394201a8-d551-491b-81b9-89044c11076f>
<gfid:863335e0-79ec-46d4-8704-40e33e248653>
<gfid:a8342c60-76d9-4dc9-8e3e-a5c97a8f843a>
<gfid:9517abe3-a466-404c-8987-164fbd0c01ad>
<gfid:c075be3f-4f1b-4235-9172-1ad34bd273d1>
Status: Connected
Number of entries: 9

Brick brick-b:/pool/backup
Status: Connected

As far as I understand, this means some data on brick-b (the one I
restarted) is not yet healthy.
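As far as I know, each of those gfid entries corresponds to a hardlink
under the brick's .glusterfs tree, so it should be possible to map them
back to real paths. A rough sketch of what I mean (assuming bash, run
directly on brick-a, with the brick root /pool/backup as in the output
above):

gfid=670563bf-76da-41bc-a4b8-4bf5e05d7a4e
# .glusterfs keeps a gfid-named hardlink, bucketed by the first two
# byte pairs of the gfid; find the real path sharing that inode
find /pool/backup -samefile "/pool/backup/.glusterfs/${gfid:0:2}/${gfid:2:2}/${gfid}" \
  -not -path '*/.glusterfs/*'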

I have tried "# gluster volume heal backup full", but the info command
still lists those entries as pending.
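If I am reading the docs right, the following should show a per-brick
count of pending entries and flag anything in split-brain (a sketch,
assuming a reasonably recent gluster CLI):

# per-brick count of entries still waiting to be healed
gluster volume heal backup statistics heal-count
# entries the self-heal daemon cannot resolve on its own
gluster volume heal backup info split-brain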

But the most concerning thing to me is that, checking the directory trees
on the nodes, they look different:

[root@brick-a backup]# du * -sh
301G data
1.9T data2
1.9T data3

[root@brick-b backup]# du * -sh
304G data
1.4T data2
1.7T data3

Could it be that the 9 entries listed by the "info" command are big files
still being synced to the other bricks? How can I verify this?


Can you help?

Thanks