[Gluster-users] gluster heal entry reappears

Markus Stockhausen stockhausen at collogia.de
Sun May 28 17:01:41 UTC 2017


Hi,

I'm fairly new to Gluster and quite happy with it. We are using it in an oVirt
environment that stores its VM images on the Gluster volume. The setup is as follows;
clients mount the volume with the Gluster native FUSE protocol (a rough example of
such a mount is sketched after the node list below).

3 storage nodes: CentOS 7, Gluster 3.8.12 (managed by me), 2 bricks each
5 virtualization nodes: CentOS 7, Gluster 3.8.12 (managed by the oVirt engine)
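
For context, a native FUSE mount on a client looks roughly like the sketch below.
The mount point is only a placeholder for illustration; on the virtualization nodes
the oVirt engine manages the actual mount paths itself.

# mount the volume via the Gluster native FUSE client (placeholder mount point)
mount -t glusterfs cfiler301:/gluster1 /mnt/gluster1

# or persistently via /etc/fstab
cfiler301:/gluster1  /mnt/gluster1  glusterfs  defaults,_netdev  0 0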

After today's reboot of one of the storage nodes the recovery did not finish
successfully. The heal state of one brick remained at:

[root@cfiler301 dom_md]# gluster volume heal gluster1 info
...
Brick cfilers201:/var/data/brick1/brick
/b1de7818-020b-4f47-938f-f3ebb51836a3/dom_md/ids
Status: Connected
Number of entries: 1
...

The above file is used by sanlock running on the oVirt nodes to handle VM
image locking. Issuing a manual heal with "gluster volume heal gluster1" fixed
the problem, but the unsynced entry reappeared a few seconds later.
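
For completeness, the trigger and re-check were along these lines, run on one of
the storage nodes; the split-brain check at the end is just an additional sanity
check one can run, not something from the output above.

# kick off a heal of all pending entries on the volume
gluster volume heal gluster1

# re-check the pending entries a few seconds later
gluster volume heal gluster1 info

# optional sanity check for split-brain entries
gluster volume heal gluster1 info split-brain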

My question: should this situation recover automatically, and if yes,
what might be the culprit?
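
In case it helps, the places I would look on the storage nodes are roughly the
following (the log path is the usual CentOS 7 default and may differ):

# confirm the self-heal daemon is online for the volume
gluster volume status gluster1

# per-brick count of entries still pending heal
gluster volume heal gluster1 statistics heal-count

# self-heal daemon log
less /var/log/glusterfs/glustershd.log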

Best regards.

Markus

P.S. I finally fixed the issue by remounting the Gluster filesystems on the oVirt
nodes, which also caused sanlock to be restarted.
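
In rough terms that boils down to the following per node (placeholder server,
volume and mount-point names; the real mount points are the ones oVirt set up):

# cycle the Gluster FUSE mount on the oVirt node
umount /mnt/gluster1
mount -t glusterfs cfiler301:/gluster1 /mnt/gluster1

# sanlock ended up being restarted along the way; done explicitly it would be
systemctl restart sanlock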




