[Gluster-users] glusterfs 3.6.5 and selfheal

Roman romeo.r at gmail.com
Fri May 18 18:26:52 UTC 2018


I'm running a glusterfs server with a replicated volume for qemu-kvm (proxmox)
VM storage, which is mounted using the libgfapi module. The servers are on a
network with MTU 9000; the client is not (yet).
My question is this: is it normal to see this kind of output from
gluster volume heal HA-100G-POC-PVE info?

Brick stor1:/exports/HA-100G-POC-PVE/100G/
/images/100/vm-100-disk-1.raw - Possibly undergoing heal

Number of entries: 1

Brick stor2:/exports/HA-100G-POC-PVE/100G/
/images/100/vm-100-disk-1.raw - Possibly undergoing heal

This happens fairly often, though with different disk images on different
replicated volumes. I'm not sure whether it indicates a problem or is
expected behavior; I'm just curious about it.
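For reference, this is roughly how I check it (the volume name HA-100G-POC-PVE is from my setup; the heal-count subcommand is just one way to poll, not the only one):

```shell
# Full per-brick listing of entries needing (or possibly undergoing) heal.
gluster volume heal HA-100G-POC-PVE info

# Quicker to poll: just the count of entries per brick.
gluster volume heal HA-100G-POC-PVE statistics heal-count
```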

Best regards,
