[Gluster-users] 3.7.16 with sharding corrupts VMDK files when adding and removing bricks
wongmlb at gmail.com
Sat Nov 12 01:10:15 UTC 2016
Has anyone encountered this behavior?
Running Gluster 3.7.16 from centos-gluster37, on CentOS 7.2 with NFS-Ganesha 2.3.0.
The VMs run fine with sharding on. However, when I do either an "add-brick" or a
"remove-brick start force", the VM files become corrupted and the VMs can no
longer boot.
So far, when I access the volume through regular NFS, all regular files and
directories seem to be accessible without issue. I am not sure whether this is
related to bug 1318136, but any help will be appreciated. Am I missing any
settings? Below is the vol info of the Gluster volume.
Volume Name: nfsvol1
Volume ID: 06786467-4c8a-48ad-8b1f-346aa8342283
Number of Bricks: 2 x 2 = 4
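For reference, the operations that trigger the corruption look roughly like the
following. The brick host and path names here are hypothetical placeholders, not
my actual bricks:

```shell
# Hypothetical example of expanding the 2 x 2 distributed-replicated
# volume nfsvol1 by one replica pair (placeholder hosts/paths):
gluster volume add-brick nfsvol1 replica 2 \
    server3:/bricks/brick1 server4:/bricks/brick1

# Or shrinking it, as described above ("remove-brick start force"):
gluster volume remove-brick nfsvol1 replica 2 \
    server3:/bricks/brick1 server4:/bricks/brick1 start force
```

Either operation alone is enough to leave the sharded VMDK files unbootable.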