[Gluster-users] 3.7.16 with sharding corrupts VMDK files when adding and removing bricks

ML Wong wongmlb at gmail.com
Sat Nov 12 01:10:15 UTC 2016


Has anyone encountered this behavior?

I am running 3.7.16 from centos-gluster37 on CentOS 7.2 with NFS-Ganesha 2.3.0.
VMs run fine with sharding on. However, whenever I do either an "add-brick"
or a "remove-brick start force", the VM files get corrupted and the VMs
will no longer boot.
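
For reference, the operations were roughly the following (stor6 and stor7
below are placeholder host names for an added replica pair, not my actual
hosts):

    # expand: add one more replica-2 pair to the distribute layer
    gluster volume add-brick nfsvol1 stor6:/data/brick1/nfsvol1 stor7:/data/brick1/nfsvol1

    # shrink: start migrating data off one replica pair (one remove-brick variant)
    gluster volume remove-brick nfsvol1 stor4:/data/brick1/nfsvol1 stor5:/data/brick1/nfsvol1 start

Either operation is enough to leave the sharded VM images unbootable.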

So far, when I access files through regular NFS, all regular files and
directories seem to be accessible fine. I am not sure whether this somehow
relates to bug 1318136, but any help would be appreciated. Or am I missing
any settings? Below is the vol info of the gluster volume.

Volume Name: nfsvol1
Type: Distributed-Replicate
Volume ID: 06786467-4c8a-48ad-8b1f-346aa8342283
Status: Started
Number of Bricks: 2 x 2 = 4
Transport-type: tcp
Bricks:
Brick1: stor4:/data/brick1/nfsvol1
Brick2: stor5:/data/brick1/nfsvol1
Brick3: stor1:/data/brick1/nfsvol1
Brick4: stor2:/data/brick1/nfsvol1
Options Reconfigured:
features.shard-block-size: 64MB
features.shard: on
ganesha.enable: on
features.cache-invalidation: off
nfs.disable: on
performance.readdir-ahead: on
nfs-ganesha: enable
cluster.enable-shared-storage: enable
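
For reference, sharding on this volume was enabled along these lines (a
minimal sketch; the block size matches the vol info above):

    gluster volume set nfsvol1 features.shard on
    gluster volume set nfsvol1 features.shard-block-size 64MB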

thanks,
Melvin

