[Gluster-users] 3.7.16 with sharding corrupts VMDK files when adding and removing bricks

Krutika Dhananjay kdhananj at redhat.com
Sat Nov 12 02:29:46 UTC 2016


Hi,

Yes, this has been reported before by Lindsay Mathieson and Kevin Lemonnier
on this list.
We recently found and fixed one issue with replace-brick.

In your case, are you doing add-brick and changing the replica count (say
from 2 -> 3), or are you adding "replica-count" number of bricks every time?

-Krutika

On Sat, Nov 12, 2016 at 6:40 AM, ML Wong <wongmlb at gmail.com> wrote:

> Has anyone encountered this behavior?
>
> Running 3.7.16 from centos-gluster37 on CentOS 7.2 with NFS-Ganesha
> 2.3.0. VMs are running fine without problems and with sharding on. However,
> when I do either an "add-brick" or a "remove-brick start force", the VM
> files become corrupted and the VMs will not be able to boot anymore.
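>
> For reference, the operations in question would look roughly like this
> (brick hosts and paths here are only illustrative):
>
>   # grow the volume by one more replica-2 pair
>   gluster volume add-brick nfsvol1 stor6:/data/brick1/nfsvol1 stor7:/data/brick1/nfsvol1
>
>   # shrink it again by removing one replica pair (commit after migration)
>   gluster volume remove-brick nfsvol1 stor4:/data/brick1/nfsvol1 stor5:/data/brick1/nfsvol1 start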
>
> So far, when I access files through regular NFS, all regular files and
> directories seem to be accessible fine. I am not sure if this somehow
> relates to bug 1318136, but any help will be appreciated. Or am I missing
> any settings? Below is the vol info of the gluster volume.
>
> Volume Name: nfsvol1
> Type: Distributed-Replicate
> Volume ID: 06786467-4c8a-48ad-8b1f-346aa8342283
> Status: Started
> Number of Bricks: 2 x 2 = 4
> Transport-type: tcp
> Bricks:
> Brick1: stor4:/data/brick1/nfsvol1
> Brick2: stor5:/data/brick1/nfsvol1
> Brick3: stor1:/data/brick1/nfsvol1
> Brick4: stor2:/data/brick1/nfsvol1
> Options Reconfigured:
> features.shard-block-size: 64MB
> features.shard: on
> ganesha.enable: on
> features.cache-invalidation: off
> nfs.disable: on
> performance.readdir-ahead: on
> nfs-ganesha: enable
> cluster.enable-shared-storage: enable
>
> thanks,
> Melvin
>
> _______________________________________________
> Gluster-users mailing list
> Gluster-users at gluster.org
> http://www.gluster.org/mailman/listinfo/gluster-users
>