[Gluster-users] 3.7.16 with sharding corrupts VMDK files when adding and removing bricks

Kevin Lemonnier lemonnierk at ulrar.net
Sat Nov 12 11:52:48 UTC 2016


> 
>    Having to create multiple clusters is not a solution and is much more
>    expensive.
>    And if you corrupt data from a single cluster, you still have issues.
> 

Sure, but thinking about it later we realised that it might be for the better.
I believe that when sharding is enabled the shards are dispersed across all the
replica sets, so losing a single replica set would kill all your VMs.

Imagine a 16x3 volume, for example: losing 2 bricks could bring the whole thing
down if they happen to be in the same replica set. (I might be wrong about the
way gluster disperses shards, it's my understanding only, never had the chance
to test it.)
With multiple small clusters we have the same disk space in the end but not
that problem; it's a bit more annoying to manage, but for now that's all right.
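To illustrate what I mean, here is a back-of-the-envelope sketch (my own, not
how gluster actually names or hashes shard files, and the shard count is made
up) of why one big sharded volume ties every VM to every replica set:

# Stand-in for gluster's DHT layout: each shard is a separate file, placed
# on a replica set by hashing its name, so a large image touches many sets.
import hashlib

REPLICA_SETS = 16                      # the 16x3 example above
SHARDS_PER_VM = 250                    # e.g. a ~64 GB image at 256 MB shards

def replica_set_for(shard_name):
    # hash the shard file name to pick a replica set (illustrative only)
    h = int(hashlib.md5(shard_name.encode()).hexdigest(), 16)
    return h % REPLICA_SETS

for vm in ("vm1.vmdk", "vm2.vmdk"):
    sets_touched = {replica_set_for(f"{vm}.{i}") for i in range(SHARDS_PER_VM)}
    print(vm, "spreads over", len(sets_touched), "of", REPLICA_SETS, "replica sets")

With a couple of hundred shards per image you basically always hit all 16
sets, which is the blast-radius problem I'm describing above.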

> 
>    I'm also subscribed to the moosefs and lizardfs mailing lists and I don't
>    recall a single data corruption / data loss event
> 

Never used those; it might just be because there are fewer users? I really have
no idea, maybe you are right.

>    If you change the shard size on a populated cluster, you break all
>    existing data.

Not really shocked there. The CLI should probably warn you when you try
re-setting the option though, that would be nice.
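Until then, something like the small wrapper below is what I have in mind. It's
only a sketch, assuming the usual `gluster volume get/set` commands, the
features.shard-block-size option key, and a made-up volume name:

# Refuse to re-set the shard size on a volume that already has a different
# one, instead of relying on the CLI to warn you. Sketch only, adjust to taste.
import subprocess
import sys

def current_shard_size(volume):
    out = subprocess.run(
        ["gluster", "volume", "get", volume, "features.shard-block-size"],
        capture_output=True, text=True, check=True).stdout
    # last whitespace-separated token of the option line, e.g. "64MB"
    for line in out.splitlines():
        if "features.shard-block-size" in line:
            return line.split()[-1]
    return None

def set_shard_size(volume, size):
    current = current_shard_size(volume)
    if current and current != size:
        sys.exit(f"{volume} already uses {current} shards; refusing to change "
                 f"to {size}, existing sharded files would be misread.")
    subprocess.run(["gluster", "volume", "set", volume,
                    "features.shard-block-size", size], check=True)

if __name__ == "__main__":
    set_shard_size("vmstore", "64MB")   # hypothetical volume name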

-- 
Kevin Lemonnier
PGP Fingerprint : 89A5 2283 04A0 E6E9 0111