[Gluster-users] 3.7.16 with sharding corrupts VMDK files when adding and removing bricks
Krutika Dhananjay
kdhananj at redhat.com
Mon Nov 14 10:59:49 UTC 2016
Which data corruption issue is this? Could you point me to the bug report
on bugzilla?
-Krutika
On Sat, Nov 12, 2016 at 4:28 PM, Gandalf Corvotempesta <gandalf.corvotempesta at gmail.com> wrote:
> On 12 Nov 2016 at 10:21, "Kevin Lemonnier" <lemonnierk at ulrar.net> wrote:
> > We've had a lot of problems in the past, but at least for us 3.7.12
> > (and 3.7.15) seems to be working pretty well as long as you don't
> > add bricks. We started doing multiple little clusters and abandoned
> > the idea of one big cluster, and we've had no issues since :)
> >
>
> Well, adding bricks could be useful... :)
>
> Having to create multiple clusters is not a solution, and it is much
> more expensive.
> And if you corrupt data on a single cluster, you still have issues.
>
> I think it would be better to add fewer features and focus more on
> stability.
> In software-defined storage, stability and consistency are the most
> important things.
>
> I'm also subscribed to the MooseFS and LizardFS mailing lists, and I
> don't recall a single data corruption or data loss event on either.
>
> In Gluster, after some days of testing, I've found a huge data
> corruption issue that is still unfixed on bugzilla.
> If you change the shard size on a populated cluster, you break all
> existing data.
> Try doing that on a cluster with running VMs and see what happens...
> a single CLI command breaks everything, and it is still unfixed.
>
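For context, the shard size of a sharded volume is controlled by the
features.shard-block-size volume option, so the single CLI command being
described is presumably along these lines. This is only a sketch: the
volume name "vmstore" is a placeholder, not taken from the thread.

    # Check the current shard size before touching anything:
    gluster volume get vmstore features.shard-block-size

    # The one-line change being warned about; per the report above,
    # running this on a volume that already holds data breaks the
    # existing files:
    gluster volume set vmstore features.shard-block-size 64MB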
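Similarly, the add-brick operation that Kevin describes avoiding is the
standard expansion path, normally followed by a rebalance to move
existing data onto the new bricks. A rough sketch, with placeholder host
and brick names:

    # Grow a replica-3 volume by one more replica set (placeholder names):
    gluster volume add-brick vmstore replica 3 \
        server4:/data/brick1 server5:/data/brick1 server6:/data/brick1

    # Rebalance redistributes existing files across the new bricks; the
    # subject of this thread reports VMDK corruption around exactly this
    # add/remove-brick path on sharded volumes:
    gluster volume rebalance vmstore start
    gluster volume rebalance vmstore status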