[Gluster-users] Gluster 3.8.10 rebalance VMs corruption

Krutika Dhananjay kdhananj at redhat.com
Sun Mar 19 04:53:25 UTC 2017


On Sat, Mar 18, 2017 at 11:15 PM, Gandalf Corvotempesta <
gandalf.corvotempesta at gmail.com> wrote:

> Krutika, it wasn't an attack directed at you.
> It wasn't an attack at all.
>

> Gluster is a "SCALE-OUT" software-defined storage; the following is
> written in the middle of the homepage:
> "GlusterFS is a scalable network filesystem"
>
> So, scaling a cluster is one of the primary goals of Gluster.
>
> A critical bug that prevents Gluster from being scaled without losing
> data was discovered a year ago, and it took a year to be fixed.
>

> If Gluster isn't able to ensure data consistency while doing its
> primary role, scaling up storage, I'm sorry but it can't be
> considered "enterprise" ready or production ready.
>

That's not entirely true. The VM use-case is just one of the many workloads
users run on Gluster. I think I've clarified this before. The bug was in the
dht-shard interaction, and shard is *only* supported in the VM use-case as of
today. This means that scaling out has been working fine on all but the VM
use-case. That doesn't mean that Gluster is not production-ready. At least
users who've deployed Gluster in non-VM use-cases haven't complained of
add-brick not working in the recent past.
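
For anyone less familiar with the feature, here is a minimal sketch of the
operation in question: enabling sharding on a VM volume and then scaling it
out with add-brick plus rebalance. It is only an illustration (the volume and
brick names are hypothetical, and it assumes the gluster CLI is installed),
written as a small Python wrapper around the CLI:

    #!/usr/bin/env python3
    # Sketch only: hypothetical volume/brick names; assumes the volume
    # already exists and the gluster CLI is on PATH.
    import subprocess

    VOLUME = "vmstore"                    # hypothetical sharded VM volume
    NEW_BRICK = "server4:/bricks/brick1"  # hypothetical brick being added

    def gluster(*args):
        # Run a "gluster volume ..." command and fail loudly on error.
        subprocess.run(["gluster", "volume", *args], check=True)

    # Sharding splits large files such as VM images into fixed-size pieces.
    gluster("set", VOLUME, "features.shard", "on")

    # Scale out: add a brick, then rebalance so existing data (shards
    # included) is redistributed. Rebalance on a sharded volume is the step
    # where the bug discussed here could corrupt running VM images before
    # the fix.
    gluster("add-brick", VOLUME, NEW_BRICK)
    gluster("rebalance", VOLUME, "start")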


-Krutika


> Maybe SOHO for small offices or home users, but in enterprises, data
> consistency and reliability are the most important things, and Gluster
> isn't able to guarantee this even when
> doing a very basic routine procedure that should be considered the
> basis of the whole Gluster project (as written on Gluster's homepage)
>
>
> 2017-03-18 14:21 GMT+01:00 Krutika Dhananjay <kdhananj at redhat.com>:
> >
> >
> > On Sat, Mar 18, 2017 at 3:18 PM, Gandalf Corvotempesta
> > <gandalf.corvotempesta at gmail.com> wrote:
> >>
> >> 2017-03-18 2:09 GMT+01:00 Lindsay Mathieson <lindsay.mathieson at gmail.com>:
> >> > Concerning, this was supposed to be fixed in 3.8.10
> >>
> >> Exactly. https://bugzilla.redhat.com/show_bug.cgi?id=1387878
> >> Now let's see how much time they require to fix another CRITICAL bug.
> >>
> >> I'm really curious.
> >
> >
> > Hey Gandalf!
> >
> > Let's see. There have been plenty of occasions where I've sat and worked
> > on users' issues on weekends.
> > And then again, I've got a life too outside of work (or at least I'm
> > supposed to), you know.
> > (And hey you know what! Today is Saturday and I'm sitting here and
> > responding to your mail and collecting information
> > on Mahdi's issue. Nobody asked me to look into it. I checked the mail
> > and I had a choice to ignore it and not look into it until Monday.)
> >
> > Is there a genuine problem Mahdi is facing? Without a doubt!
> >
> > Got a constructive feedback to give? Please do.
> > Do you want to give back to the community and help improve GlusterFS?
> > There are plenty of ways to do that.
> > One of them is testing out the releases and providing feedback. Sharding
> > wouldn't have worked today, if not for Lindsay's timely
> > and regular feedback in several 3.7.x releases.
> >
> > But this kind of criticism doesn't help.
> >
> > Also, spending time on users' issues is only one of the many
> > responsibilities we have as developers.
> > So what you see on mailing lists is just the tip of the iceberg.
> >
> > I have personally tried several times to recreate the add-brick bug on 3
> > machines I borrowed from Kaleb. I haven't had success in recreating it.
> > Reproducing VM-related bugs, in my experience, hasn't been easy. I don't use
> > Proxmox. Lindsay and Kevin did. There are myriad qemu options used when
> > launching VMs. Different VM management projects (oVirt/Proxmox) use
> > different defaults for these options. There are too many variables to be
> > considered when debugging or trying to simulate the users' test.
> >
> > It's why I asked for Mahdi's help, before 3.8.10 was out, for feedback on
> > the fix:
> > http://lists.gluster.org/pipermail/gluster-users/2017-February/030112.html
> >
> > Alright. That's all I had to say.
> >
> > Happy weekend to you!
> >
> > -Krutika
> >
> >> _______________________________________________
> >> Gluster-users mailing list
> >> Gluster-users at gluster.org
> >> http://lists.gluster.org/mailman/listinfo/gluster-users
> >
> >
>