<div dir="ltr"><br><div class="gmail_extra"><br><div class="gmail_quote">On Sat, Mar 18, 2017 at 11:15 PM, Gandalf Corvotempesta <span dir="ltr"><<a href="mailto:gandalf.corvotempesta@gmail.com" target="_blank">gandalf.corvotempesta@gmail.com</a>></span> wrote:<br><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex">Krutika, it wasn't an attack directly to you.<br>
It wasn't an attack at all. <br></blockquote><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex">
<br>
Gluster is a "SCALE-OUT" software defined storage, the folllowing is<br>
wrote in the middle of the homepage:<br>
"GlusterFS is a scalable network filesystem"<br>
<br>
So, scaling a cluster is one of the primary goal of gluster.<br>
<br>
A critical bug that prevent gluster from being scaled without loosing<br>
data was discovered 1 year ago, and took 1 year to be fixed. <br></blockquote><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex">
>
> If Gluster isn't able to ensure data consistency while doing its
> primary job, scaling up a storage cluster, I'm sorry but it can't be
> considered "enterprise" ready or production ready.

That's not entirely true. The VM use-case is just one of the many workloads
users run on Gluster. I think I've clarified this before. The bug was in the
dht-shard interaction, and shard is *only* supported in the VM use-case as of
today. This means that scaling out has been working fine on all but the VM
use-case.
That doesn't mean that Gluster is not production-ready. At least users who've
deployed Gluster in non-VM use-cases haven't complained of add-brick not
working in the recent past.
-Krutika

> Maybe SOHO for small offices or home users, but in enterprises, data
> consistency and reliability are the most important things, and Gluster
> isn't able to guarantee them even
> when doing a very basic routine procedure that should be considered the
> basis of the whole Gluster project (as written on Gluster's homepage).

2017-03-18 14:21 GMT+01:00 Krutika Dhananjay <kdhananj@redhat.com>:
>
>
> On Sat, Mar 18, 2017 at 3:18 PM, Gandalf Corvotempesta
> <gandalf.corvotempesta@gmail.com> wrote:
>>
>> 2017-03-18 2:09 GMT+01:00 Lindsay Mathieson <lindsay.mathieson@gmail.com>:
>> > Concerning, this was supposed to be fixed in 3.8.10
>>
>> Exactly. https://bugzilla.redhat.com/show_bug.cgi?id=1387878
>> Now let's see how much time they require to fix another CRITICAL bug.
>>
>> I'm really curious.
>
>
> Hey Gandalf!
>
> Let's see. There have been plenty of occasions where I've sat and worked on
> users' issues on weekends.
> And then again, I've got a life too outside of work (or at least I'm
> supposed to), you know.
> (And hey you know what! Today is Saturday and I'm sitting here and
> responding to your mail and collecting information
> on Mahdi's issue. Nobody asked me to look into it. I checked the mail and I
> had a choice to ignore it and not look into it until Monday.)
>
> Is there a genuine problem Mahdi is facing? Without a doubt!
>
> Got constructive feedback to give? Please do.
> Do you want to give back to the community and help improve GlusterFS? There
> are plenty of ways to do that.
> One of them is testing out the releases and providing feedback. Sharding
> wouldn't have worked today if not for Lindsay's timely
> and regular feedback in several 3.7.x releases.
>
> But this kind of criticism doesn't help.
>
> Also, spending time on users' issues is only one of the many
> responsibilities we have as developers.
> So what you see on mailing lists is just the tip of the iceberg.
>
> I have personally tried several times to recreate the add-brick bug on 3
> machines I borrowed from Kaleb. I haven't had success in recreating it.
> Reproducing VM-related bugs, in my experience, wasn't easy. I don't use
> Proxmox. Lindsay and Kevin did. There are myriad qemu options used when
> launching VMs. Different VM management projects (oVirt/Proxmox) use
> different defaults for these options. There are too many variables to be
> considered when debugging or trying to simulate the users' test.
>
> That's why I asked for Mahdi's help before 3.8.10 was out, for feedback on
> the fix:
> http://lists.gluster.org/pipermail/gluster-users/2017-February/030112.html
>
> Alright. That's all I had to say.
>
> Happy weekend to you!
>
> -Krutika
>
>> _______________________________________________
>> Gluster-users mailing list
>> Gluster-users@gluster.org
>> http://lists.gluster.org/mailman/listinfo/gluster-users
>
>