<div dir="ltr"><div dir="ltr"><br></div><div>Hi Strahil,</div><br><div class="gmail_quote"><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex">WARNING: As you enabled sharding - NEVER DISABLE SHARDING, EVER !<br></blockquote><div><br></div><div>Thanks -- good to be reminded :)</div><div> </div><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex">>When you say they will not be optimal are you referring mainly to<br>
> > performance considerations? We did plenty of testing, and in terms of
> > performance didn't have issues even with I/O intensive workloads (using
> > SSDs, I had issues with spinning disks).
>
> Yes, the client side has to connect to 6 bricks (4+2) at a time and calculate the data in order to obtain the necessary information. The same is valid for writing.
> If you need to conserve space, you can test VDO without compression (or even with it).

Understood -- will explore VDO. Storage usage efficiency is less important to us than fault tolerance or performance -- disperse volumes seemed to tick all the boxes, so we looked at them first.
But I had clearly missed that they are not used as mainstream VM storage for oVirt (I did know they weren't supported, but, as explained, I thought that was more on the management side).
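For my own notes, this is roughly the layout we are comparing and the VDO test I have in mind. Volume, node and device names below are placeholders and the vdo options are from memory, so treat it as a sketch rather than a recipe:

    # 4+2 disperse volume: every read/write touches all 6 bricks
    gluster volume create vmstore disperse 6 redundancy 2 \
        node{1..6}:/gluster/brick1/vmstore

    # VDO device with deduplication but without compression (to be tested)
    vdo create --name=vdo_vmstore --device=/dev/sdb \
        --compression=disabled --deduplication=enabled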
> Also with replica volumes, you can use 'choose-local' /in case you have storage faster than the network (like NVMe)/ and increase the read speed. Of course this feature is useful for a hyperconverged setup (gluster + ovirt on the same node).

Will explore this option as well, thanks for the suggestion.
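If I read the docs correctly it is a per-volume option, so noting it here so I don't forget (the volume name is a placeholder):

    # Prefer the local brick for reads on a replica volume (hyperconverged hosts)
    gluster volume set vmstore_replica cluster.choose-local on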
> If you were using ovirt 4.3, I would recommend you to focus on gluster. Yet, you use oVirt 4.4 which is quite newer and needs some polishing.

oVirt 4.3.9 (using the older CentOS 7 qemu/libvirt) unfortunately had similar issues with the disperse volumes. Not sure if they were exactly the same, as I never looked deeper into it, but the results were similar.
oVirt 4.4.0 has some issues with snapshot deletion that are independent of Gluster (I have raised the issue here: https://bugzilla.redhat.com/show_bug.cgi?id=1840414 -- it should be sorted with 4.4.2, I guess), so at the moment it only works with the "testing" AV repo.
> Check ovirt engine logs (on the HostedEngine VM or your standalone engine), vdsm logs on the host that was running the VM and next - check the brick logs.

Will do.

Thanks,
Marco
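P.S. Noting down the log locations I plan to check first -- these are the default paths on our installs, so adjust if yours differ:

    /var/log/ovirt-engine/engine.log     (on the engine / HostedEngine VM)
    /var/log/vdsm/vdsm.log               (on the host that was running the VM)
    /var/log/glusterfs/bricks/*.log      (brick logs on each gluster node)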