<div dir="auto"><div>Thank you Adnan!!</div><div dir="auto">You gave us right hints.</div><div dir="auto">are you letting ovirt to manage GlusterFS? or its an external storage?</div><div dir="auto">we are in trouble with 2+1 hdd based volumes,itself the boxes are fast and reliable as zfs+nfs store. </div><div dir="auto">with gluster as far we have normal load everything is working well, but once one of the bricks after reboot started healing process a lot of vms got i/o failure and timeouts.</div><div dir="auto">we never got this with the single zfs+nfs box.</div><div dir="auto"><br></div><div dir="auto">now we are <span style="font-family:sans-serif">planning </span>3 boxes in each box 2 HHHL NVME 3.8TB samsungs(special dev) +8x1.9TB storage as a zfsbox, so they will be glued as 2+1 glusterfs.</div><div dir="auto"><br></div><div dir="auto"><br></div><div dir="auto"><br></div><div dir="auto"><br><br><div class="gmail_quote" dir="auto"><div dir="ltr" class="gmail_attr">Mahdi Adnan <<a href="mailto:mahdi@sysmin.io">mahdi@sysmin.io</a>> schrieb am Fr., 26. März 2021, 22:08:<br></div><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex"><div dir="ltr">Hello Arman,<div><br></div><div> We have several volumes running all flash bricks hosting VMs for RHV/oVirt. as far as I know, there's no profile specifically for SSD, we just use the usual virt group for the volume which has the essential options for the volume to be used for VMs.</div><div>I have no experience with Gluster + ZFS so I can not comment on this. </div><div>my volumes are running in replica 3, we had huge performance impact with 2+1 because our arbiter was running in HDD.</div><div>One of the volumes we have is generating around 16k of WR IOps and even during upgrade process which involves healing of 100k files when nodes reboot, we see no performance issues at all.</div></div><br><div class="gmail_quote"><div dir="ltr" class="gmail_attr">On Fri, Mar 26, 2021 at 6:22 PM Arman Khalatyan <<a href="mailto:arm2arm@gmail.com" target="_blank" rel="noreferrer">arm2arm@gmail.com</a>> wrote:<br></div><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex"><div dir="auto">hello everyone,<div dir="auto">can someone please share his experience with all flash GlusterFS setups?</div><div dir="auto">I am planning to use it in 2+1 with ovirt for the critical VMs.</div><div dir="auto">plan is to have zfs with ssds in raid6 + pcie NVMe for the special dev.</div><div dir="auto"><br></div><div dir="auto">what kind of tuning should we put in GlusterFS side? any ssd profiles exists? </div><div dir="auto"><br></div><div dir="auto">thanks </div><div dir="auto">Arman.</div><div dir="auto"><br></div></div>
>
> --
> Respectfully
> Mahdi
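
For reference, a minimal sketch of the ZFS layout described above, assuming OpenZFS 0.8 or later (which introduced special allocation class vdevs) and that the "RAID6" maps to a raidz2 vdev. Pool name, dataset name and device paths are placeholders, and the small-block threshold is only an example value:

    # 8 x 1.9 TB SSDs as the raidz2 data vdev (device names are placeholders)
    zpool create vmpool raidz2 /dev/sda /dev/sdb /dev/sdc /dev/sdd \
                               /dev/sde /dev/sdf /dev/sdg /dev/sdh

    # 2 x 3.8 TB HHHL NVMe drives as a mirrored "special" vdev for metadata
    # (mirrored because losing the special vdev loses the whole pool)
    zpool add vmpool special mirror /dev/nvme0n1 /dev/nvme1n1

    # optionally send small blocks to the NVMe special vdev as well;
    # 64K is only an example threshold, tune per workload
    zfs set special_small_blocks=64K vmpool

    # dataset that will back the Gluster brick
    zfs create vmpool/brick1

For a real pool, /dev/disk/by-id paths are usually preferred over the sdX names used here.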
</blockquote></div></div></div>
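
On the Gluster side, a sketch of the setup Mahdi describes: a replica 3 arbiter 1 (2+1) volume with the stock "virt" option group applied. Volume name, hostnames and brick paths are placeholders, and the last option is only one knob to experiment with for the heal-storm problem, not a confirmed fix:

    # create the 2+1 volume: two data bricks plus one arbiter brick
    gluster volume create vmstore replica 3 arbiter 1 \
        node1:/vmpool/brick1/brick \
        node2:/vmpool/brick1/brick \
        node3:/vmpool/arbiter1/brick

    # apply the virt option group shipped with Gluster
    # (sharding, eager locking, quorum settings, etc. for VM image workloads)
    gluster volume set vmstore group virt

    gluster volume start vmstore

    # the virt group raises the self-heal daemon thread count; dialing it
    # back down is one way to try to soften heal impact on running VMs
    # (example value, test before relying on it)
    gluster volume set vmstore cluster.shd-max-threads 2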