[Gluster-users] any one uses all flash GlusterFS setup?
arm2arm at gmail.com
Sat Mar 27 13:18:06 UTC 2021
Thank you, Adnan!
You gave us the right hints.
Are you letting oVirt manage GlusterFS, or is it external storage?
We are in trouble with our 2+1 HDD-based volumes. The boxes themselves are
fast and reliable as a ZFS+NFS store, and with Gluster everything works
well as long as we have normal load. But once one of the bricks started
the healing process after a reboot, a lot of VMs got I/O failures and
timeouts. We never saw this with the single ZFS+NFS box.
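A common mitigation for this kind of heal storm is to throttle the self-heal daemon so client I/O keeps priority while a rebooted brick catches up. A minimal sketch, assuming a hypothetical volume name "vmstore" (option names are from upstream Gluster; verify they exist in your version with `gluster volume get <volname> all` before relying on them):

```shell
# Limit the self-heal daemon to a single heal thread per brick
gluster volume set vmstore cluster.shd-max-threads 1

# Shrink the self-heal window so each heal touches less data at a time
gluster volume set vmstore cluster.self-heal-window-size 1

# Watch the backlog of pending heals while the brick catches up
gluster volume heal vmstore info summary
```

Tuning these down trades longer heal times for steadier VM latency; the defaults favor finishing the heal quickly.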
Now we are planning 3 boxes, each with 2 HHHL NVMe 3.8 TB Samsungs (as the
ZFS special device) plus 8x 1.9 TB drives for storage, set up as a ZFS
box; these will then be glued together as a 2+1 GlusterFS volume.
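For reference, a 2+1 layout like this is created in Gluster's CLI as "replica 3 arbiter 1" (two data bricks plus one arbiter brick). A sketch, with hypothetical host names and brick paths on the ZFS datasets:

```shell
# Two data bricks + one metadata-only arbiter brick = "2+1"
gluster volume create vmstore replica 3 arbiter 1 \
  node1:/zpool/bricks/vmstore \
  node2:/zpool/bricks/vmstore \
  node3:/zpool/bricks/vmstore   # this brick becomes the arbiter
gluster volume start vmstore
```

Note Mahdi's warning below: the arbiter stores only metadata, but if it sits on slow media it can still drag down write latency for the whole volume.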
Mahdi Adnan <mahdi at sysmin.io> wrote on Fri., 26 Mar 2021, 22:08:
> Hello Arman,
> We have several volumes running all-flash bricks hosting VMs for
> RHV/oVirt. As far as I know, there is no profile specifically for SSDs;
> we just use the usual virt group for the volume, which has the essential
> options for a volume used for VMs.
> I have no experience with Gluster + ZFS, so I cannot comment on that.
> My volumes are running as replica 3; we had a huge performance impact
> with 2+1 because our arbiter was running on HDD.
> One of the volumes we have generates around 16k write IOPS, and even
> during the upgrade process, which involves healing 100k files when nodes
> reboot, we see no performance issues at all.
> On Fri, Mar 26, 2021 at 6:22 PM Arman Khalatyan <arm2arm at gmail.com> wrote:
>> hello everyone,
>> can someone please share his experience with all flash GlusterFS setups?
>> I am planning to use it in 2+1 with ovirt for the critical VMs.
>> plan is to have zfs with ssds in raid6 + pcie NVMe for the special dev.
>> what kind of tuning should we put in GlusterFS side? any ssd profiles
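The "virt" group Mahdi mentions is a predefined set of volume options shipped with glusterd for VM-image workloads (sharding, O_DIRECT pass-through, eager locking, and so on). A sketch of applying and inspecting it, assuming a hypothetical volume name "vmstore":

```shell
# On most installs, the options the group would set can be inspected first:
cat /var/lib/glusterd/groups/virt

# Apply the whole group to the volume in one step
gluster volume set vmstore group virt

# Spot-check a couple of the options it enables
gluster volume get vmstore features.shard
gluster volume get vmstore network.remote-dio
```

The exact option list in the group file varies between Gluster releases, so check the file on your version rather than relying on any particular setting being included.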