[Gluster-users] any one uses all flash GlusterFS setup?

Mahdi Adnan mahdi at sysmin.io
Sat Mar 27 18:07:08 UTC 2021


 We tried managing Gluster through oVirt, but we ran into a lot of issues and
found it rather limiting for us. For the past 4 years, we've been working with
Gluster directly.
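For reference, the heal-induced I/O pain described later in this thread can often be reduced by throttling the self-heal daemon and watching heal progress while a rebooted brick catches up. This is a sketch, not settings from the thread: the volume name `vmstore` and the values are placeholders to adapt to your hardware (the virt group raises `cluster.shd-max-threads`, so lowering it trades heal speed for less client impact):

```shell
# Placeholder volume name "vmstore"; run on any node in the trusted pool.
# Fewer parallel heal threads => slower heal, but less contention with VM I/O.
gluster volume set vmstore cluster.shd-max-threads 1

# "full" copies whole files instead of computing checksums; for large, mostly
# rewritten VM images this avoids expensive diff reads during heal.
gluster volume set vmstore cluster.data-self-heal-algorithm full

# Watch how many entries still need healing while the brick recovers.
gluster volume heal vmstore info summary
```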


On Sat, Mar 27, 2021 at 4:18 PM Arman Khalatyan <arm2arm at gmail.com> wrote:

> Thank you Adnan!!
> You gave us the right hints.
> Are you letting oVirt manage GlusterFS, or is it external storage?
> We are having trouble with our 2+1 HDD-based volumes; the boxes themselves are
> fast and reliable as a ZFS+NFS store.
> With Gluster, everything works well under normal load, but once one of the
> bricks started the healing process after a reboot, a lot of VMs got I/O
> failures and timeouts.
> We never saw this with the single ZFS+NFS box.
>
> Now we are planning 3 boxes, each with 2 HHHL NVMe 3.8 TB
> Samsungs (special device) + 8x 1.9 TB of storage as a ZFS box, so they will be
> glued together as a 2+1 GlusterFS volume.
>
>
>
>
>
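The "2+1" layout planned above corresponds to Gluster's replica 3 arbiter volume: two full data bricks plus one arbiter brick that holds only metadata. A minimal creation sketch, with placeholder hostnames and brick paths (not from the thread):

```shell
# Two data bricks + one metadata-only arbiter brick per replica set.
# As noted later in the thread, an HDD arbiter can drag down the whole
# volume, so put the arbiter brick on fast media.
gluster volume create vmstore replica 3 arbiter 1 \
    node1:/bricks/nvme/vmstore \
    node2:/bricks/nvme/vmstore \
    node3:/bricks/ssd/vmstore-arbiter
gluster volume start vmstore
```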
> Mahdi Adnan <mahdi at sysmin.io> schrieb am Fr., 26. März 2021, 22:08:
>
>> Hello Arman,
>>
>>  We have several volumes running all-flash bricks hosting VMs for
>> RHV/oVirt. As far as I know, there's no profile specifically for SSDs; we
>> just use the usual virt group for the volume, which has the essential
>> options for a volume used for VMs.
>> I have no experience with Gluster + ZFS, so I cannot comment on that.
>> My volumes are running in replica 3; we had a huge performance impact with
>> 2+1 because our arbiter was running on an HDD.
>> One of our volumes generates around 16k write IOPS, and even during the
>> upgrade process, which involves healing 100k files when nodes reboot, we
>> see no performance issues at all.
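The "virt group" mentioned above is a predefined option set that ships with Gluster (a group file under /var/lib/glusterd/groups/virt) and is applied with a single command. The volume name below is a placeholder:

```shell
# Apply the predefined virt option group, which sets the options commonly
# recommended for volumes storing VM images (sharding, eager locking, etc.).
gluster volume set vmstore group virt

# Inspect which options the group changed on the volume.
gluster volume info vmstore
```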
>>
>> On Fri, Mar 26, 2021 at 6:22 PM Arman Khalatyan <arm2arm at gmail.com>
>> wrote:
>>
>>> Hello everyone,
>>> can someone please share their experience with all-flash GlusterFS setups?
>>> I am planning to use it in 2+1 with oVirt for the critical VMs.
>>> The plan is to have ZFS with SSDs in RAID-6 + PCIe NVMe for the special
>>> device.
>>>
>>> What kind of tuning should we apply on the GlusterFS side? Do any SSD
>>> profiles exist?
>>>
>>> Thanks,
>>> Arman.
>>>
>>> ________
>>>
>>>
>>>
>>> Community Meeting Calendar:
>>>
>>> Schedule -
>>> Every 2nd and 4th Tuesday at 14:30 IST / 09:00 UTC
>>> Bridge: https://meet.google.com/cpu-eiue-hvk
>>> Gluster-users mailing list
>>> Gluster-users at gluster.org
>>> https://lists.gluster.org/mailman/listinfo/gluster-users
>>>
>>
>>
>> --
>> Respectfully
>> Mahdi
>>
>

-- 
Respectfully
Mahdi