[Gluster-users] [ovirt-users] Replicated Glusterfs on top of ZFS

Arman Khalatyan arm2arm at gmail.com
Tue Mar 7 10:06:44 UTC 2017


Hi Sahina, yes, sharding is enabled. The Gluster setup was actually
generated through the oVirt GUI.
I put all the configs here:
http://arm2armcos.blogspot.de/2017/03/glusterfs-zfs-ovirt-rdma.html
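
For reference, this is how I checked that sharding is on (<VOLNAME> is a
placeholder for the actual volume name; the option key should be the same
on recent Gluster versions):

    # gluster volume get <VOLNAME> features.shard

which should report "on".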


On Tue, Mar 7, 2017 at 8:08 AM, Sahina Bose <sabose at redhat.com> wrote:

>
>
> On Mon, Mar 6, 2017 at 3:21 PM, Arman Khalatyan <arm2arm at gmail.com> wrote:
>
>>
>>
>> On Fri, Mar 3, 2017 at 7:00 PM, Darrell Budic <budic at onholyground.com>
>> wrote:
>>
>>> Why are you using an arbitrator if all your HW configs are identical?
>>> I’d use a true replica 3 in this case.
>>>
>>>
>> This was just the GUI's suggestion: when I was creating the cluster it
>> asked for 3 hosts. I did not even know that an arbiter does not keep the
>> data.
>> I am not sure whether I can change the GlusterFS volume type to a true
>> replica 3 on the running system; probably I need to destroy the whole
>> cluster.
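
(Side note: from what I have read, it may be possible to swap the arbiter
brick for a full data brick without destroying the volume, roughly like
this; completely untested, host and brick paths are placeholders:

    # gluster volume remove-brick <VOLNAME> replica 2 \
        host3:/bricks/arbiter force
    # gluster volume add-brick <VOLNAME> replica 3 \
        host3:/bricks/data
    # gluster volume heal <VOLNAME> full

I would not try this on a production volume without testing it first.)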
>>
>>
>>
>>> Also in my experience with gluster and vm hosting, the ZIL/slog degrades
>>> write performance unless it’s a truly dedicated disk. But I have 8 spinners
>>> backing my ZFS volumes, so trying to share a SATA disk wasn’t a good ZIL.
>>> If yours is a dedicated SAS disk, keep it; if it’s SATA, try testing without it.
>>>
>>>
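
Thanks, good to know. If I understand correctly, I can detach the SLOG
from the live pool for a quick test and re-attach it afterwards (a sketch;
pool and device names are placeholders):

    # zpool remove <POOL> <SLOG-DEVICE>      # detach the log device
    # zpool add <POOL> log <SLOG-DEVICE>     # put it back after testing
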
>> We have also had several huge systems running ZFS quite successfully over
>> the years. The idea was to use ZFS + GlusterFS for an HA solution.
>>
>>
>>> You don’t have compression enabled on your zfs volume, and I’d recommend
>>> enabling relatime on it. Depending on the amount of RAM in these boxes, you
>>> probably want to limit your zfs arc size to 8G or so (1/4 total ram or
>>> less). Gluster just works volumes hard during a rebuild; what’s the problem
>>> you’re seeing? If it’s affecting your VMs, using sharding and tuning client
>>> & server threads can help avoid interruptions to your VMs while repairs are
>>> running. If you really need to limit it, you can use cgroups to keep it
>>> from hogging all the CPU, but it takes longer to heal, of course. There are
>>> a couple older posts and blogs about it, if you go back a while.
>>>
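
For the ZFS side, that advice would translate to roughly this on my boxes
(untested; the pool name is a placeholder, and the 8G ARC limit is given
in bytes via the ZFS-on-Linux module parameter):

    # zfs set compression=lz4 <POOL>
    # zfs set relatime=on <POOL>
    # echo 'options zfs zfs_arc_max=8589934592' >> /etc/modprobe.d/zfs.conf
      (takes effect after the zfs module is reloaded or the host reboots)
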
>>
>> Yes, I saw that GlusterFS is CPU/RAM hungry! 99% of all 16 cores were used
>> just for healing 500 GB of VM disks. It took almost forever compared with
>> NFS storage (a single disk + ZFS SSD cache; for sure one pays a penalty
>> for the HA :) )
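
In case it helps others: the tuning knobs I found for throttling self-heal
are along these lines (option names from the 3.8-era documentation,
untested on my setup; <VOLNAME> is a placeholder):

    # gluster volume set <VOLNAME> cluster.shd-max-threads 1
    # gluster volume set <VOLNAME> client.event-threads 4
    # gluster volume set <VOLNAME> server.event-threads 4
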
>>
>
> Is your gluster volume configured to use the sharding feature? Could you
> provide the output of gluster vol info?
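
The full output is in the blog post above, but for reference the command is
(<VOLNAME> is a placeholder):

    # gluster volume info <VOLNAME>

with the shard settings listed under "Options Reconfigured".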
>
>

