[Gluster-users] General questions

Strahil hunter86_bg at yahoo.com
Thu Jun 20 10:26:18 UTC 2019


Hi,

Are you planning to use oVirt, plain KVM, or OpenStack?

I would recommend Gluster v6.1, as it is the latest stable version and will have longer support than the older versions.

FUSE vs libgfapi - use the latter, as it has better performance and less overhead on the host. oVirt supports both libgfapi and FUSE.
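For illustration, assuming a hypothetical volume "vmstore" served from a host "server1" (and qemu built with gluster support), the two access paths look like this:

    # FUSE: mount the volume, then libvirt sees a plain file path
    mkdir -p /mnt/vmstore
    mount -t glusterfs server1:/vmstore /mnt/vmstore
    qemu-img create -f qcow2 /mnt/vmstore/vm1.qcow2 20G

    # libgfapi: qemu talks to the bricks directly, no mount needed
    qemu-img create -f qcow2 gluster://server1/vmstore/vm1.qcow2 20G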

Also, use replica 3, because you will have better read performance compared to replica 2 + arbiter 1.
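A minimal sketch of creating such a volume (node names and brick paths are placeholders; with your 4 nodes the total brick count just has to stay a multiple of 3):

    # replica 3: every file is stored in full on three bricks
    gluster volume create vmstore replica 3 \
        node1:/bricks/brick1/vmstore \
        node2:/bricks/brick1/vmstore \
        node3:/bricks/brick1/vmstore
    gluster volume start vmstore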

Sharding is a tradeoff between CPU (when there is no sharding, gluster's shd must calculate offsets across the whole VM disk) and bandwidth (a whole shard is replicated even if only 512 bytes actually need to be synced).
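Enabling it is just a volume option; a sketch, again on the hypothetical "vmstore" volume (64MB is the default shard size):

    # split large VM images into fixed-size shards, so heals only
    # touch the changed pieces instead of the whole image
    gluster volume set vmstore features.shard on
    gluster volume set vmstore features.shard-block-size 64MB

Note that only files created after the option is enabled are sharded, so set it before you start placing VM images on the volume.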

If you will do live migration, you do not want caching, in order to avoid corruption.
That is why oVirt uses direct I/O.
Still, you can check the gluster settings mentioned in the Red Hat documentation for Virt/OpenStack.
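Gluster also ships a "virt" group profile that applies the recommended options for VM workloads in one step; a sketch, still on the hypothetical "vmstore" volume:

    # apply the recommended option set for VM storage
    # (the list lives in /var/lib/glusterd/groups/virt)
    gluster volume set vmstore group virt

    # in the libvirt disk definition, disable host-side caching:
    #   <driver name='qemu' type='qcow2' cache='none' io='native'/>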

Best Regards,
Strahil Nikolov

On Jun 20, 2019 13:12, Cristian Del Carlo <cristian.delcarlo at targetsolutions.it> wrote:
>
> Hi,
>
> I'm testing GlusterFS before using it in production; it will be used to store VMs for nodes running libvirtd.
>
> In production I will have 4 nodes connected by a dedicated 20 Gbit/s network.
>
> Which version should I use in production on CentOS 7.x? Should I use Gluster version 6?
>
> Is FUSE the best method to make the volume available to libvirtd?
>
> I see that striped volumes are deprecated. Is it reasonable to use a volume with 3 replicas on 4 nodes and sharding enabled?
> Is there any benefit to using a sharded volume in this context? I think it could have a positive impact on read performance and rebalancing. Is that true?
>
> In the VM configuration I use a virtio disk. Which disk cache setting gives the best performance: none, default, or writeback?
>
> Thanks in advance for your patience and answers.
>
> Thanks,
>
>
> Cristian Del Carlo