[Gluster-users] General questions

Strahil Nikolov hunter86_bg at yahoo.com
Thu Jun 20 20:13:37 UTC 2019


 Sharding is complex. It helps heal faster, as only the shards that changed get replicated - but imagine a 1GB shard where only 512k was updated: in that case you still copy the whole shard to the other replicas. RHV & oVirt use a default shard size of 4M, which is exactly the default PE size in LVM.
On the other hand, it speeds things up, as gluster can balance the shards properly across the replicas and thus evenly distribute the load on the cluster. It is not a coincidence that RHV and oVirt use sharding by default.
Just a warning: NEVER, EVER, DISABLE SHARDING!!! ONCE ENABLED - STAYS ENABLED! Don't ask how I learnt that :)
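
For illustration, enabling sharding and picking the shard size looks roughly like this (the volume name "vmstore" and the 64MB value are just placeholders - check which shard-block-size your platform expects before changing it):

    # enable sharding on the volume (remember: never turn it off again)
    gluster volume set vmstore features.shard on
    # set the shard size *before* any data is written to the volume
    gluster volume set vmstore features.shard-block-size 64MB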
Best Regards,
Strahil Nikolov


    On Thursday, 20 June 2019, 18:32:00 GMT+3, Cristian Del Carlo <cristian.delcarlo at targetsolutions.it> wrote:
 
 Hi,
thanks for your help.
I am planning to use libvirtd with plain KVM.
OK, I will use libgfapi.

I'm confused about the use of sharding: is it useful in this configuration? Doesn't sharding help limit the bandwidth in the event of a rebalance?
So in the VM settings I need to use directsync to avoid corruption?
Thanks again,
 
On Thu, 20 Jun 2019 at 12:25, Strahil <hunter86_bg at yahoo.com> wrote:


Hi,

Are you planning to use oVirt, plain KVM, or OpenStack?

I would recommend using gluster v6.1, as it is the latest stable version and will be supported longer than the older versions.
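
On CentOS 7 the packages usually come from the CentOS Storage SIG - a rough sketch (the exact release package name is an assumption, verify it against your repos):

    # enable the Storage SIG repository for Gluster 6 (package name assumed)
    yum install -y centos-release-gluster6
    # install and start the gluster server
    yum install -y glusterfs-server
    systemctl enable --now glusterd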

Fuse vs libgfapi - use the latter, as it has better performance and less overhead on the host. oVirt supports both libgfapi and fuse.
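
For plain KVM, a libgfapi-backed disk in the libvirt domain XML looks roughly like this (the hostname "gluster1", the volume "vmstore" and the image path are placeholders):

    <disk type='network' device='disk'>
      <driver name='qemu' type='raw' cache='none'/>
      <source protocol='gluster' name='vmstore/images/vm1.img'>
        <host name='gluster1' port='24007'/>
      </source>
      <target dev='vda' bus='virtio'/>
    </disk>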

Also, use replica 3 because you will have better read performance compared to replica 2 arbiter 1.
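
Creating such a volume across three of your nodes would look something like this (hostnames, brick paths and the volume name are placeholders):

    gluster volume create vmstore replica 3 \
        node1:/bricks/vmstore/brick \
        node2:/bricks/vmstore/brick \
        node3:/bricks/vmstore/brick
    gluster volume start vmstore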

Sharding is a tradeoff between CPU (when there is no sharding, gluster shd must calculate the offsets within the VM disk) and bandwidth (the whole shard is replicated even if only 512k needs to be synced).

If you will do live migration, you do not want to cache, in order to avoid corruption.
Thus oVirt uses direct I/O.
Still, you can check the gluster settings mentioned in the Red Hat documentation for Virt/OpenStack.
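
As a rough sketch of what those settings boil down to (the volume name is a placeholder; the "virt" option group ships with the gluster server packages and applies the recommended VM-store options in one go):

    # apply the recommended virtualization option group to the volume
    gluster volume set vmstore group virt
    # and in the VM definition, keep the disk cache off
    virsh edit vm1    # set <driver name='qemu' type='raw' cache='none' io='native'/>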

Best Regards,
Strahil Nikolov
On Jun 20, 2019 13:12, Cristian Del Carlo <cristian.delcarlo at targetsolutions.it> wrote:

Hi,
I'm testing glusterfs before using it in production; it will be used to store VMs for nodes running libvirtd.

In production I will have 4 nodes connected with a dedicated 20 Gbit/s network.
Which version should I use in production on CentOS 7.x? Should I use Gluster version 6?
Is FUSE the best method to make the volume available to libvirtd?
I see that striped is deprecated. Is it reasonable to use a volume with 3 replicas on 4 nodes and sharding enabled?
Is there a benefit to using a sharded volume in this context? I think it could have a positive impact on read performance or rebalancing. Is that true?

In the VM configuration I use the virtio disk. How should I set the disk cache to get the best performance: none, default or writeback?
Thanks in advance for your patience and answers.
Thanks,


Cristian Del Carlo




-- 


Cristian Del Carlo
Target Solutions s.r.l.

T +39 0583 1905621  F +39 0583 1905675  @ cristian.delcarlo at targetsolutions.it

http://www.targetsolutions.it
P.IVA e C.Fiscale: 01815270465  Reg. Imp. di Lucca
Capitale Sociale:  €11.000,00 iv - REA n° 173227

