[Gluster-users] Brick Reboot => VMs slowdown, client crashes

Darrell Budic budic at onholyground.com
Mon Aug 19 20:15:46 UTC 2019


/var/lib/glusterd/groups/virt is a good place to start for ideas, notably the thread settings and choose-local=off to improve read performance. If you don’t have at least 10 cores on your servers, you may want to lower the recommended shd-max-threads=8 to no more than half your CPU core count so healing doesn’t swamp out regular work.
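
As a rough sketch (the volume name myvol is a placeholder, and the thread count assumes a 4-core server), that looks like:

gluster volume set myvol group virt                    # apply the whole virt group in one shot
gluster volume set myvol cluster.choose-local off      # don’t always prefer the local replica for reads
gluster volume set myvol cluster.shd-max-threads 2     # ~half the cores on a 4-core box
gluster volume get myvol cluster.shd-max-threads       # confirm what actually got set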

Beyond that, it depends on what your backing store and networking setup are, so you’re going to want to test changes and find what works best for your setup.

In addition to the virt group settings, I use these on most of my volumes, SSD or HDD backed, with the default 64M shard size (CLI form sketched below):

performance.io-thread-count: 32		# seemed good for my system, particularly a ZFS backed volume with lots of spindles
client.event-threads: 8
cluster.data-self-heal-algorithm: full	# with 10G networking, a full heal uses more network but less CPU; probably not worth it on 1G
performance.stat-prefetch: on
cluster.read-hash-mode: 3			# distribute reads to the least loaded server (by read queue depth)

and these two only on my HDD backed volume:

performance.cache-size: 1G
performance.write-behind-window-size: 64MB

but I suspect these two need another round or six of tuning to tell if they are making a difference.
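
In plain CLI form (just a sketch; myvol is a placeholder volume name, and 1GB here is the same 1G value as above):

gluster volume set myvol performance.io-thread-count 32
gluster volume set myvol client.event-threads 8
gluster volume set myvol cluster.data-self-heal-algorithm full
gluster volume set myvol performance.stat-prefetch on
gluster volume set myvol cluster.read-hash-mode 3
# HDD backed volume only:
gluster volume set myvol performance.cache-size 1GB
gluster volume set myvol performance.write-behind-window-size 64MB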

I use the throughput-performance tuned profile on my servers, so you should be in good shape there.
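
If you want to double-check that, it’s just (tuned-adm ships with the tuned package):

tuned-adm active                             # shows the profile currently applied
tuned-adm profile throughput-performance     # switches to it if needed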

> On Aug 19, 2019, at 12:22 PM, Guy Boisvert <guy.boisvert at ingtegration.com> wrote:
> 
> On 2019-08-19 12:08 p.m., Darrell Budic wrote:
>> You also need to make sure your volume is setup properly for best performance. Did you apply the gluster virt group to your volumes, or at least features.shard = on on your VM volume?
> 
> That's what we did here:
> 
> 
> gluster volume set W2K16_Rhenium cluster.quorum-type auto
> gluster volume set W2K16_Rhenium network.ping-timeout 10
> gluster volume set W2K16_Rhenium auth.allow \*
> gluster volume set W2K16_Rhenium group virt
> gluster volume set W2K16_Rhenium storage.owner-uid 36
> gluster volume set W2K16_Rhenium storage.owner-gid 36
> gluster volume set W2K16_Rhenium features.shard on
> gluster volume set W2K16_Rhenium features.shard-block-size 256MB
> gluster volume set W2K16_Rhenium cluster.data-self-heal-algorithm full
> gluster volume set W2K16_Rhenium performance.low-prio-threads 32
> 
> tuned-adm profile random-io        (a profile I added in CentOS 7)
> 
> 
> cat /usr/lib/tuned/random-io/tuned.conf
> ===========================================
> [main]
> summary=Optimize for Gluster virtual machine storage
> include=throughput-performance
> 
> [sysctl]
> 
> vm.dirty_ratio = 5
> vm.dirty_background_ratio = 2
> 
> 
> Any more optimization to add to this?
> 
> 
> Guy
> 
> -- 
> Guy Boisvert, ing.
> IngTegration inc.
> http://www.ingtegration.com
> https://www.linkedin.com/in/guy-boisvert-8990487
> 
