MTU -> the maximum that works without fragmentation (you can check with 'ping -M do -s <size-28>').<div id="yMail_cursorElementTracker_1617078154363"><br></div><div id="yMail_cursorElementTracker_1617078154531"><br> LACP is also good. In my lab I use layer 3+4 hashing (IP+port) to spread the load over multiple links.</div><div id="yMail_cursorElementTracker_1617078203981"><br></div><div id="yMail_cursorElementTracker_1617078204110">If you use HW RAID, don't forget to align LVM & XFS as per <a id="linkextractor__1617078229428" data-yahoo-extracted-link="true" href="https://access.redhat.com/documentation/en-us/red_hat_gluster_storage/3.1/html/administration_guide/brick_configuration">https://access.redhat.com/documentation/en-us/red_hat_gluster_storage/3.1/html/administration_guide/brick_configuration</a></div><div id="yMail_cursorElementTracker_1617078229488"><br></div><div id="yMail_cursorElementTracker_1617078239275">Also consider disabling transparent huge pages and setting C-states on the hosts (check the relevant vendor articles).</div><div id="yMail_cursorElementTracker_1617078281562"><br></div><div id="yMail_cursorElementTracker_1617078281707">P.S.: With the new RH developer program, you can access all RH solutions.</div><div id="yMail_cursorElementTracker_1617078239491"><br></div><div id="yMail_cursorElementTracker_1617078229676">Best Regards,</div><div id="yMail_cursorElementTracker_1617078234285">Strahil Nikolov</div><div id="yMail_cursorElementTracker_1617078319446"><br></div><div id="yMail_cursorElementTracker_1617078319616"><br> <blockquote style="margin: 0 0 20px 0;"> <div style="font-family:Roboto, sans-serif; color:#6D00F6;"> <div>On Mon, Mar 29, 2021 at 22:38, Arman Khalatyan</div><div><arm2arm@gmail.com> wrote:</div> </div> <div style="padding: 10px 0 0 20px; margin: 10px 0 0 0; border-left: 1px solid #6D00F6;"> <div id="yiv9611256875"><div><div><div>Great hints, thanks a lot; going to test all this tomorrow.</div><div>I have an appointment with the IT department for 
enabling LACP bonds on the 10 GbE dual-port interfaces, so next we will test the latency and SSD+XFS+GlusterFS.</div><div>We did not touch the default MTU. I did some tests with CentOS 7.3 on IB with 65k MTU in connected-mode IPoIB; it was OK but not stable under large workloads. Maybe the situation has changed by now.</div><div>Are there any suggestions on LACP bonds and MTU sizes?</div><div><br clear="none"></div><div><br clear="none"><br clear="none"><div class="yiv9611256875gmail_quote"><div class="yiv9611256875gmail_attr" dir="ltr">Strahil Nikolov <<a rel="nofollow noopener noreferrer" shape="rect" ymailto="mailto:hunter86_bg@yahoo.com" target="_blank" href="mailto:hunter86_bg@yahoo.com">hunter86_bg@yahoo.com</a>> wrote on Mon, 29 March 2021, 19:02:<br clear="none"></div><div class="yiv9611256875yqt9284392269" id="yiv9611256875yqt27126"><blockquote class="yiv9611256875gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex;">I guess there is no need to mention that latency is the real killer of Gluster. 
What is the DC-to-DC latency and MTU (Ethernet)?<div id="yiv9611256875m_-3785411564711768569yMail_cursorElementTracker_1617036805306">Also, if you use SSDs, consider using the noop/none I/O schedulers.</div><div id="yiv9611256875m_-3785411564711768569yMail_cursorElementTracker_1617036874320"><br clear="none"></div><div id="yiv9611256875m_-3785411564711768569yMail_cursorElementTracker_1617036874513">Also, you can obtain the tuned profiles used in Red Hat Gluster Storage via this source RPM:</div><div id="yiv9611256875m_-3785411564711768569yMail_cursorElementTracker_1617037064167"><br clear="none"></div><div id="yiv9611256875m_-3785411564711768569yMail_cursorElementTracker_1617037064645"><a rel="nofollow noopener noreferrer" shape="rect" target="_blank" href="http://ftp.redhat.com/redhat/linux/enterprise/7Server/en/RHS/SRPMS/redhat-storage-server-3.5.0.0-7.el7rhgs.src.rpm">http://ftp.redhat.com/redhat/linux/enterprise/7Server/en/RHS/SRPMS/redhat-storage-server-3.5.0.0-7.el7rhgs.src.rpm</a></div><div id="yiv9611256875m_-3785411564711768569yMail_cursorElementTracker_1617037067890"><br clear="none"></div><div id="yiv9611256875m_-3785411564711768569yMail_cursorElementTracker_1617037068340">You can combine the settings from the hypervisor tuned profile with the Gluster random-I/O tuned profile.</div><div id="yiv9611256875m_-3785411564711768569yMail_cursorElementTracker_1617037182745"><br clear="none"></div><div id="yiv9611256875m_-3785411564711768569yMail_cursorElementTracker_1617037182916">Also worth mentioning: RHGS uses a 512M shard size, while the default in upstream Gluster is just 64M. 
Some oVirt users have reported issues and suspect Gluster's inability to create enough shards.</div><div id="yiv9611256875m_-3785411564711768569yMail_cursorElementTracker_1617037311178"><br clear="none"></div><div id="yiv9611256875m_-3785411564711768569yMail_cursorElementTracker_1617037311375">WARNING: ONCE SHARDING IS ENABLED, NEVER EVER DISABLE IT.</div><div id="yiv9611256875m_-3785411564711768569yMail_cursorElementTracker_1617037149614"><br clear="none"></div><div id="yiv9611256875m_-3785411564711768569yMail_cursorElementTracker_1617037149834">Best Regards,</div><div id="yiv9611256875m_-3785411564711768569yMail_cursorElementTracker_1617037154491">Strahil Nikolov<br clear="none"> <br clear="none"> <blockquote style="margin:0 0 20px 0;"> <div style="font-family:Roboto, sans-serif;color:#6d00f6;"> <div>On Mon, Mar 29, 2021 at 11:03, Arman Khalatyan</div><div><<a rel="nofollow noopener noreferrer" shape="rect" ymailto="mailto:arm2arm@gmail.com" target="_blank" href="mailto:arm2arm@gmail.com">arm2arm@gmail.com</a>> wrote:</div> </div> <div style="padding:10px 0 0 20px;margin:10px 0 0 0;border-left:1px solid #6d00f6;"> <div id="yiv9611256875m_-3785411564711768569yiv7383663816"><div><div dir="ltr"><div><div>Thanks Strahil,</div><div>Good point on choose-local, we will definitely try it.</div><div>The connection is 10 Gbit, plus FDR InfiniBand (IPoIB will be used).</div><div>We are still experimenting with 2 buildings and 8 oVirt nodes, changing the number of bricks on GlusterFS. <br clear="none"><br clear="none"><div><div dir="ltr">Strahil Nikolov <<a rel="nofollow noopener noreferrer" shape="rect" ymailto="mailto:hunter86_bg@yahoo.com" target="_blank" href="mailto:hunter86_bg@yahoo.com">hunter86_bg@yahoo.com</a>> wrote on Sun, 28 
March 2021, 00:35:<br clear="none"></div><div id="yiv9611256875m_-3785411564711768569yiv7383663816yqt50377"><blockquote style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex;">It's worth mentioning that if your network bandwidth is smaller than the RAID bandwidth, you can consider enabling cluster.choose-local (which oVirt's optimizations disable) for faster reads.<div id="yiv9611256875m_-3785411564711768569yiv7383663816m_-5271630689013369476m_-159553019428442139yMail_cursorElementTracker_1616887990856"><br clear="none"></div><div id="yiv9611256875m_-3785411564711768569yiv7383663816m_-5271630689013369476m_-159553019428442139yMail_cursorElementTracker_1616888030346">Some people would also consider going with JBOD (replica 3) mode. I guess you can test both prior to moving to the prod phase. </div><div id="yiv9611256875m_-3785411564711768569yiv7383663816m_-5271630689013369476m_-159553019428442139yMail_cursorElementTracker_1616888080152"><br clear="none"></div><div id="yiv9611256875m_-3785411564711768569yiv7383663816m_-5271630689013369476m_-159553019428442139yMail_cursorElementTracker_1616888080396"><br clear="none"></div><div id="yiv9611256875m_-3785411564711768569yiv7383663816m_-5271630689013369476m_-159553019428442139yMail_cursorElementTracker_1616887999939">P.S.: Don't forget to align the LVM/FS layer to the hardware RAID.</div><div id="yiv9611256875m_-3785411564711768569yiv7383663816m_-5271630689013369476m_-159553019428442139yMail_cursorElementTracker_1616888027871"><br clear="none"></div><div id="yiv9611256875m_-3785411564711768569yiv7383663816m_-5271630689013369476m_-159553019428442139yMail_cursorElementTracker_1616887991058">Best Regards,</div><div id="yiv9611256875m_-3785411564711768569yiv7383663816m_-5271630689013369476m_-159553019428442139yMail_cursorElementTracker_1616887995872">Strahil Nikolov</div></blockquote></div></div></div></div>
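The cluster.choose-local toggle discussed above can be sketched as follows; this is a minimal sketch, assuming the Gluster CLI is available on the host and using a hypothetical volume name "data":

```shell
# Sketch only: "data" is a hypothetical volume name.
# oVirt's volume optimizations turn cluster.choose-local off; this
# re-enables it so reads are served from the local brick when possible.
gluster volume set data cluster.choose-local on

# Confirm the current value of the option
gluster volume get data cluster.choose-local
```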
</div>
</div></div> </div> </blockquote></div></blockquote></div></div></div></div>
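The shard-size difference above can be applied roughly like this; a sketch only, assuming a hypothetical volume named "data". Set it on a fresh volume before any VM images are written, and per the warning above, never disable sharding afterwards:

```shell
# Sketch only: "data" is a hypothetical volume name.
# Match RHGS's 512M shard size instead of the 64M upstream default.
gluster volume set data features.shard on
gluster volume set data features.shard-block-size 512MB

# Confirm the configured shard size
gluster volume get data features.shard-block-size
```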
</div></div> </div> </blockquote></div>
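The ping-based MTU probe from the top of the thread can be sketched as follows; the 28 bytes subtracted are the 20-byte IPv4 header plus the 8-byte ICMP header, and "peer" is a placeholder for the remote host:

```shell
# Compute the ICMP payload size for a no-fragment MTU probe.
MTU=9000                # assumed jumbo-frame MTU; use your link's MTU
PAYLOAD=$((MTU - 28))   # 20-byte IPv4 header + 8-byte ICMP header
echo "payload=$PAYLOAD"

# Probe without fragmentation ("peer" is a placeholder host):
# ping -M do -c 3 -s "$PAYLOAD" peer
```

If the probe fails with "Message too long", something along the path uses a smaller MTU.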
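The layer 3+4 hashing mentioned for the LACP bond can be inspected and configured roughly like this; "bond0" is a placeholder interface/connection name, and the nmcli line assumes NetworkManager manages the bond:

```shell
# Show the current transmit hash policy of an existing bond
# ("bond0" is a placeholder interface name).
cat /sys/class/net/bond0/bonding/xmit_hash_policy

# With NetworkManager, set 802.3ad (LACP) with layer3+4 (IP+port)
# hashing so different flows are spread across the member links:
# nmcli connection modify bond0 bond.options "mode=802.3ad,xmit_hash_policy=layer3+4"
```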