<div id="yiv4850469029">According to RH,<div>the most optimal would be to have:</div><div>- Disk size: 3-4TB (faster resync after failure)</div><div>- Disk count: 10-12</div><div>- HW raid : As you can also see on the picture that the optimal one for writes is RAID10 <a rel="nofollow noopener noreferrer" shape="rect" id="yiv4850469029linkextractor__1668929192084" target="_blank" href="https://community.hpe.com/t5/servers-systems-the-right/what-are-raid-levels-and-which-are-best-for-you/ba-p/7041151">https://community.hpe.com/t5/servers-systems-the-right/what-are-raid-levels-and-which-are-best-for-you/ba-p/7041151</a></div><div><br clear="none"></div><div>The full stripe size should be between 1MB and 2MB (prefer staying closer to the 1MB).</div><div><br></div><div>I'm not sure of the HW Raid controller capabilities, but I would also switch the I/O scheduler to 'none' (First-In First-out while merging the requests).Enaure that you have a battery-backed cache and the cache ratio of the controller is leaning towards the writes (something like 25% read, 75% write).</div><div><br></div><div>Jumbo Frames are recomended but not mandatory.Still, they will reduce the number of packets processed by your infrastructure which is always benefitial.</div><div><br></div><div>Tuned Profile:</div><div>You can find the tuned profiles that were usually shipped with Red Hat'sGluster Storage at <a id="linkextractor__1668929683597" data-yahoo-extracted-link="true" href="https://ftp.redhat.com/redhat/linux/enterprise/7Server/en/RHS/SRPMS/redhat-storage-server-3.5.0.0-8.el7rhgs.src.rpm">https://ftp.redhat.com/redhat/linux/enterprise/7Server/en/RHS/SRPMS/redhat-storage-server-3.5.0.0-8.el7rhgs.src.rpm</a></div><div><br></div><div>I will type the contents of the random-io profile here, so please double check it for typos.</div><div><br></div><div># /etc/tuned.d/rhgs-random-io/tuned.conf:</div><div>[main]</div><div>include=throughput-performace</div><div><br></div><div>[sysctl]</div><div>vm.dirty_ratio = 5</div><div>vm.dirty_background_ratio = 2</div><div><br></div><div>Don't forget to install tuned before that.</div><div><br></div><div>For small files , Follow the guidelines from <a id="linkextractor__1668930398665" data-yahoo-extracted-link="true" href="https://access.redhat.com/documentation/en-us/red_hat_gluster_storage/3.5/html/administration_guide/small_file_performance_enhancements" class="lEnhancr_1668930399962">https://access.redhat.com/documentation/en-us/red_hat_gluster_storage/3.5/html/administration_guide/small_file_performance_enhancements</a></div><div><br></div><div>Note: Do not use Gluster v9 and update your version to the latest minor one (for example if you use v10 -> update to 10.3). In Gluster v10 a major improvement was done for small files and v9 is out of support now.</div><div><br></div><div>For the XFS: Mount the bricks with 'noatime'. 
Of course, do benchmarking with the application itself, both before and after every change you make.

Best Regards,
Strahil Nikolov