<div dir="ltr"><div>Hi Strahil</div><div>Thank you for your email.</div><div><br></div><div>HW raid with HP Smart Array p840</div><div>8 disks SAS 8TB <br></div><div>RAID 6 -> 43.66TB</div><div><br></div><div>strip size: 128KB</div><div>full stripe size: 768KB</div><div><br></div><div><br></div><div>In this case, no thin LVM <br></div><div><br></div><div>XFS with the standard Debian options, isize=512 <br></div><div><br></div><div>no glusterfs tuning <br></div><div><br></div><div>no jumbo frames <br></div><div><br></div><div>I share the gluster volume via NFS (Ganesha); the application is on another server in the LAN.</div><div><br></div><div>Best regards</div></div><br><div class="gmail_quote"><div dir="ltr" class="gmail_attr">On Sun, Nov 13, 2022 at 8:59 AM Strahil Nikolov <<a href="mailto:hunter86_bg@yahoo.com">hunter86_bg@yahoo.com</a>> wrote:<br></div><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex">Hi,<div><br></div><div>First you need to identify what kind of workload you will have.</div><div>Some optimizations for one workload can prevent better performance for another type.</div><div><br></div><div>If you plan to use the volume for small files, this document is a good start: <a href="https://access.redhat.com/documentation/en-us/red_hat_gluster_storage/3.5/html/administration_guide/small_file_performance_enhancements" target="_blank">https://access.redhat.com/documentation/en-us/red_hat_gluster_storage/3.5/html/administration_guide/small_file_performance_enhancements</a><br><div>Replica 2 volumes are prone to split brain, and it is always recommended to use replica 3 or an arbiter.</div><div><br></div><div>As a best practice, always test the volume with the application that will use it, as synthetic benchmarks are just that: synthetic.</div><div><br></div><div>I always start a performance review from the bottom (infrastructure) and end at the application level.</div><div><br></div><div>You can start 
with <a href="https://access.redhat.com/documentation/en-us/red_hat_gluster_storage/3.5/html/administration_guide/chap-configuring_red_hat_storage_for_enhancing_performance" target="_blank">https://access.redhat.com/documentation/en-us/red_hat_gluster_storage/3.5/html/administration_guide/chap-configuring_red_hat_storage_for_enhancing_performance</a>, as the storage is one of the most important parts.</div><div><br></div><div>What kind of HW raid, how many disks, and what stripe size and stripe width did you use in LVM?</div><div>Do you use thin LVM?</div><div>How did you create your XFS (isize is critical)?</div><div><br></div><div>Have you used gluster's tuned profile?</div><div><br></div><div>Jumbo frames?</div><div><br></div><div>Then you will need to optimize the volume for small files (see the link above).</div><div><br></div><div>Does your app allow you to use libgfapi? Based on my observations on the oVirt list, libgfapi used to provide some performance benefits compared to FUSE.</div><div><br></div><div>Also, if you work with very small files, it would make sense to combine them in some container (like VM disks).</div><div><br></div><div>Keep in mind that GlusterFS performance scales with the size of the cluster and the number of clients. 
For ultra-high performance for a few clients, there are other options.</div><div><br></div><div>Best Regards,</div><div>Strahil Nikolov </div><div> <br> <blockquote style="margin:0px 0px 20px"> <div style="font-family:Roboto,sans-serif;color:rgb(109,0,246)"> <div>On Wed, Nov 9, 2022 at 12:05, beer Ll</div><div><<a href="mailto:llcfhllml@gmail.com" target="_blank">llcfhllml@gmail.com</a>> wrote:</div> </div> <div style="padding:10px 0px 0px 20px;margin:10px 0px 0px;border-left:1px solid rgb(109,0,246)"> <div id="m_8498909766960386601yiv2181761839"><div dir="ltr"><div>Hi <br></div><div><br></div><div>I have 2 gluster servers (backup1, backup2) connected with a 10Gbit link</div><div>glusterfs version 10.3-1, server and client<br></div><div><br></div><div>Each server has a 44T RAID 6 array, with 1 partition used for the brick<br></div><div><br></div><div> /dev/VG_BT_BRICK1/LV_BT_BRICK1 [ 43.66 TiB] </div><div><br></div><div>/dev/mapper/VG_BT_BRICK1-LV_BT_BRICK1 on /mnt/glusterfs type xfs (rw,noatime,nouuid,attr2,inode64,sunit=256,swidth=1536,noquota)</div><div><br></div><div><br></div><div>I created a replica volume named backup<br></div><div><br></div><div>Volume Name: backup<br>Type: Replicate<br>Number of Bricks: 1 x 2 = 2<br>Transport-type: tcp<br>Bricks:<br>Brick1: backup-1:/mnt/glusterfs/brick1<br>Brick2: backup-2:/mnt/glusterfs/brick1<br>Options Reconfigured:<br>cluster.granular-entry-heal: on<br>storage.fips-mode-rchecksum: on<br>transport.address-family: inet<br>nfs.disable: on<br>performance.client-io-threads: off<br>cluster.enable-shared-storage: enable</div><div><br></div><div><br></div><div>The volume backup is mounted with the gluster client on /mnt/share<br></div><div><br></div><div>backup-1:/backup on /mnt/share type fuse.glusterfs (rw,relatime,user_id=0,group_id=0,default_permissions,allow_other,max_read=131072)<br></div><div><br></div><div><br></div><div>Test with the smallfile utility <br></div><div><br></div><div>XFS<br></div><div>with filesystem xfs on 
/mnt/glusterfs <br></div><div><br></div><div>total threads = 8<br>total files = 800<br>100.00% of requested files processed, warning threshold is 70.00%<br>elapsed time = 0.009<br>files/sec = 120927.642211<br></div><div><br></div><div>GLUSTERFS CLIENT<br></div><div>with glusterfs on /mnt/share</div><div><br></div><div>total threads = 8<br>total files = 800<br>100.00% of requested files processed, warning threshold is 70.00%<br>elapsed time = 3.014<br>files/sec = 284.975861<br></div><div><br></div><div><br></div><div><br></div><div>How is it possible to increase the performance of the glusterfs volume? <br></div><div>What is the best practice for brick size and replica management?</div><div>Is it better to have 1 big brick per server, or more small distributed bricks?</div><div><br></div><div>Many thanks <br></div><div><br></div><div><br></div><div><br></div><div><br></div><div><br></div></div>
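[Editor's note] The sunit/swidth values in the XFS mount line above can be cross-checked against the RAID geometry reported elsewhere in this thread (8-disk RAID 6, 128 KiB strip). A minimal arithmetic sketch; the disk count, parity count, and strip size are taken from the thread, everything else is plain unit conversion:

```shell
# RAID 6 with 8 disks -> 2 parity disks, 6 data disks, 128 KiB strip
# (geometry as stated in the thread).
DISKS=8
PARITY=2
STRIP_KB=128

DATA_DISKS=$((DISKS - PARITY))             # 6 data disks
FULL_STRIPE_KB=$((DATA_DISKS * STRIP_KB))  # 768 KiB full stripe
SUNIT=$((STRIP_KB * 1024 / 512))           # XFS sunit, in 512-byte sectors
SWIDTH=$((DATA_DISKS * SUNIT))             # XFS swidth, in 512-byte sectors

echo "full_stripe=${FULL_STRIPE_KB}KiB sunit=${SUNIT} swidth=${SWIDTH}"
```

This yields sunit=256 and swidth=1536, matching the sunit=256,swidth=1536 in the mount output above, so the filesystem appears to be stripe-aligned to the controller.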
</div>________<br><br><br><br>Community Meeting Calendar:<br><br>Schedule -<br>Every 2nd and 4th Tuesday at 14:30 IST / 09:00 UTC<br>Bridge: <a href="https://meet.google.com/cpu-eiue-hvk" target="_blank">https://meet.google.com/cpu-eiue-hvk</a><br>Gluster-users mailing list<br><a href="mailto:Gluster-users@gluster.org" target="_blank">Gluster-users@gluster.org</a><br><a href="https://lists.gluster.org/mailman/listinfo/gluster-users" target="_blank">https://lists.gluster.org/mailman/listinfo/gluster-users</a><br> </div> </blockquote></div></div></blockquote></div>
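[Editor's note] The small-file tuning advice in the thread points at the Red Hat guide, which largely boils down to a handful of volume options. A hedged sketch for the 'backup' volume from the thread; the option names are standard GlusterFS options, but the values shown are illustrative starting points, not measured recommendations, so test against your own workload:

```shell
# Illustrative small-file tuning for the 'backup' volume.
# Values are starting points, not benchmarked recommendations.
gluster volume set backup features.cache-invalidation on
gluster volume set backup features.cache-invalidation-timeout 600
gluster volume set backup performance.cache-invalidation on
gluster volume set backup performance.md-cache-timeout 600
gluster volume set backup network.inode-lru-limit 200000
gluster volume set backup cluster.lookup-optimize on

# Verify the resulting configuration
gluster volume info backup
```

As the thread notes, re-run the smallfile test (or, better, the real application) after each change, since an option that helps one workload can hurt another.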