[Gluster-users] replica performance and brick size best practice

beer Ll llcfhllml at gmail.com
Mon Nov 14 11:33:29 UTC 2022


Hi Strahil
thank you for your email

HW RAID with HP Smart Array P840
8 x 8TB SAS disks
RAID 6 -> 43.66 TiB

strip size: 128KB
full stripe size: 768KB (6 data disks x 128KB)


No thin LVM in this case.

XFS with the standard Debian options, isize=512.
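
For reference, given the RAID geometry above (128KB strip across 6 data
disks), the filesystem would have been created roughly like this -- a
sketch reconstructed from the mount options below, not the exact
command:

  # su = per-disk strip size, sw = number of data disks in the RAID 6 set
  mkfs.xfs -i size=512 -d su=128k,sw=6 /dev/VG_BT_BRICK1/LV_BT_BRICK1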

No GlusterFS tuning.

No jumbo frames.
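
For reference, enabling jumbo frames would look roughly like this
(assuming the 10Gbit NIC is eth0 -- the interface name is illustrative,
and the MTU has to match on both servers and the switch ports):

  # non-persistent; put it in the network config to survive reboots
  ip link set dev eth0 mtu 9000
  ip link show eth0 | grep mtu   # verify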

I share the gluster volume over NFS (Ganesha); the application runs on
another server on the LAN.
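
The Ganesha side is a standard FSAL_GLUSTER export, something like this
(export ID and paths are illustrative, not my exact ganesha.conf):

  EXPORT {
      Export_Id = 1;
      Path = "/backup";
      Pseudo = "/backup";
      Access_Type = RW;
      FSAL {
          Name = GLUSTER;
          Hostname = "localhost";
          Volume = "backup";
      }
  }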

Best regards

On Sun, Nov 13, 2022 at 8:59 AM Strahil Nikolov <hunter86_bg at yahoo.com>
wrote:

> Hi,
>
> First you need to identify what kind of workload you will have.
> Optimizations that help one workload can hurt performance for
> another.
>
> If you plan to use the volume for small files, this document is a good
> start:
> https://access.redhat.com/documentation/en-us/red_hat_gluster_storage/3.5/html/administration_guide/small_file_performance_enhancements
> Replica 2 volumes are prone to split-brain, so it's always recommended
> to use replica 3 or an arbiter.
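>
> For example, an existing replica 2 volume can be extended with an
> arbiter along these lines (the arbiter hostname and brick path are
> purely illustrative):
>
>   # adds a third, metadata-only brick that breaks split-brain ties
>   gluster volume add-brick backup replica 3 arbiter 1 \
>       arbiter-1:/mnt/glusterfs/arbiter1
>   gluster volume heal backup full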
>
> As a best practice, always test the volume with the application that
> will use it; synthetic benchmarks are just that: synthetic.
>
> I always start a performance review at the bottom (infrastructure) and
> end at the application level.
>
> You can start with
> https://access.redhat.com/documentation/en-us/red_hat_gluster_storage/3.5/html/administration_guide/chap-configuring_red_hat_storage_for_enhancing_performance
> as the storage is one of the most important parts.
>
> What kind of HW RAID, how many disks, and what stripe size and stripe
> width did you use in LVM?
> Do you use thin LVM?
> How did you create your XFS (isize is critical)?
>
> Have you used Gluster's tuned profile?
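>
> (On Red Hat Gluster Storage that would be something like the following;
> the rhgs-* profiles may not be packaged on a plain Debian install:)
>
>   tuned-adm list                      # show available profiles
>   tuned-adm profile rhgs-random-io    # the small-file/random-io profile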
>
> Jumbo frames?
>
> Then you will need to optimize the volume for small files (see the link
> above).
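>
> As a sketch, that guide largely boils down to enabling the metadata
> cache -- something like the following; check the document for the exact
> set of options for your version:
>
>   gluster volume set backup features.cache-invalidation on
>   gluster volume set backup features.cache-invalidation-timeout 600
>   gluster volume set backup performance.stat-prefetch on
>   gluster volume set backup performance.cache-invalidation on
>   gluster volume set backup performance.md-cache-timeout 600
>   gluster volume set backup network.inode-lru-limit 200000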
>
> Does your app allow you to use libgfapi? Based on my observations on
> the oVirt list, libgfapi used to provide some performance benefits
> compared to FUSE.
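>
> (With QEMU, for instance, libgfapi means addressing the volume directly
> instead of going through a FUSE mount -- the image path below is
> illustrative, and qemu must be built with gluster support:)
>
>   qemu-img info gluster://backup-1/backup/images/vm01.qcow2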
>
> Also, if you work with very small files, it would make sense to combine
> them into some container (like VM disk images).
>
> Keep in mind that GlusterFS performance scales with the size of the
> cluster and the number of clients. For ultra high performance for a few
> clients -> there are other options.
>
> Best Regards,
> Strahil Nikolov
>
> On Wed, Nov 9, 2022 at 12:05, beer Ll
> <llcfhllml at gmail.com> wrote:
> Hi
>
> I have 2 gluster servers (backup1, backup2) connected with a 10Gbit
> link, running glusterfs version 10.3-1 on both server and client.
>
> Each server has a 44T RAID 6 array with 1 partition used for the brick:
>
>  /dev/VG_BT_BRICK1/LV_BT_BRICK1   [      43.66 TiB]
>
> /dev/mapper/VG_BT_BRICK1-LV_BT_BRICK1 on /mnt/glusterfs type xfs
> (rw,noatime,nouuid,attr2,inode64,sunit=256,swidth=1536,noquota)
>
>
> I created a replica volume named backup:
>
> Volume Name: backup
> Type: Replicate
> Number of Bricks: 1 x 2 = 2
> Transport-type: tcp
> Bricks:
> Brick1: backup-1:/mnt/glusterfs/brick1
> Brick2: backup-2:/mnt/glusterfs/brick1
> Options Reconfigured:
> cluster.granular-entry-heal: on
> storage.fips-mode-rchecksum: on
> transport.address-family: inet
> nfs.disable: on
> performance.client-io-threads: off
> cluster.enable-shared-storage: enable
>
>
> The backup volume is mounted with the gluster client on /mnt/share:
>
> backup-1:/backup on /mnt/share type fuse.glusterfs
> (rw,relatime,user_id=0,group_id=0,default_permissions,allow_other,max_read=131072)
>
>
> Test with the smallfile utility:
>
> XFS
> directly on the xfs filesystem at /mnt/glusterfs
>
> total threads = 8
> total files = 800
> 100.00% of requested files processed, warning threshold is  70.00%
> elapsed time =     0.009
> files/sec = 120927.642211
>
> GLUSTERFS CLIENT
> on the glusterfs mount at /mnt/share
>
> total threads = 8
> total files = 800
> 100.00% of requested files processed, warning threshold is  70.00%
> elapsed time =     3.014
> files/sec = 284.975861
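>
> (For reference, a smallfile invocation matching these numbers would be
> roughly the following -- the exact flags used are not shown above:)
>
>   python smallfile_cli.py --operation create --threads 8 --files 100 \
>       --top /mnt/share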
>
>
>
> How is it possible to increase the performance of the glusterfs volume?
> What is the best practice for brick size and replica management?
> Is it better to have 1 big brick per server, or many smaller bricks
> distributed?
>
> Many Thanks
>
