[Gluster-users] Performance Questions - not only small files
Strahil Nikolov
hunter86_bg at yahoo.com
Sun May 16 16:52:00 UTC 2021
Due to the nature of the whole flow (FUSE -> Filesystem in Userspace), there will be more overhead than on bare metal.
Have you tested increasing:
- performance.cache-size
- performance.write-behind-window-size
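For example, to try larger values (a sketch only; the volume name gfs_scratch is taken from your output below, and the sizes are illustrative, not recommendations):

    # raise the io-cache size and the write-behind window on the volume
    gluster volume set gfs_scratch performance.cache-size 512MB
    gluster volume set gfs_scratch performance.write-behind-window-size 256MB

    # confirm the new values took effect
    gluster volume get gfs_scratch performance.cache-size
    gluster volume get gfs_scratch performance.write-behind-window-size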
Also, note that artificial benchmarks usually do not reflect your actual needs. The best approach is to test with a real-world workload, as tuning for one specific workload reduces performance for another. For example, optimizing for sequential I/O requires options that are the opposite of those for random I/O patterns.
For small file optimizations, you can check https://access.redhat.com/documentation/en-us/red_hat_gluster_storage/3.5/html/administration_guide/small_file_performance_enhancements
I/O workloads can be optimized via 2 tuned profiles from http://ftp.redhat.com/redhat/linux/enterprise/7Server/en/RHS/SRPMS/redhat-storage-server-3.5.0.0-7.el7rhgs.src.rpm
Also, you can tune Gluster towards your workload by using the predefined option groups in /var/lib/glusterd/groups (I wrote that from memory, so check for typos).
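For example, applying one of those groups in a single command (a sketch; 'metadata-cache' is one of the group files commonly shipped in /var/lib/glusterd/groups, and the tuned profile names come from that RHGS package, so verify both exist on your systems):

    # apply a predefined bundle of gluster options from /var/lib/glusterd/groups
    gluster volume set gfs_scratch group metadata-cache

    # switch tuned to the profile matching your dominant I/O pattern
    tuned-adm profile rhgs-random-io    # or rhgs-sequential-io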
To debug it further, 'profile' your volume as per https://access.redhat.com/documentation/en-us/red_hat_gluster_storage/3.5/html/administration_guide/chap-monitoring_red_hat_storage_workload and share the output.
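A minimal profiling session looks roughly like this (per the guide above; keep the real workload running while profiling is active):

    # start collecting per-brick latency and FOP statistics
    gluster volume profile gfs_scratch start

    # ... run the real workload ...

    # dump cumulative/interval statistics, then stop collecting
    gluster volume profile gfs_scratch info
    gluster volume profile gfs_scratch stop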
Best Regards,
Strahil Nikolov
On Sat, May 15, 2021 at 7:45, Schlick Rupert <Rupert.Schlick at ait.ac.at> wrote:

Dear Felix,
as requested, volume info, xfs_info, fstab.
Volume Name: gfs_scratch
Type: Replicate
Volume ID: d99b6154-bf34-49d6-a06b-0e29bfc2a0fb
Status: Started
Snapshot Count: 0
Number of Bricks: 1 x 3 = 3
Transport-type: tcp
Bricks:
Brick1: server3:/data/glusterfs_scratch/gfs_scratch_brick1
Brick2: server2:/data/glusterfs_scratch/gfs_scratch_brick1
Brick3: server1:/data/glusterfs_scratch/gfs_scratch_brick1
Options Reconfigured:
performance.parallel-readdir: on
cluster.self-heal-daemon: enable
features.uss: disable
cluster.server-quorum-type: none
performance.cache-size: 256MB
cluster.granular-entry-heal: on
storage.fips-mode-rchecksum: on
transport.address-family: inet
nfs.disable: on
performance.client-io-threads: off
features.cache-invalidation: on
features.cache-invalidation-timeout: 600
performance.stat-prefetch: on
performance.cache-invalidation: on
performance.md-cache-timeout: 600
network.inode-lru-limit: 200000
performance.write-behind-window-size: 128MB
diagnostics.latency-measurement: on
diagnostics.count-fop-hits: on
snap-activate-on-create: enable
auto-delete: enable
$ xfs_info /data/glusterfs_scratch
meta-data=/dev/mapper/GVG-GLV_scratch isize=512 agcount=16, agsize=3276768 blks
= sectsz=4096 attr=2, projid32bit=1
= crc=1 finobt=1, sparse=1, rmapbt=0
= reflink=1
data = bsize=4096 blocks=52428288, imaxpct=25
= sunit=32 swidth=128 blks
naming =version 2 bsize=8192 ascii-ci=0, ftype=1
log =internal log bsize=4096 blocks=25599, version=2
= sectsz=4096 sunit=1 blks, lazy-count=1
realtime =none extsz=4096 blocks=0, rtextents=0
# /etc/fstab: static file system information.
# <file system> <mount point> <type> <options> <dump> <pass>
/dev/disk/by-id/dm-uuid-LVM-IHnHtvssE4vRhHeAkSbMX7DDF8uwnlsZsaQvW2bySOYnc17QGlT7FiESTD9GloaL / ext4 defaults 0 0
/dev/disk/by-uuid/7aa78d34-0c19-47dc-85e2-70b54cfb9868 /boot ext4 defaults 0 0
/dev/disk/by-uuid/37A6-0D34 /boot/efi vfat defaults 0 0
/swap.img none swap sw 0 0
UUID=f070d8a9-ade5-451f-9bf6-53de6c1a3789 /data/glusterfs_home xfs inode64,noatime,nodiratime 0 0
UUID=187654ce-99f7-4aea-b2f6-701cea801b01 /data/glusterfs_sw xfs inode64,noatime,nodiratime 0 0
UUID=1176552b-7233-4354-be32-a6dc0e899d64 /data/glusterfs_scratch xfs inode64,noatime,nodiratime 0 0
server1:/gfs_home /home/gfs glusterfs defaults,_netdev 0 0
server1:/gfs_sw /sw glusterfs defaults,_netdev 0 0
server1:/gfs_scratch /scratch glusterfs defaults,_netdev 0 0
/dev/mapper/GVG-gitlab--builds /mnt/gitlab-builds ext4 defaults,noatime,nodiratime 0 0
Cheers
Rupert