Due to the nature of the whole I/O path (FUSE, Filesystem in Userspace), there will always be more overhead than on bare metal.<div id="yMail_cursorElementTracker_1621182767700"><br></div><div id="yMail_cursorElementTracker_1621182767920">Have you tested increasing:</div><div id="yMail_cursorElementTracker_1621182815542">- performance.cache-size</div><div id="yMail_cursorElementTracker_1621182818457">- performance.write-behind-window-size</div><div id="yMail_cursorElementTracker_1621182831860"><br></div><div id="yMail_cursorElementTracker_1621183856735"><br></div><div id="yMail_cursorElementTracker_1621182832967">Also, keep in mind that artificial benchmarks usually do not reflect your actual needs. The best approach is to test with your real-world workload, as tuning for one specific workload reduces performance for another.</div><div id="yMail_cursorElementTracker_1621182967583">For example, optimizing for sequential I/O requires options that are the opposite of those for random I/O patterns.</div><div id="yMail_cursorElementTracker_1621183332615"><br></div><div id="yMail_cursorElementTracker_1621183332806">For small-file optimizations, you can check <a id="linkextractor__1621183351727" data-yahoo-extracted-link="true" href="https://access.redhat.com/documentation/en-us/red_hat_gluster_storage/3.5/html/administration_guide/small_file_performance_enhancements" class="lEnhancr_1621183352763">https://access.redhat.com/documentation/en-us/red_hat_gluster_storage/3.5/html/administration_guide/small_file_performance_enhancements</a></div><div id="yMail_cursorElementTracker_1621183351777"><br></div><div id="yMail_cursorElementTracker_1621183351974">I/O workloads can also be optimized via the two tuned profiles shipped in <a id="linkextractor__1621183634855" data-yahoo-extracted-link="true" 
href="http://ftp.redhat.com/redhat/linux/enterprise/7Server/en/RHS/SRPMS/redhat-storage-server-3.5.0.0-7.el7rhgs.src.rpm">http://ftp.redhat.com/redhat/linux/enterprise/7Server/en/RHS/SRPMS/redhat-storage-server-3.5.0.0-7.el7rhgs.src.rpm</a></div><div id="yMail_cursorElementTracker_1621183634915">You can also tune Gluster towards your workload by applying one of the predefined option groups in /var/lib/glusterd/groups (I wrote that path from memory, so check for typos).</div><div id="yMail_cursorElementTracker_1621183672261"><br></div><div id="yMail_cursorElementTracker_1621183672455">To debug this further, 'profile' your volume as described in <a id="linkextractor__1621183801556" data-yahoo-extracted-link="true" href="https://access.redhat.com/documentation/en-us/red_hat_gluster_storage/3.5/html/administration_guide/chap-monitoring_red_hat_storage_workload" class="lEnhancr_1621183802395">https://access.redhat.com/documentation/en-us/red_hat_gluster_storage/3.5/html/administration_guide/chap-monitoring_red_hat_storage_workload</a> and share the output.</div><div id="yMail_cursorElementTracker_1621183826610"><br></div><div id="yMail_cursorElementTracker_1621183826845">Best Regards,</div><div id="yMail_cursorElementTracker_1621183830999">Strahil Nikolov</div><div id="yMail_cursorElementTracker_1621183670438"> <br> <blockquote style="margin: 0 0 20px 0;"> <div style="font-family:Roboto, sans-serif; color:#6D00F6;"> <div>On Sat, May 15, 2021 at 7:45, Schlick Rupert</div><div><Rupert.Schlick@ait.ac.at> wrote:</div> </div> <div style="padding: 10px 0 0 20px; margin: 10px 0 0 0; border-left: 1px solid #6D00F6;" id="yMail_cursorElementTracker_1621182804346"> Dear Felix,<br clear="none"><br clear="none">as requested, volume info, xfs_info, fstab.<br clear="none"><br clear="none">Volume Name: gfs_scratch<br clear="none">Type: Replicate<br clear="none">Volume ID: d99b6154-bf34-49d6-a06b-0e29bfc2a0fb<br clear="none">Status: Started<br clear="none">Snapshot Count: 0<br clear="none">Number 
of Bricks: 1 x 3 = 3<br clear="none">Transport-type: tcp<br clear="none">Bricks:<br clear="none">Brick1: server3:/data/glusterfs_scratch/gfs_scratch_brick1<br clear="none">Brick2: server2:/data/glusterfs_scratch/gfs_scratch_brick1<br clear="none">Brick3: server1:/data/glusterfs_scratch/gfs_scratch_brick1<br clear="none">Options Reconfigured:<br clear="none">performance.parallel-readdir: on<br clear="none">cluster.self-heal-daemon: enable<br clear="none">features.uss: disable<br clear="none">cluster.server-quorum-type: none<br clear="none">performance.cache-size: 256MB<br clear="none">cluster.granular-entry-heal: on<br clear="none">storage.fips-mode-rchecksum: on<br clear="none">transport.address-family: inet<br clear="none">nfs.disable: on<br clear="none">performance.client-io-threads: off<br clear="none">features.cache-invalidation: on<br clear="none">features.cache-invalidation-timeout: 600<br clear="none">performance.stat-prefetch: on<br clear="none">performance.cache-invalidation: on<br clear="none">performance.md-cache-timeout: 600<br clear="none">network.inode-lru-limit: 200000<br clear="none">performance.write-behind-window-size: 128MB<br clear="none">diagnostics.latency-measurement: on<br clear="none">diagnostics.count-fop-hits: on<br clear="none">snap-activate-on-create: enable<br clear="none">auto-delete: enable<br clear="none"><br clear="none">$ xfs_info /data/glusterfs_scratch<br clear="none">meta-data=/dev/mapper/GVG-GLV_scratch isize=512    agcount=16, agsize=3276768 blks<br clear="none">         =                       sectsz=4096  attr=2, projid32bit=1<br clear="none">         =                       crc=1        finobt=1, sparse=1, rmapbt=0<br clear="none">         =                       reflink=1<br clear="none">data     =                       bsize=4096   blocks=52428288, imaxpct=25<br clear="none">         =                       sunit=32     swidth=128 blks<br clear="none">naming   =version 2              bsize=8192   ascii-ci=0, ftype=1<br 
clear="none">log      =internal log           bsize=4096   blocks=25599, version=2<br clear="none">         =                       sectsz=4096  sunit=1 blks, lazy-count=1<br clear="none">realtime =none                   extsz=4096   blocks=0, rtextents=0<br clear="none"><br clear="none"># /etc/fstab: static file system information.<br clear="none"># <file system> <mount point>   <type>  <options>       <dump>  <pass><br clear="none">/dev/disk/by-id/dm-uuid-LVM-IHnHtvssE4vRhHeAkSbMX7DDF8uwnlsZsaQvW2bySOYnc17QGlT7FiESTD9GloaL / ext4 defaults 0 0<br clear="none">/dev/disk/by-uuid/7aa78d34-0c19-47dc-85e2-70b54cfb9868 /boot ext4 defaults 0 0<br clear="none">/dev/disk/by-uuid/37A6-0D34 /boot/efi vfat defaults 0 0<br clear="none">/swap.img       none    swap    sw      0       0<br clear="none">UUID=f070d8a9-ade5-451f-9bf6-53de6c1a3789 /data/glusterfs_home xfs inode64,noatime,nodiratime 0 0<br clear="none">UUID=187654ce-99f7-4aea-b2f6-701cea801b01 /data/glusterfs_sw xfs inode64,noatime,nodiratime 0 0<br clear="none">UUID=1176552b-7233-4354-be32-a6dc0e899d64 /data/glusterfs_scratch xfs inode64,noatime,nodiratime 0 0<br clear="none">server1:/gfs_home /home/gfs glusterfs defaults,_netdev 0 0<br clear="none">server1:/gfs_sw /sw glusterfs defaults,_netdev 0 0<br clear="none">server1:/gfs_scratch /scratch glusterfs defaults,_netdev 0 0<br clear="none">/dev/mapper/GVG-gitlab--builds /mnt/gitlab-builds ext4 defaults,noatime,nodiratime 0 0<br clear="none"><br clear="none">Cheers<div class="yqt5121612417" id="yqtfd35405"><br clear="none">Rupert<br clear="none">________<br clear="none"><br clear="none"><br clear="none"><br clear="none">Community Meeting Calendar:<br clear="none"><br clear="none">Schedule -<br clear="none">Every 2nd and 4th Tuesday at 14:30 IST / 09:00 UTC<br clear="none">Bridge: <a shape="rect" href="https://meet.google.com/cpu-eiue-hvk" target="_blank">https://meet.google.com/cpu-eiue-hvk</a><br clear="none">Gluster-users mailing list<br clear="none"><a 
shape="rect" ymailto="mailto:Gluster-users@gluster.org" href="mailto:Gluster-users@gluster.org">Gluster-users@gluster.org</a><br clear="none"><a shape="rect" href="https://lists.gluster.org/mailman/listinfo/gluster-users" target="_blank">https://lists.gluster.org/mailman/listinfo/gluster-users</a><br clear="none"></div> </div> </blockquote></div>
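P.S. The tuning and profiling steps suggested above can be sketched as gluster commands. This is only a sketch: the volume name gfs_scratch is taken from the quoted volume info, and the cache/window sizes are illustrative starting points, not recommendations.

```shell
# Illustrative values only - size these to your RAM and workload.
# Current settings in the quoted volume info are 256MB / 128MB.
gluster volume set gfs_scratch performance.cache-size 1GB
gluster volume set gfs_scratch performance.write-behind-window-size 512MB

# Apply one of the predefined option groups from /var/lib/glusterd/groups,
# e.g. the metadata-cache group, which helps small-file workloads:
gluster volume set gfs_scratch group metadata-cache

# Profile the volume while the real workload runs, then share the output:
gluster volume profile gfs_scratch start
# ... run the workload ...
gluster volume profile gfs_scratch info
gluster volume profile gfs_scratch stop
```

Note that `volume set ... group` applies every option listed in the group file at once, so review the file first to see what it will change.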