[Gluster-users] NFS Ganesha on Glusterfs for VM workload tuning

Strahil Nikolov hunter86_bg at yahoo.com
Wed Jun 16 19:06:13 UTC 2021

The virt group of settings is not specific to FUSE/libgfapi; it is tuned for virtualization workloads in general and should be fine for a VM that can be live-migrated to another host.
Can you provide some details on the disks? Are you using JBOD or a hardware RAID? Did you align your storage properly to match the underlying hardware?
What OS and filesystem are used? Mount options?
What is the I/O scheduler in the VMs?
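(For reference, the active scheduler inside the guest can be read from sysfs; `none` or `mq-deadline` is usually the sensible choice for virtio/SCSI disks, since the host already schedules the real device. The device name `sda` below is just an example.)

```shell
# Show the schedulers available for a guest disk; the active one is in brackets,
# e.g. "[mq-deadline] kyber bfq none".
cat /sys/block/sda/queue/scheduler

# Switch to "none" for the current boot (virtualized guests typically do best
# deferring scheduling to the host):
echo none > /sys/block/sda/queue/scheduler
```

To make the change persistent, a udev rule or kernel command line (`elevator=` is gone in recent kernels, so udev is the usual route) is needed.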
Best regards,
Strahil Nikolov
On Wed, Jun 16, 2021 at 17:06, levin ng <levindecaro at gmail.com> wrote:

Hi,
I went through the Gluster and NFS-Ganesha documentation, but could not find reference settings specific to NFS-Ganesha on Gluster for a VM workload. There is an oVirt profile which appears to be optimized for FUSE/libgfapi, and quite a few cache-related bugs have been reported that can lead to corruption. I am using the settings below but am not satisfied with the performance. According to the volume profile, most of the latency is in WRITE fops. Compared with a plain NFS mount and local disk, it is quite slow.
Does anyone have suggestions on improving overall performance?
Thank you.
Regards,
Levin
fio -filename=./testfile.bin -direct=1 -iodepth 16 -thread -rw=randrw -rwmixread=70 -ioengine=psync -bs=8k -size=1000M -numjobs=30 -runtime=100 -group_reporting -name=mytest

[In Guest, Virt SCSI]
  READ: bw=11.0MiB/s (12.6MB/s), 11.0MiB/s-11.0MiB/s (12.6MB/s-12.6MB/s), io=1198MiB (1257MB), run=100013-100013msec
  WRITE: bw=5267KiB/s (5394kB/s), 5267KiB/s-5267KiB/s (5394kB/s-5394kB/s), io=514MiB (539MB), run=100013-100013msec

[NFS4.1 @10GbE]
  READ: bw=86.9MiB/s (91.2MB/s), 86.9MiB/s-86.9MiB/s (91.2MB/s-91.2MB/s), io=8694MiB (9116MB), run=100001-100001msec
  WRITE: bw=37.3MiB/s (39.1MB/s), 37.3MiB/s-37.3MiB/s (39.1MB/s-39.1MB/s), io=3728MiB (3909MB), run=100001-100001msec

[Local ZFS sync=always recordsize=128k, compression=on]
  READ: bw=585MiB/s (613MB/s), 585MiB/s-585MiB/s (613MB/s-613MB/s), io=20.5GiB (22.0GB), run=35913-35913msec
  WRITE: bw=251MiB/s (263MB/s), 251MiB/s-251MiB/s (263MB/s-263MB/s), io=9000MiB (9438MB), run=35913-35913msec
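For comparison, the bandwidths above translate to rough IOPS figures at the 8 KiB block size used in the fio job (a quick back-of-the-envelope conversion, bandwidth divided by block size):

```python
# Convert fio bandwidth at bs=8k into approximate IOPS.
BS_KIB = 8  # block size from the fio command above (-bs=8k)

def iops(bw_mib_per_s: float) -> int:
    """MiB/s -> IOPS: multiply by 1024 to get KiB/s, divide by block size."""
    return round(bw_mib_per_s * 1024 / BS_KIB)

results_mib_s = {
    "guest read": 11.0,
    "guest write": 5267 / 1024,  # fio reported 5267 KiB/s
    "nfs read": 86.9,
    "nfs write": 37.3,
    "local zfs read": 585,
    "local zfs write": 251,
}
for name, bw in results_mib_s.items():
    print(f"{name}: ~{iops(bw)} IOPS")
```

So the in-guest numbers work out to roughly 1.4k read / 0.7k write IOPS versus ~11k / ~4.8k over plain NFS, which makes the gap concrete.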

Volume Name: vol1
Type: Replicate
Volume ID: dfdb919e-cbf2-4f57-b6f2-1035459ef8fc
Status: Started
Snapshot Count: 0
Number of Bricks: 1 x (2 + 1) = 3
Transport-type: tcp
Brick1: sds-2:/hdpool1/hg2/brick1
Brick2: sds-3:/hdpool1/hg2/brick1
Brick3: arb-1:/arbiter/hg2/brick1 (arbiter)
Options Reconfigured:
cluster.eager-lock: on
features.cache-invalidation-timeout: 15
features.shard: on
features.shard-block-size: 512MB
ganesha.enable: on
features.cache-invalidation: on
performance.io-cache: off
cluster.choose-local: true
performance.low-prio-threads: 32
cluster.quorum-type: auto
cluster.server-quorum-type: server
cluster.data-self-heal-algorithm: full
cluster.locking-scheme: granular
cluster.shd-max-threads: 8
cluster.shd-wait-qlength: 10000
user.cifs: off
client.event-threads: 16
server.event-threads: 16
network.ping-timeout: 20
server.tcp-user-timeout: 20
cluster.lookup-optimize: off
performance.write-behind: off
performance.flush-behind: off
performance.cache-size: 0
performance.io-thread-count: 64
performance.high-prio-threads: 64
performance.normal-prio-threads: 64
cluster.granular-entry-heal: on
storage.fips-mode-rchecksum: on
transport.address-family: inet
nfs.disable: on
performance.client-io-threads: off
cluster.enable-shared-storage: enable
nfs-ganesha: enable
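Regarding the WRITE-fop latency mentioned above: per-fop numbers like those presumably come from a sequence along these lines (volume name `vol1` from the output above); clearing the counters just before a single fio run makes the per-fop latencies much easier to attribute.

```shell
# Enable profiling on the volume (one-off; adds some overhead while active):
gluster volume profile vol1 start

# Reset the counters, run the benchmark, then dump the stats:
gluster volume profile vol1 info clear
# ... run the fio job in the guest ...
gluster volume profile vol1 info

# Disable profiling when done:
gluster volume profile vol1 stop
```

The "Interval" section of the output then reflects only the benchmark window, which is usually more telling than the cumulative totals.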

Community Meeting Calendar:

Schedule -
Every 2nd and 4th Tuesday at 14:30 IST / 09:00 UTC
Bridge: https://meet.google.com/cpu-eiue-hvk
Gluster-users mailing list
Gluster-users at gluster.org
