Hi,

I've gone through all the Gluster and NFS-Ganesha documentation, but I cannot find a recommended set of options for NFS-Ganesha on Gluster with a VM workload. There is an oVirt profile, but it sounds like it is tuned for FUSE/libgfapi access, and there are quite a few reported bugs where caching problems lead to corruption. I'm currently using the settings below and am not satisfied with the performance. According to the volume profile, most of the latency is in the WRITE fop. Compared with a plain NFS mount and a local disk, it is quite slow.

Does anyone have suggestions on improving overall performance?

Thank you
Regards,
Levin

fio -filename=./testfile.bin -direct=1 -iodepth 16 -thread -rw=randrw -rwmixread=70 -ioengine=psync -bs=8k -size=1000M -numjobs=30 -runtime=100 -group_reporting -name=mytest

[In Guest, Virtio SCSI]
   READ: bw=11.0MiB/s (12.6MB/s), 11.0MiB/s-11.0MiB/s (12.6MB/s-12.6MB/s), io=1198MiB (1257MB), run=100013-100013msec
  WRITE: bw=5267KiB/s (5394kB/s), 5267KiB/s-5267KiB/s (5394kB/s-5394kB/s), io=514MiB (539MB), run=100013-100013msec

[NFS 4.1 @ 10GbE]
   READ: bw=86.9MiB/s (91.2MB/s), 86.9MiB/s-86.9MiB/s (91.2MB/s-91.2MB/s), io=8694MiB (9116MB), run=100001-100001msec
  WRITE: bw=37.3MiB/s (39.1MB/s), 37.3MiB/s-37.3MiB/s (39.1MB/s-39.1MB/s), io=3728MiB (3909MB), run=100001-100001msec

[Local ZFS, sync=always, recordsize=128k, compression=on]
   READ: bw=585MiB/s (613MB/s), 585MiB/s-585MiB/s (613MB/s-613MB/s), io=20.5GiB (22.0GB), run=35913-35913msec
  WRITE: bw=251MiB/s (263MB/s), 251MiB/s-251MiB/s (263MB/s-263MB/s), io=9000MiB (9438MB), run=35913-35913msec

Volume Name: vol1
Type: Replicate
Volume ID: dfdb919e-cbf2-4f57-b6f2-1035459ef8fc
Status: Started
Snapshot Count: 0
Number of Bricks: 1 x (2 + 1) = 3
Transport-type: tcp
Bricks:
Brick1: sds-2:/hdpool1/hg2/brick1
Brick2: sds-3:/hdpool1/hg2/brick1
Brick3: arb-1:/arbiter/hg2/brick1 (arbiter)
Options Reconfigured:
cluster.eager-lock: on
features.cache-invalidation-timeout: 15
features.shard: on
features.shard-block-size: 512MB
ganesha.enable: on
features.cache-invalidation: on
performance.io-cache: off
cluster.choose-local: true
performance.low-prio-threads: 32
cluster.quorum-type: auto
cluster.server-quorum-type: server
cluster.data-self-heal-algorithm: full
cluster.locking-scheme: granular
cluster.shd-max-threads: 8
cluster.shd-wait-qlength: 10000
user.cifs: off
client.event-threads: 16
server.event-threads: 16
network.ping-timeout: 20
server.tcp-user-timeout: 20
cluster.lookup-optimize: off
performance.write-behind: off
performance.flush-behind: off
performance.cache-size: 0
performance.io-thread-count: 64
performance.high-prio-threads: 64
performance.normal-prio-threads: 64
cluster.granular-entry-heal: on
storage.fips-mode-rchecksum: on
transport.address-family: inet
nfs.disable: on
performance.client-io-threads: off
cluster.enable-shared-storage: enable
nfs-ganesha: enable
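
For completeness, the WRITE-latency observation above comes from the volume profile. Roughly how I collect it (just a sketch; the workload run in between is the fio command above):

gluster volume profile vol1 start
# run the fio workload inside the guest, then:
gluster volume profile vol1 info
gluster volume profile vol1 stop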
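
I have not yet applied the oVirt/virt option group mentioned above. My understanding (please correct me if I'm wrong) is that it would be enabled with something like the command below and mainly disables the client-side performance caches and turns on sharding, eager-lock and remote-dio, so I'm unsure whether it is the right baseline when the clients are NFS-Ganesha rather than FUSE/libgfapi:

gluster volume set vol1 group virt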