<div dir="ltr"><div>Looks good mostly.</div><div>You can also turn on performance.stat-prefetch, and also set client.event-threads and server.event-threads to 4.</div><div>And if your bricks are on ssds, then you could also enable performance.client-io-threads.</div><div>And if your bricks and hypervisors are on same set of machines (hyperconverged),</div><div>then you can turn off cluster.choose-local and see if it helps read performance.</div><div><br></div><div>Do let us know what helped and what didn&#39;t.<br></div><div><br></div><div>-Krutika<br></div></div><br><div class="gmail_quote"><div dir="ltr" class="gmail_attr">On Thu, Apr 18, 2019 at 1:05 PM &lt;<a href="mailto:lemonnierk@ulrar.net">lemonnierk@ulrar.net</a>&gt; wrote:<br></div><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex">Hi,<br>
<br>
We&#39;ve been using the same settings, found in an old email here, since<br>
v3.7 of Gluster for our VM hosting volumes. They&#39;ve been working fine,<br>
but since we&#39;ve just installed a v6 for testing, I figured there might<br>
be new settings I should be aware of.<br>
<br>
So for access through libgfapi (qemu), for VM hard drives, is that<br>
still optimal and recommended?<br>
<br>
Volume Name: glusterfs<br>
Type: Replicate<br>
Volume ID: b28347ff-2c27-44e0-bc7d-c1c017df7cd1<br>
Status: Started<br>
Snapshot Count: 0<br>
Number of Bricks: 1 x 3 = 3<br>
Transport-type: tcp<br>
Bricks:<br>
Brick1: ips1adm.X:/mnt/glusterfs/brick<br>
Brick2: ips2adm.X:/mnt/glusterfs/brick<br>
Brick3: ips3adm.X:/mnt/glusterfs/brick<br>
Options Reconfigured:<br>
performance.readdir-ahead: on<br>
cluster.quorum-type: auto<br>
cluster.server-quorum-type: server<br>
network.remote-dio: enable<br>
cluster.eager-lock: enable<br>
performance.quick-read: off<br>
performance.read-ahead: off<br>
performance.io-cache: off<br>
performance.stat-prefetch: off<br>
features.shard: on<br>
features.shard-block-size: 64MB<br>
cluster.data-self-heal-algorithm: full<br>
network.ping-timeout: 30<br>
diagnostics.count-fop-hits: on<br>
diagnostics.latency-measurement: on<br>
transport.address-family: inet<br>
nfs.disable: on<br>
performance.client-io-threads: off<br>
<br>
Thanks!<br>
_______________________________________________<br>
Gluster-users mailing list<br>
<a href="mailto:Gluster-users@gluster.org" target="_blank">Gluster-users@gluster.org</a><br>
<a href="https://lists.gluster.org/mailman/listinfo/gluster-users" rel="noreferrer" target="_blank">https://lists.gluster.org/mailman/listinfo/gluster-users</a><br>
</blockquote></div>
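<div dir="ltr"><div>For reference, here is a minimal sketch of how the options suggested above could be applied with the gluster CLI, assuming the volume name &quot;glusterfs&quot; from the quoted volume info. Please double-check option names and defaults against the Gluster 6 documentation before applying them to a production volume:</div><pre>
# General tuning suggested above
gluster volume set glusterfs performance.stat-prefetch on
gluster volume set glusterfs client.event-threads 4
gluster volume set glusterfs server.event-threads 4

# Only if the bricks sit on SSDs
gluster volume set glusterfs performance.client-io-threads on

# Only if bricks and hypervisors share the same hosts (hyperconverged)
gluster volume set glusterfs cluster.choose-local off

# Verify the resulting values
gluster volume get glusterfs all | grep -E 'event-threads|stat-prefetch|client-io-threads|choose-local'
</pre><div>gluster volume set takes effect on a running volume, so these can generally be tried one at a time and measured before deciding which to keep.</div></div>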