<p dir="ltr">Hello Community,</p>
<p dir="ltr">I have been left with the impression that FUSE mounts will read from both local and remote bricks , is that right?</p>
<p dir="ltr">I'm using oVirt as a hyperconverged setup and despite my slow network (currently 1 gbit/s, will be expanded soon), I was expecting that at least the reads from the local brick will be fast, yet I can't reach more than 250 MB/s while the 2 data bricks are NVME with much higher capabilities.</p>
<p dir="ltr">Is there something I can do about that ?<br>
Maybe change cluster.choose-local, as I don't see it on my other volumes ?<br>
What are the risks associated with that?</p>
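<p dir="ltr">I'm thinking of something along these lines, with the volume name taken from the volume info below:<br>
gluster volume get data_fast cluster.choose-local<br>
gluster volume set data_fast cluster.choose-local on</p>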
<p dir="ltr">Volume Name: data_fast<br>
Type: Replicate<br>
Volume ID: b78aa52a-4c49-407d-bfd8-fdffb2a3610a<br>
Status: Started<br>
Snapshot Count: 0<br>
Number of Bricks: 1 x (2 + 1) = 3<br>
Transport-type: tcp<br>
Bricks:<br>
Brick1: ovirt1:/gluster_bricks/data_fast/data_fast<br>
Brick2: ovirt2:/gluster_bricks/data_fast/data_fast<br>
Brick3: ovirt3:/gluster_bricks/data_fast/data_fast (arbiter)<br>
Options Reconfigured:<br>
performance.client-io-threads: off<br>
nfs.disable: on<br>
transport.address-family: inet<br>
performance.quick-read: off<br>
performance.read-ahead: off<br>
<a href="http://performance.io">performance.io</a>-cache: off<br>
performance.low-prio-threads: 32<br>
network.remote-dio: off<br>
cluster.eager-lock: enable<br>
cluster.quorum-type: auto<br>
cluster.server-quorum-type: server<br>
cluster.data-self-heal-algorithm: full<br>
cluster.locking-scheme: granular<br>
cluster.shd-max-threads: 8<br>
cluster.shd-wait-qlength: 10000<br>
features.shard: on<br>
user.cifs: off<br>
cluster.choose-local: off<br>
storage.owner-uid: 36<br>
storage.owner-gid: 36<br>
performance.strict-o-direct: on<br>
cluster.granular-entry-heal: enable<br>
network.ping-timeout: 30<br>
cluster.enable-shared-storage: enable<br></p>
<p dir="ltr">Best Regards,<br>
Strahil Nikolov<br>
</p>