As I had the option to rebuild the volume, I did it, and it still reads considerably slower than before the 5.6 upgrade.

I have set cluster.choose-local to 'on' (the exact command is in the P.S. below), but performance is still the same.

Volume Name: data_fast
Type: Replicate
Volume ID: 888a32ea-9b5c-4001-a9c5-8bc7ee0bddce
Status: Started
Snapshot Count: 0
Number of Bricks: 1 x (2 + 1) = 3
Transport-type: tcp
Bricks:
Brick1: ovirt1:/gluster_bricks/data_fast/data_fast
Brick2: ovirt2:/gluster_bricks/data_fast/data_fast
Brick3: ovirt3:/gluster_bricks/data_fast/data_fast (arbiter)
Options Reconfigured:
cluster.choose-local: on
network.ping-timeout: 30
cluster.granular-entry-heal: enable
performance.strict-o-direct: on
storage.owner-gid: 36
storage.owner-uid: 36
user.cifs: off
features.shard: on
cluster.shd-wait-qlength: 10000
cluster.shd-max-threads: 8
cluster.locking-scheme: granular
cluster.data-self-heal-algorithm: full
cluster.server-quorum-type: server
cluster.quorum-type: auto
cluster.eager-lock: enable
network.remote-dio: off
performance.low-prio-threads: 32
performance.io-cache: off
performance.read-ahead: off
performance.quick-read: off
transport.address-family: inet
nfs.disable: on
performance.client-io-threads: off
cluster.enable-shared-storage: enable

Are any issues expected when downgrading the version?

Best Regards,
Strahil Nikolov
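P.S. For reference, the option was switched with the standard volume-set command, run on one of the gluster nodes:

    gluster volume set data_fast cluster.choose-local on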
        
On Monday, April 22, 2019, 00:26:51 GMT-4, Strahil <hunter86_bg@yahoo.com> wrote:
Hello Community,

I have been left with the impression that FUSE mounts will read from both local and remote bricks. Is that right?

I'm using oVirt as a hyperconverged setup, and despite my slow network (currently 1 Gbit/s, to be expanded soon) I was expecting that at least reads from the local brick would be fast. Yet I can't reach more than 250 MB/s, while the two data bricks are NVMe drives with much higher capabilities.
<p dir="ltr">Is there something I can do about that ?<br>
Maybe change cluster.choose-local, as I don't see it on my other volumes ?<br>
What are the risks associated with that?</p>
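Side note: 'gluster volume info' lists only options that have been explicitly reconfigured, which is presumably why cluster.choose-local does not show up on the other volumes. The effective value (default or not) can still be queried with:

    gluster volume get data_fast cluster.choose-local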
<p dir="ltr">Volume Name: data_fast<br>
Type: Replicate<br>
Volume ID: b78aa52a-4c49-407d-bfd8-fdffb2a3610a<br>
Status: Started<br>
Snapshot Count: 0<br>
Number of Bricks: 1 x (2 + 1) = 3<br>
Transport-type: tcp<br>
Bricks:<br>
Brick1: ovirt1:/gluster_bricks/data_fast/data_fast<br>
Brick2: ovirt2:/gluster_bricks/data_fast/data_fast<br>
Brick3: ovirt3:/gluster_bricks/data_fast/data_fast (arbiter)<br>
Options Reconfigured:<br>
performance.client-io-threads: off<br>
nfs.disable: on<br>
transport.address-family: inet<br>
performance.quick-read: off<br>
performance.read-ahead: off<br>
<a href="http://performance.io" rel="nofollow" target="_blank">performance.io</a>-cache: off<br>
performance.low-prio-threads: 32<br>
network.remote-dio: off<br>
cluster.eager-lock: enable<br>
cluster.quorum-type: auto<br>
cluster.server-quorum-type: server<br>
cluster.data-self-heal-algorithm: full<br>
cluster.locking-scheme: granular<br>
cluster.shd-max-threads: 8<br>
cluster.shd-wait-qlength: 10000<br>
features.shard: on<br>
user.cifs: off<br>
cluster.choose-local: off<br>
storage.owner-uid: 36<br>
storage.owner-gid: 36<br>
performance.strict-o-direct: on<br>
cluster.granular-entry-heal: enable<br>
network.ping-timeout: 30<br>
cluster.enable-shared-storage: enable<br></p>
<p dir="ltr">Best Regards,<br>
Strahil Nikolov<br>
</p>