<p dir="ltr">I've set 'cluster.choose-local: on' and the sequential read is aprox 550MB/s , but this is far below the 1.3G I have observed with gluster v5.5 .<br>
Should I consider it a bug, or some options need to be changed ?</p>
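
For reference, this is roughly the kind of test I mean — set the option on the volume, then read a large file through the FUSE mount (the mount point and test file below are only examples, not my exact paths):

# set and verify the option
gluster volume set data_fast cluster.choose-local on
gluster volume get data_fast cluster.choose-local

# sequential read through the FUSE mount (example mount point /mnt/data_fast)
echo 3 > /proc/sys/vm/drop_caches        # drop the page cache so the read actually hits the bricks
dd if=/mnt/data_fast/some_large_file of=/dev/null bs=1M count=4096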
<p dir="ltr">What about rolling back? I've tried to roll back one of my nodes, but it never came back until I have upgraded to 5.6 .<br>
Maybe a full offline downgrade could work...</p>
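
What I have in mind for the offline attempt is roughly the following (just a sketch — the RPM package names are the CentOS ones and the exact steps are an assumption, not a tested procedure):

# on each node, with the VMs and the oVirt storage domain already stopped
systemctl stop glusterd
pkill glusterfsd; pkill glusterfs        # any leftover brick/shd/fuse processes

# downgrade the packages back to 5.5 (exact package set depends on distro/repos)
yum downgrade glusterfs glusterfs-server glusterfs-fuse glusterfs-api glusterfs-cli glusterfs-libs

systemctl start glusterd
gluster volume status data_fast          # check that all bricks come back online

# probably worth checking beforehand whether the cluster op-version was raised
# after the upgrade, since the older binaries may not accept a higher one:
gluster volume get all cluster.op-version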
<p dir="ltr">Best Regards,<br>
Strahil Nikolov</p>
<div class="quote">On Apr 22, 2019 17:18, Strahil Nikolov <hunter86_bg@yahoo.com> wrote:<br type='attribution'><blockquote class="quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex"><div><div style="font-family:'courier new' , 'courier' , 'monaco' , monospace , sans-serif;font-size:16px"><div></div>

As I had the option to rebuild the volume, I did so, and it still reads noticeably slower than before the 5.6 upgrade.

I have set cluster.choose-local to 'on', but performance is still the same.

Volume Name: data_fast
Type: Replicate
Volume ID: 888a32ea-9b5c-4001-a9c5-8bc7ee0bddce
Status: Started
Snapshot Count: 0
Number of Bricks: 1 x (2 + 1) = 3
Transport-type: tcp
Bricks:
Brick1: ovirt1:/gluster_bricks/data_fast/data_fast
Brick2: ovirt2:/gluster_bricks/data_fast/data_fast
Brick3: ovirt3:/gluster_bricks/data_fast/data_fast (arbiter)
Options Reconfigured:
cluster.choose-local: on
network.ping-timeout: 30
cluster.granular-entry-heal: enable
performance.strict-o-direct: on
storage.owner-gid: 36
storage.owner-uid: 36
user.cifs: off
features.shard: on
cluster.shd-wait-qlength: 10000
cluster.shd-max-threads: 8
cluster.locking-scheme: granular
cluster.data-self-heal-algorithm: full
cluster.server-quorum-type: server
cluster.quorum-type: auto
cluster.eager-lock: enable
network.remote-dio: off
performance.low-prio-threads: 32
performance.io-cache: off
performance.read-ahead: off
performance.quick-read: off
transport.address-family: inet
nfs.disable: on
performance.client-io-threads: off
cluster.enable-shared-storage: enable

Any issues expected when downgrading the version?

Best Regards,
Strahil Nikolov

On Monday, April 22, 2019, 00:26:51 GMT-4, Strahil <hunter86_bg@yahoo.com> wrote:
<div><div><p dir="ltr">Hello Community,</p>
<p dir="ltr">I have been left with the impression that FUSE mounts will read from both local and remote bricks , is that right?</p>
<p dir="ltr">I'm using oVirt as a hyperconverged setup and despite my slow network (currently 1 gbit/s, will be expanded soon), I was expecting that at least the reads from the local brick will be fast, yet I can't reach more than 250 MB/s while the 2 data bricks are NVME with much higher capabilities.</p>
<p dir="ltr">Is there something I can do about that ?<br />
Maybe change cluster.choose-local, as I don't see it on my other volumes ?<br />
What are the risks associated with that?</p>
<p dir="ltr">Volume Name: data_fast<br />
Type: Replicate<br />
Volume ID: b78aa52a-4c49-407d-bfd8-fdffb2a3610a<br />
Status: Started<br />
Snapshot Count: 0<br />
Number of Bricks: 1 x (2 + 1) = 3<br />
Transport-type: tcp<br />
Bricks:<br />
Brick1: ovirt1:/gluster_bricks/data_fast/data_fast<br />
Brick2: ovirt2:/gluster_bricks/data_fast/data_fast<br />
Brick3: ovirt3:/gluster_bricks/data_fast/data_fast (arbiter)<br />
Options Reconfigured:<br />
performance.client-io-threads: off<br />
nfs.disable: on<br />
transport.address-family: inet<br />
performance.quick-read: off<br />
performance.read-ahead: off<br />
<a href="http://performance.io">performance.io</a>-cache: off<br />
performance.low-prio-threads: 32<br />
network.remote-dio: off<br />
cluster.eager-lock: enable<br />
cluster.quorum-type: auto<br />
cluster.server-quorum-type: server<br />
cluster.data-self-heal-algorithm: full<br />
cluster.locking-scheme: granular<br />
cluster.shd-max-threads: 8<br />
cluster.shd-wait-qlength: 10000<br />
features.shard: on<br />
user.cifs: off<br />
cluster.choose-local: off<br />
storage.owner-uid: 36<br />
storage.owner-gid: 36<br />
performance.strict-o-direct: on<br />
cluster.granular-entry-heal: enable<br />
network.ping-timeout: 30<br />
cluster.enable-shared-storage: enable<br /></p>
<p dir="ltr">Best Regards,<br />
Strahil Nikolov<br />
</p>