[Gluster-users] Gluster 5.6 slow read despite fast local brick

Strahil hunter86_bg at yahoo.com
Mon Apr 22 18:00:56 UTC 2019


I've set 'cluster.choose-local: on' and the sequential read is now approximately 550 MB/s, but this is still far below the 1.3 GB/s I observed with Gluster v5.5.
Should I consider this a bug, or do some options need to be changed?
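
For reference, this is roughly how the option was set and the read was measured (a minimal sketch; the gluster syntax is standard, but the mount point and test file are illustrative, so substitute your own FUSE mount path):

gluster volume set data_fast cluster.choose-local on
# sequential read through the FUSE mount, bypassing the page cache
dd if=/mnt/data_fast/testfile of=/dev/null bs=1M count=4096 iflag=direct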

What about rolling back? I tried to roll back one of my nodes, but it never came back online until I upgraded it to 5.6 again.
Maybe a full offline downgrade could work...
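
One thing worth checking first (a hedged note, since the 5.6 upgrade may have bumped it): the cluster op-version cannot be lowered once raised, and a raised op-version can prevent an older glusterd from starting, which may explain why the rolled-back node never came back.

# show the current cluster-wide op-version
gluster volume get all cluster.op-version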

Best Regards,
Strahil Nikolov

On Apr 22, 2019 17:18, Strahil Nikolov <hunter86_bg at yahoo.com> wrote:
>
> As I had the option to rebuild the volume, I did so, and it still reads considerably slower than before the 5.6 upgrade.
>
> I have set cluster.choose-local to 'on', but the performance is still the same.
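>
> To confirm the option actually took effect, the configured value can be read back with the standard gluster CLI:
>
> gluster volume get data_fast cluster.choose-local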
>
> Volume Name: data_fast
> Type: Replicate
> Volume ID: 888a32ea-9b5c-4001-a9c5-8bc7ee0bddce
> Status: Started
> Snapshot Count: 0
> Number of Bricks: 1 x (2 + 1) = 3
> Transport-type: tcp
> Bricks:
> Brick1: ovirt1:/gluster_bricks/data_fast/data_fast
> Brick2: ovirt2:/gluster_bricks/data_fast/data_fast
> Brick3: ovirt3:/gluster_bricks/data_fast/data_fast (arbiter)
> Options Reconfigured:
> cluster.choose-local: on
> network.ping-timeout: 30
> cluster.granular-entry-heal: enable
> performance.strict-o-direct: on
> storage.owner-gid: 36
> storage.owner-uid: 36
> user.cifs: off
> features.shard: on
> cluster.shd-wait-qlength: 10000
> cluster.shd-max-threads: 8
> cluster.locking-scheme: granular
> cluster.data-self-heal-algorithm: full
> cluster.server-quorum-type: server
> cluster.quorum-type: auto
> cluster.eager-lock: enable
> network.remote-dio: off
> performance.low-prio-threads: 32
> performance.io-cache: off
> performance.read-ahead: off
> performance.quick-read: off
> transport.address-family: inet
> nfs.disable: on
> performance.client-io-threads: off
> cluster.enable-shared-storage: enable
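>
> One experiment that may be worth trying against the options above (a hedged sketch; read-ahead and io-cache are deliberately off in the oVirt/virt profile, so revert after testing):
>
> # temporarily enable client-side read-ahead, re-run the read test, then revert
> gluster volume set data_fast performance.read-ahead on
> gluster volume set data_fast performance.read-ahead off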
>
> Are any issues expected when downgrading the version?
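>
> If a per-node downgrade is attempted anyway, the rough (hedged) sequence would be something like the following, assuming the 5.5 packages are still available in the configured repositories:
>
> systemctl stop glusterd
> pkill glusterfsd          # stop any brick processes still running
> yum downgrade 'glusterfs*'
> systemctl start glusterd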
>
> Best Regards,
> Strahil Nikolov
>
>
> On Monday, April 22, 2019, 00:26:51 GMT-4, Strahil <hunter86_bg at yahoo.com> wrote:
>
>
> Hello Community,
>
> I have been left with the impression that FUSE mounts read from both local and remote bricks. Is that right?
>
> I'm using oVirt in a hyperconverged setup, and despite my slow network (currently 1 Gbit/s, to be expanded soon), I was expecting that at least reads from the local brick would be fast. Yet I can't reach more than 250 MB/s, while the two data bricks are NVMe devices with much higher capabilities.
>
> Is there something I can do about that?
> Maybe change cluster.choose-local, as I don't see it set on my other volumes?
> What are the risks associated with that?
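>
> To separate disk speed from Gluster overhead, the same data can also be read directly from the local brick (a sketch; with sharding enabled the data files live under the brick's .shard directory, and the file name below is illustrative):
>
> # raw read from the brick itself, no Gluster client in the path
> dd if=/gluster_bricks/data_fast/data_fast/.shard/<shard-file> of=/dev/null bs=1M iflag=direct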
>
> Volume Name: data_fast
> Type: Replicate
> Volume ID: b78aa52a-4c49-407d-bfd8-fdffb2a3610a
> Status: Started
> Snapshot Count: 0
> Number of Bricks: 1 x (2 + 1) = 3
> Transport-type: tcp
> Bricks:
> Brick1: ovirt1:/gluster_bricks/data_fast/data_fast
> Brick2: ovirt2:/gluster_bricks/data_fast/data_fast
> Brick3: ovirt3:/gluster_bricks/data_fast/data_fast (arbiter)
> Options Reconfigured:
> performance.client-io-threads: off
> nfs.disable: on
> transport.address-family: inet
> performance.quick-read: off
> performance.read-ahead: off
> performance.io-cache: off
> performance.low-prio-threads: 32
> network.remote-dio: off
> cluster.eager-lock: enable
> cluster.quorum-type: auto
> cluster.server-quorum-type: server
> cluster.data-self-heal-algorithm: full
> cluster.locking-scheme: granular
> cluster.shd-max-threads: 8
> cluster.shd-wait-qlength: 10000
> features.shard: on
> user.cifs: off
> cluster.choose-local: off
> storage.owner-uid: 36
> storage.owner-gid: 36
> performance.strict-o-direct: on
> cluster.granular-entry-heal: enable
> network.ping-timeout: 30
> cluster.enable-shared-storage: enable
>
> Best Regards,
> Strahil Nikolov
>

