[Gluster-users] Gluster 5.5 slower than 3.12.15

Strahil hunter86_bg at yahoo.com
Thu Apr 4 07:11:09 UTC 2019


Hi Amar,

I would like to test Gluster v6, but as I'm quite new to oVirt, I'm not sure whether oVirt <-> Gluster will communicate properly.

Has anyone tested a rollback from v6 to v5.5? If rollback is possible, I would be happy to give it a try.
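
A minimal sketch of what I would check first (assuming the standard gluster CLI; as far as I know cluster.op-version can only be raised, never lowered, so a rollback is only straightforward while the op-version has not yet been bumped for v6):

    # current cluster-wide op-version - a rollback to 5.5 should only be attempted
    # while this still reports the 5.x value
    gluster volume get all cluster.op-version
    # highest op-version the installed binaries would support after the upgrade
    gluster volume get all cluster.max-op-version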

Best Regards,
Strahil Nikolov

On Apr 3, 2019 11:35, Amar Tumballi Suryanarayan <atumball at redhat.com> wrote:
>
> Strahil,
>
> With some basic testing, we are noticing similar behavior too.
>
> One of the issues we identified was increased network usage in the 5.x series (being addressed by https://review.gluster.org/#/c/glusterfs/+/22404/), and there are a few other features that write extended attributes, which caused some delay.
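>
> For reference, those extended attributes can be inspected directly on a brick; a rough example (the file path below is just a placeholder for any file inside your brick directory):
>
>     # run on a brick host: dump all extended attributes of one brick file in hex
>     getfattr -d -m . -e hex /gluster_bricks/engine/engine/<some-file>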
>
> We are in the process of publishing numbers comparing release-3.12.x, release-5, and release-6 soon. From the numbers we already have, release-6 is giving really good performance in many configurations, especially for the 1x3 replicate volume type.
>
> While we continue to identify and fix issues in the 5.x series, one request is to validate release-6.x (6.0, or 6.1 which would happen on April 10th), so you can see the difference in your workload.
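>
> If it helps, a rough per-node upgrade sketch for a CentOS 7 based setup (the release package name is an assumption and may differ on oVirt nodes; upgrade one node at a time and let heals finish before moving on):
>
>     # assumption: CentOS 7 node using the Storage SIG release package for 6.x
>     yum install centos-release-gluster6
>     yum update 'glusterfs*'
>     systemctl restart glusterd
>     gluster --version
>     # wait for pending heals to drain before upgrading the next node
>     gluster volume heal engine info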
>
> Regards,
> Amar
>
>
>
> On Wed, Apr 3, 2019 at 5:57 AM Strahil Nikolov <hunter86_bg at yahoo.com> wrote:
>>
>> Hi Community,
>>
>> I have the feeling that with Gluster v5.5 I get poorer performance than I used to on 3.12.15. Has anyone observed something like that?
>>
>> I have a 3-node hyperconverged cluster (oVirt + GlusterFS with replica 3 arbiter 1 volumes) with NFS-Ganesha, and the issues came up after I upgraded to v5.
>> First it was the notorious 5.3 experience, and now with 5.5 my sanlock is having problems and latency is higher than it used to be. I have switched from NFS-Ganesha to pure FUSE, but the latency problems do not go away.
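>>
>> For reference, this is roughly how I measure latency on the FUSE mount (a simple sketch: the mount point is a placeholder and the fio parameters are just what I use to mimic small direct writes like sanlock's):
>>
>>     # 4k direct, queue-depth-1 random writes against the FUSE mount,
>>     # to capture the small-write latency that sanlock is sensitive to
>>     fio --name=lat-check --directory=/mnt/glustertest --rw=randwrite \
>>         --bs=4k --size=256M --ioengine=libaio --direct=1 --iodepth=1 \
>>         --numjobs=1 --runtime=60 --time_based --group_reporting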
>>
>> Of course, this is partially due to the consumer hardware, but as the hardware has not changed, I was hoping that performance would remain as it was.
>>
>> So, do you expect 5.5 to perform worse than 3.12?
>>
>> Some info:
>> Volume Name: engine
>> Type: Replicate
>> Volume ID: 30ca1cc2-f2f7-4749-9e2e-cee9d7099ded
>> Status: Started
>> Snapshot Count: 0
>> Number of Bricks: 1 x (2 + 1) = 3
>> Transport-type: tcp
>> Bricks:
>> Brick1: ovirt1:/gluster_bricks/engine/engine
>> Brick2: ovirt2:/gluster_bricks/engine/engine
>> Brick3: ovirt3:/gluster_bricks/engine/engine (arbiter)
>> Options Reconfigured:
>> performance.client-io-threads: off
>> nfs.disable: on
>> transport.address-family: inet
>> performance.quick-read: off
>> performance.read-ahead: off
>> performance.io-cache: off
>> performance.low-prio-threads: 32
>> network.remote-dio: off
>> cluster.eager-lock: enable
>> cluster.quorum-type: auto
>> cluster.server-quorum-type: server
>> cluster.data-self-heal-algorithm: full
>> cluster.locking-scheme: granular
>> cluster.shd-max-threads: 8
>> cluster.shd-wait-qlength: 10000
>> features.shard: on
>> user.cifs: off
>> storage.owner-uid: 36
>> storage.owner-gid: 36
>> network.ping-timeout: 30
>> performance.strict-o-direct: on
>> cluster.granular-entry-heal: enable
>> cluster.enable-shared-storage: enable
>>
>> Network: 1 gbit/s
>>
>> Filesystem: XFS
>>
>> Best Regards,
>> Strahil Nikolov
>>
>> _______________________________________________
>> Gluster-users mailing list
>> Gluster-users at gluster.org
>> https://lists.gluster.org/mailman/listinfo/gluster-users
>
>
>
> -- 
> Amar Tumballi (amarts)

