[Gluster-users] Gluster 5.5 slower than 3.12.15
Strahil Nikolov
hunter86_bg at yahoo.com
Wed Apr 3 00:26:09 UTC 2019
Hi Community,
I have the feeling that performance with Gluster v5.5 is poorer than it used to be on 3.12.15. Have you observed something like that?
I have a 3-node hyperconverged cluster (oVirt + GlusterFS with replica 3 arbiter 1 volumes) with NFS-Ganesha, and the issues started after I upgraded to v5. First there was the notorious 5.3 experience, and now with 5.5 sanlock is having problems and latency is higher than it used to be. I have switched from NFS-Ganesha to pure FUSE, but the latency problems do not go away.
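If it helps to quantify what I am seeing, here is roughly how I intend to capture per-FOP latency with volume profiling (just a sketch - the 'engine' volume and the two-minute window are examples):

gluster volume profile engine start
# let the usual workload run for a while, e.g. a couple of minutes
sleep 120
# per-brick, per-FOP average/min/max latency (LOOKUP, WRITE, FSYNC, ...)
gluster volume profile engine info
gluster volume profile engine stop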
Of course, this is partially due to the consumer-grade hardware, but as the hardware has not changed I was hoping the performance would remain as it was.
So, is 5.5 expected to perform worse than 3.12?
Some info:

Volume Name: engine
Type: Replicate
Volume ID: 30ca1cc2-f2f7-4749-9e2e-cee9d7099ded
Status: Started
Snapshot Count: 0
Number of Bricks: 1 x (2 + 1) = 3
Transport-type: tcp
Bricks:
Brick1: ovirt1:/gluster_bricks/engine/engine
Brick2: ovirt2:/gluster_bricks/engine/engine
Brick3: ovirt3:/gluster_bricks/engine/engine (arbiter)
Options Reconfigured:
performance.client-io-threads: off
nfs.disable: on
transport.address-family: inet
performance.quick-read: off
performance.read-ahead: off
performance.io-cache: off
performance.low-prio-threads: 32
network.remote-dio: off
cluster.eager-lock: enable
cluster.quorum-type: auto
cluster.server-quorum-type: server
cluster.data-self-heal-algorithm: full
cluster.locking-scheme: granular
cluster.shd-max-threads: 8
cluster.shd-wait-qlength: 10000
features.shard: on
user.cifs: off
storage.owner-uid: 36
storage.owner-gid: 36
network.ping-timeout: 30
performance.strict-o-direct: on
cluster.granular-entry-heal: enable
cluster.enable-shared-storage: enable
Network: 1 Gbit/s
Filesystem: XFS
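One thing I still want to double-check after the upgrade is the cluster op-version (sketch only - the exact values depend on the installation):

# op-version currently in effect vs. the maximum this installation supports
gluster volume get all cluster.op-version
gluster volume get all cluster.max-op-version
# if it is still at the 3.12 level, bump it to the value reported by max-op-version
gluster volume set all cluster.op-version <max-op-version value>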
Best Regards,
Strahil Nikolov