[Gluster-users] gluster performance and new implementation
Raghavendra Gowdappa
rgowdapp at redhat.com
Mon Jul 23 12:06:21 UTC 2018
I doubt it will make a big difference, but you can turn on
performance.flush-behind.
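A minimal sketch of how to toggle it with the gluster CLI (volume name
gv01 taken from your volume info below):

    # Enable flush-behind: flush/close calls return to the application
    # without waiting for pending write-behind data to reach the bricks.
    gluster volume set gv01 performance.flush-behind on

    # Verify the value now in effect.
    gluster volume get gv01 performance.flush-behind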
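On the throughput question below: with the native FUSE client, AFR
replication happens on the client side, and an arbiter brick receives
only metadata, never file data, so writes fan out to the two data
bricks only. A rough back-of-the-envelope check, assuming a single 1G
client link and typical TCP/IP overhead:

    1 Gbit/s      = 125 MB/s raw, roughly 110 MB/s usable
    110 MB/s / 2  ≈ 55 MB/s expected peak write throughput

So throughput/2, not throughput/3, is the right expectation for an
arbiter volume; your 45-55 MB/s estimate is in the right ballpark, and
the observed 30-35 MB/s points at overhead elsewhere rather than at
the arbiter.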
On Mon, Jul 23, 2018 at 4:51 PM, Γιώργος Βασιλόπουλος <g.vasilopoulos at uoc.gr> wrote:
> Hello
>
> I have set up an experimental gluster replica 3 arbiter 1 volume for oVirt.
>
> The network between the gluster servers is 2x1G (mode 4 bonding), and the
> network on the oVirt side is 1G.
>
> I'm observing write performance of about 30-35 MB/s. Is this normal?
>
> I was expecting about 45-55 MB/s on writes, given that write speed should
> be network throughput/2.
>
> Is this expected, given that there is an arbiter brick in place?
>
> Or is it network throughput/3, as with a true replica 3 volume, even when
> one of the bricks is an arbiter?
>
> It seems that file size, small or large, currently has little impact on
> performance.
>
> My volume settings are as follows:
>
> Volume Name: gv01
> Type: Replicate
> Volume ID: 3396f285-ba1e-4360-94a9-5cc65ede62c9
> Status: Started
> Snapshot Count: 0
> Number of Bricks: 1 x (2 + 1) = 3
> Transport-type: tcp
> Bricks:
> Brick1: gluster1.datacenter.uoc.gr:/gluster_bricks/brick00
> Brick2: gluster2.datacenter.uoc.gr:/gluster_bricks/brick00
> Brick3: gluster3.datacenter.uoc.gr:/gluster_bricks/brick00 (arbiter)
> Options Reconfigured:
> server.allow-insecure: on
> storage.owner-gid: 36
> storage.owner-uid: 36
> auth.allow: xxx.xxx.xxx.*,xxx.xxx.xxx.*
> user.cifs: off
> features.shard: on
> cluster.shd-wait-qlength: 10000
> cluster.shd-max-threads: 8
> cluster.locking-scheme: granular
> cluster.data-self-heal-algorithm: full
> cluster.server-quorum-type: server
> cluster.quorum-type: auto
> network.remote-dio: on
> performance.low-prio-threads: 32
> performance.io-cache: off
> performance.read-ahead: off
> performance.quick-read: off
> transport.address-family: inet
> nfs.disable: on
> performance.client-io-threads: on
> performance.cache-size: 192MB
> performance.write-behind-window-size: 524288
> features.shard-block-size: 64MB
> cluster.eager-lock: enable
> diagnostics.brick-log-level: WARNING
> diagnostics.client-log-level: WARNING
> performance.cache-refresh-timeout: 4
> performance.strict-write-ordering: off
> performance.flush-behind: off
> network.inode-lru-limit: 10000
> performance.md-cache-timeout: 600
> performance.cache-invalidation: on
> features.cache-invalidation-timeout: 600
> features.cache-invalidation: on
> client.event-threads: 6
> server.event-threads: 6
> performance.stat-prefetch: on
> performance.parallel-readdir: false
> cluster.use-compound-fops: true
> cluster.readdir-optimize: true
> cluster.lookup-optimize: true
> cluster.self-heal-daemon: enable
> network.ping-timeout: 30
> performance.strict-o-direct: on
> cluster.granular-entry-heal: enable
> cluster.server-quorum-ratio: 51
>
>
> Regards
>
> George Vasilopoulos
>
> Systems administrator, UCNET, University of Crete
>
> --
> Βασιλόπουλος Γιώργος (George Vasilopoulos)
> Electrical Engineer (T.E.)
> Computer Systems Administrator
>
> University of Crete
> Κ.Υ.Υ.Τ.Π.Ε.
> Department of Communications and Networks
> Voutes, Heraklion 70013
> Tel.: 2810393310
> email: g.vasilopoulos at uoc.gr
> http://www.ucnet.uoc.gr
>
> _______________________________________________
> Gluster-users mailing list
> Gluster-users at gluster.org
> https://lists.gluster.org/mailman/listinfo/gluster-users