<div dir="ltr" data-setdir="false">Can you try with a fresh replica volume with 'virt' group applied ?</div><div dir="ltr" data-setdir="false"><br></div><div dir="ltr" data-setdir="false">Best Regards,</div><div dir="ltr" data-setdir="false">Strahil Nikolov</div><div><br></div>
On Wednesday, July 3, 2019, 19:18:18 GMT+3, Vladimir Melnik <v.melnik@tucha.ua> wrote:

Thank you, it helped a little:

$ for i in {1..5}; do { dd if=/dev/zero of=/mnt/glusterfs1/test.tmp bs=1M count=10 oflag=sync; rm -f /mnt/glusterfs1/test.tmp; } done 2>&1 | grep copied
10485760 bytes (10 MB) copied, 0.738968 s, 14.2 MB/s
10485760 bytes (10 MB) copied, 0.725296 s, 14.5 MB/s
10485760 bytes (10 MB) copied, 0.681508 s, 15.4 MB/s
10485760 bytes (10 MB) copied, 0.85566 s, 12.3 MB/s
10485760 bytes (10 MB) copied, 0.661457 s, 15.9 MB/s

But 14-15 MB/s is still quite far from the underlying storage's performance (200-300 MB/s). :-(
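
As far as I understand, oflag=sync makes every 1 MB write synchronous, so this loop mostly measures synchronous-write latency across the replicas rather than raw throughput. A buffered comparison would be the same loop with conv=fsync (a single flush at the end) instead of oflag=sync:

$ for i in {1..5}; do { dd if=/dev/zero of=/mnt/glusterfs1/test.tmp bs=1M count=10 conv=fsync; rm -f /mnt/glusterfs1/test.tmp; } done 2>&1 | grep copied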
clear="none">diagnostics.brick-log-buf-size 5<br clear="none">diagnostics.client-log-buf-size 5<br clear="none">diagnostics.brick-log-flush-timeout 120<br clear="none">diagnostics.client-log-flush-timeout 120<br clear="none">diagnostics.stats-dump-interval 0<br clear="none">diagnostics.fop-sample-interval 0<br clear="none">diagnostics.stats-dump-format json<br clear="none">diagnostics.fop-sample-buf-size 65535<br clear="none">diagnostics.stats-dnscache-ttl-sec 86400<br clear="none">performance.cache-max-file-size 0<br clear="none">performance.cache-min-file-size 0<br clear="none">performance.cache-refresh-timeout 1<br clear="none">performance.cache-priority<br clear="none">performance.cache-size 32MB<br clear="none">performance.io-thread-count 16<br clear="none">performance.high-prio-threads 16<br clear="none">performance.normal-prio-threads 16<br clear="none">performance.low-prio-threads 32<br clear="none">performance.least-prio-threads 1<br clear="none">performance.enable-least-priority on<br clear="none">performance.iot-watchdog-secs (null)<br clear="none">performance.iot-cleanup-disconnected-reqsoff<br clear="none">performance.iot-pass-through false<br clear="none">performance.io-cache-pass-through false<br clear="none">performance.cache-size 128MB<br clear="none">performance.qr-cache-timeout 1<br clear="none">performance.cache-invalidation false<br clear="none">performance.ctime-invalidation false<br clear="none">performance.flush-behind on<br clear="none">performance.nfs.flush-behind on<br clear="none">performance.write-behind-window-size 1MB<br clear="none">performance.resync-failed-syncs-after-fsyncoff<br clear="none">performance.nfs.write-behind-window-size1MB<br clear="none">performance.strict-o-direct off<br clear="none">performance.nfs.strict-o-direct off<br clear="none">performance.strict-write-ordering off<br clear="none">performance.nfs.strict-write-ordering off<br clear="none">performance.write-behind-trickling-writeson<br clear="none">performance.aggregate-size 128KB<br clear="none">performance.nfs.write-behind-trickling-writeson<br clear="none">performance.lazy-open yes<br clear="none">performance.read-after-open yes<br clear="none">performance.open-behind-pass-through false<br clear="none">performance.read-ahead-page-count 4<br clear="none">performance.read-ahead-pass-through false<br clear="none">performance.readdir-ahead-pass-through false<br clear="none">performance.md-cache-pass-through false<br clear="none">performance.md-cache-timeout 1<br clear="none">performance.cache-swift-metadata true<br clear="none">performance.cache-samba-metadata false<br clear="none">performance.cache-capability-xattrs true<br clear="none">performance.cache-ima-xattrs true<br clear="none">performance.md-cache-statfs off<br clear="none">performance.xattr-cache-list<br clear="none">performance.nl-cache-pass-through false<br clear="none">features.encryption off<br clear="none">network.frame-timeout 1800<br clear="none">network.ping-timeout 42<br clear="none">network.tcp-window-size (null)<br clear="none">client.ssl off<br clear="none">network.remote-dio enable<br clear="none">client.event-threads 4<br clear="none">client.tcp-user-timeout 0<br clear="none">client.keepalive-time 20<br clear="none">client.keepalive-interval 2<br clear="none">client.keepalive-count 9<br clear="none">network.tcp-window-size (null)<br clear="none">network.inode-lru-limit 16384<br clear="none">auth.allow *<br clear="none">auth.reject (null)<br clear="none">transport.keepalive 1<br clear="none">server.allow-insecure 
on<br clear="none">server.root-squash off<br clear="none">server.all-squash off<br clear="none">server.anonuid 65534<br clear="none">server.anongid 65534<br clear="none">server.statedump-path /var/run/gluster<br clear="none">server.outstanding-rpc-limit 64<br clear="none">server.ssl off<br clear="none">auth.ssl-allow *<br clear="none">server.manage-gids off<br clear="none">server.dynamic-auth on<br clear="none">client.send-gids on<br clear="none">server.gid-timeout 300<br clear="none">server.own-thread (null)<br clear="none">server.event-threads 4<br clear="none">server.tcp-user-timeout 42<br clear="none">server.keepalive-time 20<br clear="none">server.keepalive-interval 2<br clear="none">server.keepalive-count 9<br clear="none">transport.listen-backlog 1024<br clear="none">transport.address-family inet<br clear="none">performance.write-behind on<br clear="none">performance.read-ahead off<br clear="none">performance.readdir-ahead on<br clear="none">performance.io-cache off<br clear="none">performance.open-behind on<br clear="none">performance.quick-read off<br clear="none">performance.nl-cache off<br clear="none">performance.stat-prefetch on<br clear="none">performance.client-io-threads on<br clear="none">performance.nfs.write-behind on<br clear="none">performance.nfs.read-ahead off<br clear="none">performance.nfs.io-cache off<br clear="none">performance.nfs.quick-read off<br clear="none">performance.nfs.stat-prefetch off<br clear="none">performance.nfs.io-threads off<br clear="none">performance.force-readdirp true<br clear="none">performance.cache-invalidation false<br clear="none">performance.global-cache-invalidation true<br clear="none">features.uss off<br clear="none">features.snapshot-directory .snaps<br clear="none">features.show-snapshot-directory off<br clear="none">features.tag-namespaces off<br clear="none">network.compression off<br clear="none">network.compression.window-size -15<br clear="none">network.compression.mem-level 8<br clear="none">network.compression.min-size 0<br clear="none">network.compression.compression-level -1<br clear="none">network.compression.debug false<br clear="none">features.default-soft-limit 80%<br clear="none">features.soft-timeout 60<br clear="none">features.hard-timeout 5<br clear="none">features.alert-time 86400<br clear="none">features.quota-deem-statfs off<br clear="none">geo-replication.indexing off<br clear="none">geo-replication.indexing off<br clear="none">geo-replication.ignore-pid-check off<br clear="none">geo-replication.ignore-pid-check off<br clear="none">features.quota off<br clear="none">features.inode-quota off<br clear="none">features.bitrot disable<br clear="none">debug.trace off<br clear="none">debug.log-history no<br clear="none">debug.log-file no<br clear="none">debug.exclude-ops (null)<br clear="none">debug.include-ops (null)<br clear="none">debug.error-gen off<br clear="none">debug.error-failure (null)<br clear="none">debug.error-number (null)<br clear="none">debug.random-failure off<br clear="none">debug.error-fops (null)<br clear="none">nfs.disable on<br clear="none">features.read-only off<br clear="none">features.worm off<br clear="none">features.worm-file-level off<br clear="none">features.worm-files-deletable on<br clear="none">features.default-retention-period 120<br clear="none">features.retention-mode relax<br clear="none">features.auto-commit-period 180<br clear="none">storage.linux-aio off<br clear="none">storage.batch-fsync-mode reverse-fsync<br clear="none">storage.batch-fsync-delay-usec 0<br 
clear="none">storage.owner-uid -1<br clear="none">storage.owner-gid -1<br clear="none">storage.node-uuid-pathinfo off<br clear="none">storage.health-check-interval 30<br clear="none">storage.build-pgfid off<br clear="none">storage.gfid2path on<br clear="none">storage.gfid2path-separator :<br clear="none">storage.reserve 1<br clear="none">storage.health-check-timeout 10<br clear="none">storage.fips-mode-rchecksum off<br clear="none">storage.force-create-mode 0000<br clear="none">storage.force-directory-mode 0000<br clear="none">storage.create-mask 0777<br clear="none">storage.create-directory-mask 0777<br clear="none">storage.max-hardlinks 100<br clear="none">features.ctime on<br clear="none">config.gfproxyd off<br clear="none">cluster.server-quorum-type server<br clear="none">cluster.server-quorum-ratio 0<br clear="none">changelog.changelog off<br clear="none">changelog.changelog-dir {{ brick.path }}/.glusterfs/changelogs<br clear="none">changelog.encoding ascii<br clear="none">changelog.rollover-time 15<br clear="none">changelog.fsync-interval 5<br clear="none">changelog.changelog-barrier-timeout 120<br clear="none">changelog.capture-del-path off<br clear="none">features.barrier disable<br clear="none">features.barrier-timeout 120<br clear="none">features.trash off<br clear="none">features.trash-dir .trashcan<br clear="none">features.trash-eliminate-path (null)<br clear="none">features.trash-max-filesize 5MB<br clear="none">features.trash-internal-op off<br clear="none">cluster.enable-shared-storage disable<br clear="none">locks.trace off<br clear="none">locks.mandatory-locking off<br clear="none">cluster.disperse-self-heal-daemon enable<br clear="none">cluster.quorum-reads no<br clear="none">client.bind-insecure (null)<br clear="none">features.shard on<br clear="none">features.shard-block-size 64MB<br clear="none">features.shard-lru-limit 16384<br clear="none">features.shard-deletion-rate 100<br clear="none">features.scrub-throttle lazy<br clear="none">features.scrub-freq biweekly<br clear="none">features.scrub false<br clear="none">features.expiry-time 120<br clear="none">features.cache-invalidation off<br clear="none">features.cache-invalidation-timeout 60<br clear="none">features.leases off<br clear="none">features.lease-lock-recall-timeout 60<br clear="none">disperse.background-heals 8<br clear="none">disperse.heal-wait-qlength 128<br clear="none">cluster.heal-timeout 600<br clear="none">dht.force-readdirp on<br clear="none">disperse.read-policy gfid-hash<br clear="none">cluster.shd-max-threads 8<br clear="none">cluster.shd-wait-qlength 10000<br clear="none">cluster.shd-wait-qlength 10000<br clear="none">cluster.locking-scheme granular<br clear="none">cluster.granular-entry-heal no<br clear="none">features.locks-revocation-secs 0<br clear="none">features.locks-revocation-clear-all false<br clear="none">features.locks-revocation-max-blocked 0<br clear="none">features.locks-monkey-unlocking false<br clear="none">features.locks-notify-contention no<br clear="none">features.locks-notify-contention-delay 5<br clear="none">disperse.shd-max-threads 1<br clear="none">disperse.shd-wait-qlength 1024<br clear="none">disperse.cpu-extensions auto<br clear="none">disperse.self-heal-window-size 1<br clear="none">cluster.use-compound-fops off<br clear="none">performance.parallel-readdir off<br clear="none">performance.rda-request-size 131072<br clear="none">performance.rda-low-wmark 4096<br clear="none">performance.rda-high-wmark 128KB<br clear="none">performance.rda-cache-limit 10MB<br 
clear="none">performance.nl-cache-positive-entry false<br clear="none">performance.nl-cache-limit 10MB<br clear="none">performance.nl-cache-timeout 60<br clear="none">cluster.brick-multiplex off<br clear="none">cluster.max-bricks-per-process 250<br clear="none">disperse.optimistic-change-log on<br clear="none">disperse.stripe-cache 4<br clear="none">cluster.halo-enabled False<br clear="none">cluster.halo-shd-max-latency 99999<br clear="none">cluster.halo-nfsd-max-latency 5<br clear="none">cluster.halo-max-latency 5<br clear="none">cluster.halo-max-replicas 99999<br clear="none">cluster.halo-min-replicas 2<br clear="none">features.selinux on<br clear="none">cluster.daemon-log-level INFO<br clear="none">debug.delay-gen off<br clear="none">delay-gen.delay-percentage 10%<br clear="none">delay-gen.delay-duration 100000<br clear="none">delay-gen.enable<br clear="none">disperse.parallel-writes on<br clear="none">features.sdfs off<br clear="none">features.cloudsync off<br clear="none">features.ctime on<br clear="none">ctime.noatime on<br clear="none">feature.cloudsync-storetype (null)<br clear="none">features.enforce-mandatory-lock off<br clear="none"><br clear="none">What do you think, are there any other knobs worth to be turned?<br clear="none"><br clear="none">Thanks!<br clear="none"><div class="ydp74d63e68yqt9133413676" id="ydp74d63e68yqtfd61412"><br clear="none">On Wed, Jul 03, 2019 at 06:55:09PM +0300, Strahil wrote:<br clear="none">> Check the following link (4.1) for the optimal gluster volume settings.<br clear="none">> They are quite safe.<br clear="none">> <br clear="none">> Gluster provides a group called virt (/var/lib/glusterd/groups/virt) and can be applied via 'gluster volume set VOLNAME group virt'<br clear="none">> <br clear="none">> Then try again.<br clear="none">> <br clear="none">> Best Regards,<br clear="none">> Strahil NikolovOn Jul 3, 2019 11:39, Vladimir Melnik <<a shape="rect" href="mailto:v.melnik@tucha.ua" rel="nofollow" target="_blank">v.melnik@tucha.ua</a>> wrote:<br clear="none">> ><br clear="none">> > Dear colleagues, <br clear="none">> ><br clear="none">> > I have a lab with a bunch of virtual machines (the virtualization is <br clear="none">> > provided by KVM) running on the same physical host. 4 of these VMs are <br clear="none">> > working as a GlusterFS cluster and there's one more VM that works as a <br clear="none">> > client. I'll specify all the packages' versions in the ending of this <br clear="none">> > message. <br clear="none">> ><br clear="none">> > I created 2 volumes - one is having type "Distributed-Replicate" and <br clear="none">> > another one is "Distribute". The problem is that both of volumes are <br clear="none">> > showing really poor performance. 
<br clear="none">> ><br clear="none">> > Here's what I see on the client: <br clear="none">> > $ mount | grep gluster <br clear="none">> > 10.13.1.16:storage1 on /mnt/glusterfs1 type fuse.glusterfs(rw,relatime,user_id=0,group_id=0,default_permissions,allow_other,max_read=131072) <br clear="none">> > 10.13.1.16:storage2 on /mnt/glusterfs2 type fuse.glusterfs(rw,relatime,user_id=0,group_id=0,default_permissions,allow_other,max_read=131072) <br clear="none">> ><br clear="none">> > $ for i in {1..5}; do { dd if=/dev/zero of=/mnt/glusterfs1/test.tmp bs=1M count=10 oflag=sync; rm -f /mnt/glusterfs1/test.tmp; } done <br clear="none">> > 10+0 records in <br clear="none">> > 10+0 records out <br clear="none">> > 10485760 bytes (10 MB) copied, 1.47936 s, 7.1 MB/s <br clear="none">> > 10+0 records in <br clear="none">> > 10+0 records out <br clear="none">> > 10485760 bytes (10 MB) copied, 1.62546 s, 6.5 MB/s <br clear="none">> > 10+0 records in <br clear="none">> > 10+0 records out <br clear="none">> > 10485760 bytes (10 MB) copied, 1.71229 s, 6.1 MB/s <br clear="none">> > 10+0 records in <br clear="none">> > 10+0 records out <br clear="none">> > 10485760 bytes (10 MB) copied, 1.68607 s, 6.2 MB/s <br clear="none">> > 10+0 records in <br clear="none">> > 10+0 records out <br clear="none">> > 10485760 bytes (10 MB) copied, 1.82204 s, 5.8 MB/s <br clear="none">> ><br clear="none">> > $ for i in {1..5}; do { dd if=/dev/zero of=/mnt/glusterfs2/test.tmp bs=1M count=10 oflag=sync; rm -f /mnt/glusterfs2/test.tmp; } done <br clear="none">> > 10+0 records in <br clear="none">> > 10+0 records out <br clear="none">> > 10485760 bytes (10 MB) copied, 1.15739 s, 9.1 MB/s <br clear="none">> > 10+0 records in <br clear="none">> > 10+0 records out <br clear="none">> > 10485760 bytes (10 MB) copied, 0.978528 s, 10.7 MB/s <br clear="none">> > 10+0 records in <br clear="none">> > 10+0 records out <br clear="none">> > 10485760 bytes (10 MB) copied, 0.910642 s, 11.5 MB/s <br clear="none">> > 10+0 records in <br clear="none">> > 10+0 records out <br clear="none">> > 10485760 bytes (10 MB) copied, 0.998249 s, 10.5 MB/s <br clear="none">> > 10+0 records in <br clear="none">> > 10+0 records out <br clear="none">> > 10485760 bytes (10 MB) copied, 1.03377 s, 10.1 MB/s <br clear="none">> ><br clear="none">> > The distributed one shows a bit better performance than the <br clear="none">> > distributed-replicated one, but it's still poor. 
> >
> > The disk storage itself is OK. Here's what I see on each of the 4 GlusterFS
> > servers:
> > for i in {1..5}; do { dd if=/dev/zero of=/mnt/storage1/test.tmp bs=1M count=10 oflag=sync; rm -f /mnt/storage1/test.tmp; } done
> > 10+0 records in
> > 10+0 records out
> > 10485760 bytes (10 MB) copied, 0.0656698 s, 160 MB/s
> > 10+0 records in
> > 10+0 records out
> > 10485760 bytes (10 MB) copied, 0.0476927 s, 220 MB/s
> > 10+0 records in
> > 10+0 records out
> > 10485760 bytes (10 MB) copied, 0.036526 s, 287 MB/s
> > 10+0 records in
> > 10+0 records out
> > 10485760 bytes (10 MB) copied, 0.0329145 s, 319 MB/s
> > 10+0 records in
> > 10+0 records out
> > 10485760 bytes (10 MB) copied, 0.0403988 s, 260 MB/s
> >
> > The network between all 5 VMs is OK; they are all running on the same
> > physical host.
> >
> > I can't understand what I'm doing wrong. :-(
> >
> > Here's the detailed info about the volumes:
> > Volume Name: storage1
> > Type: Distributed-Replicate
> > Volume ID: a42e2554-99e5-4331-bcc4-0900d002ae32
> > Status: Started
> > Snapshot Count: 0
> > Number of Bricks: 2 x (2 + 1) = 6
> > Transport-type: tcp
> > Bricks:
> > Brick1: gluster1.k8s.maitre-d.tucha.ua:/mnt/storage1/brick1
> > Brick2: gluster2.k8s.maitre-d.tucha.ua:/mnt/storage1/brick2
> > Brick3: gluster3.k8s.maitre-d.tucha.ua:/mnt/storage1/brick_arbiter (arbiter)
> > Brick4: gluster3.k8s.maitre-d.tucha.ua:/mnt/storage1/brick3
> > Brick5: gluster4.k8s.maitre-d.tucha.ua:/mnt/storage1/brick4
> > Brick6: gluster1.k8s.maitre-d.tucha.ua:/mnt/storage1/brick_arbiter (arbiter)
> > Options Reconfigured:
> > transport.address-family: inet
> > nfs.disable: on
> > performance.client-io-threads: off
> >
> > Volume Name: storage2
> > Type: Distribute
> > Volume ID: df4d8096-ad03-493e-9e0e-586ce21fb067
> > Status: Started
> > Snapshot Count: 0
> > Number of Bricks: 4
> > Transport-type: tcp
> > Bricks:
> > Brick1: gluster1.k8s.maitre-d.tucha.ua:/mnt/storage2
> > Brick2: gluster2.k8s.maitre-d.tucha.ua:/mnt/storage2
> > Brick3: gluster3.k8s.maitre-d.tucha.ua:/mnt/storage2
> > Brick4: gluster4.k8s.maitre-d.tucha.ua:/mnt/storage2
> > Options Reconfigured:
> > transport.address-family: inet
> > nfs.disable: on
> >
> > The OS is CentOS Linux release 7.6.1810.
> > The packages I'm using are:
> > glusterfs-6.3-1.el7.x86_64
> > glusterfs-api-6.3-1.el7.x86_64
> > glusterfs-cli-6.3-1.el7.x86_64
> > glusterfs-client-xlators-6.3-1.el7.x86_64
> > glusterfs-fuse-6.3-1.el7.x86_64
> > glusterfs-libs-6.3-1.el7.x86_64
> > glusterfs-server-6.3-1.el7.x86_64
> > kernel-3.10.0-327.el7.x86_64
> > kernel-3.10.0-514.2.2.el7.x86_64
> > kernel-3.10.0-957.12.1.el7.x86_64
> > kernel-3.10.0-957.12.2.el7.x86_64
> > kernel-3.10.0-957.21.3.el7.x86_64
> > kernel-tools-3.10.0-957.21.3.el7.x86_64
> > kernel-tools-libs-3.10.0-957.21.3.el7.x86_64
> >
> > Please be so kind as to help me understand: did I do something wrong, or
> > is this quite normal performance for GlusterFS?
> >
> > Thanks in advance!
> > _______________________________________________
> > Gluster-users mailing list
> > Gluster-users@gluster.org
> > https://lists.gluster.org/mailman/listinfo/gluster-users

-- 
V.Melnik