[Gluster-users] Poor performance compared to Netapp NAS with small files
Nicolas
nicolas at furyweb.fr
Mon Sep 24 06:05:00 UTC 2018
Many thanks, Vlad, but unfortunately these parameters did not improve performance in my case.
I reinstalled the 3 gluster server nodes with CentOS 7.5 and got a small improvement (13s instead of 20s), but it is still far from NFS performance (4s).
Here are some measurements under different conditions with a bigger git project (local disk, NAS, Gluster):
# grep gluster /proc/mounts
glusterVMa:perftest /mnt fuse.glusterfs rw,relatime,user_id=0,group_id=0,default_permissions,allow_other,max_read=131072 0 0
# /usr/bin/time -f '%E' git clone git@gitlab:grp/dev.git /tmp/glusterbench
Cloning into '/tmp/glusterbench'...
remote: Enumerating objects: 164200, done.
remote: Counting objects: 100% (164200/164200), done.
remote: Compressing objects: 100% (41320/41320), done.
remote: Total 164200 (delta 118469), reused 164177 (delta 118447)
Receiving objects: 100% (164200/164200), 134.12 MiB | 24.83 MiB/s, done.
Resolving deltas: 100% (118469/118469), done.
Checking connectivity... done.
0:22.92
# /usr/bin/time -f '%E' git clone git@gitlab:grp/dev.git /nas/glusterbench
Cloning into '/nas/glusterbench'...
remote: Enumerating objects: 164200, done.
remote: Counting objects: 100% (164200/164200), done.
remote: Compressing objects: 100% (41320/41320), done.
remote: Total 164200 (delta 118469), reused 164177 (delta 118447)
Receiving objects: 100% (164200/164200), 134.12 MiB | 12.52 MiB/s, done.
Resolving deltas: 100% (118469/118469), done.
Checking connectivity... done.
Checking out files: 100% (10669/10669), done.
2:01.50
# /usr/bin/time -f '%E' git clone git@gitlab:grp/dev.git /mnt/glusterbench
Cloning into '/mnt/glusterbench'...
remote: Enumerating objects: 164200, done.
remote: Counting objects: 100% (164200/164200), done.
remote: Compressing objects: 100% (41320/41320), done.
remote: Total 164200 (delta 118469), reused 164177 (delta 118447)
Receiving objects: 100% (164200/164200), 134.12 MiB | 1.86 MiB/s, done.
Resolving deltas: 100% (118469/118469), done.
Checking connectivity... done.
Checking out files: 100% (10669/10669), done.
8:33.67
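To summarize: 0:22.92 on local disk, 2:01.50 on the Netapp NAS, and 8:33.67 on the gluster mount. The same clone is roughly 4x slower on gluster than on NFS and more than 20x slower than on local disk.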
From the gluster servers' point of view, there is almost no system activity during cloning:
# dstat
You did not select any stats, using -cdngy by default.
----total-cpu-usage---- -dsk/total- -net/total- ---paging-- ---system--
usr sys idl wai hiq siq| read writ| recv send| in out | int csw
2 2 96 0 0 0|1876B 96k| 0 0 | 0 0 | 667 824
15 11 73 0 0 1| 0 0 | 863k 505k| 0 0 |4859 8034
13 9 77 0 0 1| 0 0 | 570k 352k| 0 0 |4398 7472
14 12 72 0 0 1| 0 0 | 547k 321k| 0 0 |4486 6945
10 12 76 0 0 1| 0 0 | 591k 333k| 0 0 |4672 7001
14 11 72 0 0 3| 0 0 |2584k 1159k| 0 0 |6886 8271
12 13 72 0 0 2| 16k 0 |2233k 911k| 0 0 |5745 8677
12 11 75 0 0 2| 0 0 |3527k 1172k| 0 0 |6092 8932
13 11 73 0 0 3| 0 1280k|4886k 1403k| 0 0 |6454 8602
11 9 78 0 0 1| 0 0 | 965k 576k| 0 0 |4244 6836
12 10 74 0 0 3| 0 0 |1729k 727k| 0 0 |5749 7347
15 11 73 0 0 2| 0 0 |3394k 798k| 0 0 |6456 9454
14 10 75 0 0 1| 0 0 |1892k 1132k| 0 0 |5092 7975
11 10 76 0 0 2| 0 0 |3136k 1242k| 0 0 |5452 7684
13 11 75 0 0 2| 0 0 |1622k 608k| 0 0 |4862 7634
12 13 74 0 0 1| 0 0 |2264k 900k| 0 0 |5521 8650
13 14 71 0 0 3| 0 0 |3523k 1957k| 0 0 |6603 9799
11 19 67 0 0 3| 0 32M|2044k 722k| 0 0 |5855 7329
11 11 77 0 0 2| 0 3175k|2167k 987k| 0 0 |5210 7902
13 13 72 0 0 2|2688k 19M|3309k 1169k| 0 0 |6780 9426
12 10 76 0 0 1| 0 2712k|1403k 601k| 0 0 |4663 7417
9 8 82 0 0 1| 0 0 | 990k 401k| 0 0 |3730 5845
13 12 74 0 0 2| 0 0 |1913k 643k| 0 0 |6178 8545
13 11 74 0 0 1| 0 0 |1866k 902k| 0 0 |5557 8511
11 11 77 0 0 1| 0 0 |1577k 636k| 0 0 |5095 8359
12 13 70 1 0 3|3192k 0 |5399k 2330k| 0 0 |7277 9736
13 12 72 0 0 2| 0 0 |2396k 790k| 0 0 |5475 9279
12 11 78 0 0 0| 0 0 | 756k 605k| 0 0 |4530 7486
14 11 72 0 0 2| 0 0 |1759k 915k| 0 0 |5698 9141
13 12 72 0 0 2| 0 0 |1136k 531k| 0 0 |5371 8993
14 11 73 0 0 1| 0 0 |1241k 512k| 0 0 |5130 8442
13 11 76 0 0 0| 0 0 | 552k 486k| 0 0 |4461 7269
13 9 76 0 0 1| 0 0 | 518k 345k| 0 0 |4590 6984
11 14 74 0 0 1| 0 0 | 446k 298k| 0 0 |4091 6109 ^C
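With the servers this idle, the clone looks bound by per-file network round trips rather than by disk or CPU. Gluster's built-in profiler can confirm this by reporting per-fop call counts and latencies; a quick sketch, using the volume from this thread:
# gluster volume profile perftest start
(run the git clone from the client)
# gluster volume profile perftest info
# gluster volume profile perftest stop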
# gluster volume info perftest
Volume Name: perftest
Type: Replicate
Volume ID: 8cc37903-50e2-4a5a-8306-16d58860b8d3
Status: Started
Snapshot Count: 0
Number of Bricks: 1 x (2 + 1) = 3
Transport-type: tcp
Bricks:
Brick1: glusterVM1:/bricks/perftest/brick1/data
Brick2: glusterVM2:/bricks/perftest/brick1/data
Brick3: glusterVM3:/bricks/perftest/brick1/data (arbiter)
Options Reconfigured:
cluster.choose-local: on
cluster.nufa: on
performance.cache-size: 2GB
performance.enable-least-priority: off
performance.cache-refresh-timeout: 10
network.inode-lru-limit: 90000
performance.cache-invalidation: on
performance.stat-prefetch: on
features.cache-invalidation-timeout: 600
features.cache-invalidation: on
server.event-threads: 4
performance.md-cache-timeout: 600
performance.client-io-threads: on
client.event-threads: 4
cluster.lookup-optimize: on
performance.nl-cache-timeout: 600
performance.nl-cache: on
cluster.lookup-unhashed: auto
performance.strict-o-direct: on
cluster.read-hash-mode: 0
cluster.readdir-optimize: on
performance.rda-cache-limit: 128MB
cluster.shd-max-threads: 12
server.allow-insecure: on
performance.io-thread-count: 8
nfs.disable: on
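As a sanity check, the values actually in effect (including defaults that do not appear in the reconfigured list above) can be listed with:
# gluster volume get perftest all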
----- Original Message -----
From: "Vlad Kopylov" <vladkopy at gmail.com>
To: "Nicolas" <nicolas at furyweb.fr>
Cc: "gluster-users" <gluster-users at gluster.org>
Sent: Sunday, September 23, 2018 17:16:35
Subject: Re: [Gluster-users] Poor performance compared to Netapp NAS with small files
Forgot the mount options for small files:
defaults,_netdev,negative-timeout=10,attribute-timeout=30,fopen-keep-cache,direct-io-mode=enable,fetch-attempts=5
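As an /etc/fstab entry this would look something like the following (server and volume names borrowed from the volume info earlier in the thread; adjust paths to your setup):
glusterVMa:/perftest  /mnt  glusterfs  defaults,_netdev,negative-timeout=10,attribute-timeout=30,fopen-keep-cache,direct-io-mode=enable,fetch-attempts=5  0 0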
On Sat, Sep 22, 2018 at 10:14 PM, Vlad Kopylov <vladkopy at gmail.com> wrote:
Here is what I have for small files. I don't think you really need much for git:
Options Reconfigured:
performance.io-thread-count: 8
server.allow-insecure: on
cluster.shd-max-threads: 12
performance.rda-cache-limit: 128MB
cluster.readdir-optimize: on
cluster.read-hash-mode: 0
performance.strict-o-direct: on
cluster.lookup-unhashed: auto
performance.nl-cache: on
performance.nl-cache-timeout: 600
cluster.lookup-optimize: on
client.event-threads: 4
performance.client-io-threads: on
performance.md-cache-timeout: 600
server.event-threads: 4
features.cache-invalidation: on
features.cache-invalidation-timeout: 600
performance.stat-prefetch: on
performance.cache-invalidation: on
network.inode-lru-limit: 90000
performance.cache-refresh-timeout: 10
performance.enable-least-priority: off
performance.cache-size: 2GB
cluster.nufa: on
cluster.choose-local: on
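A minimal sketch of applying a set like this in one pass (volume name perftest assumed, matching the rest of the thread; only a few of the pairs above are repeated here, the rest follow the same pattern):
#!/bin/bash
VOL=perftest
# read "option value" pairs, one per line, and apply each with gluster volume set
while read -r opt val; do
    gluster volume set "$VOL" "$opt" "$val"
done <<'EOF'
performance.nl-cache on
performance.nl-cache-timeout 600
performance.md-cache-timeout 600
features.cache-invalidation on
features.cache-invalidation-timeout 600
performance.cache-size 2GB
EOF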
On Tue, Sep 18, 2018 at 6:48 AM, Nicolas <nicolas at furyweb.fr> wrote:
Hello,
I am seeing very poor performance with GlusterFS 3.12.14 on small files, especially when working with git repositories.
Here is my configuration:
3 gluster nodes (VMware guests, virtual hardware v13, on vSphere 6.5 hosted by Gen8 blades attached to 3PAR SSD RAID5 LUNs), gluster volume type replica 3 with arbiter, SSL enabled, NFS disabled, heartbeat IP between the two main nodes.
Trusted storage pool on Debian 9 x64
Client on Debian 8 x64 with the native gluster client
Network bandwidth verified with iperf between the client and each storage node (~900 Mb/s)
Disk bandwidth verified with dd on each storage node (~90 MB/s); example commands below
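For reference, checks along these lines, with the iperf server side started on each storage node beforehand (iperf -s) and the brick path taken from the volume info below:
# iperf -c glusterVM1 -t 10
# dd if=/dev/zero of=/bricks/perftest/brick1/ddtest bs=1M count=1024 oflag=direct
# rm /bricks/perftest/brick1/ddtest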
_____________________________________________________________
Volume Name: perftest
Type: Replicate
Volume ID: c60b3744-7955-4058-b276-69d7b97de8aa
Status: Started
Snapshot Count: 0
Number of Bricks: 1 x (2 + 1) = 3
Transport-type: tcp
Bricks:
Brick1: glusterVM1:/bricks/perftest/brick1/data
Brick2: glusterVM2:/bricks/perftest/brick1/data
Brick3: glusterVM3:/bricks/perftest/brick1/data (arbiter)
Options Reconfigured:
cluster.data-self-heal-algorithm: full
features.trash: off
diagnostics.client-log-level: ERROR
ssl.cipher-list: HIGH:!SSLv2
server.ssl: on
client.ssl: on
transport.address-family: inet
nfs.disable: on
_____________________________________________________________
I wrote a test script that tries several parameters, but every test gives similar measurements (except for performance.write-behind): ~30s on average for a git clone that takes only 3s on the NAS volume.
_____________________________________________________________
#!/bin/bash
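# on Ctrl-C (SIGINT): remove any half-finished clone, unmount the test mount and exit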
trap "[ -d /mnt/bench ] && rm -rf /mnt/bench; grep -q /mnt /proc/mounts && umount /mnt; exit" 2
LOG=$(mktemp)
for params in \
"server.event-threads 5" \
"client.event-threads 5" \
"cluster.lookup-optimize on" \
"cluster.readdir-optimize on" \
"features.cache-invalidation on" \
"features.cache-invalidation-timeout 5" \
"performance.cache-invalidation on" \
"performance.cache-refresh-timeout 5" \
"performance.client-io-threads on" \
"performance.flush-behind on" \
"performance.io-thread-count 6" \
"performance.quick-read on" \
"performance.read-ahead enable" \
"performance.readdir-ahead enable" \
"performance.stat-prefetch on" \
"performance.write-behind on" \
"performance.write-behind-window-size 2MB"; do
set $params
echo -n "gluster volume set perftest $1 $2 -> "
ssh -n glusterVM3 "gluster volume set perftest $1 $2"
done
echo "NAS Reference"
sh -c "time -o $LOG -f '%E %P' git clone git@gitlab.local:grp/project.git /share/nas/project >/dev/null 2>&1"
cat $LOG
rm -rf /share/nas/project
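# second pass: for each option, try every listed value and time 5 clones per value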
for params in \
"server.event-threads 5 6 7" \
"client.event-threads 5 6 7" \
"cluster.lookup-optimize on off on" \
"cluster.readdir-optimize on off on" \
"features.cache-invalidation on off on" \
"features.cache-invalidation-timeout 5 10 15 20 30 45 60 90 120" \
"performance.cache-invalidation on off on" \
"performance.cache-refresh-timeout 1 5 10 15 20 30 45 60" \
"performance.client-io-threads on off on" \
"performance.flush-behind on off on" \
"performance.io-thread-count 6 7 8 9 10" \
"performance.quick-read on off on" \
"performance.read-ahead enable disable enable" \
"performance.readdir-ahead enable disable enable" \
"performance.stat-prefetch on off on" \
"performance.write-behind on off on" \
"performance.write-behind-window-size 2MB 4MB 8MB 16MB"; do
set $params
param=$1
shift
for value in $*; do
echo -en "\nTesting $param=$value -> "
#ssh -n glusterVM3 "yes | gluster volume stop perftest force; gluster volume set perftest $param $value; gluster volume start perftest"
ssh -n glusterVM3 "gluster volume set perftest $param $value"
if mount -t glusterfs -o defaults,direct-io-mode=enable glusterVMa:perftest /mnt; then
for i in $(seq 1 5); do
sh -c "time -o $LOG -f '%E %P' git clone git@gitlab.local:grp/project.git /mnt/bench >/dev/null 2>&1"
cat $LOG
rm -rf /mnt/bench
done
umount /mnt
else
echo "*** FAIL"
exit
fi
done
done
rm $LOG
_____________________________________________________________
Output produced by the script
_____________________________________________________________
gluster volume set perftest server.event-threads 5 -> volume set: success
gluster volume set perftest client.event-threads 5 -> volume set: success
gluster volume set perftest cluster.lookup-optimize on -> volume set: success
gluster volume set perftest cluster.readdir-optimize on -> volume set: success
gluster volume set perftest features.cache-invalidation on -> volume set: success
gluster volume set perftest features.cache-invalidation-timeout 5 -> volume set: success
gluster volume set perftest performance.cache-invalidation on -> volume set: success
gluster volume set perftest performance.cache-refresh-timeout 5 -> volume set: success
gluster volume set perftest performance.client-io-threads on -> volume set: success
gluster volume set perftest performance.flush-behind on -> volume set: success
gluster volume set perftest performance.io-thread-count 6 -> volume set: success
gluster volume set perftest performance.quick-read on -> volume set: success
gluster volume set perftest performance.read-ahead enable -> volume set: success
gluster volume set perftest performance.readdir-ahead enable -> volume set: success
gluster volume set perftest performance.stat-prefetch on -> volume set: success
gluster volume set perftest performance.write-behind on -> volume set: success
gluster volume set perftest performance.write-behind-window-size 2MB -> volume set: success
NAS Reference
0:03.59 23%
Testing server.event-threads=5 -> volume set: success
0:29.45 2%
0:27.07 2%
0:24.89 2%
0:24.93 2%
0:24.64 3%
Testing server.event-threads=6 -> volume set: success
0:24.14 3%
0:24.69 2%
0:26.81 2%
0:27.38 2%
0:25.59 2%
Testing server.event-threads=7 -> volume set: success
0:25.34 2%
0:24.14 2%
0:25.92 2%
0:23.62 2%
0:24.76 2%
Testing client.event-threads=5 -> volume set: success
0:24.60 3%
0:29.40 2%
0:34.78 2%
0:33.99 2%
0:33.54 2%
Testing client.event-threads=6 -> volume set: success
0:23.82 3%
0:24.64 2%
0:26.10 3%
0:24.56 2%
0:28.21 2%
Testing client.event-threads=7 -> volume set: success
0:28.15 2%
0:35.19 2%
0:24.03 2%
0:24.79 2%
0:26.55 2%
Testing cluster.lookup-optimize=on -> volume set: success
0:30.67 2%
0:30.49 2%
0:31.52 2%
0:33.13 2%
0:32.41 2%
Testing cluster.lookup-optimize=off -> volume set: success
0:25.82 2%
0:25.59 2%
0:28.24 2%
0:31.90 2%
0:33.52 2%
Testing cluster.lookup-optimize=on -> volume set: success
0:29.33 2%
0:24.82 2%
0:25.93 2%
0:25.36 2%
0:24.89 2%
Testing cluster.readdir-optimize=on -> volume set: success
0:24.98 2%
0:25.03 2%
0:27.47 2%
0:28.13 2%
0:27.41 2%
Testing cluster.readdir-optimize=off -> volume set: success
0:32.54 2%
0:32.50 2%
0:25.56 2%
0:25.21 2%
0:27.39 2%
Testing cluster.readdir-optimize=on -> volume set: success
0:27.68 2%
0:29.33 2%
0:25.50 2%
0:25.17 2%
0:26.00 2%
Testing features.cache-invalidation=on -> volume set: success
0:25.63 2%
0:25.46 3%
0:25.55 3%
0:26.13 2%
0:25.13 2%
Testing features.cache-invalidation=off -> volume set: success
0:27.79 2%
0:25.31 2%
0:24.75 2%
0:27.75 2%
0:32.67 2%
Testing features.cache-invalidation=on -> volume set: success
0:26.34 2%
0:26.60 2%
0:26.32 2%
0:31.05 3%
0:33.58 2%
Testing features.cache-invalidation-timeout=5 -> volume set: success
0:25.89 3%
0:25.07 3%
0:25.49 2%
0:25.44 3%
0:25.47 2%
Testing features.cache-invalidation-timeout=10 -> volume set: success
0:32.34 2%
0:28.27 3%
0:27.41 2%
0:25.17 2%
0:25.56 2%
Testing features.cache-invalidation-timeout=15 -> volume set: success
0:27.79 2%
0:30.58 2%
0:31.63 2%
0:26.71 2%
0:29.69 2%
Testing features.cache-invalidation-timeout=20 -> volume set: success
0:26.62 2%
0:23.76 3%
0:24.17 3%
0:24.99 2%
0:25.31 2%
Testing features.cache-invalidation-timeout=30 -> volume set: success
0:25.75 3%
0:27.34 2%
0:28.38 2%
0:27.15 2%
0:30.91 2%
Testing features.cache-invalidation-timeout=45 -> volume set: success
0:24.77 2%
0:24.81 2%
0:28.22 2%
0:32.56 2%
0:40.81 1%
Testing features.cache-invalidation-timeout=60 -> volume set: success
0:31.97 2%
0:27.14 2%
0:24.53 3%
0:25.48 3%
0:25.27 3%
Testing features.cache-invalidation-timeout=90 -> volume set: success
0:25.24 3%
0:26.83 3%
0:32.74 2%
0:26.82 3%
0:27.69 2%
Testing features.cache-invalidation-timeout=120 -> volume set: success
0:24.50 3%
0:25.43 3%
0:26.21 3%
0:30.09 2%
0:32.24 2%
Testing performance.cache-invalidation=on -> volume set: success
0:28.77 3%
0:37.16 2%
0:42.56 1%
0:26.21 2%
0:27.91 3%
Testing performance.cache-invalidation=off -> volume set: success
0:31.05 2%
0:34.40 2%
0:33.90 2%
0:33.12 2%
0:27.84 3%
Testing performance.cache-invalidation=on -> volume set: success
0:27.17 3%
0:26.73 3%
0:24.61 3%
0:26.36 3%
0:39.90 2%
Testing performance.cache-refresh-timeout=1 -> volume set: success
0:26.83 3%
0:36.17 2%
0:31.37 2%
0:26.12 3%
0:26.46 2%
Testing performance.cache-refresh-timeout=5 -> volume set: success
0:24.95 3%
0:27.33 3%
0:30.77 2%
0:26.77 3%
0:34.62 2%
Testing performance.cache-refresh-timeout=10 -> volume set: success
0:29.36 2%
0:26.04 3%
0:26.21 3%
0:29.47 3%
0:28.67 3%
Testing performance.cache-refresh-timeout=15 -> volume set: success
0:29.26 3%
0:27.31 3%
0:27.15 3%
0:29.74 3%
0:32.70 2%
Testing performance.cache-refresh-timeout=20 -> volume set: success
0:27.99 3%
0:30.13 2%
0:29.39 3%
0:28.59 3%
0:31.30 3%
Testing performance.cache-refresh-timeout=30 -> volume set: success
0:27.47 3%
0:26.68 3%
0:27.09 3%
0:27.08 3%
0:31.72 3%
Testing performance.cache-refresh-timeout=45 -> volume set: success
0:28.83 3%
0:29.21 3%
0:38.75 2%
0:26.15 3%
0:26.76 3%
Testing performance.cache-refresh-timeout=60 -> volume set: success
0:29.64 2%
0:29.71 2%
0:31.41 2%
0:28.35 3%
0:26.26 3%
Testing performance.client-io-threads=on -> volume set: success
0:25.14 3%
0:26.64 3%
0:26.43 3%
0:25.63 3%
0:27.89 3%
Testing performance.client-io-threads=off -> volume set: success
0:31.37 2%
0:33.65 2%
0:28.85 3%
0:28.27 3%
0:26.90 3%
Testing performance.client-io-threads=on -> volume set: success
0:26.12 3%
0:25.92 3%
0:28.30 3%
0:39.20 2%
0:28.45 3%
Testing performance.flush-behind=on -> volume set: success
0:34.83 2%
0:27.33 3%
0:31.30 2%
0:26.40 3%
0:27.49 2%
Testing performance.flush-behind=off -> volume set: success
0:30.64 2%
0:31.60 2%
0:33.22 2%
0:25.67 2%
0:26.85 3%
Testing performance.flush-behind=on -> volume set: success
0:26.75 3%
0:26.67 3%
0:30.52 3%
0:38.60 2%
0:34.69 3%
Testing performance.io-thread-count=6 -> volume set: success
0:30.87 2%
0:34.27 2%
0:34.08 2%
0:28.70 2%
0:32.83 2%
Testing performance.io-thread-count=7 -> volume set: success
0:32.14 2%
0:43.08 1%
0:31.79 2%
0:25.93 3%
0:26.82 2%
Testing performance.io-thread-count=8 -> volume set: success
0:29.89 2%
0:28.69 2%
0:34.19 2%
0:40.00 1%
0:37.42 2%
Testing performance.io-thread-count=9 -> volume set: success
0:26.50 3%
0:26.99 2%
0:27.05 2%
0:32.22 2%
0:31.63 2%
Testing performance.io-thread-count=10 -> volume set: success
0:29.13 2%
0:30.60 2%
0:25.19 2%
0:24.28 3%
0:25.40 3%
Testing performance.quick-read=on -> volume set: success
0:26.40 3%
0:27.37 2%
0:28.03 2%
0:28.07 2%
0:33.47 2%
Testing performance.quick-read=off -> volume set: success
0:30.99 2%
0:27.16 2%
0:25.34 3%
0:27.58 3%
0:27.67 3%
Testing performance.quick-read=on -> volume set: success
0:27.37 2%
0:26.99 3%
0:29.78 2%
0:26.06 2%
0:25.67 2%
Testing performance.read-ahead=enable -> volume set: success
0:24.52 3%
0:26.05 2%
0:32.37 2%
0:30.27 2%
0:25.70 3%
Testing performance.read-ahead=disable -> volume set: success
0:26.98 3%
0:25.54 3%
0:25.55 3%
0:30.78 2%
0:28.07 2%
Testing performance.read-ahead=enable -> volume set: success
0:30.34 2%
0:33.93 2%
0:30.26 2%
0:28.18 2%
0:27.06 3%
Testing performance.readdir-ahead=enable -> volume set: success
0:26.31 3%
0:25.64 3%
0:31.97 2%
0:30.75 2%
0:26.10 3%
Testing performance.readdir-ahead=disable -> volume set: success
0:27.50 3%
0:27.19 3%
0:27.67 3%
0:26.99 3%
0:28.25 3%
Testing performance.readdir-ahead=enable -> volume set: success
0:34.94 2%
0:30.43 2%
0:27.14 3%
0:27.81 2%
0:26.36 3%
Testing performance.stat-prefetch=on -> volume set: success
0:28.55 3%
0:27.10 2%
0:26.64 3%
0:30.84 3%
0:35.45 2%
Testing performance.stat-prefetch=off -> volume set: success
0:29.12 3%
0:36.54 2%
0:26.32 3%
0:29.02 3%
0:27.16 3%
Testing performance.stat-prefetch=on -> volume set: success
0:31.17 2%
0:34.64 2%
0:26.50 3%
0:30.39 2%
0:27.12 3%
Testing performance.write-behind=on -> volume set: success
0:29.77 2%
0:28.00 2%
0:28.98 3%
0:29.83 3%
0:28.87 3%
Testing performance.write-behind=off -> volume set: success
1:11.95 1%
1:06.03 1%
1:07.70 1%
1:30.21 1%
1:08.47 1%
Testing performance.write-behind=on -> volume set: success
0:30.14 2%
0:28.99 2%
0:34.51 2%
0:32.60 2%
0:30.54 2%
Testing performance.write-behind-window-size=2MB -> volume set: success
0:24.74 3%
0:25.71 2%
0:27.49 2%
0:25.78 3%
0:26.35 3%
Testing performance.write-behind-window-size=4MB -> volume set: success
0:34.21 2%
0:27.31 3%
0:28.83 2%
0:28.91 2%
0:25.73 3%
Testing performance.write-behind-window-size=8MB -> volume set: success
0:24.41 3%
0:26.23 2%
0:25.20 3%
0:26.00 2%
0:27.04 2%
Testing performance.write-behind-window-size=16MB -> volume set: success
0:27.92 2%
0:24.69 2%
0:24.67 2%
0:24.13 2%
0:23.55 3%
_____________________________________________________________
If someone has an idea to significantly improve performance, I would be very interested.
_______________________________________________
Gluster-users mailing list
Gluster-users at gluster.org
https://lists.gluster.org/mailman/listinfo/gluster-users