<div dir="ltr"><div dir="ltr"><div>Here is what I have for small files. I don't think you really need much for Git.<br></div><div><br></div><div>Options Reconfigured:<br>performance.io-thread-count: 8<br>server.allow-insecure: on<br>cluster.shd-max-threads: 12<br>performance.rda-cache-limit: 128MB<br>cluster.readdir-optimize: on<br>cluster.read-hash-mode: 0<br>performance.strict-o-direct: on<br>cluster.lookup-unhashed: auto<br>performance.nl-cache: on<br>performance.nl-cache-timeout: 600<br>cluster.lookup-optimize: on<br>client.event-threads: 4<br>performance.client-io-threads: on<br>performance.md-cache-timeout: 600<br>server.event-threads: 4<br>features.cache-invalidation: on<br>features.cache-invalidation-timeout: 600<br>performance.stat-prefetch: on<br>performance.cache-invalidation: on<br>network.inode-lru-limit: 90000<br>performance.cache-refresh-timeout: 10<br>performance.enable-least-priority: off<br>performance.cache-size: 2GB<br>cluster.nufa: on<br>cluster.choose-local: on<br><br></div></div></div><div class="gmail_extra"><br><div class="gmail_quote">On Tue, Sep 18, 2018 at 6:48 AM, Nicolas <span dir="ltr"><<a href="mailto:nicolas@furyweb.fr" target="_blank">nicolas@furyweb.fr</a>></span> wrote:<br><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex"><div><div style="font-family:verdana,helvetica,sans-serif;font-size:10pt;color:#000000"><div>Hello,<br></div><div><br></div><div>I get very poor performance with GlusterFS 3.12.14 on small files, especially when working with Git repositories.<br></div><div><br></div><div>Here is my configuration:<br></div><div>3 Gluster nodes (VMware guests, hardware v13, on vSphere 6.5, hosted on Gen8 blades attached to 3PAR SSD RAID5 LUNs), volume type replica 3 with arbiter, SSL enabled, NFS disabled, heartbeat IP between the two main nodes.<br></div><div>Trusted storage pool on Debian 9 x64<br></div><div>Client on Debian 8 x64 with the native Gluster client<br></div><div>Network 
bandwidth verified with iperf between client and each storage node (~900Mb/s)<br></div><div>Disk bandwidth verified with dd on each storage node (~90MB/s)<br></div><div> <div>______________________________<wbr>______________________________<wbr>_</div>Volume Name: perftest<br>Type: Replicate<br>Volume ID: c60b3744-7955-4058-b276-<wbr>69d7b97de8aa<br>Status: Started<br>Snapshot Count: 0<br>Number of Bricks: 1 x (2 + 1) = 3<br>Transport-type: tcp<br>Bricks:<br>Brick1: glusterVM1:/bricks/perftest/<wbr>brick1/data<br>Brick2: glusterVM2:/bricks/perftest/<wbr>brick1/data<br>Brick3: glusterVM3:/bricks/perftest/<wbr>brick1/data (arbiter)<br>Options Reconfigured:<br>cluster.data-self-heal-<wbr>algorithm: full<br>features.trash: off<br>diagnostics.client-log-level: ERROR<br>ssl.cipher-list: HIGH:!SSLv2<br>server.ssl: on<br>client.ssl: on<br>transport.address-family: inet<br>nfs.disable: on<br>______________________________<wbr>______________________________<wbr>_<br></div><div><br></div><div>I made a test script that tries several parameters, but every test gives similar results (except for performance.write-behind): ~30s on average for a git clone that takes only 3s on a NAS volume.<br></div><div> <div>______________________________<wbr>______________________________<wbr>_</div> #!/bin/bash<br><br>trap "[ -d /mnt/project ] && rm -rf /mnt/project; grep -q /mnt /proc/mounts && umount /mnt; exit" 2<br><br>LOG=$(mktemp)<br>for params in \</div><div> "server.event-threads 5" \</div><div>"client.event-threads 5" \<div>"cluster.lookup-optimize on" \</div><div>"cluster.readdir-optimize on" \</div><div>"features.cache-invalidation on" \</div><div>"features.cache-invalidation-<wbr>timeout 5" \</div><div>"performance.cache-<wbr>invalidation on" \</div><div>"performance.cache-refresh-<wbr>timeout 5" \</div><div>"performance.client-io-threads on" \</div><div>"performance.flush-behind on" \</div><div>"performance.io-thread-count 6" \</div><div>"performance.quick-read on" 
\</div><div>"performance.read-ahead enable" \</div><div>"performance.readdir-ahead enable" \</div><div>"performance.stat-prefetch on" \</div><div>"performance.write-behind on" \</div><div>"performance.write-behind-<wbr>window-size 2MB"; do<br></div> set $params<br> echo -n "gluster volume set perftest $1 $2 -> "<br> ssh -n glusterVM3 "gluster volume set perftest $1 $2"<br>done<br>echo "NAS Reference"<br>sh -c "time -o $LOG -f '%E %P' git clone git@gitlab.local:grp/project.<wbr>git /share/nas >/dev/null 2>&1"<br>cat $LOG<br>rm -rf /share/nas/project<br><br>for params in \<br> "server.event-threads 5 6 7" \<br> "client.event-threads 5 6 7" \<br> "cluster.lookup-optimize on off on" \<br> "cluster.readdir-optimize on off on" \<br> "features.cache-invalidation on off on" \<br> "features.cache-invalidation-<wbr>timeout 5 10 15 20 30 45 60 90 120" \<br> "performance.cache-<wbr>invalidation on off on" \<br> "performance.cache-refresh-<wbr>timeout 1 5 10 15 20 30 45 60" \<br> "performance.client-io-threads on off on" \<br> "performance.flush-behind on off on" \<br> "performance.io-thread-count 6 7 8 9 10" \<br> "performance.quick-read on off on" \<br> "performance.read-ahead enable disable enable" \<br> "performance.readdir-ahead enable disable enable" \<br> "performance.stat-prefetch on off on" \<br> "performance.write-behind on off on" \<br> "performance.write-behind-<wbr>window-size 2MB 4MB 8MB 16MB"; do<br> set $params<br> param=$1<br> shift<br> for value in $*; do<br> echo -en "\nTesting $param=$value -> "<br> #ssh -n glusterVM3 "yes | gluster volume stop perftest force; gluster volume set perftest $param $value; gluster volume start perftest"<br> ssh -n glusterVM3 "gluster volume set perftest $param $value"<br> if mount -t glusterfs -o defaults,direct-io-mode=enable glusterVMa:perftest /mnt; then<br> for i in $(seq 1 5); do<br> sh -c "time -o $LOG -f '%E %P' git clone git@gitlab.local:grp/project.<wbr>git /mnt/bench >/dev/null 2>&1"<br> cat $LOG<br> rm -rf 
/mnt/bench<br> done<br> umount /mnt<br> else<br> echo "*** FAIL"<br> exit<br> fi<br> done<br>done<br><br>rm $LOG<br></div><div> <div>______________________________<wbr>______________________________<wbr>_</div><div><br></div> </div><div>Output produced by the script <br></div><div> <div>______________________________<wbr>______________________________<wbr>_</div><div>gluster volume set perftest server.event-threads 5 -> volume set: success<br>gluster volume set perftest client.event-threads 5 -> volume set: success<br>gluster volume set perftest cluster.lookup-optimize on -> volume set: success<br>gluster volume set perftest cluster.readdir-optimize on -> volume set: success<br>gluster volume set perftest features.cache-invalidation on -> volume set: success<br>gluster volume set perftest features.cache-invalidation-<wbr>timeout 5 -> volume set: success<br>gluster volume set perftest performance.cache-invalidation on -> volume set: success<br>gluster volume set perftest performance.cache-refresh-<wbr>timeout 5 -> volume set: success<br>gluster volume set perftest performance.client-io-threads on -> volume set: success<br>gluster volume set perftest performance.flush-behind on -> volume set: success<br>gluster volume set perftest performance.io-thread-count 6 -> volume set: success<br>gluster volume set perftest performance.quick-read on -> volume set: success<br>gluster volume set perftest performance.read-ahead enable -> volume set: success<br>gluster volume set perftest performance.readdir-ahead enable -> volume set: success<br>gluster volume set perftest performance.stat-prefetch on -> volume set: success<br>gluster volume set perftest performance.write-behind on -> volume set: success<br>gluster volume set perftest performance.write-behind-<wbr>window-size 2MB -> volume set: success<br>NAS Reference<br>0:03.59 23%<br><br>Testing server.event-threads=5 -> volume set: success<br>0:29.45 2%<br>0:27.07 2%<br>0:24.89 2%<br>0:24.93 2%<br>0:24.64 3%<br><br>Testing 
server.event-threads=6 -> volume set: success<br>0:24.14 3%<br>0:24.69 2%<br>0:26.81 2%<br>0:27.38 2%<br>0:25.59 2%<br><br>Testing server.event-threads=7 -> volume set: success<br>0:25.34 2%<br>0:24.14 2%<br>0:25.92 2%<br>0:23.62 2%<br>0:24.76 2%<br><br>Testing client.event-threads=5 -> volume set: success<br>0:24.60 3%<br>0:29.40 2%<br>0:34.78 2%<br>0:33.99 2%<br>0:33.54 2%<br><br>Testing client.event-threads=6 -> volume set: success<br>0:23.82 3%<br>0:24.64 2%<br>0:26.10 3%<br>0:24.56 2%<br>0:28.21 2%<br><br>Testing client.event-threads=7 -> volume set: success<br>0:28.15 2%<br>0:35.19 2%<br>0:24.03 2%<br>0:24.79 2%<br>0:26.55 2%<br><br>Testing cluster.lookup-optimize=on -> volume set: success<br>0:30.67 2%<br>0:30.49 2%<br>0:31.52 2%<br>0:33.13 2%<br>0:32.41 2%<br><br>Testing cluster.lookup-optimize=off -> volume set: success<br>0:25.82 2%<br>0:25.59 2%<br>0:28.24 2%<br>0:31.90 2%<br>0:33.52 2%<br><br>Testing cluster.lookup-optimize=on -> volume set: success<br>0:29.33 2%<br>0:24.82 2%<br>0:25.93 2%<br>0:25.36 2%<br>0:24.89 2%<br><br>Testing cluster.readdir-optimize=on -> volume set: success<br>0:24.98 2%<br>0:25.03 2%<br>0:27.47 2%<br>0:28.13 2%<br>0:27.41 2%<br><br>Testing cluster.readdir-optimize=off -> volume set: success<br>0:32.54 2%<br>0:32.50 2%<br>0:25.56 2%<br>0:25.21 2%<br>0:27.39 2%<br><br>Testing cluster.readdir-optimize=on -> volume set: success<br>0:27.68 2%<br>0:29.33 2%<br>0:25.50 2%<br>0:25.17 2%<br>0:26.00 2%<br><br>Testing features.cache-invalidation=on -> volume set: success<br>0:25.63 2%<br>0:25.46 3%<br>0:25.55 3%<br>0:26.13 2%<br>0:25.13 2%<br><br>Testing features.cache-invalidation=<wbr>off -> volume set: success<br>0:27.79 2%<br>0:25.31 2%<br>0:24.75 2%<br>0:27.75 2%<br>0:32.67 2%<br><br>Testing features.cache-invalidation=on -> volume set: success<br>0:26.34 2%<br>0:26.60 2%<br>0:26.32 2%<br>0:31.05 3%<br>0:33.58 2%<br><br>Testing features.cache-invalidation-<wbr>timeout=5 -> volume set: success<br>0:25.89 3%<br>0:25.07 3%<br>0:25.49 
2%<br>0:25.44 3%<br>0:25.47 2%<br><br>Testing features.cache-invalidation-<wbr>timeout=10 -> volume set: success<br>0:32.34 2%<br>0:28.27 3%<br>0:27.41 2%<br>0:25.17 2%<br>0:25.56 2%<br><br>Testing features.cache-invalidation-<wbr>timeout=15 -> volume set: success<br>0:27.79 2%<br>0:30.58 2%<br>0:31.63 2%<br>0:26.71 2%<br>0:29.69 2%<br><br>Testing features.cache-invalidation-<wbr>timeout=20 -> volume set: success<br>0:26.62 2%<br>0:23.76 3%<br>0:24.17 3%<br>0:24.99 2%<br>0:25.31 2%<br><br>Testing features.cache-invalidation-<wbr>timeout=30 -> volume set: success<br>0:25.75 3%<br>0:27.34 2%<br>0:28.38 2%<br>0:27.15 2%<br>0:30.91 2%<br><br>Testing features.cache-invalidation-<wbr>timeout=45 -> volume set: success<br>0:24.77 2%<br>0:24.81 2%<br>0:28.22 2%<br>0:32.56 2%<br>0:40.81 1%<br><br>Testing features.cache-invalidation-<wbr>timeout=60 -> volume set: success<br>0:31.97 2%<br>0:27.14 2%<br>0:24.53 3%<br>0:25.48 3%<br>0:25.27 3%<br><br>Testing features.cache-invalidation-<wbr>timeout=90 -> volume set: success<br>0:25.24 3%<br>0:26.83 3%<br>0:32.74 2%<br>0:26.82 3%<br>0:27.69 2%<br><br>Testing features.cache-invalidation-<wbr>timeout=120 -> volume set: success<br>0:24.50 3%<br>0:25.43 3%<br>0:26.21 3%<br>0:30.09 2%<br>0:32.24 2%<br><br>Testing performance.cache-<wbr>invalidation=on -> volume set: success<br>0:28.77 3%<br>0:37.16 2%<br>0:42.56 1%<br>0:26.21 2%<br>0:27.91 3%<br><br>Testing performance.cache-<wbr>invalidation=off -> volume set: success<br>0:31.05 2%<br>0:34.40 2%<br>0:33.90 2%<br>0:33.12 2%<br>0:27.84 3%<br><br>Testing performance.cache-<wbr>invalidation=on -> volume set: success<br>0:27.17 3%<br>0:26.73 3%<br>0:24.61 3%<br>0:26.36 3%<br>0:39.90 2%<br><br>Testing performance.cache-refresh-<wbr>timeout=1 -> volume set: success<br>0:26.83 3%<br>0:36.17 2%<br>0:31.37 2%<br>0:26.12 3%<br>0:26.46 2%<br><br>Testing performance.cache-refresh-<wbr>timeout=5 -> volume set: success<br>0:24.95 3%<br>0:27.33 3%<br>0:30.77 2%<br>0:26.77 3%<br>0:34.62 
2%<br><br>Testing performance.cache-refresh-<wbr>timeout=10 -> volume set: success<br>0:29.36 2%<br>0:26.04 3%<br>0:26.21 3%<br>0:29.47 3%<br>0:28.67 3%<br><br>Testing performance.cache-refresh-<wbr>timeout=15 -> volume set: success<br>0:29.26 3%<br>0:27.31 3%<br>0:27.15 3%<br>0:29.74 3%<br>0:32.70 2%<br><br>Testing performance.cache-refresh-<wbr>timeout=20 -> volume set: success<br>0:27.99 3%<br>0:30.13 2%<br>0:29.39 3%<br>0:28.59 3%<br>0:31.30 3%<br><br>Testing performance.cache-refresh-<wbr>timeout=30 -> volume set: success<br>0:27.47 3%<br>0:26.68 3%<br>0:27.09 3%<br>0:27.08 3%<br>0:31.72 3%<br><br>Testing performance.cache-refresh-<wbr>timeout=45 -> volume set: success<br>0:28.83 3%<br>0:29.21 3%<br>0:38.75 2%<br>0:26.15 3%<br>0:26.76 3%<br><br>Testing performance.cache-refresh-<wbr>timeout=60 -> volume set: success<br>0:29.64 2%<br>0:29.71 2%<br>0:31.41 2%<br>0:28.35 3%<br>0:26.26 3%<br><br>Testing performance.client-io-threads=<wbr>on -> volume set: success<br>0:25.14 3%<br>0:26.64 3%<br>0:26.43 3%<br>0:25.63 3%<br>0:27.89 3%<br><br>Testing performance.client-io-threads=<wbr>off -> volume set: success<br>0:31.37 2%<br>0:33.65 2%<br>0:28.85 3%<br>0:28.27 3%<br>0:26.90 3%<br><br>Testing performance.client-io-threads=<wbr>on -> volume set: success<br>0:26.12 3%<br>0:25.92 3%<br>0:28.30 3%<br>0:39.20 2%<br>0:28.45 3%<br><br>Testing performance.flush-behind=on -> volume set: success<br>0:34.83 2%<br>0:27.33 3%<br>0:31.30 2%<br>0:26.40 3%<br>0:27.49 2%<br><br>Testing performance.flush-behind=off -> volume set: success<br>0:30.64 2%<br>0:31.60 2%<br>0:33.22 2%<br>0:25.67 2%<br>0:26.85 3%<br><br>Testing performance.flush-behind=on -> volume set: success<br>0:26.75 3%<br>0:26.67 3%<br>0:30.52 3%<br>0:38.60 2%<br>0:34.69 3%<br><br>Testing performance.io-thread-count=6 -> volume set: success<br>0:30.87 2%<br>0:34.27 2%<br>0:34.08 2%<br>0:28.70 2%<br>0:32.83 2%<br><br>Testing performance.io-thread-count=7 -> volume set: success<br>0:32.14 2%<br>0:43.08 1%<br>0:31.79 
2%<br>0:25.93 3%<br>0:26.82 2%<br><br>Testing performance.io-thread-count=8 -> volume set: success<br>0:29.89 2%<br>0:28.69 2%<br>0:34.19 2%<br>0:40.00 1%<br>0:37.42 2%<br><br>Testing performance.io-thread-count=9 -> volume set: success<br>0:26.50 3%<br>0:26.99 2%<br>0:27.05 2%<br>0:32.22 2%<br>0:31.63 2%<br><br>Testing performance.io-thread-count=10 -> volume set: success<br>0:29.13 2%<br>0:30.60 2%<br>0:25.19 2%<br>0:24.28 3%<br>0:25.40 3%<br><br>Testing performance.quick-read=on -> volume set: success<br>0:26.40 3%<br>0:27.37 2%<br>0:28.03 2%<br>0:28.07 2%<br>0:33.47 2%<br><br>Testing performance.quick-read=off -> volume set: success<br>0:30.99 2%<br>0:27.16 2%<br>0:25.34 3%<br>0:27.58 3%<br>0:27.67 3%<br><br>Testing performance.quick-read=on -> volume set: success<br>0:27.37 2%<br>0:26.99 3%<br>0:29.78 2%<br>0:26.06 2%<br>0:25.67 2%<br><br>Testing performance.read-ahead=enable -> volume set: success<br>0:24.52 3%<br>0:26.05 2%<br>0:32.37 2%<br>0:30.27 2%<br>0:25.70 3%<br><br>Testing performance.read-ahead=disable -> volume set: success<br>0:26.98 3%<br>0:25.54 3%<br>0:25.55 3%<br>0:30.78 2%<br>0:28.07 2%<br><br>Testing performance.read-ahead=enable -> volume set: success<br>0:30.34 2%<br>0:33.93 2%<br>0:30.26 2%<br>0:28.18 2%<br>0:27.06 3%<br><br>Testing performance.readdir-ahead=<wbr>enable -> volume set: success<br>0:26.31 3%<br>0:25.64 3%<br>0:31.97 2%<br>0:30.75 2%<br>0:26.10 3%<br><br>Testing performance.readdir-ahead=<wbr>disable -> volume set: success<br>0:27.50 3%<br>0:27.19 3%<br>0:27.67 3%<br>0:26.99 3%<br>0:28.25 3%<br><br>Testing performance.readdir-ahead=<wbr>enable -> volume set: success<br>0:34.94 2%<br>0:30.43 2%<br>0:27.14 3%<br>0:27.81 2%<br>0:26.36 3%<br><br>Testing performance.stat-prefetch=on -> volume set: success<br>0:28.55 3%<br>0:27.10 2%<br>0:26.64 3%<br>0:30.84 3%<br>0:35.45 2%<br><br>Testing performance.stat-prefetch=off -> volume set: success<br>0:29.12 3%<br>0:36.54 2%<br>0:26.32 3%<br>0:29.02 3%<br>0:27.16 3%<br><br>Testing 
performance.stat-prefetch=on -> volume set: success<br>0:31.17 2%<br>0:34.64 2%<br>0:26.50 3%<br>0:30.39 2%<br>0:27.12 3%<br><br>Testing performance.write-behind=on -> volume set: success<br>0:29.77 2%<br>0:28.00 2%<br>0:28.98 3%<br>0:29.83 3%<br>0:28.87 3%<br><br>Testing performance.write-behind=off -> volume set: success<br>1:11.95 1%<br>1:06.03 1%<br>1:07.70 1%<br>1:30.21 1%<br>1:08.47 1%<br><br>Testing performance.write-behind=on -> volume set: success<br>0:30.14 2%<br>0:28.99 2%<br>0:34.51 2%<br>0:32.60 2%<br>0:30.54 2%<br><br>Testing performance.write-behind-<wbr>window-size=2MB -> volume set: success<br>0:24.74 3%<br>0:25.71 2%<br>0:27.49 2%<br>0:25.78 3%<br>0:26.35 3%<br><br>Testing performance.write-behind-<wbr>window-size=4MB -> volume set: success<br>0:34.21 2%<br>0:27.31 3%<br>0:28.83 2%<br>0:28.91 2%<br>0:25.73 3%<br><br>Testing performance.write-behind-<wbr>window-size=8MB -> volume set: success<br>0:24.41 3%<br>0:26.23 2%<br>0:25.20 3%<br>0:26.00 2%<br>0:27.04 2%<br><br>Testing performance.write-behind-<wbr>window-size=16MB -> volume set: success<br>0:27.92 2%<br>0:24.69 2%<br>0:24.67 2%<br>0:24.13 2%<br>0:23.55 3%<br> <div>______________________________<wbr>______________________________<wbr>_</div><div><br></div> If someone has an idea to significantly improve performance I'll be very interested.<br></div> </div></div></div><br>______________________________<wbr>_________________<br>
Gluster-users mailing list<br>
<a href="mailto:Gluster-users@gluster.org">Gluster-users@gluster.org</a><br>
<a href="https://lists.gluster.org/mailman/listinfo/gluster-users" rel="noreferrer" target="_blank">https://lists.gluster.org/<wbr>mailman/listinfo/gluster-users</a><br></blockquote></div><br></div>
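For anyone who wants to try the small-file option set from the reply at the top of this thread in one pass, here is a minimal sketch. The volume name perftest comes from the thread; everything else is an assumption for illustration: the GLUSTER dry-run hook, the function name, and the trimming of the option list down to the metadata/negative-lookup caching group most relevant to git workloads (the trimming is my choice, not from the thread).

```shell
#!/bin/sh
# Sketch: batch-apply a subset of the small-file tuning options quoted
# at the top of the thread. GLUSTER defaults to a dry run that only
# prints the commands; set GLUSTER=gluster to actually apply them.
GLUSTER="${GLUSTER:-echo gluster}"

apply_smallfile_opts() {
    vol="$1"
    for opt in \
        "performance.nl-cache on" \
        "performance.nl-cache-timeout 600" \
        "performance.md-cache-timeout 600" \
        "features.cache-invalidation on" \
        "features.cache-invalidation-timeout 600" \
        "performance.cache-invalidation on" \
        "performance.stat-prefetch on" \
        "network.inode-lru-limit 90000" \
        "cluster.lookup-optimize on"; do
        set -- $opt                      # split "option value" pair
        $GLUSTER volume set "$vol" "$1" "$2"
    done
}

apply_smallfile_opts "${1:-perftest}"
```

Left at its default, the script only echoes the gluster commands so they can be reviewed first; note that several of these options (the cache-invalidation group in particular) must be enabled together to have any effect.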