Hello all,

We have installed a two-node GlusterFS setup on Debian 9.x (glusterfs packages 5.6-1) and tried to transfer files via FTP to and from the GlusterFS filesystem.

FTP download runs at around 7.5 MB/s, but after increasing the network latency to 10 ms (see the tc command below) the download rate drops sharply to roughly 1.3 MB/s.

# ping xx.xx.xx.xx
64 bytes from xx.xx.xx.xx: icmp_seq=1 ttl=64 time=0.426 ms
64 bytes from xx.xx.xx.xx: icmp_seq=2 ttl=64 time=0.443 ms
64 bytes from xx.xx.xx.xx: icmp_seq=3 ttl=64 time=0.312 ms
64 bytes from xx.xx.xx.xx: icmp_seq=4 ttl=64 time=0.373 ms
64 bytes from xx.xx.xx.xx: icmp_seq=5 ttl=64 time=0.415 ms
^C
--- xx.xx.xx.xx ping statistics ---
5 packets transmitted, 5 received, 0% packet loss, time 4100ms
rtt min/avg/max/mdev = 0.312/0.393/0.443/0.053 ms

# tc qdisc add dev eth0 root netem delay 10ms
# ping xx.xx.xx.xx
PING xx.xx.xx.xx (xx.xx.xx.xx) 56(84) bytes of data.
64 bytes from xx.xx.xx.xx: icmp_seq=1 ttl=64 time=10.3 ms
64 bytes from xx.xx.xx.xx: icmp_seq=2 ttl=64 time=10.3 ms
64 bytes from xx.xx.xx.xx: icmp_seq=3 ttl=64 time=10.3 ms
64 bytes from xx.xx.xx.xx: icmp_seq=4 ttl=64 time=10.3 ms
64 bytes from xx.xx.xx.xx: icmp_seq=5 ttl=64 time=10.4 ms
64 bytes from xx.xx.xx.xx: icmp_seq=6 ttl=64 time=10.4 ms
^C
--- xx.xx.xx.xx ping statistics ---
6 packets transmitted, 6 received, 0% packet loss, time 5007ms
rtt min/avg/max/mdev = 10.304/10.387/10.492/0.138 ms

root@server1:~# gluster vol list
GVOLUME
root@server1:~# gluster vol info

Volume Name: GVOLUME
Type: Replicate
Volume ID: xxx
Status: Started
Snapshot Count: 0
Number of Bricks: 1 x 2 = 2
Transport-type: tcp
Bricks:
Brick1: server1.lab:/srv/fs/ftp/brick
Brick2: server2.lab:/srv/fs/ftp/brick
Options Reconfigured:
performance.client-io-threads: off
nfs.disable: off
transport.address-family: inet
features.cache-invalidation: on
performance.stat-prefetch: on
performance.md-cache-timeout: 60
network.inode-lru-limit: 1048576
cluster.quorum-type: auto
performance.cache-max-file-size: 512KB
performance.cache-size: 1GB
performance.flush-behind: on
performance.nfs.flush-behind: on
performance.write-behind-window-size: 512KB
performance.nfs.write-behind-window-size: 512KB
performance.strict-o-direct: off
performance.nfs.strict-o-direct: off
performance.read-after-open: on
performance.io-thread-count: 32
client.event-threads: 4
server.event-threads: 4
performance.write-behind: on
performance.read-ahead: on
performance.readdir-ahead: on
nfs.export-dirs: off
nfs.addr-namelookup: off
nfs.rdirplus: on
features.barrier-timeout: 1
features.trash: off
cluster.quorum-reads: true
auth.allow: 127.0.0.1,xx.xx.xx.xx,xx.xx.xx.yy
auth.reject: all
root@server1:~#

Can somebody help me tune or resolve this issue?
Thanks and kind regards,

peterk
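
For reference, a minimal sketch of how the throughput can be compared directly on the gluster mount itself (taking FTP out of the path), to see whether the slowdown comes from GlusterFS under added latency; the mount point /mnt/gvolume and the test file name are only example paths, not our actual ones:

# mount -t glusterfs server1.lab:/GVOLUME /mnt/gvolume
# dd if=/mnt/gvolume/testfile of=/dev/null bs=1M count=500
# tc qdisc add dev eth0 root netem delay 10ms
# echo 3 > /proc/sys/vm/drop_caches
# dd if=/mnt/gvolume/testfile of=/dev/null bs=1M count=500
# tc qdisc del dev eth0 root netem

The first dd gives the baseline read rate, the second one the rate with the 10 ms delay injected (dropping the page cache in between so the second read is not served from memory), and the last command removes the netem qdisc again.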