[Gluster-users] increased latency causes rapid decrease of ftp transfer from/to glusterfs filesystem

Strahil hunter86_bg at yahoo.com
Wed Jul 24 07:24:10 UTC 2019


Hi Peter,

Can you elaborate on your issue? I can't completely understand it.

So, you see poor client performance when the latency between client and Gluster server increases almost 20 times (from 0.5 ms to 10 ms), right?

If I understood you correctly, this type of issue cannot be avoided: higher network latency slows down both client/server and server/server communication.
The same behavior would be observed on a SAN, so you need consistent network performance.
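As a rough illustration of why throughput collapses with RTT, here is a sketch under the assumption that the client issues requests of a fixed size (128 KiB is a hypothetical value, not taken from the post) one at a time and waits for each reply; real Gluster clients pipeline more than this, so this is only an upper bound:

```python
def sync_throughput_bytes_per_s(block_bytes: int, rtt_s: float) -> float:
    """Upper bound on throughput when every block costs one full round trip."""
    return block_bytes / rtt_s

# Hypothetical 128 KiB request size; RTTs taken from the ping output below.
for rtt_ms in (0.4, 10.4):
    mb_s = sync_throughput_bytes_per_s(128 * 1024, rtt_ms / 1000) / 1e6
    print(f"RTT {rtt_ms} ms -> at most {mb_s:.1f} MB/s")
```

Going from 0.4 ms to 10.4 ms RTT lowers this ceiling by a factor of 26, which is the same pattern as the measured drop from 7.5 MB/s to about 1.3 MB/s.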
Of course, there are some client-side caching techniques that will reduce that impact, but they increase the risk to your data consistency.
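For example, these are the usual client-side caching knobs (several of which your volume already sets); the values below are only illustrative suggestions, not tested recommendations, and larger write-behind windows mean more unflushed data at risk on a failure:

```shell
# Cache more aggressively on the client to mask latency,
# at the cost of weaker consistency guarantees.
gluster volume set GVOLUME performance.write-behind on
gluster volume set GVOLUME performance.flush-behind on
gluster volume set GVOLUME performance.write-behind-window-size 4MB
gluster volume set GVOLUME performance.read-ahead on
gluster volume set GVOLUME performance.cache-size 1GB
```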


Best Regards,
Strahil Nikolov

On Jul 24, 2019 08:38, peter knezel <peter.knezel at gmail.com> wrote:
>
> Hello all,
>
> having installed a two-node glusterfs system on debian 9.x
> with glusterfs packages 5.6-1, we tried to transfer files via ftp
> from/to the glusterfs filesystem.
>
> While the ftp download runs at around 7.5 MB/s, after increasing latency to 10 ms (see the tc command below) it drops sharply to approximately 1.3 MB/s.
>
>
> # ping xx.xx.xx.xx
> 64 bytes from xx.xx.xx.xx: icmp_seq=1 ttl=64 time=0.426 ms
> 64 bytes from xx.xx.xx.xx: icmp_seq=2 ttl=64 time=0.443 ms
> 64 bytes from xx.xx.xx.xx: icmp_seq=3 ttl=64 time=0.312 ms
> 64 bytes from xx.xx.xx.xx: icmp_seq=4 ttl=64 time=0.373 ms
> 64 bytes from xx.xx.xx.xx: icmp_seq=5 ttl=64 time=0.415 ms
> ^C
> --- xx.xx.xx.xx ping statistics ---
> 5 packets transmitted, 5 received, 0% packet loss, time 4100ms
> rtt min/avg/max/mdev = 0.312/0.393/0.443/0.053 ms
>
> # tc qdisc add dev eth0 root netem delay 10ms
> # ping xx.xx.xx.xx
> PING xx.xx.xx.xx (xx.xx.xx.xx) 56(84) bytes of data.
> 64 bytes from xx.xx.xx.xx: icmp_seq=1 ttl=64 time=10.3 ms
> 64 bytes from xx.xx.xx.xx: icmp_seq=2 ttl=64 time=10.3 ms
> 64 bytes from xx.xx.xx.xx: icmp_seq=3 ttl=64 time=10.3 ms
> 64 bytes from xx.xx.xx.xx: icmp_seq=4 ttl=64 time=10.3 ms
> 64 bytes from xx.xx.xx.xx: icmp_seq=5 ttl=64 time=10.4 ms
> 64 bytes from xx.xx.xx.xx: icmp_seq=6 ttl=64 time=10.4 ms
> ^C
> --- xx.xx.xx.xx ping statistics ---
> 6 packets transmitted, 6 received, 0% packet loss, time 5007ms
> rtt min/avg/max/mdev = 10.304/10.387/10.492/0.138 ms
>
> root at server1:~# gluster vol list
> GVOLUME
> root at server1:~# gluster vol info
>
> Volume Name: GVOLUME
> Type: Replicate
> Volume ID: xxx
> Status: Started
> Snapshot Count: 0
> Number of Bricks: 1 x 2 = 2
> Transport-type: tcp
> Bricks:
> Brick1: server1.lab:/srv/fs/ftp/brick
> Brick2: server2.lab:/srv/fs/ftp/brick
> Options Reconfigured:
> performance.client-io-threads: off
> nfs.disable: off
> transport.address-family: inet
> features.cache-invalidation: on
> performance.stat-prefetch: on
> performance.md-cache-timeout: 60
> network.inode-lru-limit: 1048576
> cluster.quorum-type: auto
> performance.cache-max-file-size: 512KB
> performance.cache-size: 1GB
> performance.flush-behind: on
> performance.nfs.flush-behind: on
> performance.write-behind-window-size: 512KB
> performance.nfs.write-behind-window-size: 512KB
> performance.strict-o-direct: off
> performance.nfs.strict-o-direct: off
> performance.read-after-open: on
> performance.io-thread-count: 32
> client.event-threads: 4
> server.event-threads: 4
> performance.write-behind: on
> performance.read-ahead: on
> performance.readdir-ahead: on
> nfs.export-dirs: off
> nfs.addr-namelookup: off
> nfs.rdirplus: on
> features.barrier-timeout: 1
> features.trash: off
> cluster.quorum-reads: true
> auth.allow: 127.0.0.1,xx.xx.xx.xx,xx.xx.xx.yy
> auth.reject: all
> root at server1:~#
>
> Can somebody help me to tune/solve this issue?
> Thanks and kind regards,
>
> peterk
>
