[Bugs] [Bug 1673058] Network throughput usage increased x5

bugzilla at redhat.com bugzilla at redhat.com
Fri Apr 26 13:49:02 UTC 2019


https://bugzilla.redhat.com/show_bug.cgi?id=1673058



--- Comment #32 from Alberto Bengoa <bengoa at gmail.com> ---
Hello Poornima,

I did some tests today and, in my scenario, it seems fixed.

What I did this time:

- Mounted the new cluster (running version 5.6) from a client running version
5.5
- Ran a "find . -type d" on a directory with lots of subdirectories.
- This generated outgoing traffic (on the client) of around 40 Mbps [1]

Then I upgraded the client to version 5.6 and re-ran the tests, which produced
around 800 kbps of network traffic [2]. Really good!
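For reference, the test procedure above can be sketched roughly like this
(the server name, mount point, interface name, and the use of ifstat for
measuring are assumptions for illustration, not the exact commands I ran):

```shell
# Mount the Gluster volume on the client (server and paths are
# placeholders -- adjust to your environment).
mount -t glusterfs fs01tmp:/volume /mnt/volume

# Walk the directory tree; this is what generated the traffic.
cd /mnt/volume && find . -type d > /dev/null

# In a second terminal, watch per-interface throughput while find runs
# (ifstat is one option; nload or iftop work as well).
ifstat -i eth0 1
```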

I ran a couple more tests with quick-read enabled [3][4]. It may have
slightly increased my network traffic, but nothing really significant.


[1] - https://pasteboard.co/IbVwWTP.png
[2] - https://pasteboard.co/IbVxgVU.png
[3] - https://pasteboard.co/IbVxuaJ.png
[4] - https://pasteboard.co/IbVxCbZ.png
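Toggling quick-read for those runs amounts to flipping a single volume
option; a sketch using the standard gluster CLI (the volume here is
literally named "volume", as shown in the info below):

```shell
# Enable quick-read on the volume, then repeat the find test...
gluster volume set volume performance.quick-read on

# ...and disable it again afterwards to return to the baseline.
gluster volume set volume performance.quick-read off
```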

This is my current volume info:

Volume Name: volume
Type: Replicate
Volume ID: 1d8f7d2d-bda6-4f1c-aa10-6ad29e0b7f5e
Status: Started
Snapshot Count: 0
Number of Bricks: 1 x 2 = 2
Transport-type: tcp
Bricks:
Brick1: fs02tmp:/var/data/glusterfs/volume/brick
Brick2: fs01tmp:/var/data/glusterfs/volume/brick
Options Reconfigured:
network.ping-timeout: 10
performance.flush-behind: on
performance.write-behind-window-size: 16MB
performance.cache-size: 1900MB
performance.io-thread-count: 32
transport.address-family: inet
nfs.disable: on
performance.client-io-threads: on
server.allow-insecure: on
server.event-threads: 4
client.event-threads: 4
performance.readdir-ahead: off
performance.read-ahead: off
performance.open-behind: on
performance.write-behind: off
performance.stat-prefetch: off
performance.quick-read: off
performance.strict-o-direct: on
performance.io-cache: off
performance.read-after-open: yes
features.cache-invalidation: on
features.cache-invalidation-timeout: 600
performance.cache-invalidation: on
performance.md-cache-timeout: 600
network.inode-lru-limit: 200000


Let me know if you need anything else. 

Cheers,

Alberto Bengoa
