[Gluster-users] Extremely low performance - am I doing something wrong?
Vladimir Melnik
v.melnik at tucha.ua
Wed Jul 3 21:16:40 UTC 2019
OK, I tweaked the virtualization parameters and now I have ~10 Gbit/s
between all the nodes.
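For anyone wanting to do similar tuning, here is a minimal sketch of the
guest-side part (a generic example only, not necessarily the exact change
I made; it assumes KVM with a virtio-net NIC named eth0 and that the
hypervisor already exposes several queues for it):

  # inside each VM; adjust the interface name and queue count to your setup
  ethtool -L eth0 combined 4             # use multiple virtio-net queues
  ethtool -K eth0 tso on gso on gro on   # make sure offloads are enabled
  ip link set eth0 mtu 9000              # only if the whole path supports jumbo frames

Anyway, here is what iperf3 shows between two of the nodes now: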
$ iperf3 -c 10.13.1.16
Connecting to host 10.13.1.16, port 5201
[ 4] local 10.13.1.17 port 47242 connected to 10.13.1.16 port 5201
[ ID] Interval Transfer Bandwidth Retr Cwnd
[ 4] 0.00-1.00 sec 1.42 GBytes 12.2 Gbits/sec 0 1.86 MBytes
[ 4] 1.00-2.00 sec 1.54 GBytes 13.3 Gbits/sec 0 2.53 MBytes
[ 4] 2.00-3.00 sec 1.37 GBytes 11.8 Gbits/sec 0 2.60 MBytes
[ 4] 3.00-4.00 sec 1.25 GBytes 10.7 Gbits/sec 0 2.70 MBytes
[ 4] 4.00-5.00 sec 1.30 GBytes 11.1 Gbits/sec 0 2.81 MBytes
[ 4] 5.00-6.00 sec 1.55 GBytes 13.3 Gbits/sec 0 2.86 MBytes
[ 4] 6.00-7.00 sec 1.46 GBytes 12.6 Gbits/sec 0 2.92 MBytes
[ 4] 7.00-8.00 sec 1.41 GBytes 12.1 Gbits/sec 0 2.97 MBytes
[ 4] 8.00-9.00 sec 1.39 GBytes 12.0 Gbits/sec 0 2.98 MBytes
[ 4] 9.00-10.00 sec 1.46 GBytes 12.5 Gbits/sec 0 3.00 MBytes
- - - - - - - - - - - - - - - - - - - - - - - - -
[ ID] Interval Transfer Bandwidth Retr
[ 4] 0.00-10.00 sec 14.2 GBytes 12.2 Gbits/sec 0 sender
[ 4] 0.00-10.00 sec 14.1 GBytes 12.2 Gbits/sec receiver
iperf Done.
$ iperf3 -c 10.13.1.16 -R
Connecting to host 10.13.1.16, port 5201
Reverse mode, remote host 10.13.1.16 is sending
[ 4] local 10.13.1.17 port 47246 connected to 10.13.1.16 port 5201
[ ID] Interval Transfer Bandwidth
[ 4] 0.00-1.00 sec 1.63 GBytes 14.0 Gbits/sec
[ 4] 1.00-2.00 sec 1.63 GBytes 14.0 Gbits/sec
[ 4] 2.00-3.00 sec 1.56 GBytes 13.4 Gbits/sec
[ 4] 3.00-4.00 sec 1.24 GBytes 10.7 Gbits/sec
[ 4] 4.00-5.00 sec 1.51 GBytes 13.0 Gbits/sec
[ 4] 5.00-6.00 sec 1.40 GBytes 12.0 Gbits/sec
[ 4] 6.00-7.00 sec 1.49 GBytes 12.8 Gbits/sec
[ 4] 7.00-8.00 sec 1.58 GBytes 13.6 Gbits/sec
[ 4] 8.00-9.00 sec 1.45 GBytes 12.4 Gbits/sec
[ 4] 9.00-10.00 sec 1.47 GBytes 12.6 Gbits/sec
- - - - - - - - - - - - - - - - - - - - - - - - -
[ ID] Interval Transfer Bandwidth Retr
[ 4] 0.00-10.00 sec 15.0 GBytes 12.9 Gbits/sec 0 sender
[ 4] 0.00-10.00 sec 15.0 GBytes 12.9 Gbits/sec receiver
iperf Done.
Looks good, right? Let's see what it has given us...
$ for i in {1..5}; do { dd if=/dev/zero of=/mnt/tmp/test.tmp bs=1M count=10 oflag=sync; rm -f /mnt/tmp/test.tmp; } done 2>&1 | grep copied
10485760 bytes (10 MB) copied, 0.403512 s, 26.0 MB/s
10485760 bytes (10 MB) copied, 0.354702 s, 29.6 MB/s
10485760 bytes (10 MB) copied, 0.386806 s, 27.1 MB/s
10485760 bytes (10 MB) copied, 0.405671 s, 25.8 MB/s
10485760 bytes (10 MB) copied, 0.426986 s, 24.6 MB/s
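A caveat on my own test: with oflag=sync every 1 MiB write has to be
acknowledged before dd moves on, so the numbers above reflect synchronous
write latency at least as much as raw throughput. A larger direct-I/O run
would be a useful comparison; the line below is only a sketch (same mount
point, size picked arbitrarily, and it assumes the mount accepts O_DIRECT):

  dd if=/dev/zero of=/mnt/tmp/test.tmp bs=1M count=1000 oflag=direct; rm -f /mnt/tmp/test.tmp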
So, the network can do ~10 Gbit/s, the disk can do ~2 Gbit/s, and
GlusterFS can do ~0.2 Gbit/s.
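(That last figure is just the dd result converted to the same units:
~26 MB/s * 8 bits/byte = ~208 Mbit/s, i.e. roughly 0.2 Gbit/s, against the
~12 Gbit/s that iperf3 shows on the wire.)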
Am I the only one so lucky? :-)
Does anyone else observe the same phenomenon?
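If anyone wants to compare in more detail, per-brick latency stats would
probably tell more than dd alone. A sketch of how I'd collect them (the
volume name "myvol" is just a placeholder, use your own):

  gluster volume profile myvol start
  dd if=/dev/zero of=/mnt/tmp/test.tmp bs=1M count=10 oflag=sync
  gluster volume profile myvol info
  gluster volume profile myvol stop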
On Wed, Jul 03, 2019 at 05:01:37PM -0400, Guy Boisvert wrote:
> Yeah, 10 Gbps is affordable these days, even 25 Gbps! Wouldn't go lower than 10 Gbps.
>
>
> On Jul 3, 2019, at 16:59, Marcus Schopen <lists at localguru.de> wrote:
> >Hi,
> >
> >Am Mittwoch, den 03.07.2019, 15:16 -0400 schrieb Dmitry Filonov:
> >> Well, if your network is limited to 100MB/s then it doesn't matter if
> >> storage is capable of doing 300+MB/s.
> >> But 15 MB/s is still way less than 100 MB/s
> >
> >What network is recommended for the backend, 10 Gigabit or even more?
> >
> >Ciao!
> >Marcus
> >
> >