<div dir="ltr">I've done some more testing with tc and introduced latency on one of my test servers. With 9ms latency artificially introduced using tc (sudo tc qdisc add dev bond0 root netem delay 9ms) on a test server in the same DC as the disperse volume servers, I get more or less the same throughput as I do when testing DC1 <-> DC2 (which has ~9ms ping).<div><br></div><div>I know distribute volumes were more sensitive to latency in the past. At least now I can max out a 1gig link with 9-10ms latency when using distribute. Disperse seems to max out at 12-14MB/s with 8-10ms latency.</div><div><br></div><div>ingard</div></div><div class="gmail_extra"><br><div class="gmail_quote">2017-04-24 14:03 GMT+02:00 Ingard Mevåg <span dir="ltr"><<a href="mailto:ingard@jotta.no" target="_blank">ingard@jotta.no</a>></span>:<br><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex"><div dir="ltr">I can confirm that when mounting the disperse volume locally on one of the three servers, I got 211 MB/s with dd if=/dev/zero of=./local.dd.test bs=1M count=10000.<div><br></div><div>It's not very good considering the 10gig network, but at least 20x better than 10-12MB/s.</div></div><div class="HOEnZb"><div class="h5"><div class="gmail_extra"><br><div class="gmail_quote">2017-04-24 13:53 GMT+02:00 Pranith Kumar Karampuri <span dir="ltr"><<a href="mailto:pkarampu@redhat.com" target="_blank">pkarampu@redhat.com</a>></span>:<br><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex"><div dir="ltr"><div><div>+Ashish<br><br></div>Ashish,<br></div> Could you help Ingard? 
Do let me know what you find.<br></div><div class="gmail_extra"><div><div class="m_5401723871657887810h5"><br><div class="gmail_quote">On Mon, Apr 24, 2017 at 4:50 PM, Ingard Mevåg <span dir="ltr"><<a href="mailto:ingard@jotta.no" target="_blank">ingard@jotta.no</a>></span> wrote:<br><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex"><div dir="ltr">Hi. I can't see a fuse thread at all. Please see the attached screenshot of the top process with threads. Keep in mind this is from inside the container.</div><div class="gmail_extra"><div><div class="m_5401723871657887810m_-2513996891287787904h5"><br><div class="gmail_quote">2017-04-24 12:17 GMT+02:00 Pranith Kumar Karampuri <span dir="ltr"><<a href="mailto:pkarampu@redhat.com" target="_blank">pkarampu@redhat.com</a>></span>:<br><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex"><div dir="ltr">We were able to saturate hardware with EC as well. Could you check 'top' in threaded mode to see if the fuse thread is saturated when you run dd?<br></div><div class="gmail_extra"><br><div class="gmail_quote"><div><div class="m_5401723871657887810m_-2513996891287787904m_-7892545271720368784h5">On Mon, Apr 24, 2017 at 3:27 PM, Ingard Mevåg <span dir="ltr"><<a href="mailto:ingard@jotta.no" target="_blank">ingard@jotta.no</a>></span> wrote:<br></div></div><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex"><div><div class="m_5401723871657887810m_-2513996891287787904m_-7892545271720368784h5"><div dir="ltr">Hi<div>I've been playing with disperse volumes the past week, and so far I cannot get more than 12MB/s when I do a write test. I've tried a distributed volume on the same bricks and gotten close to gigabit speeds. iperf confirms gigabit speeds to all three servers in the storage pool.<br></div><div><br></div><div>The three storage servers have 10gig NICs (connected to the same switch). 
The client is for now a Docker container in a 2nd DC (latency roughly 8-9 ms).</div><div><br></div><div><div>dpkg -l|grep -i gluster</div><div>ii glusterfs-client 3.10.1-ubuntu1~xenial1 amd64 clustered file-system (client package)</div><div>ii glusterfs-common 3.10.1-ubuntu1~xenial1 amd64 GlusterFS common libraries and translator modules</div><div>ii glusterfs-server 3.10.1-ubuntu1~xenial1 amd64 clustered file-system (server package)</div></div><div><div><br></div><div><div>$ gluster volume info</div><div><br></div><div>Volume Name: DFS-ARCHIVE-001</div><div>Type: Disperse</div><div>Volume ID: 1497bc85-cb47-4123-8f91-a07f55c11dcc</div><div>Status: Started</div><div>Snapshot Count: 0</div><div>Number of Bricks: 1 x (4 + 2) = 6</div><div>Transport-type: tcp</div><div>Bricks:</div><div>Brick1: dna-001:/mnt/data01/brick</div><div>Brick2: dna-001:/mnt/data02/brick</div><div>Brick3: dna-002:/mnt/data01/brick</div><div>Brick4: dna-002:/mnt/data02/brick</div><div>Brick5: dna-003:/mnt/data01/brick</div><div>Brick6: dna-003:/mnt/data02/brick</div><div>Options Reconfigured:</div><div>transport.address-family: inet</div><div>nfs.disable: on</div></div><div><br></div><div>Anyone know the reason for the slow speeds on disperse vs distribute?</div><div><br></div><div>kind regards</div><span class="m_5401723871657887810m_-2513996891287787904m_-7892545271720368784m_-191513882697792848HOEnZb"><font color="#888888"><div>ingard</div><div class="m_5401723871657887810m_-2513996891287787904m_-7892545271720368784m_-191513882697792848m_1716968822908985942gmail_signature"><div dir="ltr"><div><div dir="ltr"></div></div></div></div>
</font></span></div></div>
<br></div></div>_______________________________________________<br>
Gluster-users mailing list<br>
<a href="mailto:Gluster-users@gluster.org" target="_blank">Gluster-users@gluster.org</a><br>
<a href="http://lists.gluster.org/mailman/listinfo/gluster-users" rel="noreferrer" target="_blank">http://lists.gluster.org/mailman/listinfo/gluster-users</a><span class="m_5401723871657887810m_-2513996891287787904m_-7892545271720368784HOEnZb"><font color="#888888"><br></font></span></blockquote></div><span class="m_5401723871657887810m_-2513996891287787904m_-7892545271720368784HOEnZb"><font color="#888888"><br><br clear="all"><br>-- <br><div class="m_5401723871657887810m_-2513996891287787904m_-7892545271720368784m_-191513882697792848gmail_signature" data-smartmail="gmail_signature"><div dir="ltr">Pranith<br></div></div>
</font></span></div>
</blockquote></div><br><br clear="all"><div><br></div></div></div><span class="m_5401723871657887810m_-2513996891287787904HOEnZb"><font color="#888888">-- <br><div class="m_5401723871657887810m_-2513996891287787904m_-7892545271720368784gmail_signature" data-smartmail="gmail_signature"><div dir="ltr"><div><div dir="ltr"><div>Ingard Mevåg<br></div>Driftssjef<div>Jottacloud <br> <br>Mobil: <a href="tel:+47%20450%2022%20834" value="+4745022834" target="_blank">+47 450 22 834</a><br>E-post: <a href="mailto:ingard@jottacloud.com" target="_blank">ingard@jottacloud.com</a><br>Webside: <a href="http://www.jottacloud.com" target="_blank">www.jottacloud.com</a></div></div></div></div></div>
</font></span></div>
</blockquote></div><br><br clear="all"><br></div></div><span class="m_5401723871657887810HOEnZb"><font color="#888888">-- <br><div class="m_5401723871657887810m_-2513996891287787904gmail_signature" data-smartmail="gmail_signature"><div dir="ltr">Pranith<br></div></div>
</font></span></div>
</blockquote></div><br><br clear="all"><div><br></div>-- <br><div class="m_5401723871657887810gmail_signature" data-smartmail="gmail_signature"><div dir="ltr"><div><div dir="ltr"><div>Ingard Mevåg<br></div>Driftssjef<div>Jottacloud <br> <br>Mobil: <a href="tel:+47%20450%2022%20834" value="+4745022834" target="_blank">+47 450 22 834</a><br>E-post: <a href="mailto:ingard@jottacloud.com" target="_blank">ingard@jottacloud.com</a><br>Webside: <a href="http://www.jottacloud.com" target="_blank">www.jottacloud.com</a></div></div></div></div></div>
</div>
</div></div></blockquote></div><br><br clear="all"><div><br></div>-- <br><div class="gmail_signature" data-smartmail="gmail_signature"><div dir="ltr"><div><div dir="ltr"><div>Ingard Mevåg<br></div>Driftssjef<div>Jottacloud <br> <br>Mobil: +47 450 22 834<br>E-post: <a href="mailto:ingard@jottacloud.com" target="_blank">ingard@jottacloud.com</a><br>Webside: <a href="http://www.jottacloud.com" target="_blank">www.jottacloud.com</a></div></div></div></div></div>
</div>
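A quick back-of-envelope check of the disperse numbers in this thread. This is only a plausibility sketch under two assumptions the thread does not confirm: that the client issues writes in the default 128 KiB FUSE blocks, and that a disperse write waits out one full network round trip (~9 ms) before the next block is sent.

```python
# Sketch: if each 128 KiB write block must wait one full round trip before
# the next block goes out, throughput is capped at block_size / rtt,
# regardless of link speed. Both numbers below are assumptions, not
# measurements from this thread.
BLOCK_SIZE = 128 * 1024  # assumed FUSE write block size, bytes
RTT = 0.009              # ~9 ms round-trip time, seconds

# Upper bound on per-stream throughput when every block stalls for one RTT.
max_throughput = BLOCK_SIZE / RTT  # bytes per second
print(f"{max_throughput / 1e6:.1f} MB/s")  # ~14.6 MB/s
```

Under those assumptions the cap comes out around 14.6 MB/s, which is in the same range as the observed 12-14MB/s with 8-10ms latency, while a pipelined workload (as distribute apparently manages) would be limited by the link instead.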