Hi,

Thank you for the update, sorry for the delay.

I did some more tests, but couldn't reproduce the spike in network
bandwidth usage with quick-read on. After upgrading, did you remount
the clients? The fix will not take effect until the client process is
restarted.
If you have already restarted the client processes, then something in
the live system's workload must be triggering a bug in quick-read. We
would need a wireshark capture, if possible, to debug further.

Regards,
Poornima
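
For reference, a minimal sketch of the remount-and-capture steps
described above; the mount point, server, volume name, and interface
are placeholders, not taken from this thread:

    # Remount so the upgraded client process (and the quick-read fix)
    # actually takes effect:
    umount /mnt/glusterfs
    mount -t glusterfs gluster1:/myvol /mnt/glusterfs

    # Capture GlusterFS traffic on the client for inspection in wireshark;
    # 24007 is the glusterd management port, bricks listen on 49152 and up:
    tcpdump -i eth0 -s 0 -w /tmp/gluster-client.pcap 'port 24007 or portrange 49152-49251'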

On Tue, Apr 16, 2019 at 6:25 PM Hu Bert <revirii@googlemail.com> wrote:

Hi Poornima,

thanks for your efforts. I ran a couple of tests and the results were
the same, so those options are not related. Anyway, I'm not able to
reproduce the problem on my testing system, although the volume
options are the same.

About 1.5 hours ago I set performance.quick-read to on again and
watched: load/iowait went up (not bad at the moment, little traffic),
and network traffic went up as well - from <20 MBit/s to 160 MBit/s.
After deactivating quick-read, traffic dropped to <20 MBit/s again.

munin graph: https://abload.de/img/network-client4s0kle.png

The 2nd peak is from the last test.


Thanks,
Hubert

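A sketch of the toggle-and-watch cycle used in this test; the volume
name 'myvol' and the 5-second interval are placeholders:

    # Re-enable quick-read for the test window:
    gluster volume set myvol performance.quick-read on

    # Watch per-interface throughput while the workload runs
    # (sar is from the sysstat package):
    sar -n DEV 5

    # Turn it off again once the spike is confirmed:
    gluster volume set myvol performance.quick-read off
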
On Tue, Apr 16, 2019 at 09:43, Hu Bert <revirii@googlemail.com> wrote:
>
> In my first test on my testing setup the traffic was at a normal
> level, so I thought I was "safe". But on my live system the network
> traffic was a multiple of what one would expect.
> performance.quick-read was enabled in both; the only differences in
> the volume options between live and testing are:
>
> performance.read-ahead: testing on, live off
> performance.io-cache: testing on, live off
>
> I ran another test on my testing setup, deactivated both and copied 9
> GB of data. This time the traffic went up as well, from ~9-10 MBit/s
> before to 100 MBit/s with both options off. Does
> performance.quick-read require one of those options to be set to 'on'?
>
> I'll start another test shortly and activate one of those two
> options; maybe there's a connection between those 3 options?
>
>
> Best Regards,
> Hubert
>
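> For reference, comparing and flipping these options between the two
> setups is one command per option; 'myvol' is a placeholder volume name:
>
>     # Show the current value of each option on the volume:
>     gluster volume get myvol performance.quick-read
>     gluster volume get myvol performance.read-ahead
>     gluster volume get myvol performance.io-cache
>
>     # Toggle one option for the next test run:
>     gluster volume set myvol performance.read-ahead on
>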
> On Tue, Apr 16, 2019 at 08:57, Poornima Gurusiddaiah
> <pgurusid@redhat.com> wrote:
> >
> > Thank you for reporting this. I had done testing on my local setup and the issue was resolved even with quick-read enabled. Let me test it again.
> >
> > Regards,
> > Poornima
> >
> > On Mon, Apr 15, 2019 at 12:25 PM Hu Bert <revirii@googlemail.com> wrote:
> >>
> >> FYI: after setting performance.quick-read to off, network traffic
> >> dropped to normal levels, and client load/iowait went back to normal
> >> as well.
> >>
> >> client: https://abload.de/img/network-client-afterihjqi.png
> >> server: https://abload.de/img/network-server-afterwdkrl.png
> >>
> >> On Mon, Apr 15, 2019 at 08:33, Hu Bert <revirii@googlemail.com> wrote:
> >> >
> >> > Good Morning,
> >> >
> >> > Today I updated my replica 3 setup (debian stretch) from version 5.5
> >> > to 5.6, as I thought the network traffic bug (#1673058) was fixed and
> >> > I could re-activate 'performance.quick-read' again. See release notes:
> >> >
> >> > https://review.gluster.org/#/c/glusterfs/+/22538/
> >> > http://git.gluster.org/cgit/glusterfs.git/commit/?id=34a2347780c2429284f57232f3aabb78547a9795
> >> >
> >> > The upgrade went fine, and then I watched iowait and network traffic.
> >> > It seems the network traffic went up after the upgrade and the
> >> > reactivation of performance.quick-read. Here are some graphs:
> >> >
> >> > network client1: https://abload.de/img/network-clientfwj1m.png
> >> > network client2: https://abload.de/img/network-client2trkow.png
> >> > network server: https://abload.de/img/network-serverv3jjr.png
> >> >
> >> > gluster volume info: https://pastebin.com/ZMuJYXRZ
> >> >
> >> > Just wondering if the network traffic bug really got fixed or if this
> >> > is a new problem. I'll wait a couple of minutes and then deactivate
> >> > performance.quick-read again, just to see if network traffic goes down
> >> > to normal levels.
> >> >
> >> >
> >> > Best regards,
> >> > Hubert
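> >> >
> >> > A quick way to verify what is actually running and configured
> >> > after such an upgrade (the volume name 'myvol' is a placeholder):
> >> >
> >> >     # Installed client version:
> >> >     glusterfs --version
> >> >
> >> >     # Current value of the option in question:
> >> >     gluster volume get myvol performance.quick-read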
> >> _______________________________________________
> >> Gluster-users mailing list
> >> Gluster-users@gluster.org
> >> https://lists.gluster.org/mailman/listinfo/gluster-users