<div dir="ltr">Hi Abi<div><br></div><div>Can you please share your current transfer speeds after you made the change?</div><div><br></div><div>Thank you.</div></div><div class="gmail_extra"><br><div class="gmail_quote">On Mon, Sep 11, 2017 at 9:55 AM, Ben Turner <span dir="ltr"><<a href="mailto:bturner@redhat.com" target="_blank">bturner@redhat.com</a>></span> wrote:<br><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex"><span class="">----- Original Message -----<br>
> From: "Abi Askushi" <<a href="mailto:rightkicktech@gmail.com">rightkicktech@gmail.com</a>><br>
</span><span class="">> To: "Ben Turner" <<a href="mailto:bturner@redhat.com">bturner@redhat.com</a>><br>
> Cc: "Krutika Dhananjay" <<a href="mailto:kdhananj@redhat.com">kdhananj@redhat.com</a>>, "gluster-user" <<a href="mailto:gluster-users@gluster.org">gluster-users@gluster.org</a>><br>
> Sent: Monday, September 11, 2017 1:40:42 AM<br>
> Subject: Re: [Gluster-users] Slow performance of gluster volume<br>
><br>
</span><span class="">> Did not upgrade yet gluster. I am stillĀ using 3.8.12. Only the mentioned<br>
> changes did provide the performance boost.<br>
><br>
> From which version to which version did you see such performance boost? I<br>
> will try to upgrade and check difference also.<br>
<br>
</span>Unfortunately I didn't record the package versions; I may also have done the same thing as you :)<br>
<span class="HOEnZb"><font color="#888888"><br>
-b<br>
</font></span><div class="HOEnZb"><div class="h5"><br>
><br>
> On Sep 11, 2017 2:45 AM, "Ben Turner" <<a href="mailto:bturner@redhat.com">bturner@redhat.com</a>> wrote:<br>
><br>
> Great to hear!<br>
><br>
> ----- Original Message -----<br>
> > From: "Abi Askushi" <<a href="mailto:rightkicktech@gmail.com">rightkicktech@gmail.com</a>><br>
> > To: "Krutika Dhananjay" <<a href="mailto:kdhananj@redhat.com">kdhananj@redhat.com</a>><br>
> > Cc: "gluster-user" <<a href="mailto:gluster-users@gluster.org">gluster-users@gluster.org</a>><br>
> > Sent: Friday, September 8, 2017 7:01:00 PM<br>
> > Subject: Re: [Gluster-users] Slow performance of gluster volume<br>
> ><br>
> > The following changes resolved the perf issue:<br>
> ><br>
> > Added the option in /etc/glusterfs/glusterd.vol:<br>
> > option rpc-auth-allow-insecure on<br>
><br>
> Was it this setting or the gluster upgrade? Do you know for sure?<br>
> It may be helpful to others to know for sure (I'm interested too :).<br>
><br>
> -b<br>
><br>
> ><br>
> > Restarted glusterd.<br>
> ><br>
> > Then set the volume option:<br>
> > gluster volume set vms server.allow-insecure on<br>
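> ><br>
> > To confirm both settings took effect, something like the following should<br>
> > work (a sketch, using the vms volume and the glusterd.vol path from above):<br>
> ><br>
> > gluster volume get vms server.allow-insecure<br>
> > grep rpc-auth-allow-insecure /etc/glusterfs/glusterd.vol<br>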
> ><br>
> > I am now reaching the maximum network bandwidth, and the performance of the<br>
> > VMs is quite good.<br>
> ><br>
> > I did not upgrade glusterd.<br>
> ><br>
> > As a next step I am thinking of upgrading gluster to 3.12, testing the<br>
> > libgfapi integration of qemu by upgrading to oVirt 4.1.5, and checking VM<br>
> > performance.<br>
> ><br>
> ><br>
> > On Sep 6, 2017 1:20 PM, "Abi Askushi" < <a href="mailto:rightkicktech@gmail.com">rightkicktech@gmail.com</a> > wrote:<br>
> ><br>
> ><br>
> ><br>
> > I tried to follow the steps from<br>
> > <a href="https://wiki.centos.org/SpecialInterestGroup/Storage" rel="noreferrer" target="_blank">https://wiki.centos.org/SpecialInterestGroup/Storage</a> to install the latest<br>
> > gluster on the first node.<br>
> > It installed 3.10 and not 3.11. I am not sure how to install 3.11 without<br>
> > compiling it.<br>
> > Then, when I tried to start gluster on the node, the bricks were reported<br>
> > down (the other 2 nodes still have 3.8). Not sure why. The logs were showing<br>
> > the below (even after rebooting the server):<br>
> ><br>
> > [2017-09-06 10:56:09.023777] E [rpcsvc.c:557:rpcsvc_check_and_reply_error]<br>
> > 0-rpcsvc: rpc actor failed to complete successfully<br>
> > [2017-09-06 10:56:09.024122] E [server-helpers.c:395:server_alloc_frame]<br>
> > (-->/lib64/libgfrpc.so.0(rpcsvc_handle_rpc_call+0x325) [0x7f2d0ec20905]<br>
> > -->/usr/lib64/glusterfs/3.10.5/xlator/protocol/server.so(+0x3006b)<br>
> > [0x7f2cfa4bf06b]<br>
> > -->/usr/lib64/glusterfs/3.10.5/xlator/protocol/server.so(+0xdb34)<br>
> > [0x7f2cfa49cb34] ) 0-server: invalid argument: client [Invalid argument]<br>
> ><br>
> > Do I need to upgrade all nodes before I attempt to start the gluster<br>
> > services?<br>
> > I reverted the first node back to 3.8 for the moment and all is restored.<br>
> > Also, tests with eager-lock disabled did not make any difference.<br>
> ><br>
> ><br>
> ><br>
> ><br>
> > On Wed, Sep 6, 2017 at 11:15 AM, Krutika Dhananjay < <a href="mailto:kdhananj@redhat.com">kdhananj@redhat.com</a> ><br>
> > wrote:<br>
> ><br>
> ><br>
> ><br>
> > Do you see any improvement with 3.11.1? It has a patch that improves perf<br>
> > for this kind of workload.<br>
> ><br>
> > Also, could you disable eager-lock and check if that helps? I see that most<br>
> > of the time is being spent acquiring locks.<br>
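> ><br>
> > For reference, a sketch of that toggle in the same CLI form used elsewhere<br>
> > in this thread (vms being the volume):<br>
> ><br>
> > gluster volume set vms cluster.eager-lock disable<br>
> > gluster volume set vms cluster.eager-lock enable   # to revert<br>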
> ><br>
> > -Krutika<br>
> ><br>
> > On Wed, Sep 6, 2017 at 1:38 PM, Abi Askushi < <a href="mailto:rightkicktech@gmail.com">rightkicktech@gmail.com</a> ><br>
> > wrote:<br>
> ><br>
> ><br>
> ><br>
> > Hi Krutika,<br>
> ><br>
> > Is there anything in the profile indicating what is causing this bottleneck?<br>
> > In case I can collect any other info, let me know.<br>
> ><br>
> > Thanx<br>
> ><br>
> > On Sep 5, 2017 13:27, "Abi Askushi" < <a href="mailto:rightkicktech@gmail.com">rightkicktech@gmail.com</a> > wrote:<br>
> ><br>
> ><br>
> ><br>
> > Hi Krutika,<br>
> ><br>
> > Attached are the profile stats. I enabled profiling and then ran some dd<br>
> > tests. Also, 3 Windows VMs are running on top of this volume, but I did not<br>
> > do any stress testing on the VMs. I have left profiling enabled in case more<br>
> > time is needed for useful stats.<br>
> ><br>
> > Thanx<br>
> ><br>
> > On Tue, Sep 5, 2017 at 12:48 PM, Krutika Dhananjay < <a href="mailto:kdhananj@redhat.com">kdhananj@redhat.com</a> ><br>
> > wrote:<br>
> ><br>
> ><br>
> ><br>
> > OK, my understanding is that with preallocated disks the performance with<br>
> > and without shard will be the same.<br>
> ><br>
> > In any case, please attach the volume profile [1], so we can see what else<br>
> > is slowing things down.<br>
> ><br>
> > -Krutika<br>
> ><br>
> > [1] -<br>
> > <a href="https://gluster.readthedocs.io/en/latest/Administrator%20Guide/Monitoring%20Workload/#running-glusterfs-volume-profile-command" rel="noreferrer" target="_blank">https://gluster.readthedocs.io/en/latest/Administrator%20Guide/Monitoring%20Workload/#running-glusterfs-volume-profile-command</a><br>
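> ><br>
> > For reference, a typical capture sequence per that doc (a sketch, with vms<br>
> > as the volume name from this thread):<br>
> ><br>
> > gluster volume profile vms start<br>
> > # ... reproduce the workload, e.g. the dd tests ...<br>
> > gluster volume profile vms info > vms-profile.txt<br>
> > gluster volume profile vms stop<br>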
> ><br>
> > On Tue, Sep 5, 2017 at 2:32 PM, Abi Askushi < <a href="mailto:rightkicktech@gmail.com">rightkicktech@gmail.com</a> ><br>
> > wrote:<br>
> ><br>
> ><br>
> ><br>
> > Hi Krutika,<br>
> ><br>
> > I already have a preallocated disk on the VM.<br>
> > Now I am checking performance with dd on the hypervisors which have the<br>
> > gluster volume configured.<br>
> ><br>
> > I also tried several values of shard-block-size and I keep getting the same<br>
> > low values on write performance.<br>
> > Enabling client-io-threads also did not have any effect.<br>
> ><br>
> > The version of gluster I am using is glusterfs 3.8.12, built on May 11 2017<br>
> > 18:46:20.<br>
> > The setup is a set of 3 CentOS 7.3 servers and oVirt 4.1, using gluster as<br>
> > storage.<br>
> ><br>
> > Below are the current settings:<br>
> ><br>
> ><br>
> > Volume Name: vms<br>
> > Type: Replicate<br>
> > Volume ID: 4513340d-7919-498b-bfe0-d836b5cea40b<br>
> > Status: Started<br>
> > Snapshot Count: 0<br>
> > Number of Bricks: 1 x (2 + 1) = 3<br>
> > Transport-type: tcp<br>
> > Bricks:<br>
> > Brick1: gluster0:/gluster/vms/brick<br>
> > Brick2: gluster1:/gluster/vms/brick<br>
> > Brick3: gluster2:/gluster/vms/brick (arbiter)<br>
> > Options Reconfigured:<br>
> > server.event-threads: 4<br>
> > client.event-threads: 4<br>
> > performance.client-io-threads: on<br>
> > features.shard-block-size: 512MB<br>
> > cluster.granular-entry-heal: enable<br>
> > performance.strict-o-direct: on<br>
> > network.ping-timeout: 30<br>
> > storage.owner-gid: 36<br>
> > storage.owner-uid: 36<br>
> > user.cifs: off<br>
> > features.shard: on<br>
> > cluster.shd-wait-qlength: 10000<br>
> > cluster.shd-max-threads: 8<br>
> > cluster.locking-scheme: granular<br>
> > cluster.data-self-heal-algorithm: full<br>
> > cluster.server-quorum-type: server<br>
> > cluster.quorum-type: auto<br>
> > cluster.eager-lock: enable<br>
> > network.remote-dio: off<br>
> > performance.low-prio-threads: 32<br>
> > performance.stat-prefetch: on<br>
> > performance.io-cache: off<br>
> > performance.read-ahead: off<br>
> > performance.quick-read: off<br>
> > transport.address-family: inet<br>
> > performance.readdir-ahead: on<br>
> > nfs.disable: on<br>
> > nfs.export-volumes: on<br>
> ><br>
> ><br>
> > I observed that when testing with dd if=/dev/zero of=testfile bs=1G count=1<br>
> > I get 65MB/s on the vms gluster volume (and the network traffic between the<br>
> > servers reaches ~ 500Mbps), while when testing with dd if=/dev/zero<br>
> > of=testfile bs=1G count=1 oflag=direct I get a consistent 10MB/s and the<br>
> > network traffic hardly reaches 100Mbps.<br>
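> ><br>
> > For clarity, the two invocations side by side (numbers as reported above;<br>
> > oflag=direct bypasses the page cache, which accounts for part of the gap):<br>
> ><br>
> > dd if=/dev/zero of=testfile bs=1G count=1               # buffered: ~65MB/s<br>
> > dd if=/dev/zero of=testfile bs=1G count=1 oflag=direct  # direct I/O: ~10MB/s<br>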
> ><br>
> > Any other things one can do?<br>
> ><br>
> > On Tue, Sep 5, 2017 at 5:57 AM, Krutika Dhananjay < <a href="mailto:kdhananj@redhat.com">kdhananj@redhat.com</a> ><br>
> > wrote:<br>
> ><br>
> ><br>
> ><br>
> > I'm assuming you are using this volume to store VM images, because I see<br>
> > shard in the options list.<br>
> ><br>
> > Speaking from shard translator's POV, one thing you can do to improve<br>
> > performance is to use preallocated images.<br>
> > This will at least eliminate the need for shard to perform multiple steps as<br>
> > part of the writes - such as creating the shard, then writing to it, and then<br>
> > updating the aggregated file size - each of which requires one network call,<br>
> > and these further get blown up into many more network calls once they reach<br>
> > AFR (replicate).<br>
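> ><br>
> > If you create images by hand rather than through oVirt, a preallocated raw<br>
> > image can be made along these lines (a sketch; the path and size are made up<br>
> > for illustration):<br>
> ><br>
> > # preallocation=full writes all blocks up front; falloc is a faster alternative<br>
> > qemu-img create -f raw -o preallocation=full /path/to/vm-disk.img 50G<br>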
> ><br>
> > Second, I'm assuming you're using the default shard block size of 4MB (you<br>
> > can confirm this using `gluster volume get <VOL> shard-block-size`). In our<br>
> > tests, we've found that larger shard sizes perform better. So maybe change<br>
> > the shard-block-size to 64MB (`gluster volume set <VOL> shard-block-size<br>
> > 64MB`).<br>
> ><br>
> > Third, keep stat-prefetch enabled. We've found that qemu sends quite a lot<br>
> > of [f]stats which can be served from the (md)cache to improve performance.<br>
> > So enable that.<br>
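> ><br>
> > E.g., in the same CLI form as above (a sketch, with vms as the volume name):<br>
> > gluster volume set vms performance.stat-prefetch on<br>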
> ><br>
> > Also, could you enable client-io-threads and see if that improves<br>
> > performance?<br>
> ><br>
> > Which version of gluster are you using BTW?<br>
> ><br>
> > -Krutika<br>
> ><br>
> ><br>
> > On Tue, Sep 5, 2017 at 4:32 AM, Abi Askushi < <a href="mailto:rightkicktech@gmail.com">rightkicktech@gmail.com</a> ><br>
> > wrote:<br>
> ><br>
> ><br>
> ><br>
> > Hi all,<br>
> ><br>
> > I have a gluster volume used to host several VMs (managed through oVirt).<br>
> > The volume is a replica 3 with arbiter, and the 3 servers use a 1 Gbit<br>
> > network for the storage.<br>
> ><br>
> > When testing with dd (dd if=/dev/zero of=testfile bs=1G count=1 oflag=direct)<br>
> > outside of the volume (e.g. writing at /root/), the performance of dd is<br>
> > reported to be ~ 700MB/s, which is quite decent. When testing dd on the<br>
> > gluster volume I get ~ 43 MB/s, which is way lower than the previous. When<br>
> > testing the gluster volume with dd, the network traffic was not exceeding<br>
> > 450 Mbps on the network interface. I would expect to reach near 900 Mbps<br>
> > considering that there is 1 Gbit of bandwidth available. This results in VMs<br>
> > with very slow performance (especially on their write operations).<br>
> ><br>
> > The full details of the volume are below. Any advice on what can be tweaked<br>
> > will be highly appreciated.<br>
> ><br>
> > Volume Name: vms<br>
> > Type: Replicate<br>
> > Volume ID: 4513340d-7919-498b-bfe0-d836b5cea40b<br>
> > Status: Started<br>
> > Snapshot Count: 0<br>
> > Number of Bricks: 1 x (2 + 1) = 3<br>
> > Transport-type: tcp<br>
> > Bricks:<br>
> > Brick1: gluster0:/gluster/vms/brick<br>
> > Brick2: gluster1:/gluster/vms/brick<br>
> > Brick3: gluster2:/gluster/vms/brick (arbiter)<br>
> > Options Reconfigured:<br>
> > cluster.granular-entry-heal: enable<br>
> > performance.strict-o-direct: on<br>
> > network.ping-timeout: 30<br>
> > storage.owner-gid: 36<br>
> > storage.owner-uid: 36<br>
> > user.cifs: off<br>
> > features.shard: on<br>
> > cluster.shd-wait-qlength: 10000<br>
> > cluster.shd-max-threads: 8<br>
> > cluster.locking-scheme: granular<br>
> > cluster.data-self-heal-algorithm: full<br>
> > cluster.server-quorum-type: server<br>
> > cluster.quorum-type: auto<br>
> > cluster.eager-lock: enable<br>
> > network.remote-dio: off<br>
> > performance.low-prio-threads: 32<br>
> > performance.stat-prefetch: off<br>
> > performance.io-cache: off<br>
> > performance.read-ahead: off<br>
> > performance.quick-read: off<br>
> > transport.address-family: inet<br>
> > performance.readdir-ahead: on<br>
> > nfs.disable: on<br>
> > nfs.export-volumes: on<br>
> ><br>
> ><br>
> > Thanx,<br>
> > Alex<br>
> ><br>
><br>
_______________________________________________<br>
Gluster-users mailing list<br>
<a href="mailto:Gluster-users@gluster.org">Gluster-users@gluster.org</a><br>
<a href="http://lists.gluster.org/mailman/listinfo/gluster-users" rel="noreferrer" target="_blank">http://lists.gluster.org/mailman/listinfo/gluster-users</a><br>
</div></div></blockquote></div><br></div>