<html xmlns:o="urn:schemas-microsoft-com:office:office" xmlns:w="urn:schemas-microsoft-com:office:word" xmlns:m="http://schemas.microsoft.com/office/2004/12/omml" xmlns="http://www.w3.org/TR/REC-html40"><head><meta http-equiv=Content-Type content="text/html; charset=utf-8"><meta name=Generator content="Microsoft Word 15 (filtered medium)"><style><!--
/* Font Definitions */
@font-face
        {font-family:"Cambria Math";
        panose-1:2 4 5 3 5 4 6 3 2 4;}
@font-face
        {font-family:Calibri;
        panose-1:2 15 5 2 2 2 4 3 2 4;}
/* Style Definitions */
p.MsoNormal, li.MsoNormal, div.MsoNormal
        {margin:0cm;
        margin-bottom:.0001pt;
        font-size:11.0pt;
        font-family:"Calibri",sans-serif;}
a:link, span.MsoHyperlink
        {mso-style-priority:99;
        color:blue;
        text-decoration:underline;}
a:visited, span.MsoHyperlinkFollowed
        {mso-style-priority:99;
        color:purple;
        text-decoration:underline;}
p.msonormal0, li.msonormal0, div.msonormal0
        {mso-style-name:msonormal;
        mso-margin-top-alt:auto;
        margin-right:0cm;
        mso-margin-bottom-alt:auto;
        margin-left:0cm;
        font-size:11.0pt;
        font-family:"Calibri",sans-serif;}
span.EmailStyle18
        {mso-style-type:personal;
        font-family:"Calibri",sans-serif;
        color:windowtext;}
span.EmailStyle19
        {mso-style-type:personal-compose;
        font-family:"Calibri",sans-serif;
        color:windowtext;}
.MsoChpDefault
        {mso-style-type:export-only;
        font-family:"Calibri",sans-serif;
        mso-fareast-language:EN-US;}
@page WordSection1
        {size:612.0pt 792.0pt;
        margin:70.85pt 70.85pt 70.85pt 70.85pt;}
div.WordSection1
        {page:WordSection1;}
Hi Krutika,

Sure, here is volume info:

root@sr-09-loc-50-14-18:/# gluster volume info testvol

Volume Name: testvol
Type: Distributed-Replicate
Volume ID: 30426017-59d5-4091-b6bc-279a905b704a
Status: Started
Snapshot Count: 0
Number of Bricks: 10 x 2 = 20
Transport-type: tcp
Bricks:
Brick1: sr-09-loc-50-14-18:/bricks/brick1
Brick2: sr-09-loc-50-14-18:/bricks/brick2
Brick3: sr-09-loc-50-14-18:/bricks/brick3
Brick4: sr-09-loc-50-14-18:/bricks/brick4
Brick5: sr-09-loc-50-14-18:/bricks/brick5
Brick6: sr-09-loc-50-14-18:/bricks/brick6
Brick7: sr-09-loc-50-14-18:/bricks/brick7
Brick8: sr-09-loc-50-14-18:/bricks/brick8
Brick9: sr-09-loc-50-14-18:/bricks/brick9
Brick10: sr-09-loc-50-14-18:/bricks/brick10
Brick11: sr-10-loc-50-14-18:/bricks/brick1
Brick12: sr-10-loc-50-14-18:/bricks/brick2
Brick13: sr-10-loc-50-14-18:/bricks/brick3
Brick14: sr-10-loc-50-14-18:/bricks/brick4
Brick15: sr-10-loc-50-14-18:/bricks/brick5
Brick16: sr-10-loc-50-14-18:/bricks/brick6
Brick17: sr-10-loc-50-14-18:/bricks/brick7
Brick18: sr-10-loc-50-14-18:/bricks/brick8
Brick19: sr-10-loc-50-14-18:/bricks/brick9
Brick20: sr-10-loc-50-14-18:/bricks/brick10
Options Reconfigured:
features.shard-block-size: 32MB
features.shard: on
transport.address-family: inet
nfs.disable: on

-Gencer.

From: Krutika Dhananjay [mailto:kdhananj@redhat.com]
Sent: Friday, June 30, 2017 2:50 PM
To: gencer@gencgiyen.com
Cc: gluster-user <gluster-users@gluster.org>
Subject: Re: [Gluster-users] Very slow performance on Sharded GlusterFS

Could you please provide the volume-info output?

-Krutika

On Fri, Jun 30, 2017 at 4:23 PM, <gencer@gencgiyen.com> wrote:

Hi,

I have 2 nodes with 20 bricks in total (10+10).

First test:

2 nodes with Distributed-Striped-Replicated (2 x 2)
10GbE link between the nodes

"dd" performance: 400 MB/s and higher
Downloading a large file from the internet directly onto the gluster volume: 250-300 MB/s
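(The exact test commands are not shown in this thread; the dd figures above and below presumably come from a large sequential write against the FUSE mount, which might look roughly like the following. The mount point /mnt/testvol and the sizes are assumptions:)

root@sr-09-loc-50-14-18:/# dd if=/dev/zero of=/mnt/testvol/testfile bs=1M count=10240 conv=fsync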
Now the same test, without Stripe but with sharding. The results are the same whether I set the shard size to 4MB or 32MB. (Again 2x replica here.)

dd performance: 70 MB/s
Download directly onto the gluster volume: 60 MB/s

Now, if we run this test twice at the same time (two dd runs or two downloads in parallel), it drops below 25 MB/s each, or slower.

I thought sharding would be at least equal, or maybe a little slower, but these results are terribly slow.

I tried tuning (cache, window-size, etc.). Nothing helps.

GlusterFS 3.11 on Debian 9. The kernel is also tuned. Disks are xfs and 4TB each.

Is there any tweak/tuning out there to make it fast?

Or is this expected behavior? If it is, that is unacceptable: it is so slow that I cannot use this in production.

The reason I use shard instead of stripe is that I would like to eliminate files that are bigger than the brick size.

Thanks,
Gencer.

_______________________________________________
Gluster-users mailing list
Gluster-users@gluster.org
http://lists.gluster.org/mailman/listinfo/gluster-users
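(On the tuning mentioned above: the cache and window-size settings referred to are presumably the per-volume client-side options applied with "gluster volume set". The option names below exist in GlusterFS; the values are only illustrative assumptions:)

root@sr-09-loc-50-14-18:/# gluster volume set testvol performance.cache-size 256MB
root@sr-09-loc-50-14-18:/# gluster volume set testvol performance.write-behind-window-size 4MB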