<!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 4.01 Transitional//EN" "http://www.w3.org/TR/html4/loose.dtd"><html>
<head>
<meta name="Generator" content="Zarafa WebAccess v7.1.14-51822">
<meta http-equiv="Content-Type" content="text/html; charset=utf-8">
<title>RE: [Gluster-users] Gluster native mount is really slow compared to nfs</title>
<style type="text/css">
body
{
font-family: Arial, Verdana, Sans-Serif ! important;
font-size: 12px;
padding: 5px 5px 5px 5px;
margin: 0px;
border-style: none;
background-color: #ffffff;
}
p, ul, li
{
margin-top: 0px;
margin-bottom: 0px;
}
</style>
</head>
<body>
<p>Hello,</p><p> </p><p> </p><p>While there are probably other interesting parameters and options in gluster itself, for us the largest difference in this speed test, and also for our website (real-world performance), came from the negative-timeout value during mount. A value of only 1 second seems to solve so many problems; can anyone knowledgeable explain why this is the case? </p><p> </p><p>I suppose this would be a better default ... (see the example fstab entry at the very end of this mail for making the setting permanent) </p><p> </p><p>I'm still wondering whether there is a big underlying issue in gluster that causes the difference to be so gigantic.</p><p><br /><br />Regards</p><p>Jo</p><p> </p><p><br /> </p><blockquote style="border-left: 2px solid #325FBA; padding-left: 5px;margin-left:5px;">-----Original message-----<br /><strong>From:</strong>        Jo Goossens <jo.goossens@hosted-power.com><br /><strong>Sent:</strong>        Tue 11-07-2017 18:48<br /><strong>Subject:</strong>        RE: [Gluster-users] Gluster native mount is really slow compared to nfs<br /><strong>CC:</strong>        gluster-users@gluster.org; <br /><strong>To:</strong>        Vijay Bellur <vbellur@redhat.com>; <br /> <div><p>PS: I just compared these two mounts:</p><p> </p><div>mount -t glusterfs -o <b>negative-timeout=1</b>,use-readdirp=no,log-level=WARNING,log-file=/var/log/glusterxxx.log 192.168.140.41:/www /var/www</div><div> </div><div>mount -t glusterfs -o use-readdirp=no,log-level=WARNING,log-file=/var/log/glusterxxx.log 192.168.140.41:/www /var/www</div><div> </div><div>So the only difference is a 1-second negative timeout...</div><div> </div><p>In this particular test: ./smallfile_cli.py --top /var/www/test --host-set 192.168.140.41 --threads 8 --files 50000 --file-size 64 --record-size 64</p><div> </div><div> </div><div>The result is about 4 seconds with the 1-second negative timeout defined and <u>many, many minutes without it</u> (I gave up after 15 minutes of waiting).</div><div> </div><div>I will move on to some real-world tests now to see how it performs there.</div><div> </div><div> </div><div>Regards</div><div>Jo</div><p> </p><p> </p><p><br /> </p><blockquote style="border-left: 2px solid #325FBA; padding-left: 5px;margin-left:5px;">-----Original message-----<br /><strong>From:</strong>        Jo Goossens <jo.goossens@hosted-power.com><br /><strong>Sent:</strong>        Tue 11-07-2017 18:23<br /><strong>Subject:</strong>        Re: [Gluster-users] Gluster native mount is really slow compared to nfs<br /><strong>To:</strong>        Vijay Bellur <vbellur@redhat.com>; <br /><strong>CC:</strong>        gluster-users@gluster.org; <br /> <div><p>Hello Vijay,</p><p> </p><p> </p><p>What do you mean exactly? 
What info is missing?</p><p> </p><p>PS: I already found out that for this particular test all the difference is made by negative-timeout=600; when I remove it, it's much, much slower again.</p><p> </p><p> </p><p>Regards</p><p>Jo<br /> </p><blockquote style="border-left: 2px solid #325FBA; padding-left: 5px;margin-left:5px;">-----Original message-----<br /><strong>From:</strong>        Vijay Bellur <vbellur@redhat.com><br /><strong>Sent:</strong>        Tue 11-07-2017 18:16<br /><strong>Subject:</strong>        Re: [Gluster-users] Gluster native mount is really slow compared to nfs<br /><strong>To:</strong>        Jo Goossens <jo.goossens@hosted-power.com>; <br /><strong>CC:</strong>        gluster-users@gluster.org; Joe Julian <joe@julianfamily.org>; <br /> <div dir="ltr"><br /><div><br /><div>On Tue, Jul 11, 2017 at 11:39 AM, Jo Goossens <span dir="ltr"><<a href="mailto:jo.goossens@hosted-power.com" target="_blank">jo.goossens@hosted-power.com</a>></span> wrote:<br /><blockquote style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex"> <div><p>Hello Joe,</p><p> </p><p> </p><p>I just did a mount like this (the added options are in bold):</p><p> </p><div>mount -t glusterfs -o <b>attribute-timeout=600,entry-timeout=600,negative-timeout=600,fopen-keep-cache</b>,use-readdirp=no,log-level=WARNING,log-file=/var/log/glusterxxx.log 192.168.140.41:/www /var/www</div><div> </div><p>Results:</p><p> </p><div>root@app1:~/smallfile-master# ./smallfile_cli.py --top /var/www/test --host-set 192.168.140.41 --threads 8 --files 5000 --file-size 64 --record-size 64</div><div><div><div>smallfile version 3.0</div><div> hosts in test : ['192.168.140.41']</div><div> top test directory(s) : ['/var/www/test']</div><div> operation : cleanup</div><div> files/thread : 5000</div><div> threads : 8</div><div> record size (KB, 0 = maximum) : 64</div><div> file size (KB) : 64</div><div> file size distribution : fixed</div><div> files per dir : 100</div><div> dirs per dir : 10</div><div> threads share directories? : N</div><div> filename prefix :</div><div> filename suffix :</div><div> hash file number into dir.? : N</div><div> fsync after modify? : N</div><div> pause between files (microsec) : 0</div><div> finish all requests? : Y</div><div> stonewall? : Y</div><div> measure response times? : N</div><div> verify read? : Y</div><div> verbose? : False</div><div> log to stderr? : False</div><div> ext.attr.size : 0</div><div> ext.attr.count : 0</div><div> permute host directories? : N</div><div> remote program directory : /root/smallfile-master</div><div> network thread sync. dir. 
: /var/www/test/network_shared</div><div>starting all threads by creating starting gate file /var/www/test/network_shared/starting_gate.tmp</div></div></div><div>host = 192.168.140.41,thr = 00,elapsed = 1.232004,files = 5000,records = 0,status = ok</div><div>host = 192.168.140.41,thr = 01,elapsed = 1.148738,files = 5000,records = 0,status = ok</div><div>host = 192.168.140.41,thr = 02,elapsed = 1.130913,files = 5000,records = 0,status = ok</div><div>host = 192.168.140.41,thr = 03,elapsed = 1.183088,files = 5000,records = 0,status = ok</div><div>host = 192.168.140.41,thr = 04,elapsed = 1.220752,files = 5000,records = 0,status = ok</div><div>host = 192.168.140.41,thr = 05,elapsed = 1.228039,files = 5000,records = 0,status = ok</div><div>host = 192.168.140.41,thr = 06,elapsed = 1.216787,files = 5000,records = 0,status = ok</div><div>host = 192.168.140.41,thr = 07,elapsed = 1.229036,files = 5000,records = 0,status = ok</div><div>total threads = 8</div><div>total files = 40000</div><div>100.00% of requested files processed, minimum is 70.00</div><div>1.232004 sec elapsed time</div><div>32467.428972 files/sec</div><div> </div><p> </p><div>root@app1:~/smallfile-master# ./smallfile_cli.py --top /var/www/test --host-set 192.168.140.41 --threads 8 --files 50000 --file-size 64 --record-size 64</div><div>smallfile version 3.0</div><div> hosts in test : ['192.168.140.41']</div><div> top test directory(s) : ['/var/www/test']</div><div> operation : cleanup</div><div> files/thread : 50000</div><div> threads : 8</div><div> record size (KB, 0 = maximum) : 64</div><div> file size (KB) : 64</div><div> file size distribution : fixed</div><div> files per dir : 100</div><div> dirs per dir : 10</div><div> threads share directories? : N</div><div> filename prefix :</div><div> filename suffix :</div><div> hash file number into dir.? : N</div><div> fsync after modify? : N</div><div> pause between files (microsec) : 0</div><div> finish all requests? : Y</div><div> stonewall? : Y</div><div> measure response times? : N</div><div> verify read? : Y</div><div> verbose? : False</div><div> log to stderr? : False</div><div> ext.attr.size : 0</div><div> ext.attr.count : 0</div><div> permute host directories? : N</div><div> remote program directory : /root/smallfile-master</div><div> network thread sync. dir. 
: /var/www/test/network_shared</div><div>starting all threads by creating starting gate file /var/www/test/network_shared/starting_gate.tmp</div><div>host = 192.168.140.41,thr = 00,elapsed = 4.242312,files = 50000,records = 0,status = ok</div><div>host = 192.168.140.41,thr = 01,elapsed = 4.250831,files = 50000,records = 0,status = ok</div><div>host = 192.168.140.41,thr = 02,elapsed = 3.771269,files = 50000,records = 0,status = ok</div><div>host = 192.168.140.41,thr = 03,elapsed = 4.060653,files = 50000,records = 0,status = ok</div><div>host = 192.168.140.41,thr = 04,elapsed = 3.880653,files = 50000,records = 0,status = ok</div><div>host = 192.168.140.41,thr = 05,elapsed = 3.847107,files = 50000,records = 0,status = ok</div><div>host = 192.168.140.41,thr = 06,elapsed = 3.895537,files = 50000,records = 0,status = ok</div><div>host = 192.168.140.41,thr = 07,elapsed = 3.966394,files = 50000,records = 0,status = ok</div><div>total threads = 8</div><div>total files = 400000</div><div>100.00% of requested files processed, minimum is 70.00</div><div>4.250831 sec elapsed time</div><div>94099.245073 files/sec</div><div>root@app1:~/smallfile-master#</div><div> </div><p> </p><p>As you can see it's now crazy fast, I think close to or faster than nfs!! What the hell!?</p><p> </p><p>I'm so excited I'm posting already. Any suggestions for those parameters? I will do additional testing over here, because this is ridiculous. That would mean the defaults are no good at all...</p><p> </p></div></blockquote><div> </div><div>Would it be possible to profile the client [1] with defaults and the set of options used now? That could help in understanding the performance delta better.</div><div> </div><div>Thanks,</div><div>Vijay</div><div> </div><div>[1] <a href="https://gluster.readthedocs.io/en/latest/Administrator%20Guide/Performance%20Testing/#client-side-profiling" target="_blank">https://gluster.readthedocs.io/en/latest/Administrator%20Guide/Performance%20Testing/#client-side-profiling</a> </div><div> </div></div></div></div> </blockquote></div> <pre>
_______________________________________________<br /> Gluster-users mailing list<br /> Gluster-users@gluster.org<br /> http://lists.gluster.org/mailman/listinfo/gluster-users</pre> </blockquote></div> </blockquote>
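<p> </p><p>PS: To make the 1-second negative timeout survive reboots, an /etc/fstab entry along the lines below should work. This is only a sketch: the server address, volume, mount point and log options are simply copied from the mount command above, while the "defaults,_netdev" part is the usual fstab boilerplate for network filesystems, so please verify it against your own setup and gluster version before relying on it.</p>
<pre>
# /etc/fstab - GlusterFS native (FUSE) mount with a 1-second negative timeout
# Sketch only: server, volume, mount point and log file are the ones from the
# tests above; _netdev tells the init system to wait for the network first.
192.168.140.41:/www  /var/www  glusterfs  defaults,_netdev,negative-timeout=1,use-readdirp=no,log-level=WARNING,log-file=/var/log/glusterxxx.log  0 0
</pre>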
</body>
</html>