Subject: RE: [Gluster-users] Gluster native mount is really slow compared to nfs
Hello,

Here is a speed test with a new setup we just made with Gluster 3.10; there are no other differences except glusterfs versus nfs. NFS comes out roughly 74 times faster (4156 vs. 56 files/sec):

root@app1:~/smallfile-master# mount -t glusterfs -o use-readdirp=no,log-level=WARNING,log-file=/var/log/glusterxxx.log 192.168.140.41:/www /var/www
root@app1:~/smallfile-master# ./smallfile_cli.py --top /var/www/test --host-set 192.168.140.41 --threads 8 --files 500 --file-size 64 --record-size 64
smallfile version 3.0
 hosts in test : ['192.168.140.41']
 top test directory(s) : ['/var/www/test']
 operation : cleanup
 files/thread : 500
 threads : 8
 record size (KB, 0 = maximum) : 64
 file size (KB) : 64
 file size distribution : fixed
 files per dir : 100
 dirs per dir : 10
 threads share directories? : N
 filename prefix :
 filename suffix :
 hash file number into dir.? : N
 fsync after modify? : N
 pause between files (microsec) : 0
 finish all requests? : Y
 stonewall? : Y
 measure response times? : N
 verify read? : Y
 verbose? : False
 log to stderr? : False
 ext.attr.size : 0
 ext.attr.count : 0
 permute host directories? : N
 remote program directory : /root/smallfile-master
 network thread sync. dir. : /var/www/test/network_shared
starting all threads by creating starting gate file /var/www/test/network_shared/starting_gate.tmp
host = 192.168.140.41,thr = 00,elapsed = 68.845450,files = 500,records = 0,status = ok
host = 192.168.140.41,thr = 01,elapsed = 67.601088,files = 500,records = 0,status = ok
host = 192.168.140.41,thr = 02,elapsed = 58.677994,files = 500,records = 0,status = ok
host = 192.168.140.41,thr = 03,elapsed = 65.901922,files = 500,records = 0,status = ok
host = 192.168.140.41,thr = 04,elapsed = 66.971720,files = 500,records = 0,status = ok
host = 192.168.140.41,thr = 05,elapsed = 71.245102,files = 500,records = 0,status = ok
host = 192.168.140.41,thr = 06,elapsed = 67.574845,files = 500,records = 0,status = ok
host = 192.168.140.41,thr = 07,elapsed = 54.263242,files = 500,records = 0,status = ok
total threads = 8
total files = 4000
100.00% of requested files processed, minimum is 70.00
71.245102 sec elapsed time
56.144211 files/sec

umount /var/www

root@app1:~/smallfile-master# mount -t nfs -o tcp 192.168.140.41:/www /var/www
root@app1:~/smallfile-master# ./smallfile_cli.py --top /var/www/test --host-set 192.168.140.41 --threads 8 --files 500 --file-size 64 --record-size 64
smallfile version 3.0
 hosts in test : ['192.168.140.41']
 top test directory(s) : ['/var/www/test']
 operation : cleanup
 files/thread : 500
 threads : 8
 record size (KB, 0 = maximum) : 64
 file size (KB) : 64
 file size distribution : fixed
 files per dir : 100
 dirs per dir : 10
 threads share directories? : N
 filename prefix :
 filename suffix :
 hash file number into dir.? : N
 fsync after modify? : N
 pause between files (microsec) : 0
 finish all requests? : Y
 stonewall? : Y
 measure response times? : N
 verify read? : Y
 verbose? : False
 log to stderr? : False
 ext.attr.size : 0
 ext.attr.count : 0
 permute host directories? : N
 remote program directory : /root/smallfile-master
 network thread sync. dir. : /var/www/test/network_shared
starting all threads by creating starting gate file /var/www/test/network_shared/starting_gate.tmp
host = 192.168.140.41,thr = 00,elapsed = 0.962424,files = 500,records = 0,status = ok
host = 192.168.140.41,thr = 01,elapsed = 0.942673,files = 500,records = 0,status = ok
host = 192.168.140.41,thr = 02,elapsed = 0.940622,files = 500,records = 0,status = ok
host = 192.168.140.41,thr = 03,elapsed = 0.915218,files = 500,records = 0,status = ok
host = 192.168.140.41,thr = 04,elapsed = 0.934349,files = 500,records = 0,status = ok
host = 192.168.140.41,thr = 05,elapsed = 0.922466,files = 500,records = 0,status = ok
host = 192.168.140.41,thr = 06,elapsed = 0.954381,files = 500,records = 0,status = ok
host = 192.168.140.41,thr = 07,elapsed = 0.946127,files = 500,records = 0,status = ok
total threads = 8
total files = 4000
100.00% of requested files processed, minimum is 70.00
0.962424 sec elapsed time
4156.173189 files/sec
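For anyone who wants to repeat this, the whole A/B comparison fits in a short script. This is only a sketch built from the exact commands above (volume, IPs, paths and smallfile parameters as in this setup):

#!/bin/bash
# A/B benchmark: same volume and same smallfile parameters, only the mount
# type changes. Run from the smallfile checkout directory, as in the runs above.
set -e

run_bench() {
    ./smallfile_cli.py --top /var/www/test --host-set 192.168.140.41 \
        --threads 8 --files 500 --file-size 64 --record-size 64
}

# Run 1: GlusterFS native (FUSE) mount
mount -t glusterfs -o use-readdirp=no,log-level=WARNING,log-file=/var/log/glusterxxx.log \
    192.168.140.41:/www /var/www
run_bench
umount /var/www

# Run 2: Gluster NFS mount over TCP
mount -t nfs -o tcp 192.168.140.41:/www /var/www
run_bench
umount /var/www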
-----Original message-----
From: Jo Goossens <jo.goossens@hosted-power.com>
Sent: Tue 11-07-2017 11:26
Subject: Re: [Gluster-users] Gluster native mount is really slow compared to nfs
To: gluster-users@gluster.org; Soumya Koduri <skoduri@redhat.com>
CC: Ambarish Soman <asoman@redhat.com>

Hi all,

One more thing: we have 3 app servers with gluster on them, replicated across 3 gluster nodes (so the gluster nodes are app servers at the same time). We could actually almost work locally if we didn't need to have the same files on the 3 nodes for redundancy :)

The initial cluster was created like this:

gluster volume create www replica 3 transport tcp 192.168.140.41:/gluster/www 192.168.140.42:/gluster/www 192.168.140.43:/gluster/www force
gluster volume set www network.ping-timeout 5
gluster volume set www performance.cache-size 1024MB
gluster volume set www nfs.disable on # No need for NFS currently
gluster volume start www
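As a quick sanity check, the options that actually took effect can be verified on any of the nodes; a minimal sketch using the standard gluster CLI (volume name as above):

gluster volume info www                         # brick layout plus all reconfigured options
gluster volume get www performance.cache-size   # effective value of a single option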
To my understanding this still wouldn't explain why NFS gets such great performance compared to the native mount ...

Regards
Jo

-----Original message-----
From: Soumya Koduri <skoduri@redhat.com>
Sent: Tue 11-07-2017 11:16
Subject: Re: [Gluster-users] Gluster native mount is really slow compared to nfs
To: Jo Goossens <jo.goossens@hosted-power.com>; gluster-users@gluster.org
CC: Ambarish Soman <asoman@redhat.com>; Karan Sandha <ksandha@redhat.com>

+ Ambarish

On 07/11/2017 02:31 PM, Jo Goossens wrote:
> Hello,
>
> We tried tons of settings to get a php app running on a native gluster
> mount:
>
> e.g.: 192.168.140.41:/www /var/www glusterfs
> defaults,_netdev,backup-volfile-servers=192.168.140.42:192.168.140.43,direct-io-mode=disable
> 0 0
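For reference, that fstab entry corresponds to a manual mount along these lines; a sketch with the same option string (backup-volfile-servers only supplies fallback servers for fetching the volume file at mount time):

mount -t glusterfs \
    -o backup-volfile-servers=192.168.140.42:192.168.140.43,direct-io-mode=disable \
    192.168.140.41:/www /var/www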
>
> I tried some mount variants in order to speed things up, without luck.
>
> After that I tried nfs (native gluster nfs 3 and ganesha nfs 4); it was
> a crazy performance difference.
>
> e.g.: 192.168.140.41:/www /var/www nfs4 defaults,_netdev 0 0
>
> I tried a test like this to confirm the slowness:
>
> ./smallfile_cli.py --top /var/www/test --host-set 192.168.140.41
> --threads 8 --files 5000 --file-size 64 --record-size 64
>
> This test finished in around 1.5 seconds with NFS and in more than 250
> seconds without nfs (I can't remember the exact numbers, but I
> reproduced it several times for both).
>
> With the native gluster mount the php app had loading times of over 10
> seconds; with the nfs mount the php app loaded in around 1 second at
> most, often less (reproduced several times).
>
> I tried all kinds of performance settings and variants of this, but
> nothing helped; the difference stayed huge. Here are some of the
> settings played with, in random order:

Requesting Ambarish & Karan (cc'ed), who have been working on evaluating
the performance of the various access protocols gluster supports, to look
at the settings below and provide input.

Thanks,
Soumya

> gluster volume set www features.cache-invalidation on
> gluster volume set www features.cache-invalidation-timeout 600
> gluster volume set www performance.stat-prefetch on
> gluster volume set www performance.cache-samba-metadata on
> gluster volume set www performance.cache-invalidation on
> gluster volume set www performance.md-cache-timeout 600
> gluster volume set www network.inode-lru-limit 250000
>
> gluster volume set www performance.cache-refresh-timeout 60
> gluster volume set www performance.read-ahead disable
> gluster volume set www performance.readdir-ahead on
> gluster volume set www performance.parallel-readdir on
> gluster volume set www performance.write-behind-window-size 4MB
> gluster volume set www performance.io-thread-count 64
>
> gluster volume set www performance.client-io-threads on
>
> gluster volume set www performance.cache-size 1GB
> gluster volume set www performance.quick-read on
> gluster volume set www performance.flush-behind on
> gluster volume set www performance.write-behind on
> gluster volume set www nfs.disable on
>
> gluster volume set www client.event-threads 3
> gluster volume set www server.event-threads 3
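Before any further measurements it may help to return the volume to a known baseline; a sketch using the standard gluster volume reset command to undo every option tried above (nfs.disable is deliberately left alone, since it was set on purpose at creation time):

#!/bin/bash
# Reset each experimented-with option on volume "www" back to its default.
for opt in features.cache-invalidation features.cache-invalidation-timeout \
           performance.stat-prefetch performance.cache-samba-metadata \
           performance.cache-invalidation performance.md-cache-timeout \
           network.inode-lru-limit performance.cache-refresh-timeout \
           performance.read-ahead performance.readdir-ahead \
           performance.parallel-readdir performance.write-behind-window-size \
           performance.io-thread-count performance.client-io-threads \
           performance.cache-size performance.quick-read \
           performance.flush-behind performance.write-behind \
           client.event-threads server.event-threads; do
    gluster volume reset www "$opt"
done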
>
> The NFS HA setup adds a lot of complexity which we wouldn't need at all
> in our setup. Could you please explain what is going on here? Is NFS
> the only solution to get acceptable performance? Did I miss one crucial
> setting perhaps?
>
> We're really desperate; thanks a lot for your help!
>
> PS: We tried with gluster 3.11 and 3.8 on Debian; both had terrible
> performance when not used with nfs.
>
> Kind regards
>
> Jo Goossens
>
> _______________________________________________
> Gluster-users mailing list
> Gluster-users@gluster.org
> http://lists.gluster.org/mailman/listinfo/gluster-users

_______________________________________________
Gluster-users mailing list
Gluster-users@gluster.org
http://lists.gluster.org/mailman/listinfo/gluster-users