<div dir="ltr"><div>I did it, then dit smallfile_cli.py test in new folder, and I got same performance</div><div><br></div><div>I checked more about create files on linux and windows with tcpdump:<br></div><div>with windows, each create file there an readdir and get the list of all the files in the directory so at the end the readdir is pretty huge.</div><div><br></div><div>I tried with samba and got same results too. But instead of nfs there is no receive traffic, only send traffic.</div><div>I start profiling the vol1 and got differences about linux and windows.</div><div>Windows for each create file do opendir, setattr, 4xreaddir</div><div><br></div><div><br></div><div><div>Volume Name: vol1<br></div><div>Type: Replicate</div><div>Volume ID: 1e48f990-c4db-44f0-a2d5-caa0cb1996de</div><div>Status: Stopped</div><div>Snapshot Count: 0</div><div>Number of Bricks: 1 x 2 = 2</div><div>Transport-type: tcp</div><div>Bricks:</div><div>Brick1: gl1:/opt/data/glusterfs/brick1</div><div>Brick2: gl2:/opt/data/glusterfs/brick1</div><div><div>Options Reconfigured:</div><div>diagnostics.count-fop-hits: on</div><div>diagnostics.latency-measurement: on</div><div>nfs.disable: off</div><div>cluster.lookup-optimize: on</div><div>client.event-threads: 4</div><div>server.event-threads: 4</div><div>network.inode-lru-limit: 90000</div><div>performance.md-cache-timeout: 600</div><div>performance.cache-invalidation: on</div><div>features.cache-invalidation-timeout: 600</div><div>features.cache-invalidation: on</div><div>transport.address-family: inet</div></div></div><div><br></div><div><div>sudo python smallfile_cli.py --operation create --threads 1 --file-size 30 --files 5000 --files-per-dir 10000 --top /mnt/cifs/test5</div><div>smallfile version 3.0</div><div> hosts in test : None</div><div> top test directory(s) : ['/mnt/cifs/test5']</div><div> operation : create</div><div> files/thread : 5000</div><div> threads : 1</div><div> record size (KB, 0 = maximum) : 0</div><div> file size (KB) : 30</div><div> file size distribution : fixed</div><div> files per dir : 10000</div><div> dirs per dir : 10</div><div> threads share directories? : N</div><div> filename prefix :</div><div> filename suffix :</div><div> hash file number into dir.? : N</div><div> fsync after modify? : N</div><div> pause between files (microsec) : 0</div><div> finish all requests? : Y</div><div> stonewall? : Y</div><div> measure response times? : N</div><div> verify read? : Y</div><div> verbose? : False</div><div> log to stderr? : False</div><div> ext.attr.size : 0</div><div> ext.attr.count : 0</div><div>host = <a href="http://cm2.lab.com">cm2.lab.com</a>,thr = 00,elapsed = 40.276758,files = 5000,records = 5000,status = ok</div><div>total threads = 1</div><div>total files = 5000</div><div>total data = 0.143 GB</div><div>100.00% of requested files processed, minimum is 90.00</div><div>40.276758 sec elapsed time</div><div>124.141074 files/sec</div><div>124.141074 IOPS</div><div>3.636946 MB/sec</div><div><br></div><div><br></div><div>root@gl1 ~]# gluster volume profile vol1 info</div><div>Brick: gl1:/opt/data/glusterfs/brick1</div><div>-------------------------------------</div><div>Cumulative Stats:</div><div> Block Size: 8b+ 16384b+</div><div> No. of Reads: 0 0</div><div>No. of Writes: 1 5000</div><div> %-latency Avg-latency Min-Latency Max-Latency No. 
Volume Name: vol1
Type: Replicate
Volume ID: 1e48f990-c4db-44f0-a2d5-caa0cb1996de
Status: Stopped
Snapshot Count: 0
Number of Bricks: 1 x 2 = 2
Transport-type: tcp
Bricks:
Brick1: gl1:/opt/data/glusterfs/brick1
Brick2: gl2:/opt/data/glusterfs/brick1
Options Reconfigured:
diagnostics.count-fop-hits: on
diagnostics.latency-measurement: on
nfs.disable: off
cluster.lookup-optimize: on
client.event-threads: 4
server.event-threads: 4
network.inode-lru-limit: 90000
performance.md-cache-timeout: 600
performance.cache-invalidation: on
features.cache-invalidation-timeout: 600
features.cache-invalidation: on
transport.address-family: inet
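The per-FOP tables below come from gluster's built-in profiler; the two
diagnostics.* options above are what enabling it sets. Roughly:

  gluster volume profile vol1 start
  # ... run the smallfile workload ...
  gluster volume profile vol1 info
  gluster volume profile vol1 stop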
sudo python smallfile_cli.py --operation create --threads 1 --file-size 30 --files 5000 --files-per-dir 10000 --top /mnt/cifs/test5
smallfile version 3.0
                          hosts in test : None
                  top test directory(s) : ['/mnt/cifs/test5']
                              operation : create
                           files/thread : 5000
                                threads : 1
          record size (KB, 0 = maximum) : 0
                         file size (KB) : 30
                 file size distribution : fixed
                          files per dir : 10000
                           dirs per dir : 10
             threads share directories? : N
                        filename prefix :
                        filename suffix :
            hash file number into dir.? : N
                    fsync after modify? : N
         pause between files (microsec) : 0
                   finish all requests? : Y
                             stonewall? : Y
                measure response times? : N
                           verify read? : Y
                               verbose? : False
                         log to stderr? : False
                          ext.attr.size : 0
                         ext.attr.count : 0
host = cm2.lab.com,thr = 00,elapsed = 40.276758,files = 5000,records = 5000,status = ok
total threads = 1
total files = 5000
total data = 0.143 GB
100.00% of requested files processed, minimum is 90.00
40.276758 sec elapsed time
124.141074 files/sec
124.141074 IOPS
3.636946 MB/sec
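For reference, /mnt/cifs above is a Linux CIFS mount of the Samba share; it
was set up along these lines (credentials and SMB protocol version are
assumptions, not taken from this thread):

  mount -t cifs //192.168.47.11/gl /mnt/cifs -o username=test,password=secret,vers=3.0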
[root@gl1 ~]# gluster volume profile vol1 info
Brick: gl1:/opt/data/glusterfs/brick1
-------------------------------------
Cumulative Stats:
   Block Size:          8b+    16384b+
 No. of Reads:            0          0
No. of Writes:            1       5000
 %-latency   Avg-latency   Min-Latency   Max-Latency   No. of calls  Fop
 ---------   -----------   -----------   -----------   ------------  ----
      0.00       0.00 us       0.00 us       0.00 us           5002  RELEASE
      0.00       0.00 us       0.00 us       0.00 us              3  RELEASEDIR
      0.00      24.00 us      24.00 us      24.00 us              1  IPC
      0.00      76.00 us      76.00 us      76.00 us              1  STAT
      0.00     223.00 us     223.00 us     223.00 us              1  GETXATTR
      0.00      66.27 us      43.00 us      97.00 us             11  SETXATTR
      0.00      57.08 us      37.00 us     133.00 us             13  SETATTR
      0.01     155.09 us     125.00 us     215.00 us             11  MKDIR
      0.01      52.56 us      19.00 us     149.00 us             43  STATFS
      0.02      35.16 us       8.00 us     181.00 us             92  INODELK
      0.93      29.83 us      11.00 us    1064.00 us           5002  FLUSH
      1.28      41.10 us      12.00 us    1231.00 us           5002  LK
      1.96      31.42 us      10.00 us    1821.00 us          10002  FINODELK
      2.56      40.98 us      11.00 us    1270.00 us          10026  ENTRYLK
      3.27     104.96 us      57.00 us    3182.00 us           5001  WRITE
      6.07      97.23 us      47.00 us    1376.00 us          10002  FXATTROP
     14.78      58.54 us      26.00 us   35865.00 us          40477  LOOKUP
     69.09    2214.82 us     102.00 us 1710730.00 us           5002  CREATE

    Duration: 189 seconds
   Data Read: 0 bytes
Data Written: 153600008 bytes

Brick: gl2:/opt/data/glusterfs/brick1
-------------------------------------
Cumulative Stats:
   Block Size:          8b+    16384b+
 No. of Reads:            0          0
No. of Writes:            1       5000
 %-latency   Avg-latency   Min-Latency   Max-Latency   No. of calls  Fop
 ---------   -----------   -----------   -----------   ------------  ----
      0.00       0.00 us       0.00 us       0.00 us           5002  RELEASE
      0.00       0.00 us       0.00 us       0.00 us              3  RELEASEDIR
      0.00      55.00 us      55.00 us      55.00 us              1  GETXATTR
      0.01      52.23 us      45.00 us      69.00 us             13  SETATTR
      0.01      74.00 us      52.00 us     167.00 us             11  SETXATTR
      0.03     160.55 us     142.00 us     211.00 us             11  MKDIR
      0.03      46.40 us      27.00 us     188.00 us             43  STATFS
      0.04      28.74 us      11.00 us      95.00 us             92  INODELK
      2.15      30.38 us      12.00 us     310.00 us           5002  FLUSH
      2.82      39.81 us      12.00 us    1160.00 us           5002  LK
      3.96      27.92 us      10.00 us    1354.00 us          10026  ENTRYLK
      4.53      31.97 us       9.00 us    1144.00 us          10002  FINODELK
      6.55      92.48 us      55.00 us     554.00 us           5001  WRITE
     12.57      88.74 us      57.00 us     528.00 us          10002  FXATTROP
     30.45      53.13 us      27.00 us    3289.00 us          40477  LOOKUP
     36.85     520.35 us     108.00 us  459446.00 us           5002  CREATE

    Duration: 174 seconds
   Data Read: 0 bytes
Data Written: 153600008 bytes
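When comparing runs I just pull the interesting rows out of the cumulative
tables (a convenience one-liner, assuming the output format above):

  gluster volume profile vol1 info | grep -E 'Brick|CREATE|LOOKUP|READDIR|Duration'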
C:\Users\Administrator\smallfile>smallfile_cli.py --operation create --threads 1 --file-size 30 --files 5000 --files-per-dir 10000 --top \\192.168.47.11\gl\test5
smallfile version 3.0
                          hosts in test : None
                  top test directory(s) : ['\\\\192.168.47.11\\gl\\test5']
                              operation : create
                           files/thread : 5000
                                threads : 1
          record size (KB, 0 = maximum) : 0
                         file size (KB) : 30
                 file size distribution : fixed
                          files per dir : 10000
                           dirs per dir : 10
             threads share directories? : N
                        filename prefix :
                        filename suffix :
            hash file number into dir.? : N
                    fsync after modify? : N
         pause between files (microsec) : 0
                   finish all requests? : Y
                             stonewall? : Y
                measure response times? : N
                           verify read? : Y
                               verbose? : False
                         log to stderr? : False
                          ext.attr.size : 0
                         ext.attr.count : 0
adding time for Windows synchronization
host = WIN-H8RKTO9B438,thr = 00,elapsed = 551.000000,files = 5000,records = 5000,status = ok
total threads = 1
total files = 5000
total data = 0.143 GB
100.00% of requested files processed, minimum is 90.00
551.000000 sec elapsed time
9.074410 files/sec
9.074410 IOPS
0.265852 MB/sec

[root@gl1 ~]# gluster volume profile vol1 info
Brick: gl1:/opt/data/glusterfs/brick1
-------------------------------------
Cumulative Stats:
   Block Size:          8b+    16384b+
 No. of Reads:            0          0
No. of Writes:            1       5000
 %-latency   Avg-latency   Min-Latency   Max-Latency   No. of calls  Fop
 ---------   -----------   -----------   -----------   ------------  ----
      0.00       0.00 us       0.00 us       0.00 us           5002  RELEASE
      0.00       0.00 us       0.00 us       0.00 us           5088  RELEASEDIR
      0.00      30.38 us      19.00 us      42.00 us              8  IPC
      0.00      63.00 us      11.00 us     227.00 us              5  GETXATTR
      0.00      47.58 us      20.00 us      77.00 us             12  STAT
      0.00     132.83 us      76.00 us     300.00 us              6  READDIR
      0.00      88.55 us      60.00 us     127.00 us             11  SETXATTR
      0.00    1065.45 us     144.00 us    9687.00 us             11  MKDIR
      0.01      50.94 us      20.00 us     238.00 us            603  STATFS
      0.05      48.47 us      12.00 us     945.00 us           4998  FSTAT
      0.05      49.54 us       8.00 us     382.00 us           5002  FLUSH
      0.05      51.86 us       1.00 us     674.00 us           5085  OPENDIR
      0.06      59.86 us      36.00 us     473.00 us           5012  SETATTR
      0.09      91.17 us      14.00 us     636.00 us           5002  LK
      0.10      51.62 us       9.00 us     786.00 us          10002  FINODELK
      0.11     108.88 us      60.00 us    1269.00 us           5001  WRITE
      0.11      55.86 us       8.00 us    1074.00 us          10090  INODELK
      0.20     100.30 us      46.00 us     913.00 us          10002  FXATTROP
      0.22     111.76 us      10.00 us     985.00 us          10026  ENTRYLK
      0.26     262.15 us     117.00 us  350840.00 us           5002  CREATE
      0.68      83.14 us      11.00 us    1940.00 us          41603  LOOKUP
     98.03   25526.30 us      99.00 us  913922.00 us          19613  READDIRP

    Duration: 693 seconds
   Data Read: 0 bytes
Data Written: 153600008 bytes

Brick: gl2:/opt/data/glusterfs/brick1
-------------------------------------
Cumulative Stats:
   Block Size:          8b+    16384b+
 No. of Reads:            0          0
No. of Writes:            1       5000
 %-latency   Avg-latency   Min-Latency   Max-Latency   No. of calls  Fop
 ---------   -----------   -----------   -----------   ------------  ----
      0.00       0.00 us       0.00 us       0.00 us           5002  RELEASE
      0.00       0.00 us       0.00 us       0.00 us           5088  RELEASEDIR
      0.00      30.50 us      27.00 us      33.00 us              4  IPC
      0.01     136.60 us      20.00 us     550.00 us              5  GETXATTR
      0.01      74.45 us      51.00 us     122.00 us             11  SETXATTR
      0.02     182.17 us     104.00 us     469.00 us              6  READDIR
      0.03     181.64 us     144.00 us     273.00 us             11  MKDIR
      0.35      42.64 us      30.00 us     184.00 us            599  STATFS
      2.33      33.63 us      13.00 us     315.00 us           5002  FLUSH
      2.85      41.00 us      13.00 us    1463.00 us           5002  LK
      3.06      43.41 us       1.00 us     265.00 us           5085  OPENDIR
      3.80      54.66 us      34.00 us     344.00 us           5012  SETATTR
      3.87      27.66 us      10.00 us     330.00 us          10090  INODELK
      3.95      28.36 us       8.00 us    1001.00 us          10026  ENTRYLK
      4.64      33.46 us      10.00 us    1807.00 us          10002  FINODELK
      6.22      89.60 us      58.00 us     707.00 us           5001  WRITE
     12.66      91.19 us      59.00 us     834.00 us          10002  FXATTROP
     19.90     286.71 us     122.00 us  362603.00 us           5002  CREATE
     36.29      62.84 us      27.00 us   57324.00 us          41603  LOOKUP

    Duration: 638 seconds
   Data Read: 0 bytes
Data Written: 153600008 bytes
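Those READDIRP numbers line up with the per-create directory listing seen in
the capture. One mitigation that may be worth testing (my suggestion, not
something verified in this thread) is telling Samba to skip its
case-insensitive name search on the share, e.g. in smb.conf (share name and
path are assumptions):

  [gl]
      path = /mnt/glusterfs
      # avoid the directory scan Samba does per create for case-insensitive
      # name matching; Windows clients must then match filename case exactly
      case sensitive = yes
      preserve case = yes
      short preserve case = yes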
On 09/05/2017 14:41, Karan Sandha wrote:
> Hi Chiku,
>
> Please tune the volume with the parameters below for a performance gain.
> cc'ed the guy working on Windows.
>
> gluster volume stop <vol-name> --mode=script
>
> gluster volume set <vol-name> features.cache-invalidation on
> gluster volume set <vol-name> features.cache-invalidation-timeout 600
> gluster volume set <vol-name> performance.stat-prefetch on
> gluster volume set <vol-name> performance.cache-invalidation on
> gluster volume set <vol-name> performance.md-cache-timeout 600
> gluster volume set <vol-name> network.inode-lru-limit 90000
> gluster volume set <vol-name> cluster.lookup-optimize on
> gluster volume set <vol-name> server.event-threads 4
> gluster volume set <vol-name> client.event-threads 4
>
> gluster volume start <vol-name>
>
> Thanks & regards
>
> Karan Sandha
>
> On 05/09/2017 03:03 PM, Chiku wrote:
>> Hello,
>>
>> I'm testing glusterfs for Windows clients.
>> I created 2 glusterfs servers (3.10.1, replica 2) on CentOS 7.3.
>>
>> Right now I just use the default settings, and my test case is a lot of
>> small files in one folder.
>>
>> The Windows NFS client performs far worse than the Linux NFS client.
>> I don't understand it; it should match the Linux NFS performance.
>> I saw something weird in the network traffic: on the Windows client I
>> saw more receive traffic (9 Mbps) than send traffic (1 Mbps).
>>
>> On the Linux NFS client, receive traffic is around 700 Kbps.
>>
>> Does anyone have an idea what is happening with the Windows NFS client?
>> I will try some tuning tests later.
>>
>>
>> * 1st test: CentOS client with a glusterfs (FUSE) mount:
>> gl1.lab.com:vol1 on /mnt/glusterfs type fuse.glusterfs
>> (rw,relatime,user_id=0,group_id=0,default_permissions,allow_other,max_read=131072)
>>
>> python smallfile_cli.py --operation create --threads 1 --file-size 30
>> --files 5000 --files-per-dir 10000 --top /mnt/glusterfs/test1
>> smallfile version 3.0
>> hosts in test : None
>> top test directory(s) : ['/mnt/glusterfs/test1']
>> operation : create
>> files/thread : 5000
>> threads : 1
>> record size (KB, 0 = maximum) : 0
>> file size (KB) : 30
>> file size distribution : fixed
>> files per dir : 10000
>> dirs per dir : 10
>> threads share directories? : N
>> filename prefix :
>> filename suffix :
>> hash file number into dir.? : N
>> fsync after modify? : N
>> pause between files (microsec) : 0
>> finish all requests? : Y
>> stonewall? : Y
>> measure response times? : N
>> verify read? : Y
>> verbose? : False
>> log to stderr? : False
>> ext.attr.size : 0
>> ext.attr.count : 0
>> host = cm2.lab.com,thr = 00,elapsed = 16.566169,files = 5000,records =
>> 5000,status = ok
>> total threads = 1
>> total files = 5000
>> total data = 0.143 GB
>> 100.00% of requested files processed, minimum is 90.00
>> 16.566169 sec elapsed time
>> 301.819932 files/sec
>> 301.819932 IOPS
>> 8.842381 MB/sec
>>
>> * 2nd test: CentOS client with an NFS mount:
>> gl1.lab.com:/vol1 on /mnt/nfs type nfs
>> (rw,relatime,vers=3,rsize=1048576,wsize=1048576,namlen=255,hard,proto=tcp,timeo=600,retrans=2,sec=sys,mountaddr=192.168.47.11,mountvers=3,mountport=38465,mountproto=tcp,local_lock=none,addr=192.168.47.11)
>>
>> python smallfile_cli.py --operation create --threads 1 --file-size 30
>> --files 5000 --files-per-dir 10000 --top /mnt/nfs/test1
>> smallfile version 3.0
>> hosts in test : None
>> top test directory(s) : ['/mnt/nfs/test1']
>> operation : create
>> files/thread : 5000
>> threads : 1
>> record size (KB, 0 = maximum) : 0
>> file size (KB) : 30
>> file size distribution : fixed
>> files per dir : 10000
>> dirs per dir : 10
>> threads share directories? : N
>> filename prefix :
>> filename suffix :
>> hash file number into dir.? : N
>> fsync after modify? : N
>> pause between files (microsec) : 0
>> finish all requests? : Y
>> stonewall? : Y
>> measure response times? : N
>> verify read? : Y
>> verbose? : False
>> log to stderr? : False
>> ext.attr.size : 0
>> ext.attr.count : 0
>> host = cm2.lab.com,thr = 00,elapsed = 54.737751,files = 5000,records =
>> 5000,status = ok
>> total threads = 1
>> total files = 5000
>> total data = 0.143 GB
>> 100.00% of requested files processed, minimum is 90.00
>> 54.737751 sec elapsed time
>> 91.344637 files/sec
>> 91.344637 IOPS
>> 2.676112 MB/sec
>>
>>
>> * 3rd test: new Windows 2012R2 with the NFS client installed:
>>
>> C:\Users\Administrator\smallfile>smallfile_cli.py --operation create
>> --threads 1 --file-size 30 --files 5000 --files-per-dir 10000 --top
>> \\192.168.47.11\vol1\test1
>> smallfile version 3.0
>> hosts in test : None
>> top test directory(s) : ['\\\\192.168.47.11\\vol1\\test1']
>> operation : create
>> files/thread : 5000
>> threads : 1
>> record size (KB, 0 = maximum) : 0
>> file size (KB) : 30
>> file size distribution : fixed
>> files per dir : 10000
>> dirs per dir : 10
>> threads share directories? : N
>> filename prefix :
>> filename suffix :
>> hash file number into dir.? : N
>> fsync after modify? : N
>> pause between files (microsec) : 0
>> finish all requests? : Y
>> stonewall? : Y
>> measure response times? : N
>> verify read? : Y
>> verbose? : False
>> log to stderr? : False
>> ext.attr.size : 0
>> ext.attr.count : 0
>> adding time for Windows synchronization
>> host = WIN-H8RKTO9B438,thr = 00,elapsed = 425.342000,files =
>> 5000,records = 5000,status = ok
>> total threads = 1
>> total files = 5000
>> total data = 0.143 GB
>> 100.00% of requested files processed, minimum is 90.00
>> 425.342000 sec elapsed time
>> 11.755246 files/sec
>> 11.755246 IOPS
>> 0.344392 MB/sec
>> _______________________________________________
>> Gluster-users mailing list
>> Gluster-users@gluster.org
>> http://lists.gluster.org/mailman/listinfo/gluster-users