[Gluster-users] Poorer performance for NFS client on Windows than on Linux
Karan Sandha
ksandha at redhat.com
Tue May 9 12:41:28 UTC 2017
Hi Chiku,
Please tune the volume with the parameters below for a performance gain.
I've cc'ed the guy working on Windows.
gluster volume stop <vol-name> --mode=script

gluster volume set <vol-name> features.cache-invalidation on
gluster volume set <vol-name> features.cache-invalidation-timeout 600
gluster volume set <vol-name> performance.stat-prefetch on
gluster volume set <vol-name> performance.cache-invalidation on
gluster volume set <vol-name> performance.md-cache-timeout 600
gluster volume set <vol-name> network.inode-lru-limit 90000
gluster volume set <vol-name> cluster.lookup-optimize on
gluster volume set <vol-name> server.event-threads 4
gluster volume set <vol-name> client.event-threads 4

gluster volume start <vol-name>
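
For example, the whole sequence could be scripted as below (a minimal
sketch; "vol1" is just a placeholder for your volume name):

    #!/bin/bash
    # Stop the volume, apply the md-cache and event-thread tuning, start it again.
    VOL=vol1    # placeholder volume name

    gluster volume stop "$VOL" --mode=script

    opts=(
        "features.cache-invalidation on"
        "features.cache-invalidation-timeout 600"
        "performance.stat-prefetch on"
        "performance.cache-invalidation on"
        "performance.md-cache-timeout 600"
        "network.inode-lru-limit 90000"
        "cluster.lookup-optimize on"
        "server.event-threads 4"
        "client.event-threads 4"
    )
    for o in "${opts[@]}"; do
        gluster volume set "$VOL" $o    # $o unquoted so it splits into option and value
    done

    gluster volume start "$VOL"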
Thanks & regards
Karan Sandha
On 05/09/2017 03:03 PM, Chiku wrote:
> Hello,
>
> I'm testing GlusterFS with a Windows client.
> I created 2 GlusterFS servers (3.10.1, replica 2) on CentOS 7.3.
>
> Right now I just use the default settings, and my test case is a lot of
> small files in a single folder.
>
> The Windows NFS client performs far worse than the Linux NFS client.
> I don't understand; it should match the Linux NFS performance.
> I also saw something weird in the network traffic: on the Windows client I saw
> more receive traffic (9 Mbps) than send traffic (1 Mbps).
>
> On the Linux NFS client, receive traffic is around 700 Kbps.
>
> Does anyone have an idea what is happening with the Windows NFS client?
> I will try some tuning tests later.
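>
> A packet capture should show what the extra receive traffic actually is
> (a sketch run on the Linux side; the interface name eth0 and the default
> NFS port 2049 are assumptions):
>
>     # capture NFS traffic between this client and the gluster server
>     tcpdump -i eth0 -w nfs-test.pcap host 192.168.47.11 and port 2049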
>
> * 1st test: CentOS client, mounted with the glusterfs (FUSE) type:
> gl1.lab.com:vol1 on /mnt/glusterfs type fuse.glusterfs
> (rw,relatime,user_id=0,group_id=0,default_permissions,allow_other,max_read=131072)
>
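> For reference, the mount command behind the entry above would be roughly
> as follows (a sketch reconstructed from the mount output):
>
>     mount -t glusterfs gl1.lab.com:/vol1 /mnt/glusterfs
>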
> python smallfile_cli.py --operation create --threads 1 --file-size 30
> --files 5000 --files-per-dir 10000 --top /mnt/glusterfs/test1
> smallfile version 3.0
> hosts in test : None
> top test directory(s) : ['/mnt/glusterfs/test1']
> operation : create
> files/thread : 5000
> threads : 1
> record size (KB, 0 = maximum) : 0
> file size (KB) : 30
> file size distribution : fixed
> files per dir : 10000
> dirs per dir : 10
> threads share directories? : N
> filename prefix :
> filename suffix :
> hash file number into dir.? : N
> fsync after modify? : N
> pause between files (microsec) : 0
> finish all requests? : Y
> stonewall? : Y
> measure response times? : N
> verify read? : Y
> verbose? : False
> log to stderr? : False
> ext.attr.size : 0
> ext.attr.count : 0
> host = cm2.lab.com,thr = 00,elapsed = 16.566169,files = 5000,records = 5000,status = ok
> total threads = 1
> total files = 5000
> total data = 0.143 GB
> 100.00% of requested files processed, minimum is 90.00
> 16.566169 sec elapsed time
> 301.819932 files/sec
> 301.819932 IOPS
> 8.842381 MB/sec
>
> * 2nd test: CentOS client, mounted with NFS:
> gl1.lab.com:/vol1 on /mnt/nfs type nfs
> (rw,relatime,vers=3,rsize=1048576,wsize=1048576,namlen=255,hard,proto=tcp,timeo=600,retrans=2,sec=sys,mountaddr=192.168.47.11,mountvers=3,mountport=38465,mountproto=tcp,local_lock=none,addr=192.168.47.11)
>
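> The corresponding NFS mount would be roughly as follows (a sketch; NFSv3
> over TCP is what gluster's built-in NFS server speaks, matching the
> options shown above):
>
>     mount -t nfs -o vers=3,proto=tcp gl1.lab.com:/vol1 /mnt/nfs
>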
> python smallfile_cli.py --operation create --threads 1 --file-size 30
> --files 5000 --files-per-dir 10000 --top /mnt/nfs/test1
> smallfile version 3.0
> hosts in test : None
> top test directory(s) : ['/mnt/nfs/test1']
> operation : create
> files/thread : 5000
> threads : 1
> record size (KB, 0 = maximum) : 0
> file size (KB) : 30
> file size distribution : fixed
> files per dir : 10000
> dirs per dir : 10
> threads share directories? : N
> filename prefix :
> filename suffix :
> hash file number into dir.? : N
> fsync after modify? : N
> pause between files (microsec) : 0
> finish all requests? : Y
> stonewall? : Y
> measure response times? : N
> verify read? : Y
> verbose? : False
> log to stderr? : False
> ext.attr.size : 0
> ext.attr.count : 0
> host = cm2.lab.com,thr = 00,elapsed = 54.737751,files = 5000,records = 5000,status = ok
> total threads = 1
> total files = 5000
> total data = 0.143 GB
> 100.00% of requested files processed, minimum is 90.00
> 54.737751 sec elapsed time
> 91.344637 files/sec
> 91.344637 IOPS
> 2.676112 MB/sec
>
>
> * 3rd test: a new Windows 2012 R2 machine with the NFS client installed:
>
> C:\Users\Administrator\smallfile>smallfile_cli.py --operation create
> --threads 1 --file-size 30 --files 5000 --files-per-dir 10000 --top
> \\192.168.47.11\vol1\test1
> smallfile version 3.0
> hosts in test : None
> top test directory(s) :
> ['\\\\192.168.47.11\\vol1\\test1']
> operation : create
> files/thread : 5000
> threads : 1
> record size (KB, 0 = maximum) : 0
> file size (KB) : 30
> file size distribution : fixed
> files per dir : 10000
> dirs per dir : 10
> threads share directories? : N
> filename prefix :
> filename suffix :
> hash file number into dir.? : N
> fsync after modify? : N
> pause between files (microsec) : 0
> finish all requests? : Y
> stonewall? : Y
> measure response times? : N
> verify read? : Y
> verbose? : False
> log to stderr? : False
> ext.attr.size : 0
> ext.attr.count : 0
> adding time for Windows synchronization
> host = WIN-H8RKTO9B438,thr = 00,elapsed = 425.342000,files = 5000,records = 5000,status = ok
> total threads = 1
> total files = 5000
> total data = 0.143 GB
> 100.00% of requested files processed, minimum is 90.00
> 425.342000 sec elapsed time
> 11.755246 files/sec
> 11.755246 IOPS
> 0.344392 MB/sec
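>
> One thing worth trying on Windows is mapping the share to a drive letter
> with explicit options instead of using the UNC path directly (a sketch
> assuming the Services for NFS mount.exe and a free drive letter Z:):
>
>     mount -o nolock,mtype=hard \\192.168.47.11\vol1 Z: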