[Gluster-users] Poor performance for nfs client on windows than on linux
ChiKu
chikulinu at gmail.com
Thu May 11 14:08:01 UTC 2017
I applied the settings, then ran the smallfile_cli.py test in a new folder, and I got
the same performance.

I looked more closely at file creation on linux and windows with tcpdump:
on windows, each file create triggers a readdir that fetches the full list of
files in the directory, so by the end each readdir is pretty huge.
I tried with samba and got the same results. But unlike nfs, there is no
receive traffic, only send traffic.
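For reference, this is roughly how that traffic can be captured and counted; the
interface name, the NFS port and the tshark filter here are assumptions, not an
exact copy of what I ran:

# capture the NFS traffic between the client and the gluster server
# (eth0, 192.168.47.11 and port 2049 are assumptions)
tcpdump -i eth0 -w nfs-create.pcap host 192.168.47.11 and port 2049

# count NFSv3 READDIR (16) and READDIRPLUS (17) calls in the capture
tshark -r nfs-create.pcap -Y 'nfs.procedure_v3 == 16 || nfs.procedure_v3 == 17' | wc -l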
I started profiling vol1 and see clear differences between linux and windows:
windows does an opendir, a setattr and 4x readdir for each file create.
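(For anyone reproducing this, the profiling is just the standard gluster profile
workflow; a minimal sketch, where clearing the counters between the linux and
windows runs is how I'd keep the two profiles comparable, assuming "info clear"
is available in 3.10:)

gluster volume profile vol1 start        # enable io-stats counters on vol1
gluster volume profile vol1 info clear   # reset counters before each client run
# ... run the smallfile test from the client ...
gluster volume profile vol1 info         # dump per-brick FOP latencies and call counts
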
Volume Name: vol1
Type: Replicate
Volume ID: 1e48f990-c4db-44f0-a2d5-caa0cb1996de
Status: Stopped
Snapshot Count: 0
Number of Bricks: 1 x 2 = 2
Transport-type: tcp
Bricks:
Brick1: gl1:/opt/data/glusterfs/brick1
Brick2: gl2:/opt/data/glusterfs/brick1
Options Reconfigured:
diagnostics.count-fop-hits: on
diagnostics.latency-measurement: on
nfs.disable: off
cluster.lookup-optimize: on
client.event-threads: 4
server.event-threads: 4
network.inode-lru-limit: 90000
performance.md-cache-timeout: 600
performance.cache-invalidation: on
features.cache-invalidation-timeout: 600
features.cache-invalidation: on
transport.address-family: inet
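
(A quick way to double-check that these reconfigured options actually took
effect, just as a sketch:)

gluster volume info vol1                               # lists the reconfigured options
gluster volume get vol1 performance.md-cache-timeout   # shows the effective value of one option
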
sudo python smallfile_cli.py --operation create --threads 1 --file-size 30
--files 5000 --files-per-dir 10000 --top /mnt/cifs/test5
smallfile version 3.0
hosts in test : None
top test directory(s) : ['/mnt/cifs/test5']
operation : create
files/thread : 5000
threads : 1
record size (KB, 0 = maximum) : 0
file size (KB) : 30
file size distribution : fixed
files per dir : 10000
dirs per dir : 10
threads share directories? : N
filename prefix :
filename suffix :
hash file number into dir.? : N
fsync after modify? : N
pause between files (microsec) : 0
finish all requests? : Y
stonewall? : Y
measure response times? : N
verify read? : Y
verbose? : False
log to stderr? : False
ext.attr.size : 0
ext.attr.count : 0
host = cm2.lab.com,thr = 00,elapsed = 40.276758,files = 5000,records = 5000,status = ok
total threads = 1
total files = 5000
total data = 0.143 GB
100.00% of requested files processed, minimum is 90.00
40.276758 sec elapsed time
124.141074 files/sec
124.141074 IOPS
3.636946 MB/sec
[root@gl1 ~]# gluster volume profile vol1 info
Brick: gl1:/opt/data/glusterfs/brick1
-------------------------------------
Cumulative Stats:
Block Size: 8b+ 16384b+
No. of Reads: 0 0
No. of Writes: 1 5000
 %-latency   Avg-latency   Min-Latency   Max-Latency   No. of calls   Fop
 ---------   -----------   -----------   -----------   ------------   ----
      0.00       0.00 us       0.00 us       0.00 us           5002   RELEASE
      0.00       0.00 us       0.00 us       0.00 us              3   RELEASEDIR
      0.00      24.00 us      24.00 us      24.00 us              1   IPC
      0.00      76.00 us      76.00 us      76.00 us              1   STAT
      0.00     223.00 us     223.00 us     223.00 us              1   GETXATTR
      0.00      66.27 us      43.00 us      97.00 us             11   SETXATTR
      0.00      57.08 us      37.00 us     133.00 us             13   SETATTR
      0.01     155.09 us     125.00 us     215.00 us             11   MKDIR
      0.01      52.56 us      19.00 us     149.00 us             43   STATFS
      0.02      35.16 us       8.00 us     181.00 us             92   INODELK
      0.93      29.83 us      11.00 us    1064.00 us           5002   FLUSH
      1.28      41.10 us      12.00 us    1231.00 us           5002   LK
      1.96      31.42 us      10.00 us    1821.00 us          10002   FINODELK
      2.56      40.98 us      11.00 us    1270.00 us          10026   ENTRYLK
      3.27     104.96 us      57.00 us    3182.00 us           5001   WRITE
      6.07      97.23 us      47.00 us    1376.00 us          10002   FXATTROP
     14.78      58.54 us      26.00 us   35865.00 us          40477   LOOKUP
     69.09    2214.82 us     102.00 us 1710730.00 us           5002   CREATE
Duration: 189 seconds
Data Read: 0 bytes
Data Written: 153600008 bytes
Brick: gl2:/opt/data/glusterfs/brick1
-------------------------------------
Cumulative Stats:
Block Size: 8b+ 16384b+
No. of Reads: 0 0
No. of Writes: 1 5000
 %-latency   Avg-latency   Min-Latency   Max-Latency   No. of calls   Fop
 ---------   -----------   -----------   -----------   ------------   ----
      0.00       0.00 us       0.00 us       0.00 us           5002   RELEASE
      0.00       0.00 us       0.00 us       0.00 us              3   RELEASEDIR
      0.00      55.00 us      55.00 us      55.00 us              1   GETXATTR
      0.01      52.23 us      45.00 us      69.00 us             13   SETATTR
      0.01      74.00 us      52.00 us     167.00 us             11   SETXATTR
      0.03     160.55 us     142.00 us     211.00 us             11   MKDIR
      0.03      46.40 us      27.00 us     188.00 us             43   STATFS
      0.04      28.74 us      11.00 us      95.00 us             92   INODELK
      2.15      30.38 us      12.00 us     310.00 us           5002   FLUSH
      2.82      39.81 us      12.00 us    1160.00 us           5002   LK
      3.96      27.92 us      10.00 us    1354.00 us          10026   ENTRYLK
      4.53      31.97 us       9.00 us    1144.00 us          10002   FINODELK
      6.55      92.48 us      55.00 us     554.00 us           5001   WRITE
     12.57      88.74 us      57.00 us     528.00 us          10002   FXATTROP
     30.45      53.13 us      27.00 us    3289.00 us          40477   LOOKUP
     36.85     520.35 us     108.00 us  459446.00 us           5002   CREATE
Duration: 174 seconds
Data Read: 0 bytes
Data Written: 153600008 bytes
C:\Users\Administrator\smallfile>smallfile_cli.py --operation create
--threads 1 --file-size 30 --files 5000 --files-per-dir 10000 --top
\\192.168.47.11\gl\test5
smallfile version 3.0
hosts in test : None
top test directory(s) : ['\\\\192.168.47.11\\gl\\test5']
operation : create
files/thread : 5000
threads : 1
record size (KB, 0 = maximum) : 0
file size (KB) : 30
file size distribution : fixed
files per dir : 10000
dirs per dir : 10
threads share directories? : N
filename prefix :
filename suffix :
hash file number into dir.? : N
fsync after modify? : N
pause between files (microsec) : 0
finish all requests? : Y
stonewall? : Y
measure response times? : N
verify read? : Y
verbose? : False
log to stderr? : False
ext.attr.size : 0
ext.attr.count : 0
adding time for Windows synchronization
host = WIN-H8RKTO9B438,thr = 00,elapsed = 551.000000,files = 5000,records = 5000,status = ok
total threads = 1
total files = 5000
total data = 0.143 GB
100.00% of requested files processed, minimum is 90.00
551.000000 sec elapsed time
9.074410 files/sec
9.074410 IOPS
0.265852 MB/sec
[root@gl1 ~]# gluster volume profile vol1 info
Brick: gl1:/opt/data/glusterfs/brick1
-------------------------------------
Cumulative Stats:
Block Size: 8b+ 16384b+
No. of Reads: 0 0
No. of Writes: 1 5000
 %-latency   Avg-latency   Min-Latency   Max-Latency   No. of calls   Fop
 ---------   -----------   -----------   -----------   ------------   ----
      0.00       0.00 us       0.00 us       0.00 us           5002   RELEASE
      0.00       0.00 us       0.00 us       0.00 us           5088   RELEASEDIR
      0.00      30.38 us      19.00 us      42.00 us              8   IPC
      0.00      63.00 us      11.00 us     227.00 us              5   GETXATTR
      0.00      47.58 us      20.00 us      77.00 us             12   STAT
      0.00     132.83 us      76.00 us     300.00 us              6   READDIR
      0.00      88.55 us      60.00 us     127.00 us             11   SETXATTR
      0.00    1065.45 us     144.00 us    9687.00 us             11   MKDIR
      0.01      50.94 us      20.00 us     238.00 us            603   STATFS
      0.05      48.47 us      12.00 us     945.00 us           4998   FSTAT
      0.05      49.54 us       8.00 us     382.00 us           5002   FLUSH
      0.05      51.86 us       1.00 us     674.00 us           5085   OPENDIR
      0.06      59.86 us      36.00 us     473.00 us           5012   SETATTR
      0.09      91.17 us      14.00 us     636.00 us           5002   LK
      0.10      51.62 us       9.00 us     786.00 us          10002   FINODELK
      0.11     108.88 us      60.00 us    1269.00 us           5001   WRITE
      0.11      55.86 us       8.00 us    1074.00 us          10090   INODELK
      0.20     100.30 us      46.00 us     913.00 us          10002   FXATTROP
      0.22     111.76 us      10.00 us     985.00 us          10026   ENTRYLK
      0.26     262.15 us     117.00 us  350840.00 us           5002   CREATE
      0.68      83.14 us      11.00 us    1940.00 us          41603   LOOKUP
     98.03   25526.30 us      99.00 us  913922.00 us          19613   READDIRP
Duration: 693 seconds
Data Read: 0 bytes
Data Written: 153600008 bytes
Brick: gl2:/opt/data/glusterfs/brick1
-------------------------------------
Cumulative Stats:
Block Size: 8b+ 16384b+
No. of Reads: 0 0
No. of Writes: 1 5000
 %-latency   Avg-latency   Min-Latency   Max-Latency   No. of calls   Fop
 ---------   -----------   -----------   -----------   ------------   ----
      0.00       0.00 us       0.00 us       0.00 us           5002   RELEASE
      0.00       0.00 us       0.00 us       0.00 us           5088   RELEASEDIR
      0.00      30.50 us      27.00 us      33.00 us              4   IPC
      0.01     136.60 us      20.00 us     550.00 us              5   GETXATTR
      0.01      74.45 us      51.00 us     122.00 us             11   SETXATTR
      0.02     182.17 us     104.00 us     469.00 us              6   READDIR
      0.03     181.64 us     144.00 us     273.00 us             11   MKDIR
      0.35      42.64 us      30.00 us     184.00 us            599   STATFS
      2.33      33.63 us      13.00 us     315.00 us           5002   FLUSH
      2.85      41.00 us      13.00 us    1463.00 us           5002   LK
      3.06      43.41 us       1.00 us     265.00 us           5085   OPENDIR
      3.80      54.66 us      34.00 us     344.00 us           5012   SETATTR
      3.87      27.66 us      10.00 us     330.00 us          10090   INODELK
      3.95      28.36 us       8.00 us    1001.00 us          10026   ENTRYLK
      4.64      33.46 us      10.00 us    1807.00 us          10002   FINODELK
      6.22      89.60 us      58.00 us     707.00 us           5001   WRITE
     12.66      91.19 us      59.00 us     834.00 us          10002   FXATTROP
     19.90     286.71 us     122.00 us  362603.00 us           5002   CREATE
     36.29      62.84 us      27.00 us   57324.00 us          41603   LOOKUP
Duration: 638 seconds
Data Read: 0 bytes
Data Written: 153600008 bytes
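
(To see at a glance which FOPs dominate each run, something like this over the
profile output works; just a sketch that assumes the default layout of
"gluster volume profile ... info":)

# print "%-latency FOP" pairs sorted by latency share, highest first
gluster volume profile vol1 info \
  | awk '$1+0 == $1 && NF >= 9 {print $1, $NF}' \
  | sort -rn | head
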
On 09/05/2017 at 14:41, Karan Sandha wrote:
> Hi Chiku,
>
> Please tune the volume with the below parameters for performance gain.
> cc'ed the guy working on windows.
>
> gluster volume stop <vol-name> --mode=script
>
> gluster volume set <vol-name> features.cache-invalidation on
> gluster volume set <vol-name> features.cache-invalidation-timeout 600
> gluster volume set <vol-name> performance.stat-prefetch on
> gluster volume set <vol-name> performance.cache-invalidation on
> gluster volume set <vol-name> performance.md-cache-timeout 600
> gluster volume set <vol-name> network.inode-lru-limit 90000
> gluster volume set <vol-name> cluster.lookup-optimize on
> gluster volume set <vol-name> server.event-threads 4
> gluster volume set <vol-name> client.event-threads 4
>
> gluster volume start <vol-name>
>
>
> Thanks & regards
>
> Karan Sandha
>
>
> On 05/09/2017 03:03 PM, Chiku wrote:
>> Hello,
>>
>> I'm testing glusterfs with a Windows client.
>> I created 2 glusterfs servers (3.10.1, replica 2) on CentOS 7.3.
>>
>> Right now, I just use the default settings and my test case is a lot of
>> small files in one folder.
>>
>> The Windows NFS client performs much worse than the Linux NFS client.
>> I don't understand; it should give the same performance as NFS on Linux.
>> I saw something weird in the network traffic: on the Windows client I saw
>> more receive traffic (9Mbps) than send traffic (1Mbps).
>>
>> On the Linux NFS client, receive traffic is around 700Kbps.
>>
>> Does anyone have an idea what happens with the Windows NFS client?
>> I will try some tuning tests later.
>>
>>
>>
>>
>> * 1st test: centos client mount with glusterfs type :
>> gl1.lab.com:vol1 on /mnt/glusterfs type fuse.glusterfs
>> (rw,relatime,user_id=0,group_id=0,default_permissions,allow_other,max_read=131072)
>>
>> python smallfile_cli.py --operation create --threads 1 --file-size 30
>> --files 5000 --files-per-dir 10000 --top /mnt/glusterfs/test1
>> smallfile version 3.0
>> hosts in test : None
>> top test directory(s) : ['/mnt/glusterfs/test1']
>> operation : create
>> files/thread : 5000
>> threads : 1
>> record size (KB, 0 = maximum) : 0
>> file size (KB) : 30
>> file size distribution : fixed
>> files per dir : 10000
>> dirs per dir : 10
>> threads share directories? : N
>> filename prefix :
>> filename suffix :
>> hash file number into dir.? : N
>> fsync after modify? : N
>> pause between files (microsec) : 0
>> finish all requests? : Y
>> stonewall? : Y
>> measure response times? : N
>> verify read? : Y
>> verbose? : False
>> log to stderr? : False
>> ext.attr.size : 0
>> ext.attr.count : 0
>> host = cm2.lab.com,thr = 00,elapsed = 16.566169,files = 5000,records = 5000,status = ok
>> total threads = 1
>> total files = 5000
>> total data = 0.143 GB
>> 100.00% of requested files processed, minimum is 90.00
>> 16.566169 sec elapsed time
>> 301.819932 files/sec
>> 301.819932 IOPS
>> 8.842381 MB/sec
>>
>> * 2nd test centos client mount with nfs :
>> gl1.lab.com:/vol1 on /mnt/nfs type nfs
>> (rw,relatime,vers=3,rsize=1048576,wsize=1048576,namlen=255,hard,proto=tcp,timeo=600,retrans=2,sec=sys,mountaddr=192.168.47.11,mountvers=3,mountport=38465,mountproto=tcp,local_lock=none,addr=192.168.47.11)
>>
>> python smallfile_cli.py --operation create --threads 1 --file-size 30
>> --files 5000 --files-per-dir 10000 --top /mnt/nfs/test1
>> smallfile version 3.0
>> hosts in test : None
>> top test directory(s) : ['/mnt/nfs/test1']
>> operation : create
>> files/thread : 5000
>> threads : 1
>> record size (KB, 0 = maximum) : 0
>> file size (KB) : 30
>> file size distribution : fixed
>> files per dir : 10000
>> dirs per dir : 10
>> threads share directories? : N
>> filename prefix :
>> filename suffix :
>> hash file number into dir.? : N
>> fsync after modify? : N
>> pause between files (microsec) : 0
>> finish all requests? : Y
>> stonewall? : Y
>> measure response times? : N
>> verify read? : Y
>> verbose? : False
>> log to stderr? : False
>> ext.attr.size : 0
>> ext.attr.count : 0
>> host = cm2.lab.com,thr = 00,elapsed = 54.737751,files = 5000,records = 5000,status = ok
>> total threads = 1
>> total files = 5000
>> total data = 0.143 GB
>> 100.00% of requested files processed, minimum is 90.00
>> 54.737751 sec elapsed time
>> 91.344637 files/sec
>> 91.344637 IOPS
>> 2.676112 MB/sec
>>
>>
>> * 3rd test: new Windows 2012R2 with the NFS client installed:
>>
>> C:\Users\Administrator\smallfile>smallfile_cli.py --operation create
>> --threads 1 --file-size 30 --files 5000 --files-per-dir 10000 --top
>> \\192.168.47.11\vol1\test1
>> smallfile version 3.0
>> hosts in test : None
>> top test directory(s) :
>> ['\\\\192.168.47.11\\vol1\\test1']
>> operation : create
>> files/thread : 5000
>> threads : 1
>> record size (KB, 0 = maximum) : 0
>> file size (KB) : 30
>> file size distribution : fixed
>> files per dir : 10000
>> dirs per dir : 10
>> threads share directories? : N
>> filename prefix :
>> filename suffix :
>> hash file number into dir.? : N
>> fsync after modify? : N
>> pause between files (microsec) : 0
>> finish all requests? : Y
>> stonewall? : Y
>> measure response times? : N
>> verify read? : Y
>> verbose? : False
>> log to stderr? : False
>> ext.attr.size : 0
>> ext.attr.count : 0
>> adding time for Windows synchronization
>> host = WIN-H8RKTO9B438,thr = 00,elapsed = 425.342000,files = 5000,records = 5000,status = ok
>> total threads = 1
>> total files = 5000
>> total data = 0.143 GB
>> 100.00% of requested files processed, minimum is 90.00
>> 425.342000 sec elapsed time
>> 11.755246 files/sec
>> 11.755246 IOPS
>> 0.344392 MB/sec
>> _______________________________________________
>> Gluster-users mailing list
>> Gluster-users at gluster.org
>> http://lists.gluster.org/mailman/listinfo/gluster-users
>