[Gluster-users] Gluster 3.4 Samba VFS writes slow in Win 7 clients
kane
stef_9k at 163.com
Thu Aug 22 08:06:03 UTC 2013
Hi Raghavendra Talur,
1. My samba version is:
[root@localhost ~]# smbd -V
Version 3.6.9-151.el6
2. Sorry, in the first mail I forgot to mention: when the Win7 client mounts the server's raw xfs backend (a RAID5 disk) directly, it shows good write performance with the same smb.conf used for the Samba VFS glusterfs 3.4 test in point 3 below (see the mapping sketch after this output):
$ iozone -s 10G -r 1m -i0 -t 4
----------------
Run began: Thu Aug 22 15:59:11 2013
File size set to 10485760 KB
Record Size 1024 KB
Command line used: iozone -s 10G -r 1m -i0 -t 4
Output is in Kbytes/sec
Time Resolution = -0.000000 seconds.
Processor cache size set to 1024 Kbytes.
Processor cache line size set to 32 bytes.
File stride size set to 17 * record size.
Throughput test with 4 processes
Each process writes a 10485760 Kbyte file in 1024 Kbyte records
Children see throughput for 4 initial writers = 566996.86 KB/sec
Parent sees throughput for 4 initial writers = 566831.18 KB/sec
Min throughput per process = 141741.52 KB/sec
Max throughput per process = 141764.00 KB/sec
Avg throughput per process = 141749.21 KB/sec
Min xfer = 10482688.00 KB
Children see throughput for 4 rewriters = 432868.28 KB/sec
Parent sees throughput for 4 rewriters = 420648.01 KB/sec
Min throughput per process = 108115.68 KB/sec
Max throughput per process = 108383.86 KB/sec
Avg throughput per process = 108217.07 KB/sec
Min xfer = 10460160.00 KB
iozone test complete.
----------------
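For reference, the two shares can be mapped side by side from the Win7 client to reproduce this comparison. The drive letters are arbitrary; the server IP and credentials are the ones from the mount examples elsewhere in this thread:

net use Y: \\192.168.100.133\raw 123456 /user:kane
net use Z: \\192.168.100.133\gvol 123456 /user:kane

[raw] exports the xfs backend directly, while [gvol] goes through the glusterfs VFS module, so any gap between Y: and Z: isolates the VFS path.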
3. With your recommended options added to smb.conf, this is the testparm result:
[root@localhost ~]# testparm
Load smb config files from /etc/samba/smb.conf
rlimit_max: increasing rlimit_max (1024) to minimum Windows limit (16384)
Processing section "[homes]"
Processing section "[printers]"
Processing section "[cifs]"
Processing section "[raw]"
Processing section "[gvol]"
Loaded services file OK.
Server role: ROLE_STANDALONE
Press enter to see a dump of your service definitions
[global]
workgroup = MYGROUP
server string = DCS Samba Server
log file = /var/log/samba/log.vfs
max log size = 500000
max protocol = SMB2
max xmit = 262144
socket options = TCP_NODELAY IPTOS_LOWDELAY SO_RCVBUF=262144 SO_SNDBUF=262144
stat cache = No
kernel oplocks = No
idmap config * : backend = tdb
aio read size = 262144
aio write size = 262144
aio write behind = true
write cache size = 268435456
cups options = raw
……
[cifs]
path = /mnt/fuse
read only = No
guest ok = Yes
[raw]
path = /dcsdata/d0
read only = No
guest ok = Yes
[gvol]
comment = For samba export of volume test
path = /
read only = No
guest ok = Yes
vfs objects = glusterfs
glusterfs:volume = soul
glusterfs:volfile_server = localhost
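As a quick sanity check that smbd can actually find the glusterfs VFS module for [gvol] (the module directory below is the usual location for EL6 packages; treat the exact path as an assumption):

[root@localhost ~]# ls /usr/lib64/samba/vfs/ | grep glusterfs   # the module file should be listed
[root@localhost ~]# grep -i gluster /var/log/samba/log.vfs      # connection or volfile errors would show up here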
The iozone test result on the [gvol] share, with the command: iozone -s 10G -r 1m -i0 -t 4
-------------
Run began: Thu Aug 22 15:47:31 2013
File size set to 10485760 KB
Record Size 1024 KB
Command line used: iozone -s 10G -r 1m -i0 -t 4
Output is in Kbytes/sec
Time Resolution = -0.000000 seconds.
Processor cache size set to 1024 Kbytes.
Processor cache line size set to 32 bytes.
File stride size set to 17 * record size.
Throughput test with 4 processes
Each process writes a 10485760 Kbyte file in 1024 Kbyte records
Children see throughput for 4 initial writers = 135588.82 KB/sec
Parent sees throughput for 4 initial writers = 135549.95 KB/sec
Min throughput per process = 33895.92 KB/sec
Max throughput per process = 33900.02 KB/sec
Avg throughput per process = 33897.20 KB/sec
Min xfer = 10484736.00 KB
Children see throughput for 4 rewriters = 397494.38 KB/sec
Parent sees throughput for 4 rewriters = 387431.63 KB/sec
Min throughput per process = 99280.98 KB/sec
Max throughput per process = 99538.40 KB/sec
Avg throughput per process = 99373.59 KB/sec
Min xfer = 10459136.00 KB
--------------
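Putting the two runs side by side: the raw xfs share sustains 566996.86 KB/s ÷ 1024 ≈ 554 MB/s for the 4 initial writers, while the [gvol] share sustains 135588.82 KB/s ≈ 132 MB/s, i.e. roughly a 4.2x gap for initial writes on the same hardware and smb.conf (rewrites are much closer: about 423 MB/s vs 388 MB/s).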
On 2013-8-22, at 3:31 PM, RAGHAVENDRA TALUR <raghavendra.talur at gmail.com> wrote:
> Hi Kane,
>
> 1. Which version of samba are you running?
>
> 2. Can you re-run the test after adding the following lines to smb.conf's global section and tell if it helps?
> kernel oplocks = no
> stat cache = no
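> A minimal way to apply these without restarting client sessions (standard Samba tools; please verify the exact behavior on your version) would be:
>
> testparm -s                      # check that the new options parse cleanly
> smbcontrol all reload-config     # ask running smbd processes to re-read smb.conf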
>
> Thanks,
> Raghavendra Talur
>
>
> On Wed, Aug 21, 2013 at 3:48 PM, kane <stef_9k at 163.com> wrote:
> Hi Lala, thank you for reply this issue.
>
> this is our smb.conf:
> --------
> [global]
> workgroup = MYGROUP
> server string = DCS Samba Server
> log file = /var/log/samba/log.vfs
> max log size = 500000
> # log level = 10
> # max xmit = 65535
> # getwd cache = yes
> # use sendfile = yes
> # strict sync = no
> # sync always = no
> # large readwrite = yes
> aio read size = 262144
> aio write size = 262144
> aio write behind = true
> # min receivefile size = 262144
> write cache size = 268435456
> # oplocks = yes
> security = user
> passdb backend = tdbsam
> load printers = yes
> cups options = raw
> read raw = yes
> write raw = yes
> max xmit = 262144
> read size = 262144
> socket options = TCP_NODELAY IPTOS_LOWDELAY SO_RCVBUF=262144 SO_SNDBUF=262144
> max protocol = SMB2
>
> [homes]
> comment = Home Directories
> browseable = no
> writable = yes
>
>
> [printers]
> comment = All Printers
> path = /var/spool/samba
> browseable = no
> guest ok = no
> writable = no
> printable = yes
>
> [cifs]
> path = /mnt/fuse
> guest ok = yes
> writable = yes
>
> [raw]
> path = /dcsdata/d0
> guest ok = yes
> writable = yes
>
> [gvol]
> comment = For samba export of volume test
> vfs objects = glusterfs
> glusterfs:volfile_server = localhost
> glusterfs:volume = soul
> path = /
> read only = no
> guest ok = yes
> --------
>
> our win 7 client hardware:
> Intel® Xeon® E3-1230 @ 3.20GHz
> 8GB RAM
>
> linux client hardware:
> Intel(R) Xeon(R) CPU X3430 @ 2.40GHz
> 16GB RAM
>
> Many thanks
>
> -kane
>
On 2013-8-21, at 4:53 PM, Lalatendu Mohanty <lmohanty at redhat.com> wrote:
>
>> On 08/21/2013 01:32 PM, kane wrote:
>>> Hello:
>>>
>>> We have used glusterfs 3.4 with the latest samba-glusterfs-vfs lib to test Samba performance from a Windows client.
>>>
>>> Two glusterfs server nodes export a share named "gvol":
>>> hardwares:
>>> each brick uses a RAID5 logical disk built from 8 * 2T SATA HDDs
>>> 10G network connection
>>>
>>> one Linux client mounts "gvol" with the command:
>>> [root@localhost current]# mount.cifs //192.168.100.133/gvol /mnt/vfs -o user=kane,pass=123456
>>>
>>> then I use iozone to test write performance in the mount dir "/mnt/vfs":
>>> [root@localhost current]# ./iozone -s 10G -r 128k -i0 -t 4
>>> …..
>>> File size set to 10485760 KB
>>> Record Size 128 KB
>>> Command line used: ./iozone -s 10G -r 128k -i0 -t 4
>>> Output is in Kbytes/sec
>>> Time Resolution = 0.000001 seconds.
>>> Processor cache size set to 1024 Kbytes.
>>> Processor cache line size set to 32 bytes.
>>> File stride size set to 17 * record size.
>>> Throughput test with 4 processes
>>> Each process writes a 10485760 Kbyte file in 128 Kbyte records
>>>
>>> Children see throughput for 4 initial writers = 487376.67 KB/sec
>>> Parent sees throughput for 4 initial writers = 486184.67 KB/sec
>>> Min throughput per process = 121699.91 KB/sec
>>> Max throughput per process = 122005.73 KB/sec
>>> Avg throughput per process = 121844.17 KB/sec
>>> Min xfer = 10459520.00 KB
>>>
>>> Children see throughput for 4 rewriters = 491416.41 KB/sec
>>> Parent sees throughput for 4 rewriters = 490298.11 KB/sec
>>> Min throughput per process = 122808.87 KB/sec
>>> Max throughput per process = 122937.74 KB/sec
>>> Avg throughput per process = 122854.10 KB/sec
>>> Min xfer = 10474880.00 KB
>>>
>>> With the Linux client mounted via cifs, write performance reaches about 480 MB/s per client;
>>>
>>> but when I use a Win7 client to map "gvol" with the command:
>>> net use Z: \\192.168.100.133\gvol 123456 /user:kane
>>>
>>> and then run the same iozone test in drive Z, even with a 1 MB write record size:
>>> File size set to 10485760 KB
>>> Record Size 1024 KB
>>> Command line used: iozone -s 10G -r 1m -i0 -t 4
>>> Output is in Kbytes/sec
>>> Time Resolution = -0.000000 seconds.
>>> Processor cache size set to 1024 Kbytes.
>>> Processor cache line size set to 32 bytes.
>>> File stride size set to 17 * record size.
>>> Throughput test with 4 processes
>>> Each process writes a 10485760 Kbyte file in 1024 Kbyte records
>>>
>>> Children see throughput for 4 initial writers = 148164.82 KB/sec
>>> Parent sees throughput for 4 initial writers = 148015.48 KB/sec
>>> Min throughput per process = 37039.91 KB/sec
>>> Max throughput per process = 37044.45 KB/sec
>>> Avg throughput per process = 37041.21 KB/sec
>>> Min xfer = 10484736.00 KB
>>>
>>> Children see throughput for 4 rewriters = 147642.12 KB/sec
>>> Parent sees throughput for 4 rewriters = 147472.16 KB/sec
>>> Min throughput per process = 36909.13 KB/sec
>>> Max throughput per process = 36913.29 KB/sec
>>> Avg throughput per process = 36910.53 KB/sec
>>> Min xfer = 10484736.00 KB
>>>
>>> iozone test complete.
>>>
>>> it only reaches about 140 MB/s.
>>>
>>> So, has anyone met this problem? Is there anything to reconfigure on the Win7 client to make it perform well?
>>>
>>> Thanks!
>>>
>>> kane
>>> ----------------------------------------------------------------
>>> Email: kai.zhou at soulinfo.com
>>> Phone: 0510-85385788-616
>>>
>>
>>
>> Hi kane,
>>
>> I do run IOs using a win7 client with glusterfs 3.4, but I have never compared the performance with a Linux cifs mount. I don't think we need any special configuration on the Windows side. I hope your Linux and Windows clients have similar configurations, i.e. RAM, cache, disk type, etc. However, I am curious to know whether your setup uses the vfs plug-in correctly. We can confirm that by looking at the smb.conf entry for the gluster volume, which should have been created automatically by the "gluster volume start" command.
>>
>> e.g., the entry in smb.conf for one of my volumes, "smbvol", looks like this:
>>
>> [gluster-smbvol]
>> comment = For samba share of volume smbvol
>> vfs objects = glusterfs
>> glusterfs:volume = smbvol
>> path = /
>> read only = no
>> guest ok = yes
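>>
>> To pull such a section out of your config quickly (assuming the stock path /etc/samba/smb.conf and the auto-generated "gluster-" section prefix; adjust if yours differ):
>>
>> grep -A 6 "^\[gluster-" /etc/samba/smb.conf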
>>
>> Kindly copy the smb.conf entries for your gluster volume into this email.
>> -Lala
>>>
>>
>
>
>
>
>
> --
> Raghavendra Talur
>