[Gluster-users] Gluster 3.4 Samba VFS writes slow in Win 7 clients

Lalatendu Mohanty lmohanty at redhat.com
Wed Aug 21 08:53:00 UTC 2013


On 08/21/2013 01:32 PM, kane wrote:
> Hello:
>
> We have been testing Samba performance from a Windows client using
> glusterfs 3.4 with the latest samba-glusterfs-vfs library.
>
> Two glusterfs server nodes export a share named "gvol".
> Hardware:
> each brick is a RAID 5 logical disk built from 8 * 2 TB SATA HDDs
> 10G network connection
>
> One Linux client mounts "gvol" with:
> [root@localhost current]# mount.cifs //192.168.100.133/gvol /mnt/vfs -o user=kane,pass=123456
>
> Then I used iozone to test write performance in the mount dir "/mnt/vfs":
> [root@localhost current]# ./iozone -s 10G -r 128k -i0 -t 4
> .....
> File size set to 10485760 KB
> Record Size 128 KB
> Command line used: ./iozone -s 10G -r 128k -i0 -t 4
> Output is in Kbytes/sec
> Time Resolution = 0.000001 seconds.
> Processor cache size set to 1024 Kbytes.
> Processor cache line size set to 32 bytes.
> File stride size set to 17 * record size.
> Throughput test with 4 processes
> Each process writes a 10485760 Kbyte file in 128 Kbyte records
>
> Children see throughput for  4 initial writers =  487376.67 KB/sec
> Parent sees throughput for  4 initial writers =  486184.67 KB/sec
> Min throughput per process =  121699.91 KB/sec
> Max throughput per process =  122005.73 KB/sec
> Avg throughput per process =  121844.17 KB/sec
> Min xfer = 10459520.00 KB
>
> Children see throughput for  4 rewriters =  491416.41 KB/sec
> Parent sees throughput for  4 rewriters =  490298.11 KB/sec
> Min throughput per process =  122808.87 KB/sec
> Max throughput per process =  122937.74 KB/sec
> Avg throughput per process =  122854.10 KB/sec
> Min xfer = 10474880.00 KB
>
> With the Linux CIFS mount, write performance reaches about 480 MB/s per client.
>
> But when I mount "gvol" from a Windows 7 client with:
> net use Z: \\192.168.100.133\gvol 123456 /user:kane
>
> and run the same iozone test on drive Z:, even with a 1 MB record size:
>         File size set to 10485760 KB
>         Record Size 1024 KB
>         Command line used: iozone -s 10G -r 1m -i0 -t 4
>         Output is in Kbytes/sec
>         Time Resolution = -0.000000 seconds.
>         Processor cache size set to 1024 Kbytes.
>         Processor cache line size set to 32 bytes.
>         File stride size set to 17 * record size.
>         Throughput test with 4 processes
>         Each process writes a 10485760 Kbyte file in 1024 Kbyte records
>
>         Children see throughput for  4 initial writers  =  148164.82 KB/sec
>         Parent sees throughput for  4 initial writers   =  148015.48 KB/sec
>         Min throughput per process                    =   37039.91 KB/sec
>         Max throughput per process                    =   37044.45 KB/sec
>         Avg throughput per process                    =   37041.21 KB/sec
>         Min xfer                    = 10484736.00 KB
>
>         Children see throughput for  4 rewriters        =  147642.12 KB/sec
>         Parent sees throughput for  4 rewriters         =  147472.16 KB/sec
>         Min throughput per process                    =   36909.13 KB/sec
>         Max throughput per process                    =   36913.29 KB/sec
>         Avg throughput per process                    =   36910.53 KB/sec
>         Min xfer                    = 10484736.00 KB
>
> iozone test complete.
>
> the write throughput only reaches about 140 MB/s.
>
> So, has anyone run into this problem? Is there anything to reconfigure
> on the Windows 7 client to make it perform well?
>
> Thanks!
>
> kane
> ----------------------------------------------------------------
> Email: kai.zhou at soulinfo.com
> Tel:   0510-85385788-616
>


Hi kane,

I do run I/O from a Windows 7 client against glusterfs 3.4, but I have
never compared that performance with a Linux CIFS mount. I don't think
any special configuration is needed on the Windows side. I hope your
Linux and Windows clients have similar configurations, i.e. RAM, cache,
disk type, etc. However, I am curious whether your setup uses the VFS
plug-in correctly. We can confirm that by looking at the smb.conf entry
for the gluster volume, which should have been created automatically by
the "gluster volume start" command.
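
For example, on the Samba server you could check whether that entry was
generated. This is a minimal check, assuming smb.conf is at the default
/etc/samba/smb.conf path; the "gluster-gvol" share name is an assumption
based on the naming convention of the entry shown below:

# look for the auto-generated share section for volume "gvol"
grep -A 6 'gluster-gvol' /etc/samba/smb.conf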

e.g. the entry in smb.conf for one of my volumes, "smbvol", looks like this:

[gluster-smbvol]
comment = For samba share of volume smbvol
vfs objects = glusterfs
glusterfs:volume = smbvol
path = /
read only = no
guest ok = yes
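
To double-check that the running Samba actually serves the share through
the glusterfs VFS module (rather than through a FUSE mount), something
like the following should work; testparm and smbclient ship with standard
Samba, and the user name here is just an example:

# dump the parsed configuration; look for "vfs objects = glusterfs"
testparm -s 2>/dev/null

# the gluster-smbvol share should appear in the share list
smbclient -L localhost -U kane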

Kindly paste the smb.conf entries for your gluster volume into your reply.
-Lala
