[Gluster-users] Gluster 3.4 Samba VFS writes slow in Win 7 clients

Christopher R. Hertel crh at redhat.com
Thu Aug 22 19:27:27 UTC 2013


[Inline]

----- Original Message -----
> From: "Lalatendu Mohanty" <lmohanty at redhat.com>
> To: "kane" <stef_9k at 163.com>
> Cc: "RAGHAVENDRA TALUR" <raghavendra.talur at gmail.com>, gluster-users at gluster.org
> Sent: Thursday, August 22, 2013 1:12:23 PM
> Subject: Re: [Gluster-users] Gluster 3.4 Samba VFS writes slow in Win 7 clients
> 
> On 08/22/2013 02:14 PM, kane wrote:
> > Hi Raghavendra Talur,
> >
> > 1. I found that testing with iozone against this smb.conf shows some
> > difference in the results:
> > smb.conf:
> > -----------
> > [root at localhost ~]# testparm
> > Load smb config files from /etc/samba/smb.conf
> > rlimit_max: increasing rlimit_max (1024) to minimum Windows limit (16384)
> > Processing section "[homes]"
> > Processing section "[printers]"
> > Processing section "[cifs]"
> > Processing section "[raw]"
> > Processing section "[gvol]"
> > Loaded services file OK.
> > Server role: ROLE_STANDALONE
> > Press enter to see a dump of your service definitions
> >
> > [global]
> > workgroup = MYGROUP
> > server string = DCS Samba Server
> > log file = /var/log/samba/log.vfs
> > max log size = 500000
> > max protocol = SMB2
> > min receivefile size = 262144
> > max xmit = 262144
> > socket options = TCP_NODELAY IPTOS_LOWDELAY SO_RCVBUF=262144
> > SO_SNDBUF=262144
> > idmap config * : backend = tdb
> > aio read size = 262144
> > aio write size = 262144
> > aio write behind = true
> > write cache size = 268435456
> > cups options = raw
> > …….
> >
> > [raw]
> > path = /dcsdata/d0
> > read only = No
> > guest ok = Yes
> >
> > [gvol]
> > comment = For samba export of volume  test
> > path = /
> > read only = No
> > guest ok = Yes
> > vfs objects = glusterfs
> > glusterfs:volume = soul
> > glusterfs:volfile_server = localhost
> > -----------
> > iozone test with cmd :  iozone -s 10G -r 1m -i0 -t 4
> > -----------
> >         Run began: Thu Aug 22 16:11:40 2013
> >
> >         File size set to 10485760 KB
> >         Record Size 1024 KB
> >         Command line used: iozone -s 10G -r 1m -i0 -t 4
> >         Output is in Kbytes/sec
> >         Time Resolution = 0.000000 seconds.
> >         Processor cache size set to 1024 Kbytes.
> >         Processor cache line size set to 32 bytes.
> >         File stride size set to 17 * record size.
> >         Throughput test with 4 processes
> >         Each process writes a 10485760 Kbyte file in 1024 Kbyte records
> >
> >         Children see throughput for  4 initial writers  =  147008.14
> > KB/sec
> >         Parent sees throughput for  4 initial writers   =  146846.43
> > KB/sec
> >         Min throughput per process                      = 36750.59 KB/sec
> >         Max throughput per process                      = 36754.97 KB/sec
> >         Avg throughput per process                      = 36752.04 KB/sec
> >         Min xfer                                        = 10484736.00 KB
> >
> >         Children see throughput for  4 rewriters        =  147494.85
> > KB/sec
> >         Parent sees throughput for  4 rewriters         =  147310.95
> > KB/sec
> >         Min throughput per process                      = 36871.96 KB/sec
> >         Max throughput per process                      = 36877.09 KB/sec
> >         Avg throughput per process                      = 36873.71 KB/sec
> >         Min xfer                                        = 10484736.00 KB
> >
> > iozone test complete.
> > -----------
> >
> > The rewrite results show some difference with your recommended
> > smb.conf. The iozone docs describe the write vs. rewrite difference as follows:
> >
> > Write: This test measures the performance of writing a new file. When
> > a new file is written not only does the data need to be stored but
> > also the overhead information for keeping track of where the data is
> > located on the storage media. This overhead is called the “metadata”.
> > It consists of the directory information, the space allocation and any
> > other data associated with a file that is not part of the data
> > contained in the file. It is normal for the initial write performance
> > to be lower than the performance of re-writing a file due to this
> > overhead information.
> >
> > Re-write: This test measures the performance of writing a file that
> > already exists. When a file is written that already exists the work
> > required is less as the metadata already exists. It is normal for the
> > rewrite performance to be higher than the performance of writing a new
> > file.
> >
> > But in the iozone test with 4 threads, rewrite performs much better
> > than write. I would consider rewrite 180MB/s vs write 150MB/s
> > reasonable, but rewrite 400MB/s vs write 140MB/s is beyond my
> > expectation.
> >
> >
> >
> Kane,
> 
> When we use the same Samba share on Windows and on Linux, the only thing
> that differs is "unix extensions" support, which the Linux client uses by
> default.

Well... there may be a few other differences.  You want to see whether
the connections are using SMB2 protocol instead of SMB1, for example.
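A quick way to check (a rough sketch; the exact behavior depends on the Samba
version): smbstatus on Samba 4.x reports the negotiated protocol for each
connection, while on 3.6 you would temporarily raise the debug level and look
for the dialect chosen in the per-client log during connection setup.

    # Samba 4.x and later: smbstatus lists the protocol version per connection
    smbstatus

    # Samba 3.6: raise the debug level, reconnect the Windows client, and
    # watch the log file from smb.conf for the negotiated dialect
    # (the exact message text is version-dependent)
    smbcontrol smbd debug 3
    tail -f /var/log/samba/log.vfs | grep -i protocol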

Basically, though, comparing the Linux kernel client to Windows is sort
of an apples/kumquats comparison.  The client code is very different,
and in SMB the client's performance features have a lot of impact.
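One quick experiment along those lines (a sketch only, reusing the parameters
from the runs above): add iozone's -e flag so flush time is included in the
measurement, which keeps client-side write caching from inflating the rewrite
numbers.

    # -e includes fsync/fclose time in the timing, factoring out client caching
    iozone -s 10G -r 1m -i0 -t 4 -e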

> However I don't think "unix extension" has anything to do with
> performance of writes and rewrites.

Agreed.

> My only guess is that the SMB client in
> Linux works better with Samba than Microsoft's SMB client in
> Windows 7 does.

Samba is designed to work well with Windows too.  Windows clients are the
majority of the Samba market, after all.  I do know, however, that the
Linux CIFS client has some performance enhancements that may make a
difference.

> "unix extensions"  parameter controls whether Samba implements the CIFS
> UNIX extensions. These extensions enable Samba to better serve UNIX CIFS
> clients by supporting features such as symbolic links, hard links,
> etc... These extensions require a similarly enabled client, and are of
> no current use to Windows clients.

Right.  The purpose of the Unix Extensions is to preserve POSIX semantics
between the client and the server.  Otherwise, a Linux client has to
translate semantics (things like permissions and timestamps and such) into
Windows format, and then the server has to translate them back.

Unix Extensions currently only work with SMBv1 protocol.
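If you want to see what the Unix Extensions themselves cost, one option (a
sketch only; the server address, credentials, and mount point are taken from
the earlier test) is to remount the Linux client with them disabled and rerun
iozone:

    # nounix disables the CIFS Unix Extensions on the client for an A/B comparison
    mount.cifs //192.168.100.133/gvol /mnt/vfs -o user=kane,pass=123456,nounix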

Chris -)-----

> To disable unix extensions, put the following in the global section of smb.conf
> and restart the smb service:
> unix extensions = no
> 
> > thanks
> > -kane
> >
> > On Aug 22, 2013, at 4:06 PM, kane <stef_9k at 163.com> wrote:
> >
> >> Hi Raghavendra Talur,
> >>
> >> 1. My samba version is:
> >> [root at localhost ~]# smbd -V
> >> Version 3.6.9-151.el6
> >>
> >> 2. Sorry, in the first mail I forgot to mention: when the Win7
> >> client mounts the server's raw XFS backend (a RAID 5 disk), it shows
> >> good write performance with the same smb.conf used for the Samba
> >> VFS glusterfs 3.4 test shown below in point 3:
> >> $ ./iozone -s 10G -r 128k -i0 -t 4
> >> ----------------
> >>         Run began: Thu Aug 22 15:59:11 2013
> >>
> >>         File size set to 10485760 KB
> >>         Record Size 1024 KB
> >>         Command line used: iozone -s 10G -r 1m -i0 -t 4
> >>         Output is in Kbytes/sec
> >>         Time Resolution = -0.000000 seconds.
> >>         Processor cache size set to 1024 Kbytes.
> >>         Processor cache line size set to 32 bytes.
> >>         File stride size set to 17 * record size.
> >>         Throughput test with 4 processes
> >>         Each process writes a 10485760 Kbyte file in 1024 Kbyte records
> >>
> >>         Children see throughput for  4 initial writers  =  566996.86
> >> KB/sec
> >>         Parent sees throughput for  4 initial writers   =  566831.18
> >> KB/sec
> >>         Min throughput per process        =  141741.52 KB/sec
> >>         Max throughput per process        =  141764.00 KB/sec
> >>         Avg throughput per process        =  141749.21 KB/sec
> >>         Min xfer        = 10482688.00 KB
> >>
> >>         Children see throughput for  4 rewriters        =  432868.28
> >> KB/sec
> >>         Parent sees throughput for  4 rewriters       =  420648.01 KB/sec
> >>         Min throughput per process        =  108115.68 KB/sec
> >>         Max throughput per process        =  108383.86 KB/sec
> >>         Avg throughput per process        =  108217.07 KB/sec
> >>         Min xfer        = 10460160.00 KB
> >>
> >>
> >>
> >> iozone test complete.
> >> ----------------
> >>
> >> 3. With your recommended settings added to smb.conf, this is the testparm result:
> >> [root at localhost ~]# testparm
> >> Load smb config files from /etc/samba/smb.conf
> >> rlimit_max: increasing rlimit_max (1024) to minimum Windows limit (16384)
> >> Processing section "[homes]"
> >> Processing section "[printers]"
> >> Processing section "[cifs]"
> >> Processing section "[raw]"
> >> Processing section "[gvol]"
> >> Loaded services file OK.
> >> Server role: ROLE_STANDALONE
> >> Press enter to see a dump of your service definitions
> >>
> >> [global]
> >> workgroup = MYGROUP
> >> server string = DCS Samba Server
> >> log file = /var/log/samba/log.vfs
> >> max log size = 500000
> >> max protocol = SMB2
> >> max xmit = 262144
> >> socket options = TCP_NODELAY IPTOS_LOWDELAY SO_RCVBUF=262144
> >> SO_SNDBUF=262144
> >> stat cache = No
> >> kernel oplocks = No
> >> idmap config * : backend = tdb
> >> aio read size = 262144
> >> aio write size = 262144
> >> aio write behind = true
> >> write cache size = 268435456
> >> cups options = raw
> >> ……
> >>
> >> [cifs]
> >> path = /mnt/fuse
> >> read only = No
> >> guest ok = Yes
> >>
> >> [raw]
> >> path = /dcsdata/d0
> >> read only = No
> >> guest ok = Yes
> >>
> >> [gvol]
> >> comment = For samba export of volume  test
> >> path = /
> >> read only = No
> >> guest ok = Yes
> >> vfs objects = glusterfs
> >> glusterfs:volume = soul
> >> glusterfs:volfile_server = localhost
> >>
> >> the iozone test result with cmd: iozone -s 10G -r 1m -i0 -t 4
> >> -------------
> >>         Run began: Thu Aug 22 15:47:31 2013
> >>
> >>         File size set to 10485760 KB
> >>         Record Size 1024 KB
> >>         Command line used: iozone -s 10G -r 1m -i0 -t 4
> >>         Output is in Kbytes/sec
> >>         Time Resolution = -0.000000 seconds.
> >>         Processor cache size set to 1024 Kbytes.
> >>         Processor cache line size set to 32 bytes.
> >>         File stride size set to 17 * record size.
> >>         Throughput test with 4 processes
> >>         Each process writes a 10485760 Kbyte file in 1024 Kbyte records
> >>
> >>         Children see throughput for  4 initial writers  =  135588.82
> >> KB/sec
> >>         Parent sees throughput for  4 initial writers   =  135549.95
> >> KB/sec
> >>         Min throughput per process        =   33895.92 KB/sec
> >>         Max throughput per process        =   33900.02 KB/sec
> >>         Avg throughput per process        =   33897.20 KB/sec
> >>         Min xfer        = 10484736.00 KB
> >>
> >>         Children see throughput for  4 rewriters        =  397494.38
> >> KB/sec
> >>         Parent sees throughput for  4 rewriters       =  387431.63 KB/sec
> >>         Min throughput per process        =   99280.98 KB/sec
> >>         Max throughput per process        =   99538.40 KB/sec
> >>         Avg throughput per process        =   99373.59 KB/sec
> >>         Min xfer        = 10459136.00 KB
> >> --------------
> >>
> >>
> >>
> >>
> >> On Aug 22, 2013, at 3:31 PM, RAGHAVENDRA TALUR
> >> <raghavendra.talur at gmail.com> wrote:
> >>
> >>> Hi Kane,
> >>>
> >>> 1. Which version of samba are you running?
> >>>
> >>> 2. Can you re-run the test after adding the following lines to
> >>> smb.conf's global section and tell if it helps?
> >>> kernel oplocks = no
> >>> stat cache = no
> >>>
> >>> Thanks,
> >>> Raghavendra Talur
> >>>
> >>>
> >>>     On Wed, Aug 21, 2013 at 3:48 PM, kane <stef_9k at 163.com> wrote:
> >>>
> >>>     Hi Lala, thank you for reply this issue.
> >>>
> >>>     this is our smb.conf:
> >>>     --------
> >>>     [global]
> >>>             workgroup = MYGROUP
> >>>             server string = DCS Samba Server
> >>>             log file = /var/log/samba/log.vfs
> >>>             max log size = 500000
> >>>     #       log level = 10
> >>>     #       max xmit = 65535
> >>>     #       getwd cache = yes
> >>>     #       use sendfile = yes
> >>>     #       strict sync = no
> >>>     #       sync always = no
> >>>     #       large readwrite = yes
> >>>             aio read size = 262144
> >>>             aio write size = 262144
> >>>             aio write behind = true
> >>>     #       min receivefile size = 262144
> >>>             write cache size = 268435456
> >>>     #      oplocks = yes
> >>>             security = user
> >>>             passdb backend = tdbsam
> >>>             load printers = yes
> >>>             cups options = raw
> >>>             read raw = yes
> >>>             write raw = yes
> >>>             max xmit = 262144
> >>>             read size = 262144
> >>>             socket options = TCP_NODELAY IPTOS_LOWDELAY
> >>>     SO_RCVBUF=262144 SO_SNDBUF=262144
> >>>             max protocol = SMB2
> >>>
> >>>     [homes]
> >>>             comment = Home Directories
> >>>             browseable = no
> >>>             writable = yes
> >>>
> >>>
> >>>     [printers]
> >>>             comment = All Printers
> >>>             path = /var/spool/samba
> >>>             browseable = no
> >>>             guest ok = no
> >>>             writable = no
> >>>             printable = yes
> >>>
> >>>     [cifs]
> >>>             path = /mnt/fuse
> >>>             guest ok = yes
> >>>             writable = yes
> >>>
> >>>     [raw]
> >>>             path = /dcsdata/d0
> >>>             guest ok = yes
> >>>             writable = yes
> >>>
> >>>     [gvol]
> >>>             comment = For samba export of volume  test
> >>>             vfs objects = glusterfs
> >>>             glusterfs:volfile_server = localhost
> >>>             glusterfs:volume = soul
> >>>             path = /
> >>>             read only = no
> >>>             guest ok = yes
> >>>     --------
> >>>
> >>>     our win 7 client hardware:
> >>>     Intel® Xeon® E31230 @ 3.20GHz
> >>>     8GB RAM
> >>>
> >>>     linux client hardware:
> >>>     Intel(R) Xeon(R) CPU           X3430  @ 2.40GHz
> >>>     16GB RAM
> >>>
> >>>     pretty thanks
> >>>
> >>>     -kane
> >>>
> >>>     On Aug 21, 2013, at 4:53 PM, Lalatendu Mohanty <lmohanty at redhat.com> wrote:
> >>>
> >>>>     On 08/21/2013 01:32 PM, kane wrote:
> >>>>>     Hello:
> >>>>>
> >>>>>     We have used glusterfs 3.4 with the latest samba-glusterfs-vfs
> >>>>>     lib to test Samba performance with a Windows client.
> >>>>>
> >>>>>     Two glusterfs server nodes export a share named "gvol".
> >>>>>     Hardware:
> >>>>>     the brick uses a RAID 5 logical disk with 8 * 2T SATA HDDs
> >>>>>     10G network connection
> >>>>>
> >>>>>     One Linux client mounts "gvol" with the command:
> >>>>>     [root at localhost current]#  mount.cifs //192.168.100.133/gvol /mnt/vfs -o user=kane,pass=123456
> >>>>>
> >>>>>     Then I use iozone to test write performance in the mount dir
> >>>>>     "/mnt/vfs":
> >>>>>     [root at localhost current]# ./iozone -s 10G -r 128k -i0 -t 4
> >>>>>     …..
> >>>>>     File size set to 10485760 KB
> >>>>>     Record Size 128 KB
> >>>>>     Command line used: ./iozone -s 10G -r 128k -i0 -t 4
> >>>>>     Output is in Kbytes/sec
> >>>>>     Time Resolution = 0.000001 seconds.
> >>>>>     Processor cache size set to 1024 Kbytes.
> >>>>>     Processor cache line size set to 32 bytes.
> >>>>>     File stride size set to 17 * record size.
> >>>>>     Throughput test with 4 processes
> >>>>>     Each process writes a 10485760 Kbyte file in 128 Kbyte records
> >>>>>
> >>>>>     Children see throughput for  4 initial writers =  487376.67 KB/sec
> >>>>>     Parent sees throughput for  4 initial writers =  486184.67 KB/sec
> >>>>>     Min throughput per process =  121699.91 KB/sec
> >>>>>     Max throughput per process =  122005.73 KB/sec
> >>>>>     Avg throughput per process =  121844.17 KB/sec
> >>>>>     Min xfer = 10459520.00 KB
> >>>>>
> >>>>>     Children see throughput for  4 rewriters =  491416.41 KB/sec
> >>>>>     Parent sees throughput for  4 rewriters =  490298.11 KB/sec
> >>>>>     Min throughput per process =  122808.87 KB/sec
> >>>>>     Max throughput per process =  122937.74 KB/sec
> >>>>>     Avg throughput per process =  122854.10 KB/sec
> >>>>>     Min xfer = 10474880.00 KB
> >>>>>
> >>>>>     With the Linux client mounted via CIFS, write performance reaches
> >>>>>     480MB/s per client;
> >>>>>
> >>>>>     but when I mount "gvol" from the Win7 client with the command:
> >>>>>     net use Z: \\192.168.100.133\gvol 123456 /user:kane
> >>>>>
> >>>>>     then also run the iozone test in dir Z, even with a 1 MB write block:
> >>>>>           File size set to 10485760 KB
> >>>>>           Record Size 1024 KB
> >>>>>           Command line used: iozone -s 10G -r 1m -i0 -t 4
> >>>>>           Output is in Kbytes/sec
> >>>>>           Time Resolution = -0.000000 seconds.
> >>>>>           Processor cache size set to 1024 Kbytes.
> >>>>>           Processor cache line size set to 32 bytes.
> >>>>>           File stride size set to 17 * record size.
> >>>>>           Throughput test with 4 processes
> >>>>>           Each process writes a 10485760 Kbyte file in 1024 Kbyte
> >>>>>     records
> >>>>>
> >>>>>           Children see throughput for  4 initial writers  =
> >>>>>      148164.82 KB/sec
> >>>>>           Parent sees throughput for  4 initial writers   =
> >>>>>      148015.48 KB/sec
> >>>>>           Min throughput per process              = 37039.91 KB/sec
> >>>>>           Max throughput per process              = 37044.45 KB/sec
> >>>>>           Avg throughput per process              = 37041.21 KB/sec
> >>>>>           Min xfer              = 10484736.00 KB
> >>>>>
> >>>>>           Children see throughput for  4 rewriters        =
> >>>>>      147642.12 KB/sec
> >>>>>           Parent sees throughput for  4 rewriters         =
> >>>>>      147472.16 KB/sec
> >>>>>           Min throughput per process              = 36909.13 KB/sec
> >>>>>           Max throughput per process              = 36913.29 KB/sec
> >>>>>           Avg throughput per process              = 36910.53 KB/sec
> >>>>>           Min xfer              = 10484736.00 KB
> >>>>>
> >>>>>     iozone test complete.
> >>>>>
> >>>>>     it only reaches 140MB/s.
> >>>>>
> >>>>>     So, has anyone met this problem? Is there a Win7 client
> >>>>>     setting to reconfigure for better performance?
> >>>>>
> >>>>>     Thanks!
> >>>>>
> >>>>>     kane
> >>>>>     ----------------------------------------------------------------
> >>>>>     Email: kai.zhou at soulinfo.com
> >>>>>     Tel:   0510-85385788-616
> >>>>>
> >>>>
> >>>>
> >>>>     Hi kane,
> >>>>
> >>>>     I do run I/O using a Win7 client with glusterfs 3.4, but I have never
> >>>>     compared the performance with a Linux CIFS mount. I don't think
> >>>>     we need any special configuration on the Windows side. I hope
> >>>>     your Linux and Windows clients have similar configurations, i.e.
> >>>>     RAM, cache, disk type, etc. However, I am curious whether
> >>>>     your setup uses the VFS plug-in correctly. We can confirm that by
> >>>>     looking at the smb.conf entry for the gluster volume, which should
> >>>>     have been created automatically by the "gluster start" command.
> >>>>
> >>>>     e.g. the entry in smb.conf for one of my volumes, "smbvol", looks
> >>>>     like below:
> >>>>
> >>>>     [gluster-smbvol]
> >>>>     comment = For samba share of volume smbvol
> >>>>     vfs objects = glusterfs
> >>>>     glusterfs:volume = smbvol
> >>>>     path = /
> >>>>     read only = no
> >>>>     guest ok = yes
> >>>>
> >>>>     Kindly copy the smb.conf entries for your gluster volume into
> >>>>     this email.
> >>>>     -Lala
> >>>>>
> >>>>>
> >>>>>     _______________________________________________
> >>>>>     Gluster-users mailing list
> >>>>>     Gluster-users at gluster.org
> >>>>>     http://supercolony.gluster.org/mailman/listinfo/gluster-users
> >>>>
> >>>
> >>>
> >>>     _______________________________________________
> >>>     Gluster-users mailing list
> >>>     Gluster-users at gluster.org
> >>>     http://supercolony.gluster.org/mailman/listinfo/gluster-users
> >>>
> >>>
> >>>
> >>>
> >>> --
> >>> *Raghavendra Talur *
> >>>
> >>
> >
> 
> 


