[Gluster-users] CIFS options - anyone done A-B comparisons on various setups?

Gunnar gluster at akitogo.com
Wed Dec 5 10:12:16 UTC 2012


thanks for posting the conf. I had problems reading and writing on the
Samba volume and thought it was config-related. It turned out I had to
change how the volume was mounted; now I do (on CentOS 6.3 and Gluster 3.3.1):
mount -o vers=3,nolock -t nfs myglusterserver.local:/gv01 /mnt/nfsgv01
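For reference, the same mount can be sketched as an /etc/fstab entry so it survives reboots; the server name and mount point are the ones from the command above, and this is just an illustration of the equivalent fstab line, not something copied from my actual config:

```
# NFS mount of the Gluster volume, equivalent to the mount command above:
# force NFSv3 (vers=3) and disable NLM locking (nolock)
myglusterserver.local:/gv01  /mnt/nfsgv01  nfs  vers=3,nolock  0 0
```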

My setup is now:
2 servers
1 replicated volume

1. server:
nfs mount of gluster volume, shared by Samba

2. server
gluster fuse mount, shared by Samba

3. Windows, using direct nfs access

For testing I'm copying batches of 1000 image files (around 200 kB on
average), four batches in parallel. I'm running the tests on a live
system that already has some I/O load, so the numbers are estimates.
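The test above can be sketched roughly like this (the paths, file names, and dummy-file generation are my illustration, not the actual test harness; the real test used ~200 kB images, shrunk here to 1 kB so the sketch runs quickly):

```shell
#!/bin/bash
# Sketch of the benchmark: 4 batches of 1000 files, copied in parallel.
SRC=/tmp/bench_src
DST=/tmp/bench_dst
rm -rf "$SRC" "$DST"
mkdir -p "$DST"

# Create 4 batches of 1000 dummy files each (real test: ~200 kB images).
for b in 1 2 3 4; do
    mkdir -p "$SRC/batch$b"
    for i in $(seq 1 1000); do
        dd if=/dev/zero of="$SRC/batch$b/img$i.jpg" bs=1024 count=1 2>/dev/null
    done
done

# Copy all four batches in parallel and time the whole run;
# 'wait' inside the timed subshell so the background copies are included.
time ( for b in 1 2 3 4; do
           cp -r "$SRC/batch$b" "$DST/" &
       done
       wait )
```

On the real setup the source would sit on local disk and $DST on the NFS, fuse, or CIFS mount under test.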

The fastest is #3, direct NFS access from Windows (I'm using the Windows
2003 NFS client, which is not very nice). I'm able to copy 4000 files in
1:40 min.
Second fastest is #1, the NFS mount shared by Samba: 4000 files in around 6 min.
Slowest is #2, where I need more than 12 min for 4000 files.

I have absolutely no clue how to speed up Samba on top of a Gluster fuse volume.
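For the record, the knobs I would try first on the fuse-mounted share (these are standard Samba options, but whether they help on top of Gluster fuse is an open question; the values are starting points to test, not tuned settings from my setup):

```
[sharename]
        # Hand off reads/writes larger than 16 kB to async I/O
        aio read size = 16384
        aio write size = 16384
        # Avoid Nagle delays on many small transfers
        socket options = TCP_NODELAY
```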

It's not exactly your scenario, but it seems that:
 > using a Gluster client to mount the Gluster storage, with cifs 
providing re-export to remote systems
is the slowest option, at least for smaller files.

I don't think it will make much difference whether the NFS volume is
mounted locally or on a third server. (I've never tried ucarp.)
If I find some time I'll create another VM and mount the NFS share there.

I'm not sure if I can post images to this list, but I have made
screenshots of the network throughput and of the copy times.


On 05.12.2012 02:02, Whit Blauvelt wrote:
> Gunnar,
> I claim nothing special in terms of Samba knowledge. Not even that this is
> optimal in any dimension. All I can say is that none of my users have
> complained about performance, in a situation where speed's not critical as
> long as the overall system is dependable. But my current Samba conf, for a
> CIFS share run from a third system exporting an NFS share via CIFS that
> originates on Glusterfs (3.1.5) is:
> [global]
>          workgroup = xyz
>          netbios name = abc
>          interfaces = eth1
>          encrypt passwords = true
>          wins server =
>          create mask = 0666
>          force create mode = 0666
>          directory mask = 0777
>          force directory mode = 0777
>          hosts allow = 192.168.1.
>          load printers = no
>          printing = none
>          printcap name = /dev/null
>          disable spoolss = yes
>          unix extensions = no
> [sharename]
>          path = /path/to/nfsmountof/glusterfs
>          valid users = qwert yuiop
>          writeable = Yes
>          posix-locking = No
> I recall having strong reasons to turn off unix extensions and
> posix-locking, in terms of hitting errors otherwise. I should have kept
> notes though, as that was long enough ago I don't remember the specifics.
> What are you using as a Windows NFS client? I had the impression Windows
> didn't have a good option there.
> Whit
> On Tue, Dec 04, 2012 at 11:06:45PM +0100, Gunnar wrote:
>> Hi Whit,
>> could you post your smb.conf? I'm currently a bit lost with a performance
>> optimized setting for millions of small files accessed via a Samba share
>> (local Gluster fuse mount). I would be glad to try out your approach and
>> see how the results will be since a NFS access from Windows gives me a
>> much better throughput than the Samba share.
>> Thanks,
>> Gunnar

-------------- next part --------------
A non-text attachment was scrubbed...
Name: copy_time.png
Type: image/png
Size: 45458 bytes
Desc: not available
URL: <http://supercolony.gluster.org/pipermail/gluster-users/attachments/20121205/6b7612ba/attachment.png>
-------------- next part --------------
A non-text attachment was scrubbed...
Name: network_throughput.png
Type: image/png
Size: 45692 bytes
Desc: not available
URL: <http://supercolony.gluster.org/pipermail/gluster-users/attachments/20121205/6b7612ba/attachment-0001.png>
