[Gluster-devel] Best performance glusterfs v. 3.0.0 and samba

Harshavardhana harsha at gluster.com
Mon Feb 15 17:45:32 UTC 2010


Hi Roland,

* replies inline *
On Mon, Feb 15, 2010 at 10:18 PM, Roland Fischer
<roland.fischer at xidras.com>wrote:

> Hello,
>
> I have some trouble with glusterfs v3.0.0 and Samba.
> Does anybody have experience with glusterfs and Samba? Maybe my config files
> or tuning options are bad?
>
> We use Xen 3.4.1, glusterfs 3.0.0 and client-side replication.
>
> One domU which runs on glusterfs is supposed to share another gfs LUN via
> Samba. But the performance is bad.
>
> servervolfile:
> cat export-web-data-client_repl.vol
> # export-web-data-client_repl
> # gfs-01-01 /GFS/web-data
> # gfs-01-02 /GFS/web-data
>
> volume posix
>  type storage/posix
>  option directory /GFS/web-data
> end-volume
>
> volume locks
>  type features/locks
>  subvolumes posix
> end-volume
>
> volume writebehind
>  type performance/write-behind
>  option cache-size 4MB
>  option flush-behind on
>  subvolumes locks
> end-volume
>
> volume web-data
>  type performance/io-threads
>  option thread-count 32
>  subvolumes writebehind
> end-volume
>
May we know the reason for placing io-threads above write-behind? Have you
seen any benefit from stacking it this way? If you are not sure, I would
suggest moving write-behind above io-threads; a sketch follows below.
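For example, a rough sketch of the export stack with write-behind placed
above io-threads (the volume name "iothreads" is mine, the options are copied
from your volfile; please verify before using):

volume iothreads
  type performance/io-threads
  option thread-count 32
  subvolumes locks
end-volume

volume web-data
  type performance/write-behind
  option cache-size 4MB
  option flush-behind on
  subvolumes iothreads
end-volume

The protocol/server volume and the clients can keep referring to web-data as
before.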

Can you use volume files generated with glusterfs-volgen in case you are not
sure which way to stack the translators?
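For a two-server replicated export like yours, something along these lines
should generate matching server and client volfiles (hostnames and paths are
taken from your mail, the rest is just an example; check the generated files
before deploying):

glusterfs-volgen --name web-data --raid 1 \
    gfs-01-01:/GFS/web-data gfs-01-02:/GFS/web-data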

> volume server
>  type protocol/server
>  option transport-type tcp
>  option transport.socket.listen-port 7000
>  option auth.addr.web-data.allow *
>  subvolumes web-data
> end-volume
>
> clientvolfile:
> cat /etc/glusterfs/mount-web-data-client_repl.vol
> volume gfs-01-01
>  type protocol/client
>  option transport-type tcp
>  option remote-host gfs-01-01
>  option remote-port 7000
>  option ping-timeout 5
>  option remote-subvolume web-data
> end-volume
>
> volume gfs-01-02
>  type protocol/client
>  option transport-type tcp
>  option remote-host gfs-01-02
>  option remote-port 7000
>  option ping-timeout 5
>  option remote-subvolume web-data
> end-volume
>
> volume web-data-replicate
>    type cluster/replicate
>    subvolumes gfs-01-01 gfs-01-02
> end-volume
>
> volume readahead
>  type performance/read-ahead
>  option page-count 16              # cache per file = (page-count x page-size)
>  subvolumes web-data-replicate
> end-volume
>
What is the TOTAL RAM on the client side and on the server side? How many
servers and clients do you have? A read-ahead page-count of 16 is no good on
an ethernet link; you might be choking up the bandwidth unnecessarily. See
the sketch below.
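As a sketch, something like this on the client side is usually plenty on a
gigabit link (the page-count of 2 is only a suggestion; tune it to your link
speed and available RAM):

volume readahead
  type performance/read-ahead
  option page-count 2        # cache per file = (page-count x page-size)
  subvolumes web-data-replicate
end-volume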


> volume writebehind
>  type performance/write-behind
>  option cache-size 2048KB
>  option flush-behind on
>  subvolumes readahead
> end-volume
>
> volume iocache
>  type performance/io-cache
>  option cache-size 256MB  #1GB supported
>  option cache-timeout 1
>  subvolumes writebehind
> end-volume
>
Are all your datasets really worth only 256MB?
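If your working set is larger, a bigger io-cache may help, for example (the
1GB value is only an illustration; keep it within the RAM you can actually
spare):

volume iocache
  type performance/io-cache
  option cache-size 1GB      # size towards the hot working set
  option cache-timeout 1
  subvolumes writebehind
end-volume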

> volume quickread
>    type performance/quick-read
>    option cache-timeout 1
>    option max-file-size 64kB
>    subvolumes iocache
> end-volume
>
> volume statprefetch
>    type performance/stat-prefetch
>    subvolumes quickread
> end-volume
>
>
> Thank you very much
> regards,
> Roland
>
Even with this, we would need to know the backend disk performance with
O_DIRECT to properly analyse the cost of using buffering on the server side
to get better performance out of the system. A rough test is sketched below.
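As a baseline, you could run something like this directly against the backend
directory on each server (not through the glusterfs mount; the file name and
sizes are just examples):

dd if=/dev/zero of=/GFS/web-data/dd.test bs=1M count=1024 oflag=direct
dd if=/GFS/web-data/dd.test of=/dev/null bs=1M iflag=direct
rm -f /GFS/web-data/dd.test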

Also, have you tried the options below in your smb.conf?

socket options = TCP_NODELAY IPTOS_LOWDELAY SO_SNDBUF=131072 SO_RCVBUF=131072
max xmit = 131072
getwd cache = yes
use sendfile = yes

SO_RCVBUF and SO_SNDBUF can be tuned depending on your needs.

Thanks
--
Harshavardhana
