[Gluster-devel] exporting glusterfs with samba
Miklos Balazs
mbalazs at gmail.com
Wed May 16 13:50:58 UTC 2007
Hello,
One more thing I forgot to mention: I was using version 1.3.0-pre3 and
have now upgraded to pre4, but it didn't get any better.
I have also tried exporting another directory with Samba, one that is
not under the gluster tree, and I can write there at 25-30 MB/s. I can
also write at this speed to the gluster share directly from the samba
node.
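For reference, the write speeds above are from large sequential writes. A minimal way to reproduce the comparison at each hop is a plain dd run (TARGET below is a placeholder - point it at the Samba mount, the gluster mount on the client, or a plain local directory in turn; this assumes GNU dd, which supports conv=fsync):

```shell
# Placeholder target; substitute the path of the share under test.
TARGET=$(mktemp -d)

# Sequential 64 MB write; conv=fsync makes dd flush before reporting,
# so the MB/s figure reflects actual disk/network throughput rather
# than the page cache.
dd if=/dev/zero of="$TARGET/testfile" bs=1M count=64 conv=fsync

rm -rf "$TARGET"
```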
The network hierarchy is like this:
On node5
eth0 - 192.168.0.40
eth1 - 192.168.2.15
On node6
eth0 - 192.168.0.41
eth1 - 192.168.2.16
The 0.x and 2.x subnets are on physically separate networks; Samba
clients connect on 0.x, and gluster communication is on 2.x.
Here are my config files:
(I have tried using the io-threads translator, but it didn't make any
difference either.)
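(For completeness, a translator like io-threads would be layered between
the unify volume and write-behind in client.vol, roughly like the
fragment below; thread-count 4 is just an example value, not something
tuned, and writebehind's "subvolumes" line would then point at iot
instead of bricks:

volume iot
type performance/io-threads
option thread-count 4 # example value, not tuned
subvolumes bricks
end-volume
)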
node5.vol:
volume brick-5-a
type storage/posix # POSIX FS translator
option directory /mnt/brick-5-a # Export this directory
end-volume
volume brick-6-b
type storage/posix # POSIX FS translator
option directory /mnt/brick-6-b # Export this directory
end-volume
volume server
type protocol/server
option transport-type tcp/server # For TCP/IP transport
subvolumes brick-5-a brick-6-b
option auth.ip.brick-5-a.allow 192.168.* # Allow access to "brick" volume
option auth.ip.brick-6-b.allow 192.168.* # Allow access to "brick" volume
end-volume
--------------------------
node6.vol:
volume brick-6-a
type storage/posix # POSIX FS translator
option directory /mnt/brick-6-a # Export this directory
end-volume
volume brick-5-b
type storage/posix # POSIX FS translator
option directory /mnt/brick-5-b # Export this directory
end-volume
volume server
type protocol/server
option transport-type tcp/server # For TCP/IP transport
subvolumes brick-6-a brick-5-b
option auth.ip.brick-6-a.allow 192.168.* # Allow access to "brick" volume
option auth.ip.brick-5-b.allow 192.168.* # Allow access to "brick" volume
end-volume
----------------------
client.vol:
### file: client-volume.spec.sample
### Add client feature and attach to remote subvolume
volume brick-5-a
type protocol/client
option transport-type tcp/client # for TCP/IP transport
option remote-host 192.168.2.15 # IP address of the remote brick
option remote-subvolume brick-5-a # name of the remote volume
end-volume
volume brick-5-b
type protocol/client
option transport-type tcp/client # for TCP/IP transport
option remote-host 192.168.2.16 # IP address of the remote brick
option remote-subvolume brick-5-b # name of the remote volume
end-volume
volume brick-6-a
type protocol/client
option transport-type tcp/client # for TCP/IP transport
option remote-host 192.168.2.16 # IP address of the remote brick
option remote-subvolume brick-6-a # name of the remote volume
end-volume
volume brick-6-b
type protocol/client
option transport-type tcp/client # for TCP/IP transport
option remote-host 192.168.2.15 # IP address of the remote brick
option remote-subvolume brick-6-b # name of the remote volume
end-volume
volume afr-5
type cluster/afr
subvolumes brick-5-a brick-5-b
option replicate *:2
end-volume
volume afr-6
type cluster/afr
subvolumes brick-6-a brick-6-b
option replicate *:2
end-volume
volume bricks
type cluster/unify
subvolumes afr-5 afr-6
option scheduler rr
option rr.limits.min-free-disk 1GB
end-volume
volume writebehind
type performance/write-behind
option aggregate-size 131072
subvolumes bricks
end-volume
volume readahead
type performance/read-ahead
option page-size 65536
option page-count 16
subvolumes writebehind
end-volume
-----------------------------------------
smb.conf:
[global]
workgroup = WORKGROUP
server string = Gluster
hosts allow = 192.168.0.
load printers = no
log file = /var/log/samba/%m.log
max log size = 50
security = user
socket options = TCP_NODELAY SO_RCVBUF=8192 SO_SNDBUF=8192
local master = no
preferred master = no
dns proxy = no
idmap uid = 16777216-33554431
idmap gid = 16777216-33554431
template shell = /bin/false
winbind use default domain = no
[Gluster]
path = /gluster
valid users = user
public = yes
writable = yes
browseable = yes
----------------------------
Thanks,
Miklos