[Gluster-devel] GlusterFS High Availability Storage with GlusterFS on Centos5

Guido Smit guido at comlog.nl
Wed Aug 8 12:59:46 UTC 2007


Hi all,

I've just set up my new cluster with GlusterFS. I tried the examples
from the "GlusterFS High Availability Storage with GlusterFS" page on
the site, but it kept crashing, so I've moved several things into the
client config and it's running smoothly now. My question now is: what
can I do to increase performance, and are there any mistakes in my
configs?

My servers all run CentOS 5 with glusterfs-1.3.pre7 and fuse-2.7.0
(from SourceForge).

My configs:

glusterfs-server.vol :

volume mailspool-ds
        type storage/posix                              # POSIX FS translator
        option directory /home/export/mailspool         # Export this directory
end-volume

volume mailspool-ns
        type storage/posix                              # POSIX FS translator
        option directory /home/export/mailspool-ns      # Export this directory
end-volume

volume mailspool-ds-io              #iothreads can give performance a boost
        type performance/io-threads
        option thread-count 8
        option cache-size 64MB
        subvolumes mailspool-ds
end-volume

volume mailspool-ns-io
        type performance/io-threads
        option thread-count 8
        option cache-size 64MB
        subvolumes mailspool-ns
end-volume

volume server
        type protocol/server
        option transport-type tcp/server
        subvolumes mailspool-ds-io mailspool-ns-io
        option auth.ip.mailspool-ds-io.allow 192.168.1.*,127.0.0.1
        option auth.ip.mailspool-ns-io.allow 192.168.1.*,127.0.0.1
end-volume
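
If I ever need to pin the port explicitly, protocol/server also seems
to take a listen-port option (with a matching remote-port on the
protocol/client side); 6996 appears to be the default in 1.3. A sketch
I haven't tested:

volume server
        type protocol/server
        option transport-type tcp/server
        option listen-port 6996                 # assumed 1.3 default port
        subvolumes mailspool-ds-io mailspool-ns-io
        option auth.ip.mailspool-ds-io.allow 192.168.1.*,127.0.0.1
        option auth.ip.mailspool-ns-io.allow 192.168.1.*,127.0.0.1
end-volume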


glusterfs-client.vol :

##############################################
###  GlusterFS Client Volume Specification  ##
##############################################
volume pop2-ds
        type protocol/client
        option transport-type tcp/client        # for TCP/IP transport
        option remote-host 192.168.1.42         # DNS round robin pointing towards all 3 GlusterFS servers
        option remote-subvolume mailspool-ds-io # name of the remote volume
end-volume

volume pop2-ns
        type protocol/client
        option transport-type tcp/client        # for TCP/IP transport
        option remote-host 192.168.1.42         # DNS round robin pointing towards all 3 GlusterFS servers
        option remote-subvolume mailspool-ns-io # name of the remote volume
end-volume

volume smtp1-ds
        type protocol/client
        option transport-type tcp/client
        option remote-host 192.168.1.224
        option remote-subvolume mailspool-ds-io
end-volume

volume smtp1-ns
        type protocol/client
        option transport-type tcp/client
        option remote-host 192.168.1.224
        option remote-subvolume mailspool-ns-io
end-volume
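
The remote-host comments above mention three GlusterFS servers, but
only two client pairs (pop2 and smtp1) are defined here; if a third
server should take part, it would need its own pair, e.g. with a
hypothetical address, and would also have to be added to the AFR
subvolume lists below:

volume smtp2-ds
        type protocol/client
        option transport-type tcp/client
        option remote-host 192.168.1.x          # hypothetical third server
        option remote-subvolume mailspool-ds-io
end-volume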

volume mailspool-afr
        type cluster/afr
        subvolumes pop2-ds smtp1-ds
        option replicate *:2
end-volume

volume mailspool-ns-afr
        type cluster/afr
        subvolumes pop2-ns smtp1-ns
        option replicate *:2
end-volume
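
As far as I understand, the replicate option takes pattern:count
pairs, so *:2 copies every file to both subvolumes. Something like
this (hypothetical volume name, untested here) would keep temp files
unreplicated while still keeping two copies of everything else:

volume mailspool-afr-alt
        type cluster/afr
        subvolumes pop2-ds smtp1-ds
        option replicate *.tmp:1,*:2            # pattern:count pairs
end-volume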

volume bricks
        type cluster/unify
        subvolumes mailspool-afr
        option namespace mailspool-ns-afr
        option scheduler alu
        option alu.limits.min-free-disk  60GB              # Stop creating files when free space < 60GB
        option alu.limits.max-open-files 10000
        option alu.order disk-usage:read-usage:write-usage:open-files-usage:disk-speed-usage
        option alu.disk-usage.entry-threshold 2GB          # Units in KB, MB and GB are allowed
        option alu.disk-usage.exit-threshold  60MB         # Units in KB, MB and GB are allowed
        option alu.open-files-usage.entry-threshold 1024
        option alu.open-files-usage.exit-threshold 32
        option alu.stat-refresh.interval 10sec
end-volume
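
One thing I noticed myself: unify has only one subvolume here
(mailspool-afr), so the ALU scheduler never actually gets a choice to
make. Until a second AFR pair is added, a simpler scheduler should
behave identically; a sketch with a hypothetical volume name,
untested:

volume bricks-simple
        type cluster/unify
        subvolumes mailspool-afr
        option namespace mailspool-ns-afr
        option scheduler rr                     # round-robin; trivial with one subvolume
end-volume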

### Add writeback feature
volume writeback
        type performance/write-behind
        option aggregate-size 131072 # unit in bytes
        subvolumes bricks
end-volume

### Add readahead feature
volume readahead
        type performance/read-ahead
        option page-size 65536     # unit in bytes
        option page-count 16       # cache per file = (page-count x page-size)
        subvolumes writeback
end-volume
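
One more thing I'm considering for performance: I've seen 1.3 example
specs that stack stat-prefetch on top of read-ahead to cache
directory/stat information for workloads with lots of repeated lookups
(a mail spool probably qualifies). A sketch, assuming the option name
from those examples; the mountpoint would then use statprefetch as its
top volume instead of readahead:

### Add stat-prefetch feature (untested here)
volume statprefetch
        type performance/stat-prefetch
        option cache-seconds 2     # how long cached stat data stays valid
        subvolumes readahead
end-volume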

-- 
Regards,

Guido Smit