[Gluster-devel] stability and configurations

Hans Einar Gautun einar.gautun at statkart.no
Thu Oct 4 11:56:56 UTC 2007


On Wed, 03.10.2007 at 14:35 +0200, Jacques Mattheij wrote:
> Hello there gluster developers and users,
> 
> I'm trying to get a handle on what it takes to get glusterfs to
> work reliably. After several weeks of testing we have to date
> not been able to get it to run stably in our setup, and I'm
> beginning to wonder if there is a possible statistical
> approach to finding out what works and what doesn't, rather
> than trying to go about it one bug at a time.
> 

A good start is to have a clean, minimal config and get it working. With
such a setup you can check the network and other things outside of
glusterfs.
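As a sketch of what such a clean config might look like (hostnames and
paths here are hypothetical, and the syntax is the glusterfs 1.3-era
spec-file format - verify option names against your installed version):

```
# server.vol - export one local directory, nothing else
volume brick
  type storage/posix
  option directory /data/export    # hypothetical export path
end-volume

volume server
  type protocol/server
  option transport-type tcp/server
  option auth.ip.brick.allow *     # wide-open auth, for testing only
  subvolumes brick
end-volume

# client.vol - mount that one brick, no performance translators yet
volume client
  type protocol/client
  option transport-type tcp/client
  option remote-host 192.168.0.1   # hypothetical server address
  option remote-subvolume brick
end-volume
```

Once this bare setup survives your tests, you know any new breakage
comes from whatever you add on top of it.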

One network issue worth mentioning:
When doing Ethernet bonding, configure a channel on the switch. With
"balance-alb" (mode 6) bonding you can get this problem:

The client starts a transfer. The server is busy on the receiving NIC,
so it tells the client to use another NIC in the channel. The client
thinks the connection was refused, and the transfer never starts. This
is prevented when the channel is terminated on the switch.
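On a Debian etch-era system, a switch-terminated channel would be set up
roughly like this (a sketch, assuming the bonding module and ifenslave
are installed; file names and addresses are hypothetical, and the switch
ports must be configured as a matching LACP channel):

```
# /etc/modprobe.d/bonding (hypothetical file name)
# mode 4 (802.3ad/LACP) pairs with a channel on the switch,
# unlike mode 6 (balance-alb), which needs no switch support
alias bond0 bonding
options bonding mode=802.3ad miimon=100

# /etc/network/interfaces fragment
auto bond0
iface bond0 inet static
    address 192.168.0.10          # hypothetical address
    netmask 255.255.255.0
    up ifenslave bond0 eth0 eth1
    down ifenslave -d bond0 eth0 eth1
```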

When the clean config is working you can add readahead, writebehind and
so on. One at a time of course, so you can pinpoint the problems. See
below...
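Adding a performance translator just means stacking one more volume on
the client side. A sketch, again in the 1.3-era spec-file format (the
option name and value are assumptions to verify against the glusterfs
documentation for your version):

```
# stacked on top of the plain "client" volume from the clean config
volume readahead
  type performance/read-ahead
  option page-size 65536     # assumed option name/value - verify
  subvolumes client
end-volume
```

Mount the topmost volume; if problems appear after adding exactly one
translator, you know where to look.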



> This might help to compile a checklist of elements that
> might help to create a 'must have' set of conditions
> in order to be able to run glusterfs stable in a production
> environment.
> 

A must-have, I would say, is:
fuse as a module in the kernel, patched with the fuse-glfs patch

> It will also help to give me a feeling if not having
> glusterfs working right 'out of the box' is the rule
> rather than the exception.
> 

The box is too small (there are far too many setup possibilities in
glusterfs), so you have to work on your setup :)

> For starters here is my setup:
> 
> 5 node cluster, dual opterons, 8G ram per box, supermicro
> chassis, 200G sata drives. 100 Mb link to the net, GigE
> backchannel between the nodes.
> 
> The machines run Debian 'etch' 64 bits linux, kernel version
> is 2.6.17.11. Fuse has been upgraded to the glfs4 patch.
> 
> Glusterfs configuration:
> 
> readahead / writebehind / unify

I have disabled writebehind because of this:

You cp a bunch of files onto the glusterfs disk. The directories and
files are created, but the file size is 0 until the copy finishes.
Meanwhile you do an ls -l to check that the files are OK, but you see
the zero file size. If this is done in a script, which acts very fast,
the script may terminate. When you check for yourself after the failure,
the files are filled in and OK - and you don't understand what stopped
the script.

I have seen this in both single-directory and unify setups.
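Disabling it just means leaving the write-behind volume out of the
client spec file, e.g. (a 1.3-era sketch; the option name is an
assumption to verify):

```
# commented out of the client spec until the zero-size race is ruled out
# volume writebehind
#   type performance/write-behind
#   option aggregate-size 131072   # assumed option name - verify
#   subvolumes client
# end-volume
```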

My setup:

Servers:
2 nodes with Ubuntu 6.06 in a client-side unify, AMD Athlon and P3 - 32
bit, 1GB RAM each. Result data from/for computation. Total of 2.8TB on a
3ware ATA controller, 300GB disks in RAID5. Bonded Ethernet, 2 GbE NICs
each. Kernel stock 2.6.15-23-server.

1 node with Debian etch amd64, 4GB RAM. Single-directory export (like
NFS) of /home and binary programs for computation. Total of 1.5TB on a
3ware SATA controller, 400GB disks in RAID5. Bonded Ethernet, 2 GbE
NICs. Kernel vanilla 2.6.21.6.
Acts as a local client as well.

Clients:
5 nodes for computation, all Debian etch.
2 amd64, bonded Ethernet, 2 GbE NICs each. Kernel vanilla 2.6.21.6.
3 Xeon 32-bit, bonded Ethernet, 3 GbE NICs each. Kernel stock
2.6.18-5-686-bigmem.

fuse-glfs4, glusterfs latest TLA, 1.3.3.

Works like a charm, mixing 32/64-bit and old Ubuntu / new Debian (gcc
4.0.3 / 4.1.2).

Best regards

Einar Gautun
Norwegian mapping authority




