[Gluster-users] Glusterfs 3.1 with Ubuntu Lucid 32bit

Deadpan110 deadpan110 at gmail.com
Wed Oct 20 12:03:40 UTC 2010


I have posted this to the lists purely to help others - please do not
consider any of the following suitable for a production environment
and follow these rough instructions at your own risk.

Feel free to reply to this posting with your own additions, as this
may or may not work for everybody!

I will not be held responsible for data loss, excessive CPU or mem
usage etc etc etc...


INSTALL NEEDED COMPONENTS:

# make sure you have completely removed any other Ubuntu glusterfs packages
apt-get remove --purge 'glusterfs-*'

# install the build environment
apt-get install sshfs build-essential flex bison byacc vim wget libreadline-dev
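
To double-check that nothing was left behind before building, a quick
package listing does the trick (plain dpkg, nothing gluster-specific):

# should return no installed (ii) gluster packages
dpkg -l | grep -i gluster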


BUILD AND INSTALL GLUSTERFS 3.1

cd /usr/src
wget http://download.gluster.com/pub/gluster/glusterfs/3.1/LATEST/glusterfs-3.1.0.tar.gz
tar -zxvf glusterfs-3.1.0.tar.gz
cd glusterfs-3.1.0
./configure

# If all went well, you should see the following:

GlusterFS configure summary
===========================
FUSE client        : yes
Infiniband verbs   : no
epoll IO multiplex : yes
argp-standalone    : no
fusermount         : no
readline           : yes

# Now continue to do the usual:

make
make install
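
On Ubuntu, a source build like this installs the shared libraries
under /usr/local/lib, so if glusterd complains about a missing
libglusterfs when you first start it, refresh the linker cache and
confirm the binaries are found:

# refresh the shared library cache and sanity-check the install
ldconfig
glusterfs --version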

MY GREY AREA - Getting it up and running
There should now be a file /etc/glusterfs/glusterd.vol, and it should
contain the following:

volume management
    type mgmt/glusterd
    option working-directory /etc/glusterd
    option transport-type socket,rdma
    option transport.socket.keepalive-time 10
    option transport.socket.keepalive-interval 2
end-volume

I am unsure whether I copied the file from elsewhere or whether the
install created it, so make sure it exists and contains the above
info, as it points to the working directory that is now used to store
all of your configuration.
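
If the working directory named in that volfile does not exist yet, it
is worth creating it up front (harmless if glusterd would have created
it anyway - the path matches the option above):

mkdir -p /etc/glusterd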

The next problem I ran into was with the init script itself. I have a
strong feeling this should be done elsewhere (i.e. within
/etc/default/), but this is possibly a dirty and nasty thing to do:

Edit /etc/init.d/glusterd with the editor of your choice and add the
following to the variables at the top:

CONFIGFILE='/etc/glusterfs/glusterd.vol'

You should now be able to start the service.
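
For reference, my start action ends up invoking something like the
sketch below - the exact start-stop-daemon line may differ in your
copy of the init script, so treat this as a rough guide only (paths
assume the default /usr/local prefix from the source install):

# near the top of /etc/init.d/glusterd
DAEMON=/usr/local/sbin/glusterd
CONFIGFILE='/etc/glusterfs/glusterd.vol'

# in the start action, the volfile is passed via -f
start-stop-daemon --start --quiet --exec $DAEMON -- -f $CONFIGFILE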


SETTING UP YOUR OTHER NODES

Depending on exactly how you install on the other nodes, you can run
into a few problems: if you simply copy the src directory and install,
you will find yourself with machines all using the same UUID.

cat /etc/glusterd/glusterd.info
UUID=5714e9b0-d8db-11df-937b-0800200c9a66

Make sure this is unique on every node; if it is not, then create a
new UUID (I simply googled and generated some UUIDs online - but
whatever method you use - even rolling dice or playing cards - this
NEEDS to be unique).
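
A less error-prone option: on Ubuntu, the uuidgen tool (from the
uuid-runtime package) generates a random UUID for you. With glusterd
stopped on that node, something like this should do it:

apt-get install uuid-runtime
echo "UUID=$(uuidgen)" > /etc/glusterd/glusterd.info
cat /etc/glusterd/glusterd.info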

You should now be able to start glusterd on every node.

/etc/init.d/glusterd start
/etc/init.d/glusterd status


NOTES ON THE NEW CLI USAGE

I love the new CLI - it is great, and it will create the brick
directories on all nodes for you - so from here on in, you can use the
documentation to create the storage you require, starting by building
your trusted storage pool:

http://www.gluster.com/community/documentation/index.php/Gluster_3.1:_Creating_Trusted_Storage_Pools

To simplify, do this from one node with glusterd running on all nodes
(see the example below)... do not try to probe the node you are on; it
is automatically part of the pool - and as long as the UUIDs are
unique as mentioned above, then you should be good to go.
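
For a 3-node pool, the probing boils down to a couple of commands run
from node1 (node2/node3 being whatever hostnames or IPs your systems
can resolve):

gluster peer probe node2
gluster peer probe node3

# both peers should show up as connected
gluster peer status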

As I said above, the CLI is great, so there is no need to manually
mkdir - simply choose your location and run (to create a simple
distributed volume across 3 nodes):

gluster volume create test-volume transport tcp \
    node1:/my/storage/test1 \
    node2:/my/storage/test2 \
    node3:/my/storage/test3

* replace node1/node2/node3 with IP addresses or hostnames that all
the systems can resolve, and the paths with the locations where the
underlying bricks will be stored.
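
Once created, the volume still has to be started before any client can
mount it, and the info subcommand confirms what you built:

gluster volume start test-volume
gluster volume info test-volume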


NOTES ON MOUNTING

Well, it is all there and ready for use - now to mount it!

If you have already read the documentation, then you know there are 3
ways to access it.

I used 2 methods:

1: glusterfs native (fuse)

http://www.gluster.com/community/documentation/index.php/Gluster_3.1_Native_Client_Guide
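
With the client bits installed by make install above, the native mount
for the test-volume example looks like this (node1 can be any node in
the pool):

mkdir -p /mnt/test-volume
mount -t glusterfs node1:/test-volume /mnt/test-volume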

This method has better locking capabilities but can cause big CPU and
memory overheads - make sure you have an ample amount of memory, or
load will cause your node to suffer; if you notice swap thrashing your
disks, then your system is struggling.

I found that my webserver/mailserver plus a glusterfs native mount on
512MB was not enough, but it did run a lot nicer on 1024MB.

2: glusterfs NFS

Obviously make sure you have nfs-common and portmap installed and then
mount in the usual way.
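
One thing worth knowing: the NFS server built into 3.1 only speaks
NFSv3 over TCP, so on clients that default to other settings you may
need to spell that out in the mount options - something like:

apt-get install nfs-common portmap
mkdir -p /mnt/test-volume-nfs
mount -t nfs -o vers=3,tcp node1:/test-volume /mnt/test-volume-nfs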

I found this method had lower memory and CPU overheads, but locking
seemed really bad with some of my services (Dovecot, SVN), and the
locks ultimately caused load to spiral out of control.

It may have been a misconfiguration on my part!

Simply using the NFS mount as a read-only filesystem without the need
for locking worked well... but writing large files seemed to lock up
the system as well (I did not test this with 1024MB of mem and again,
it is possibly a configuration issue on my part).


CONCLUSION

Well, I am biased - I am not affiliated in any way with Gluster - but
- 3.1 is awesome!
AGAIN - do not use my above setup on 32bit within a production
environment - but simply have a play around.
I love the new CLI and the work these people are doing - feel free to
post to this thread with any corrections and findings within your own
experiments.

AGAIN - 32bit is unsupported, so do not expect help from the devs;
they are busy enough as it is - I have provided this post for the
curious and brave!


