[Gluster-devel] Review comments needed

Jacques Mattheij j at ww.com
Fri Sep 7 23:39:40 UTC 2007


Hello Amar,

I just ran through the complete guide on a clean install of Knoppix
5.1 on VMware instances.

Here are my notes (feel free to cut, paste, edit and toss):

(some of these were already discussed in the chat; forgive me for
being a nitpick, I've also included spelling errors)

Downloading Knoppix and creating the VMware images took a lot
more time than it took to get glusterfs up and running :)

I have just done the first example.

as requested, the time breakdown:

-    8 minutes - Knoppix download (KNOPPIX_V5.1.1CD-2007-01-04-EN.iso)
-   30 minutes - creating VMware images for the first example
-   2:30 hours - fetching kernel sources and compiling the kernel
                  (under VMware) to be able to compile and install
                  the fuse kernel module

The rest of the note-taking took more time than fixing and installing
things, so no more timings from here (in total, the elapsed time for
the whole exercise was about 4 hours).

- the 'note' that you have to be root to run 'make install' needs to go
   to the top; in fact you need to be root for all of these operations
   (on most unixes anyway)
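
   for example, become root first with one of:

   su -
   sudo -i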

- I'd change the 'NFS like' comment to 'single server, single client'

- the links to the files are complete paths; change them when the files change
   (and also change the 'tar' and 'cd' commands below when they do)

- 'in the client machine' should be 'on the client machine'

- I think the 'NOTE: You may choose not to install patched fuse. Its 
perfectly ok.' should be replaced with

   check the version of your libfuse; if it is lower than 2.7.0 then
   install fuse, otherwise go to step (stepnumber)

   There are too many issues with the unpatched version of fuse to let
   people follow a 'basic tutorial' and then have them all calling
   for help because they get weird errors.

   this should go above the installation of the patched version of fuse

   you can check the version of fuse on your machine with

   locate libfuse.so

   On my machine the version was 2.6.1, so I proceeded to install fuse;
   the locate command above showed fuse was living in /usr/lib, so
   the prefix to use is /usr
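
   to see the exact version, list the library; the symlink target
   carries the version number (path taken from the locate output):

   ls -l /usr/lib/libfuse.so*
   # e.g. libfuse.so.2 -> libfuse.so.2.6.1 means version 2.6.1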

- this needs the kernel source, or at least the headers, to be installed

check if your kernel sources are installed:

uname -a

will tell you the version of your kernel
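
a quick alternative check (paths vary by distro; the 'build'
symlink points at the kernel headers/source when they are present):

ls -d /lib/modules/$(uname -r)/build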

now have a look in /usr/src/linux-VERSIONNR

if that directory exists and has lots of files in it, chances
are that your kernel is there and has been compiled from source;
if not, you are out of luck and need to install the kernel
source code:

installing the kernel sources:

cd /usr/src
wget ftp://ftp.kernel.org/pub/linux/kernel/v2.6/linux-2.6.19.tar.bz2
tar -xjf linux-2.6.19.tar.bz2
cd linux-2.6.19
make menuconfig     # hit the 'esc' key and save the configuration
make
make modules_install
make install

(change for your kernel version)

verrry annoying, all this, just to get fuse to compile!

You now have the kernel sources installed and a fresh
kernel compiled. Reboot your machine to start using the
new kernel... (sorry... it's just like Windows, isn't it :) )


- cd back to wherever you've placed the glusterfs/fuse code



- now you're (finally in my case) ready to 'make install' the patched fuse

- ls -l /usr/lib/libfu* verifies that indeed the 2.7.0 version is now
   the current one


- add an 'ldconfig' after the 'make install' of fuse to force linking
   against the new fuse library when starting glusterfs the first time
   (otherwise the old .so file would be used if you have a dynamically
   linked glusterfs binary) (you've already fixed this)
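
   to double-check which libfuse the runtime linker will pick up:

   ldconfig
   ldconfig -p | grep libfuse

   (it should list libfuse.so.2 resolving into /usr/lib, i.e. the new 2.7.0)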


- apt-get install bison
- apt-get install flex

for some funny reason the bison install decided to remove the
g++-4.1 package; I reinstalled it using

- apt-get install g++-4.1

- the 'note: you can change the ip address' should read 'note: you must
   change'; it's not optional

- mkdir /tmp/export is missing

- storage brick configuration is missing option transport-type tcp/server
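
   for reference, a sketch of what the full server spec should then
   look like (1.3 syntax, names and paths as in the guide):

   volume brick
     type storage/posix
     option directory /tmp/export
   end-volume

   volume server
     type protocol/server
     option transport-type tcp/server
     subvolumes brick
     option auth.ip.brick.allow *
   end-volume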

- when running the 'glusterfs -s 192.168.0.1 /mnt/glusterfs'
   command I get the 'could not open specfile' error that
   was already reported on the mailing list today

- but when starting glusterfs directly from the command line with

   glusterfs -f /etc/glusterfs/glusterfs-client.vol /mnt/glusterfs

   it works!

- I'd change 'clustered filesystem' to 'three servers, single client
   using unify'


I'll leave the second example for now; it's getting late!

best regards,

   Jacques Mattheij

Amar S. Tumballi wrote:
> Hi all,
>  After seeing a lot of basic questions about setting up a basic glusterfs mount,
> I thought there was a need for a solid guide that gives an idea of how to write a
> basic spec file. The outcome is the current document. Please feel free to review
> it, and do suggest improvements.
> 
> http://gluster.org/docs/index.php/Install_and_run_GlusterFS_in_10mins
> 
> Thanks and Regards,
> Amar
> 
> 
> 
> ------------------------------------------------------------------------
> 
> 
> Download:
> ----------
> 
> Get GlusterFS: http://ftp.zresearch.com/pub/gluster/glusterfs/CURRENT
> 
> Get Patched Fuse: http://ftp.zresearch.com/pub/gluster/glusterfs/fuse/
>  - This patched version of fuse is well suited for use with GlusterFS, as:
>    * it supports a larger IO buffer size, which gives increased IO performance.
>    * it provides the flock() syscall, which is not present in the regular fuse tarball.
>    * it includes inode management improvements.
> 
> Install:
> ----------
> 
> This document describes the install procedure from the source tarball.
> 
> In the client machine, install the fuse package. (Make sure that the '--prefix' option is set to the path of any earlier fuse installation.)
> 
> ====
>  # tar -xzf fuse-2.7.0-glfs3.tar.gz
>  # cd fuse-2.7.0-glfs3
>  # ./configure --prefix=/usr --enable-kernel-module
>  # make install > /dev/null
>  # ldconfig 
> ====
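> 
> To check that the patched fuse is now the one in place, something like this should show 2.7.0 (paths assume the '--prefix=/usr' used above):
> 
> ====
>  # fusermount -V
>  # ls -l /usr/lib/libfuse.so*
> ====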
> 
> Now make sure that the system has the following packages:
>  * fuse [Just got installed] 
>  * flex
>  * bison
> 
> Now, untar and install the glusterfs package.
> 
> ====
>  # tar -xzf glusterfs-1.3.1.tar.gz
>  # cd glusterfs-1.3.1/
>  # ./configure --prefix= --disable-ibverbs
>  # make install > /dev/null
> ====
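> 
> To confirm the binaries landed where expected (the location depends on the '--prefix' given above):
> 
> ====
>  # which glusterfs glusterfsd
> ====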
> 
> Congratulations :) You are done with 'glusterfs' installation.
> 
> 
> Execution:
> ----------
> After the installation, the question most people face is how to get glusterfs working. To run GlusterFS, you need a volume specification file, in short a 'spec file', which defines the behavior and features of glusterfs. We will start with a barebones spec file (it is very basic and just meant to give a feel for GlusterFS).
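> 
> For reference, the general shape of a spec file (the names are placeholders; the concrete translators appear in the examples below):
> 
> ====
>  volume <name>             # declare a translator instance
>    type <category/name>    # e.g. storage/posix, protocol/server, protocol/client
>    option <key> <value>    # translator-specific options
>    subvolumes <name> ...   # translators stacked below this one, if any
>  end-volume
> ====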
> 
> 
> ** GlusterFS is a Distributed Parallel Filesystem (confused? think of it as an NFS replacement for the time being):
> 
> **** Example 1 [NFS like]
>  Assume you have two machines, '192.168.0.1' and '192.168.0.2'. Let 192.168.0.1 be the server and the latter be the client. [NOTE: you can change the IP address in the spec file according to your network configuration, or else, to test, you can use the system's localhost IP address, i.e. '127.0.0.1']
> 
>  -> Server machine: [192.168.0.1]
>    [NOTE: After editing the file it should have the content shown by the cat command]
> 
> ====
>  $ emacs /etc/glusterfs/glusterfs-server.vol
>  $ cat /etc/glusterfs/glusterfs-server.vol
>  volume brick
>    type storage/posix
>    option directory /tmp/export
>  end-volume
> 
>  volume server
>    type protocol/server
>    subvolumes brick
>    option auth.ip.brick.allow *
>  end-volume
>  
>  $ glusterfsd -f /etc/glusterfs/glusterfs-server.vol
> ====
> 
>  -> Client machine: [192.168.0.2]
> 
> ====
>  $ mkdir /mnt/glusterfs
>  $ emacs /etc/glusterfs/glusterfs-client.vol
>  $ cat /etc/glusterfs/glusterfs-client.vol
>  
>  volume client
>    type protocol/client
>    option remote-host 192.168.0.1
>    option remote-subvolume brick
>  end-volume
>  
>  $ glusterfs -f /etc/glusterfs/glusterfs-client.vol /mnt/glusterfs
> ====
> 
> Wow, you can see the exported directory '192.168.0.1:/tmp/export' as /mnt/glusterfs on the client node :O
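> 
> A quick test (hypothetical file name): anything created on the client mount should appear under /tmp/export on the server.
> 
> ====
>  $ echo hello > /mnt/glusterfs/test.txt
>  $ ls -l /mnt/glusterfs/
> ====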
> 
> 
> **** Example 2 [Clustered FileSystem]
> 
>  Assume you have 4 machines, '192.168.0.1' to '192.168.0.4'. Let the 3 machines 192.168.0.1, 192.168.0.2 and 192.168.0.3 be servers and the other be the client. [NOTE: you can change the IP addresses in the spec file according to your network, or else, to test on the same machine, give '127.0.0.1' and different listen ports]
> 
>  --> Server1 [192.168.0.1]
> 
> ====
>  $ emacs /etc/glusterfs/glusterfs-server.vol
>  $ cat /etc/glusterfs/glusterfs-server.vol
>  volume brick
>    type storage/posix
>    option directory /tmp/export
>  end-volume
> 
>  volume brick-ns
>    type storage/posix
>    option directory /tmp/export-ns
>  end-volume
> 
>  volume server
>    type protocol/server
>    subvolumes brick
>    option auth.ip.brick.allow *
>    option auth.ip.brick-ns.allow *
>  end-volume
>  
>  $ glusterfsd -f /etc/glusterfs/glusterfs-server.vol
>  $ df -h
>  Filesystem            Size  Used Avail Use% Mounted on
>  /dev/sda1             9.2G  7.6G  1.2G  87% /
>  $
> 
> ====
> 
>   --> Server2 [192.168.0.2]
> 
> ====
>  $ emacs /etc/glusterfs/glusterfs-server.vol
>  $ cat /etc/glusterfs/glusterfs-server.vol
>  volume brick
>    type storage/posix
>    option directory /tmp/export
>  end-volume
> 
>  volume server
>    type protocol/server
>    subvolumes brick
>    option auth.ip.brick.allow *
>  end-volume
>  
>  $ glusterfsd -f /etc/glusterfs/glusterfs-server.vol
>  $ df -h
>  Filesystem            Size  Used Avail Use% Mounted on
>  /dev/sda1             9.2G  7.6G  1.2G  87% /
>  $
> ====
> 
>  --> Server3 [192.168.0.3]
> 
> ====
>  $ emacs /etc/glusterfs/glusterfs-server.vol
>  $ cat /etc/glusterfs/glusterfs-server.vol
>  volume brick
>    type storage/posix
>    option directory /tmp/export
>  end-volume
> 
>  volume server
>    type protocol/server
>    subvolumes brick
>    option auth.ip.brick.allow *
>  end-volume
>  
>  $ glusterfsd -f /etc/glusterfs/glusterfs-server.vol
>  $ df -h
>  Filesystem            Size  Used Avail Use% Mounted on
>  /dev/sda1             9.2G  7.6G  1.2G  87% /
>  $
> ====
> 
>  --> Client1 [192.168.0.4]
> 
> 
> ====
>  $ mkdir /mnt/glusterfs
>  $ emacs /etc/glusterfs/glusterfs-client.vol
>  $ cat /etc/glusterfs/glusterfs-client.vol
> 
>  
>  volume client1-ns
>    type protocol/client
>    option remote-host 192.168.0.1
>    option remote-subvolume brick-ns
>  end-volume
>  
>  volume client1
>    type protocol/client
>    option remote-host 192.168.0.1
>    option remote-subvolume brick
>  end-volume
>  
>  volume client2
>    type protocol/client
>    option remote-host 192.168.0.2
>    option remote-subvolume brick
>  end-volume
>  
>  volume client3
>    type protocol/client
>    option remote-host 192.168.0.3
>    option remote-subvolume brick
>  end-volume
>  
>  volume unify
>    type cluster/unify
>    subvolumes client1 client2 client3
>    option namespace client1-ns
>    option scheduler rr
>  end-volume
> 
>  $ glusterfs -f /etc/glusterfs/glusterfs-client.vol /mnt/glusterfs
>  $ df -h
>  Filesystem            Size   Used Avail Use% Mounted on
>  /dev/sda1             9.2G   7.6G  1.2G  87% /
>  glusterfs            27.7G  22.9G  3.7G  87% /mnt/glusterfs
>  $
> ====
> 
>  :O You already have your cluster filesystem working
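> 
> Because the unify volume uses the 'rr' (round-robin) scheduler, new files get spread across the three bricks. One way to see it (hypothetical file names):
> 
> ====
>  $ touch /mnt/glusterfs/a /mnt/glusterfs/b /mnt/glusterfs/c
>  $ # now check /tmp/export on each server; the files land on different bricks
> ====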
> 
> 
> For more details, refer to the Gluster wiki - http://www.gluster.org/docs/index.php/GlusterFS
> 
> 
> 
> ------------------------------------------------------------------------
> 

-- 
/-------------------------------------------------------------------------\
| Jacques Mattheij, j at ww.com, ww.com, livelog.com and greenbits.com       |
|                                                                         |
| IMPORTANT:                                                              |
| When you send me mail from an address that is unknown to me make sure   |
| the current password ('stjoes') is present anywhere in the email,       |
| otherwise it will not get through!                                      |
\-------------------------------------------------------------------------/




