[Gluster-devel] Review comments needed
Amar S. Tumballi
amar at zresearch.com
Fri Sep 7 19:22:48 UTC 2007
Thanks Paul,
On 9/8/07, Paul Jochum <jochum at alcatel-lucent.com> wrote:
>
> Hi Amar:
>
> Just two small suggestions:
>
> - for the compile lines (on both client and server): since many users
> (especially those testing this for the first time) might not have InfiniBand,
> adding --disable-ibverbs is needed
>
This was present already in the document.
> - and recommending turning off (or modifying) iptables is needed so the
> nodes can communicate over TCP/IP.
>
Added.
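For reference, rather than disabling iptables entirely, it is usually enough to open the server's GlusterFS listen port. A minimal sketch, assuming the default listen port of 6996 (adjust if your spec file sets a different 'listen-port'):

```shell
# Allow incoming GlusterFS traffic on the server's listen port.
# Port 6996 is assumed as the default here; check your spec file
# if you have overridden it.
iptables -I INPUT -p tcp --dport 6996 -j ACCEPT

# Or, for a quick throwaway test only, flush all rules:
# iptables -F
```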
regards,
>
> Paul Jochum
>
Thanks,
Amar
Amar S. Tumballi wrote:
>
> Hi all,
> After seeing a lot of basic questions about setting up a basic GlusterFS mount,
> I thought there was a need for a solid guide that gives an idea of how to write
> a basic spec file. The outcome is the current document. Please feel free to
> review it and suggest improvements.
>
> http://gluster.org/docs/index.php/Install_and_run_GlusterFS_in_10mins
>
> Thanks and Regards,
> Amar
>
> ------------------------------
>
>
> Download:
> ----------
>
> Get GlusterFS: http://ftp.zresearch.com/pub/gluster/glusterfs/CURRENT
>
> Get Patched Fuse: http://ftp.zresearch.com/pub/gluster/glusterfs/fuse/
> - This patched version of fuse is well suited for use with GlusterFS, as:
> * it supports a larger I/O buffer size, which improves I/O performance.
> * it provides the flock() syscall, which is not present in the regular fuse tarball.
> * it includes inode management improvements.
>
> Install:
> ----------
>
> This document describes the install procedure from source tarball.
>
> On the client machine, install the fuse package. (If fuse is already installed, make sure the '--prefix' option is set to the earlier fuse installation path.)
>
> ====
> # tar -xzf fuse-2.7.0-glfs3.tar.gz
> # cd fuse-2.7.0-glfs3
> # ./configure --prefix=/usr --enable-kernel-module
> # make install > /dev/null
> # ldconfig
> ====
>
> Now make sure that the system has the following packages:
> * fuse [Just got installed]
> * flex
> * bison
>
> Now, untar and install the glusterfs package.
>
> ====
> # tar -xzf glusterfs-1.3.1.tar.gz
> # cd glusterfs-1.3.1/
> # ./configure --prefix= --disable-ibverbs
> # make install > /dev/null
> ====
>
> Congratulations :) You are done with 'glusterfs' installation.
>
>
> Execution:
> ----------
> After installation, the problem most people face is how to get GlusterFS working. To run GlusterFS, you need a volume specification file (a 'spec file' for short), which defines the behavior and features of GlusterFS. We will start with a barebones spec file (it is very basic and is just meant to give a feel for GlusterFS).
>
>
> ** GlusterFS is a distributed parallel filesystem (confused? Think of it as an NFS replacement for the time being):
>
> **** Example1 [NFS like]
> Assume you have two machines, '192.168.0.1' and '192.168.0.2'. Let 192.168.0.1 be the server and the latter the client. [NOTE: you can change the IP addresses in the spec files to match your network configuration, or, for testing, you can use the system's localhost address, i.e. '127.0.0.1']
>
> -> Server machine: [192.168.0.1]
> [NOTE: After editing, the file should have the content shown by the cat command]
>
> ====
> $ emacs /etc/glusterfs/glusterfs-server.vol
> $ cat /etc/glusterfs/glusterfs-server.vol
> volume brick
> type storage/posix
> option directory /tmp/export
> end-volume
>
> volume server
> type protocol/server
> subvolumes brick
> option auth.ip.brick.allow *
> end-volume
>
> $ glusterfsd -f /etc/glusterfs/glusterfs-server.vol
> ====
>
> -> Client machine: [192.168.0.2]
>
> ====
> $ mkdir /mnt/glusterfs
> $ emacs /etc/glusterfs/glusterfs-client.vol
> $ cat /etc/glusterfs/glusterfs-client.vol
>
> volume client
> type protocol/client
> option remote-host 192.168.0.1
> option remote-subvolume brick
> end-volume
>
> $ glusterfs -f /etc/glusterfs/glusterfs-client.vol /mnt/glusterfs
> ====
>
> Wow, you can now see the exported directory '192.168.0.1:/tmp/export' mounted as /mnt/glusterfs on the client node :O
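> As a quick sanity check (the file name here is just an example, and this assumes the mount above succeeded), create a file through the mount on the client and confirm it appears in the server's export directory:

```shell
# On the client: write a file through the GlusterFS mount point
echo "hello gluster" > /mnt/glusterfs/testfile

# On the server: the same file should appear in the backend directory
cat /tmp/export/testfile
```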
>
>
> **** Example 2 [Clustered FileSystem]
>
> Assume you have 4 machines, '192.168.0.1' to '192.168.0.4'. Let the 3 machines 192.168.0.1, 192.168.0.2 and 192.168.0.3 be the servers and the remaining one the client. [NOTE: you can change the IP addresses in the spec files to match your network, or, to test on the same machine, use '127.0.0.1' with different listen ports]
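> If you do want to try all of this on a single machine, here is a sketch of the relevant spec-file options. The 'listen-port' and 'remote-port' option names are assumptions based on this release; verify them against your version's protocol translator documentation:

```
# Hypothetical single-machine setup: each extra server instance
# listens on its own port (option names assumed; adjust if your
# release uses different ones).
volume server
  type protocol/server
  subvolumes brick
  option listen-port 6997        # second server instance
  option auth.ip.brick.allow *
end-volume

# ...and on the client side, point at the matching port:
volume client2
  type protocol/client
  option remote-host 127.0.0.1
  option remote-port 6997
  option remote-subvolume brick
end-volume
```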
>
> --> Server1 [192.168.0.1]
>
> ====
> $ emacs /etc/glusterfs/glusterfs-server.vol
> $ cat /etc/glusterfs/glusterfs-server.vol
> volume brick
> type storage/posix
> option directory /tmp/export
> end-volume
>
> volume brick-ns
> type storage/posix
> option directory /tmp/export-ns
> end-volume
>
> volume server
> type protocol/server
> subvolumes brick brick-ns
> option auth.ip.brick.allow *
> option auth.ip.brick-ns.allow *
> end-volume
>
> $ glusterfsd -f /etc/glusterfs/glusterfs-server.vol
> $ df -h
> Filesystem Size Used Avail Use% Mounted on
> /dev/sda1 9.2G 7.6G 1.2G 87% /
> $
>
> ====
>
> --> Server2 [192.168.0.2]
>
> ====
> $ emacs /etc/glusterfs/glusterfs-server.vol
> $ cat /etc/glusterfs/glusterfs-server.vol
> volume brick
> type storage/posix
> option directory /tmp/export
> end-volume
>
> volume server
> type protocol/server
> subvolumes brick
> option auth.ip.brick.allow *
> end-volume
>
> $ glusterfsd -f /etc/glusterfs/glusterfs-server.vol
> $ df -h
> Filesystem Size Used Avail Use% Mounted on
> /dev/sda1 9.2G 7.6G 1.2G 87% /
> $
> ====
>
> --> Server3 [192.168.0.3]
>
> ====
> $ emacs /etc/glusterfs/glusterfs-server.vol
> $ cat /etc/glusterfs/glusterfs-server.vol
> volume brick
> type storage/posix
> option directory /tmp/export
> end-volume
>
> volume server
> type protocol/server
> subvolumes brick
> option auth.ip.brick.allow *
> end-volume
>
> $ glusterfsd -f /etc/glusterfs/glusterfs-server.vol
> $ df -h
> Filesystem Size Used Avail Use% Mounted on
> /dev/sda1 9.2G 7.6G 1.2G 87% /
> $
> ====
>
> --> Client1 [192.168.0.4]
>
>
> ====
> $ mkdir /mnt/glusterfs
> $ emacs /etc/glusterfs/glusterfs-client.vol
> $ cat /etc/glusterfs/glusterfs-client.vol
>
>
> volume client1-ns
> type protocol/client
> option remote-host 192.168.0.1
> option remote-subvolume brick-ns
> end-volume
>
> volume client1
> type protocol/client
> option remote-host 192.168.0.1
> option remote-subvolume brick
> end-volume
>
> volume client2
> type protocol/client
> option remote-host 192.168.0.2
> option remote-subvolume brick
> end-volume
>
> volume client3
> type protocol/client
> option remote-host 192.168.0.3
> option remote-subvolume brick
> end-volume
>
> volume unify
> type cluster/unify
> subvolumes client1 client2 client3
> option namespace client1-ns
> option scheduler rr
> end-volume
>
> $ glusterfs -f /etc/glusterfs/glusterfs-client.vol /mnt/glusterfs
> $ df -h
> Filesystem Size Used Avail Use% Mounted on
> /dev/sda1 9.2G 7.6G 1.2G 87% /
> glusterfs 27.7G 22.9G 3.7G 87% /mnt/glusterfs
> $
> ====
>
> :O You already have your cluster file system working!
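> To see what the 'rr' (round-robin) scheduler in the unify volume above does, here is a small stand-alone shell sketch (a plain illustration, not GlusterFS code): each newly created file is assigned to the next brick in turn, so files spread evenly across client1, client2 and client3.

```shell
# Simulate round-robin placement of six files over three bricks.
bricks="client1 client2 client3"
i=0
assigned=""
for f in a b c d e f; do
  set -- $bricks            # split the brick list into $1 $2 $3
  shift $((i % 3))          # rotate to the brick whose turn it is
  echo "file $f -> brick $1"
  assigned="$assigned $f:$1"
  i=$((i + 1))
done
```

> Creating several files under /mnt/glusterfs and running ls in each server's /tmp/export shows the same pattern: the files alternate across the three bricks, while the namespace volume keeps the full directory structure.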
>
>
> For more details, refer to the Gluster wiki - http://www.gluster.org/docs/index.php/GlusterFS
>
> ------------------------------
>
> _______________________________________________
> Gluster-devel mailing list
> Gluster-devel at nongnu.org
> http://lists.nongnu.org/mailman/listinfo/gluster-devel
>
>
--
Amar Tumballi
Engineer - Gluster Core Team
[bulde on #gluster/irc.gnu.org]
http://www.zresearch.com - Commoditizing Supercomputing and Superstorage!