[Gluster-devel] Re: GlusterFS: Lame script to simulate several storage nodes on localhost
T0aD
toad at 403.sk
Sun Jan 14 08:52:56 UTC 2007
Same email, with the files attached one by one this time and without
the tbz2 (apparently the mailing list discarded it).
On 1/13/07, T0aD <toad at 403.sk> wrote:
> Hi everyone,
>
> Following bulde's request, I'm sending you the lousy scripts I wrote
> to quickly generate some nodes ;)
>
>
> Usage:
> chmod a+x ./create_dir
> chmod a+x ./gen_client_conf
> chmod a+x ./gen_server_conf
> chmod a+x ./storage_cluster
>
> ./storage_cluster <number of server nodes>
>
> They weren't meant for release, so you will have to adapt the scripts
> to your environment. You will need sudo and a glusterfs/ directory,
> which will be the mount point of GlusterFS.
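>
> The heart of storage_cluster is roughly the loop below (a simplified
> sketch; the exact arguments of the helpers and the glusterfsd/glusterfs
> flags depend on the script and on your GlusterFS version):
>
> #!/bin/bash
> # storage_cluster <number of server nodes> -- simplified driver sketch
> NODES=${1:?"usage: ./storage_cluster <number of server nodes>"}
> BASE_PORT=6996
>
> echo "Starting Storage Cluster with $NODES nodes.."
> for ((i = 0; i < NODES; i++)); do
>     port=$((BASE_PORT + i))
>     dir=/home/cfs$i
>     echo -n "Configuring node #$i..."
>     sudo ./create_dir "$dir"                        # backend directory for this node
>     ./gen_server_conf "$dir" "$port" > server$i.vol # per-node server spec
>     sudo glusterfsd -f server$i.vol                 # start the storage node
>     echo " up! (dir:$dir - port:$port)"
> done
>
> # one client spec listing every node, then mount it
> ./gen_client_conf "$NODES" > client.vol
> echo "Mounting partition..."
> sudo glusterfs -f client.vol ./glusterfs
> mount | grep glusterfs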
>
> Example:
>
> toad at vlk:~/gfs$ ./storage_cluster 10
> Starting Storage Cluster with 10 nodes..
> Configuring node #0... up! (dir:/home/cfs0 - port:6996)
> Configuring node #1... up! (dir:/home/cfs1 - port:6997)
> Configuring node #2... up! (dir:/home/cfs2 - port:6998)
> Configuring node #3... up! (dir:/home/cfs3 - port:6999)
> Configuring node #4... up! (dir:/home/cfs4 - port:7000)
> Configuring node #5... up! (dir:/home/cfs5 - port:7001)
> Configuring node #6... up! (dir:/home/cfs6 - port:7002)
> Configuring node #7... up! (dir:/home/cfs7 - port:7003)
> Configuring node #8... up! (dir:/home/cfs8 - port:7004)
> Configuring node #9... up! (dir:/home/cfs9 - port:7005)
> Mounting partition...
> glusterfs:1840 on /home/toad/gfs/glusterfs type fuse
> (rw,allow_other,default_permissions)
> toad at vlk:~/gfs$ touch ./glusterfs/file{1,2,3,4,5,6,7,8,9,10}
> toad at vlk:~/gfs$ find /home/cfs[0-9] -type f
> /home/cfs0/file2
> /home/cfs1/file3
> /home/cfs2/file4
> /home/cfs3/file5
> /home/cfs4/file6
> /home/cfs5/file7
> /home/cfs6/file8
> /home/cfs7/file9
> /home/cfs8/file10
> /home/cfs9/file1
> toad at vlk:~/gfs$
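>
> If you want to write the specs by hand instead, the generated files
> are plain volume specs, roughly like the heredocs below (option names
> quoted from memory, so double-check them against the docs for your
> version). The round-robin spread of file1..file10 over cfs0..cfs9
> above comes from the scheduler in the client spec (rr in this sketch):
>
> # roughly what gen_server_conf produces for node 0 (/home/cfs0, port 6996)
> cat > server0.vol <<'EOF'
> volume brick
>   type storage/posix
>   option directory /home/cfs0
> end-volume
>
> volume server
>   type protocol/server
>   option transport-type tcp/server
>   option listen-port 6996
>   option auth.ip.brick.allow *
>   subvolumes brick
> end-volume
> EOF
>
> # roughly what gen_client_conf produces: one protocol/client per node,
> # unified with a round-robin scheduler
> cat > client.vol <<'EOF'
> volume client0
>   type protocol/client
>   option transport-type tcp/client
>   option remote-host 127.0.0.1
>   option remote-port 6996
>   option remote-subvolume brick
> end-volume
>
> # ... same block for client1..client9 with ports 6997..7005 ...
>
> volume bricks
>   type cluster/unify
>   option scheduler rr
>   subvolumes client0 client1 client2 client3 client4 client5 client6 client7 client8 client9
> end-volume
> EOF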
>
> To remove all the configuration files and kill the glusterfsd
> processes, just run the uninstall script:
>
> toad at vlk:~/gfs$ ./uninstall
> removed storage node 0 files
> removed storage node 1 files
> removed storage node 2 files
> removed storage node 3 files
> removed storage node 4 files
> removed storage node 5 files
> removed storage node 6 files
> removed storage node 7 files
> removed storage node 8 files
> removed storage node 9 files
> kill all instances of glusterfsd
> umounted glusterfs partition
> removed configuration file for client
> toad at vlk:~/gfs$
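>
> uninstall itself is nothing fancy; by hand, it amounts to roughly this
> (paths as in the run above, for a 10-node setup):
>
> #!/bin/bash
> # rough equivalent of ./uninstall for a 10-node run
> for i in $(seq 0 9); do
>     sudo rm -rf /home/cfs$i server$i.vol && echo "removed storage node $i files"
> done
> sudo killall glusterfsd && echo "kill all instances of glusterfsd"
> sudo umount ./glusterfs && echo "umounted glusterfs partition"
> rm -f client.vol && echo "removed configuration file for client"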
>
>
> Once again: your software rocks, guys, thanks a lot
>
>
>