[Gluster-users] [3.1 Beta]: raid 1 volume with gluster command and some other questions

Amar Tumballi amar at gluster.com
Mon Sep 27 06:22:24 UTC 2010


Hi Christian,

Please find the answers inline.

> I am new to the list because I am currently trying out the new 3.1 beta of
> GlusterFS and cannot find information on the web about how to create a
> replicated volume with the new gluster command.
>
>
You need to have 'glusterd' started before running any of the gluster
commands. The sequence of commands looks like this:


bash#  glusterd   # <if it's not an RPM install, add this command to your rc scripts>
bash#  gluster peer probe <other server IP>   # do this for each of the other storage servers
bash#  gluster volume create test-mirror replica 2 <brick1> <brick2> ... <brickN>

Here, 'test-mirror' is the volume name (it can be any string), brick1 through
brickN are in <server>:<export-dir> format, and N should be a multiple of 2
(since we gave 'replica 2').
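
For example, with two storage servers (the hostnames and export directories
below are just placeholders, not from your setup), the whole sequence would
look something like:

bash#  gluster peer probe server2        # run once from server1
bash#  gluster volume create test-mirror replica 2 \
         server1:/export/brick1 server2:/export/brick1 \
         server1:/export/brick2 server2:/export/brick2

With 'replica 2', consecutive pairs of bricks in the list are mirrored, so
server1:/export/brick1 and server2:/export/brick1 form one mirror, and the
next pair forms another.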

Then, you can start the export processes using:

bash# gluster volume start test-mirror
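
To check that the peers and the volume are in the expected state, you can run
(volume name here matches the example above):

bash#  gluster peer status
bash#  gluster volume info test-mirror

'peer status' shows whether the other servers were probed successfully, and
'volume info' shows the volume type, its status and the brick list.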

You are ready to go. On any client machine with FUSE available, you can do:

bash# mount -t glusterfs server:test-mirror /mnt/glusterfs

If you don't have FUSE and want to use NFS clients, do:

bash# mount -t nfs server:/test-mirror /mnt/nfs
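
If you want these mounts to come back after a reboot, /etc/fstab entries along
these lines should work (the mount points are just examples; depending on your
NFS client defaults you may need to force NFS version 3, since the built-in
NFS server speaks NFSv3 only):

server:/test-mirror  /mnt/glusterfs  glusterfs  defaults,_netdev    0 0
server:/test-mirror  /mnt/nfs        nfs        vers=3,tcp,_netdev  0 0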



> The second thing I don't know is how I can activate translators and
> authentication with the new gluster command.
>
>
I didn't understand the question clearly. Can you rephrase it? What do you
mean by 'activating a translator', and what is 'authentication with the new
gluster command'?
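
If by 'authentication' you mean restricting which clients are allowed to mount
the volume, my guess is that the 'gluster volume set' interface is what you
are after; translator options are tuned the same way in 3.1. For example
(10.1.1.* is just a placeholder network):

bash#  gluster volume set test-mirror auth.allow 10.1.1.*

But please do re-phrase the question so we are sure we are talking about the
same thing.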



> Is there some "hidden" documentation somewhere?
>
>
Documentation is in progress and we intend to get everything up to the mark
soon. You can find the 3.1-related docs here:

http://www.gluster.com/community/documentation/index.php/Gluster_3.1_Release_Notes:_Introduction

http://www.gluster.com/community/documentation/index.php/GlusterFS_3.1beta


And nothing is 'hidden' once it gets written down. The problem is that some of
what is already in the code and working has not yet been penned down.

Regards,
Amar Tumballi
(bulde on irc/#gluster)

