[Gluster-users] Not mounting
Chad
ccolumbu at hotmail.com
Mon Mar 8 19:38:50 UTC 2010
Someone who really knows, please pipe up here, because I may be the one who is confused.
Lots of things look wrong to me, but I am not sure what you are trying to do, or what is different between 2.x and 3.x.
I think what you want is:
1 (or more) mount point/directory exported from your 2 servers, server002 and bee003, to be mounted by your N nodes.
What it looks like you have:
1 mount point/directory exported from servers raidvol-0 and raidvol-1
1 mount point/directory exported from servers server002 and bee003
A distributed volume across all your client nodes (mirrors 2-15...)
Are you exporting 2 mounts from 4 different servers?
Are you really trying to create a RAID 0 array across all your client nodes?
If you want what I think you want, your config files should be much simpler.
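If memory serves, glusterfs-volgen will generate files very close to my guess below. Something like this should do it (untested, and the --name value is just a placeholder):

glusterfs-volgen --name glusterfs1 --raid 1 server002:/glusterfs1 bee003:/glusterfs1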
Here is my guess:
Here's glusterfsd.vol:
**********************************
<snip of cmd line from volgen>
# TRANSPORT-TYPE tcp
# PORT 6996
# posix: the local directory actually being exported
volume posix
  type storage/posix
  option directory /glusterfs1
end-volume
# locks: POSIX locking on top of the storage
volume locks
  type features/locks
  subvolumes posix
end-volume
# brick: io-threads; this name is what the clients ask for
volume brick
  type performance/io-threads
  option thread-count 8
  subvolumes locks
end-volume
# server: TCP listener; the allow line must use the brick name above
volume server
  type protocol/server
  option transport-type tcp
  option auth.addr.brick.allow *
  option listen-port 6996
  subvolumes brick
end-volume
*************************
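On each of server002 and bee003 you then start the daemon against that file and check that it is listening. Roughly (I have not tested this, and the volfile path depends on where yours lives):

glusterfsd -f /etc/glusterfs/glusterfsd.vol
netstat -nlt | grep 6996

Also, auth.addr.brick.allow does not have to be *; it takes comma-separated address patterns, e.g. 192.168.1.*, if you want to lock the export down.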
And here's glusterfs.vol:
**********************************
# RAID 1
# TRANSPORT-TYPE tcp
# PORT 6996
# raidvol-0: connection to the brick on server002
volume raidvol-0
  type protocol/client
  option transport-type tcp
  option remote-host server002
  option remote-port 6996
  option remote-subvolume brick
end-volume
# raidvol-1: connection to the brick on bee003
volume raidvol-1
  type protocol/client
  option transport-type tcp
  option remote-host bee003
  option remote-port 6996
  option remote-subvolume brick
end-volume
# mirror-0: RAID 1, replicate across the two bricks
volume mirror-0
  type cluster/replicate
  subvolumes raidvol-0 raidvol-1
end-volume
# client-side performance translators
volume writebehind
  type performance/write-behind
  option cache-size 4MB
  subvolumes mirror-0
end-volume
volume io-cache
  type performance/io-cache
  option cache-size 1GB
  subvolumes writebehind
end-volume
**********************************************
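Before putting it in fstab, I would test the client file by hand with the log level turned up. Something like this (again untested; the log path is just an example):

glusterfs -f /etc/glusterfs/glusterfs.vol -L DEBUG -l /tmp/glusterfs1.log /remote_glusterfs1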
Finally, the mount entry in /etc/fstab on client nodes 1-N would look like:
/etc/glusterfs/glusterfs.vol /remote_glusterfs1 glusterfs defaults 0 0
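Or, to do the same thing as a one-off mount instead of via fstab (assuming the mount point already exists):

mkdir -p /remote_glusterfs1
mount -t glusterfs /etc/glusterfs/glusterfs.vol /remote_glusterfs1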
m.roth at 5-cent.us wrote:
> Chad wrote:
>> m.roth at 5-cent.us wrote:
>>> Chad wrote:
>>>> I never used 2.x, so I am not sure if this is the issue or not, but do
>>>> you have an allow line in your glusterfsd.vol file?
>>>> That is how glusterfs authenticates clients; if the server cannot
>>>> authenticate the client, the mount will fail.
>>>> My allow line looks like this:
>>>> option auth.addr.tcb_brick.allow *
>>>>
>>>> Obviously your auth line would need to change "tcb_brick" to the name
>>>> of your volume export.
>>> Oh. Um, sorry, this is completely unobvious to me. Also, I would have
>>> assumed that glusterfs-volgen would have put in the correct name, which
>>> it gleaned from the command line.
>>>
>>> So, if the volume is /glusterfs1, and the export for mounting is
>>> /export/glusterfs1, tcb_brick should be changed to, um,
>>> /export/glusterfs1?
>>>
>> If you just post your .vol files we can read them and tell you if anything
>> is wrong.
>
> One other thing - is "brick" a reserved name, or does there have to be an
> actual subdirectory called brick?
>
> Here's glusterfsd.vol:
> **********************************
> <snip of cmd line from volgen>
> # TRANSPORT-TYPE tcp
> # PORT 6996
>
> volume posix
> type storage/posix
> option directory /glusterfs1
> end-volume
>
> volume locks
> type features/locks
> subvolumes posix
> end-volume
>
> volume brick
> type performance/io-threads
> option thread-count 8
> subvolumes locks
> end-volume
>
> volume server
> type protocol/server
> option transport-type tcp
> option auth.addr.brick.allow *
> option listen-port 6996
> subvolumes brick
> end-volume
> *************************
> And the short version of glusterfs.vol is
> # RAID 1
> # TRANSPORT-TYPE tcp
> # PORT 6996
>
> volume raidvol-0
> type protocol/client
> option transport-type tcp
> option remote-host raidvol-0
> option remote-port 6996
> option remote-subvolume brick
> end-volume
>
> volume raidvol-1
> type protocol/client
> option transport-type tcp
> option remote-host raidvol-1
> option remote-port 6996
> option remote-subvolume brick
> end-volume
>
> volume server002:/glusterfs1
> type protocol/client
> option transport-type tcp
> option remote-host server002:/glusterfs1
> option remote-port 6996
> option remote-subvolume brick
> end-volume
>
> <servers 003-34, minus the several dead ones...>
> volume mirror-0
> type cluster/replicate
> subvolumes raidvol-0 raidvol-1
> end-volume
>
> volume mirror-1
> type cluster/replicate
> subvolumes server002:/glusterfs1 bee003:/glusterfs1
> end-volume
> <mirrors 2-15...>
>
> volume distribute
> type cluster/distribute
> subvolumes mirror-0 mirror-1 mirror-2 mirror-3 mirror-4 mirror-5
> mirror-6 mirror-7 mirror-8 mirror-9 mirror-10 mirror-11 mirror-12
> mirror-13 mirror-14 mirror-15
> end-volume
>
> volume writebehind
> type performance/write-behind
> option cache-size 4MB
> subvolumes distribute
> end-volume
>
> volume io-cache
> type performance/io-cache
> option cache-size 1GB
> subvolumes writebehind
> end-volume
> **********************************************
>
> mark
>