[Gluster-users] Not mounting

Chad ccolumbu at hotmail.com
Mon Mar 8 20:08:18 UTC 2010


I am not offended :)

I understand now what is wrong (I think).
To summarize what you want:
On servers 1-N, export the directory /glusterfsr1 so it can be mounted by clients (servers can be clients too) at the mount point /export/glusterfs.
/glusterfsr1 should behave like a RAID-1 mirrored array.

Here is what you need:
/etc/glusterfs/glusterfsd.vol (the same as the last one I sent you)
**********************************
<snip of cmd line from volgen>
# TRANSPORT-TYPE tcp
# PORT 6996

volume posix
     type storage/posix
     option directory /glusterfsr1
end-volume

volume locks
     type features/locks
     subvolumes posix
end-volume

volume brick
     type performance/io-threads
     option thread-count 8
     subvolumes locks
end-volume

volume server
     type protocol/server
     option transport-type tcp
     option auth.addr.brick.allow *
     option listen-port 6996
     subvolumes brick
end-volume

*************************
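Once that file is in place, the server daemon has to be started against it on each server. A minimal sketch, assuming the glusterfsd binary is on the PATH and the default volfile location above (package installs usually ship an init script, e.g. /etc/init.d/glusterfsd, instead):

```shell
# Start the glusterfs server daemon using the volfile above
# (run this on every server, 1-N):
glusterfsd -f /etc/glusterfs/glusterfsd.vol

# Sanity check: confirm it is listening on the port set in the volfile
netstat -ltn | grep 6996
```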
/etc/glusterfs/glusterfs.vol is:
# RAID 1
# TRANSPORT-TYPE tcp
# PORT 6996

volume raidvol-0
     type protocol/client
     option transport-type tcp
     option remote-host <name of server 0>
     option remote-port 6996
     option remote-subvolume brick
end-volume

volume raidvol-1
     type protocol/client
     option transport-type tcp
     option remote-host <name of server 1>
     option remote-port 6996
     option remote-subvolume brick
end-volume

volume raidvol-2
     type protocol/client
     option transport-type tcp
     option remote-host <name of server 2>
     option remote-port 6996
     option remote-subvolume brick
end-volume

. . .

volume raidvol-N
     type protocol/client
     option transport-type tcp
     option remote-host <name of server N>
     option remote-port 6996
     option remote-subvolume brick
end-volume

volume mirror-0
     type cluster/replicate
     subvolumes raidvol-0 raidvol-1 raidvol-2 . . . raidvol-N
end-volume

volume writebehind
     type performance/write-behind
     option cache-size 4MB
     subvolumes mirror-0
end-volume

volume io-cache
     type performance/io-cache
     option cache-size 1GB
     subvolumes writebehind
end-volume
**********************************************

Then instead of the mount command I gave you last time, use this /etc/fstab entry (it works on clients or servers):
/etc/glusterfs/glusterfs.vol	/export/glusterfs	glusterfs	defaults	0 0
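For reference, that fstab entry corresponds to mounting by hand roughly like this (a sketch; the exact client invocation depends on how your glusterfs package installed its mount helper):

```shell
# Mount the volume directly from the client volfile:
glusterfs -f /etc/glusterfs/glusterfs.vol /export/glusterfs

# Or via mount(8), which is what the fstab entry does at boot:
mount -t glusterfs /etc/glusterfs/glusterfs.vol /export/glusterfs

# Verify the mount:
df -h /export/glusterfs
```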




m.roth at 5-cent.us wrote:
>> Someone who knows pipe up here, but I think that I am confused.
>> Lots of things look wrong to me, but I am not sure what you are trying to
>> do, and what is different between 2.x and 3.x.
>>
>> I think what you want:
>> 1 (or more) mount points/directories exported from your 2 servers
>> server002 and bee003 to be mounted by your N nodes.
> 
> You missed the <...>. I have 30 nodes. I've created /glusterfsr1 on each,
> and I've created /export/glusterfs on each. The intent is that
> /glusterfsr1 should be part of the glusterfs, and it should be mounted on
> each on /export/glusterfs. ALL of them are servers.
> <snip>
>> Are you exporting 2 mounts from 4 different servers?
> 
> One mount point, from 30 servers. One mount point on each server. At
> least, this is what glusterfs-volgen created as
> /etc/glusterfs/glusterfsr1-export.vol and /etc/glusterfsr1-mount.vol.
> Rather than renaming them, I symlinked them to
> /etc/glusterfs/glusterfsd.vol and /etc/glusterfs/glusterfs.vol,
> respectively.
> 
>> Are you really trying to create a raid0 array across all your client
>> nodes?
> 
> No, all the server nodes. I haven't done anything with a client node yet -
> I can't start glusterfs on my servers.
> 
> Hey, no insult meant to you, Chad, but isn't there *anyone* else on this
> list, who's been working with glusterfs longer? I mean, from the googling
> I've been doing, it looks like 3.0 just came out last fall, and though
> you've been doing your best to help, no one else seems to be stepping up.
> 
>          mark
> 
> 
> 


