[Gluster-devel] Only one "NFS" export in 1.3.0-pre5.1?
Hans Einar Gautun
einar.gautun at statkart.no
Wed Jul 4 14:51:07 UTC 2007
Ok.
First: today's running setup on version 1.2.3, running in one server
process:
### Export volume "home" with the contents of "/tellus/home" directory.
volume home
type storage/posix # POSIX FS translator
option directory /tellus/home # Export this directory
end-volume
volume local
type storage/posix # POSIX FS translator
option directory /tellus/local # Export this directory
end-volume
### Add network serving capability to above brick.
volume server
type protocol/server
option transport-type tcp/server # For TCP/IP transport
# option transport-type ib-sdp/server # For Infiniband transport
option bind-address 159.162.84.8 # Default is to listen on all interfaces
option listen-port 6996 # Default is 6996
option client-volume-filename /etc/glusterfs/glusterfs-client.home.vol
subvolumes home
# NOTE: Access to any volume through protocol/server is denied by
# default. You need to explicitly grant access through the "auth"
# option.
option auth.ip.home.allow 159.162.84.* # Allow access to "home" volume
#option auth.ip.brick.allow 127.0.0.1 # Allow access to "brick" volume
end-volume
volume server
type protocol/server
option transport-type tcp/server # For TCP/IP transport
# option transport-type ib-sdp/server # For Infiniband transport
option bind-address 159.162.84.8 # Default is to listen on all interfaces
option listen-port 6997 # Default is 6996
option client-volume-filename /etc/glusterfs/glusterfs-client.local.vol
subvolumes local
# NOTE: Access to any volume through protocol/server is denied by
# default. You need to explicitly grant access through the "auth"
# option.
option auth.ip.local.allow 159.162.84.* # Allow access to "local" volume
end-volume
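
For reference, a client mounting the "home" export above would use a volume
spec along these lines (a minimal sketch in 1.2/1.3-era syntax; the client
volume name is illustrative):

```
### Client-side spec for mounting the "home" export
volume client
type protocol/client
option transport-type tcp/client # Connect over TCP/IP
option remote-host 159.162.84.8 # The server's bind-address
option remote-port 6996 # Must match the server's listen-port
option remote-subvolume home # Name of the exported volume on the server
end-volume
```

The second export ("local") would need the same spec with remote-port 6997
and remote-subvolume local.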
_________________________________________________________________
I want to use the newest version, 1.3.0-pre5, and here is a test config
that does not work:
volume tester
type storage/posix # POSIX FS translator
option directory /m1 # Export this directory
end-volume
volume test
#volume io-threads1
type performance/io-threads
option thread-count 4
option cache-size 32MB
subvolumes tester
end-volume
### Add POSIX record locking support to the storage brick
#volume test
# type features/posix-locks
# option mandatory on # enables mandatory locking on all files
# subvolumes io-threads1
#end-volume
### Add network serving capability to above brick.
volume server
type protocol/server
option transport-type tcp/server # For TCP/IP transport
# option transport-type ib-sdp/server # For Infiniband transport
# option bind-address 192.168.1.10 # Default is to listen on all interfaces
option listen-port 6996 # Default is 6996
# option client-volume-filename /etc/glusterfs/glusterfs-client.vol
subvolumes tester
# NOTE: Access to any volume through protocol/server is denied by
# default. You need to explicitly grant access through the "auth"
# option.
option auth.ip.tester.allow 159.162.84.* # Allow access to "tester" volume
# option auth.ip.trashcan.allow 192.168.* # Allow access to "trashcan" volume
end-volume
volume tester2
type storage/posix # POSIX FS translator
option directory /m2 # Export this directory
end-volume
volume test2
#volume io-threads2
type performance/io-threads
option thread-count 4
option cache-size 32MB
subvolumes tester2
end-volume
### Add POSIX record locking support to the storage brick
#volume test2
# type features/posix-locks
# option mandatory on # enables mandatory locking on all files
# subvolumes io-threads2
#end-volume
### Add 'trashcan' support, which stores the deleted files in '/.trash' dir
#volume trashcan
# type features/trash
# subvolumes brick
#end-volume
### Add network serving capability to above brick.
volume server
type protocol/server
option transport-type tcp/server # For TCP/IP transport
# option transport-type ib-sdp/server # For Infiniband transport
# option bind-address 192.168.1.10 # Default is to listen on all interfaces
option listen-port 6997 # Default is 6996
# option client-volume-filename /etc/glusterfs/glusterfs-client.vol
subvolumes tester2
# NOTE: Access to any volume through protocol/server is denied by
# default. You need to explicitly grant access through the "auth"
# option.
option auth.ip.tester2.allow 159.162.84.* # Allow access to "tester2" volume
# option auth.ip.trashcan.allow 192.168.* # Allow access to "trashcan" volume
end-volume
Only the first volume is mountable, if any. If I remove one of the two
definitions, the config works.
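
If running two server processes is not desired, one possible alternative (a
sketch only, assuming 1.3.0-pre5's protocol/server translator accepts more
than one subvolume) is to export both volumes through a single
protocol/server instance on one port:

```
### Serve both exports from one protocol/server instance
volume server
type protocol/server
option transport-type tcp/server # For TCP/IP transport
option listen-port 6996 # One port serves both exports
subvolumes tester tester2 # List both volumes to export
option auth.ip.tester.allow 159.162.84.* # Allow access to "tester" volume
option auth.ip.tester2.allow 159.162.84.* # Allow access to "tester2" volume
end-volume
```

Clients would then pick the export they want via "option remote-subvolume
tester" or "option remote-subvolume tester2" in their client specs.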
regards
Einar
On ons, 2007-07-04 at 19:39 +0530, Anand Avati wrote:
> Hans,
> I didn't quite understand your question. Can you please attach your
> configs and mention what exactly did not work?
>
> regards,
> avati
>
>
> 2007/7/4, Hans Einar Gautun < einar.gautun at statkart.no>:
> Hi all,
>
> I'm not able to use my old server config declaring two
> different
> directories for plain "NFS" export.
> Is this right, or is the syntax just different?
> Or do I have to start another server process for the second
> directory?
>
> Thanks in advance.
>
> --
> Einar Gautun gauhan at statkart.no
>
> Statens kartverk | Norwegian Mapping Authority
> 3507 Hønefoss | NO-3507 Hønefoss, Norway
>
> Ph +47 32118372 Fax +47 32118101 Mob +47 92692662
>
>
> _______________________________________________
> Gluster-devel mailing list
> Gluster-devel at nongnu.org
> http://lists.nongnu.org/mailman/listinfo/gluster-devel
>
>
>
> --
> Anand V. Avati
--
Einar Gautun gauhan at statkart.no
Statens kartverk | Norwegian Mapping Authority
3507 Hønefoss | NO-3507 Hønefoss, Norway
Ph +47 32118372 Fax +47 32118101 Mob +47 92692662