[Gluster-users] simple afr client setup
Adrian Terranova
aterranova at gmail.com
Sun May 3 04:42:04 UTC 2009
Hello all,

I've set up AFR and am very impressed with the product. However, when I delete /home/export1 and /home/export2, what needs to happen for auto-heal to occur? I'd like to understand this in some detail before using it for my home directory data; mostly I'm trying to work out the procedure for adding or replacing a subvolume. I tried remounting the client and restarting the server, along with a couple of find variations, but none of them seemed to work. Is this an artifact of my single-host setup, or something else?

New files do show up, but the existing files and directories don't seem to come back when I read them.

How would I get my files back onto the replaced subvolumes?
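(For reference, the kind of find variation I mean is the recursive read that is sometimes suggested for triggering AFR self-heal, since healing is driven by lookups through the client mount rather than by restarting daemons. A sketch, assuming the volume is mounted at /mnt/glusterfs:

    # walk the entire mount and read the first byte of every file, so
    # the AFR translator looks up each entry and can self-heal it
    find /mnt/glusterfs -print0 | xargs -0 head -c1 > /dev/null

This is my understanding of the mechanism, not something I've confirmed works for replaced subvolumes.)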
--Adrian
XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX
[snip]server
peril@mythbuntufe-desktop:/etc/glusterfs$ grep -v \^# glusterfs-server.vol | more
volume posix1
  type storage/posix                # POSIX FS translator
  option directory /home/export1    # export this directory
end-volume

volume brick1
  type features/posix-locks
  option mandatory on               # enable mandatory locking on all files
  subvolumes posix1
end-volume

volume server1
  type protocol/server
  option transport-type tcp/server  # TCP/IP transport
  option listen-port 6996           # default is 6996
  subvolumes brick1
  option auth.ip.brick1.allow *     # allow access to the brick1 volume
end-volume

volume posix2
  type storage/posix                # POSIX FS translator
  option directory /home/export2    # export this directory
end-volume

volume brick2
  type features/posix-locks
  option mandatory on               # enable mandatory locking on all files
  subvolumes posix2
end-volume

volume server2
  type protocol/server
  option transport-type tcp/server  # TCP/IP transport
  option listen-port 6997           # default is 6996
  subvolumes brick2
  option auth.ip.brick2.allow *     # allow access to the brick2 volume
end-volume

volume posix3
  type storage/posix                # POSIX FS translator
  option directory /home/export3    # export this directory
end-volume

volume brick3
  type features/posix-locks
  option mandatory on               # enable mandatory locking on all files
  subvolumes posix3
end-volume

volume server3
  type protocol/server
  option transport-type tcp/server  # TCP/IP transport
  option listen-port 6998           # default is 6996
  subvolumes brick3
  option auth.ip.brick3.allow *     # allow access to the brick3 volume
end-volume
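(Side note on how I sanity-check a spec file like this one: running glusterfsd in the foreground with debug logging makes parse problems, duplicate volume names, or port-bind failures show up right away. A sketch, assuming the file lives at /etc/glusterfs/glusterfs-server.vol:

    # run the server in the foreground (-N) with debug logging so
    # spec-file errors print to the terminal instead of a log file
    glusterfsd -f /etc/glusterfs/glusterfs-server.vol -N -L DEBUG

The exact flags may differ between releases, so check glusterfsd --help on your version.)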
XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX
[snip]client
peril@mythbuntufe-desktop:/etc/glusterfs$ grep -v \^# glusterfs-server.vol | more
volume posix1
  type storage/posix                # POSIX FS translator
  option directory /home/export1    # export this directory
end-volume

volume brick1
  type features/posix-locks
  option mandatory on               # enable mandatory locking on all files
  subvolumes posix1
end-volume

volume server1
  type protocol/server
  option transport-type tcp/server  # TCP/IP transport
  option listen-port 6996           # default is 6996
  subvolumes brick1
  option auth.ip.brick1.allow *     # allow access to the brick1 volume
end-volume

volume posix2
  type storage/posix                # POSIX FS translator
  option directory /home/export2    # export this directory
end-volume

volume brick2
  type features/posix-locks
  option mandatory on               # enable mandatory locking on all files
  subvolumes posix2
end-volume

volume server2
  type protocol/server
  option transport-type tcp/server  # TCP/IP transport
  option listen-port 6997           # default is 6996
  subvolumes brick2
  option auth.ip.brick2.allow *     # allow access to the brick2 volume
end-volume

volume posix3
  type storage/posix                # POSIX FS translator
  option directory /home/export3    # export this directory
end-volume

volume brick3
  type features/posix-locks
  option mandatory on               # enable mandatory locking on all files
  subvolumes posix3
end-volume

volume server3
  type protocol/server
  option transport-type tcp/server  # TCP/IP transport
  option listen-port 6998           # default is 6996
  subvolumes brick3
  option auth.ip.brick3.allow *     # allow access to the brick3 volume
end-volume
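(The paste above repeats the server spec; the client side of an AFR setup over these three bricks would normally be a separate spec file built from three protocol/client volumes joined by a cluster/afr volume. A minimal sketch, where 127.0.0.1, the mount point, and the names client1..client3 and afr0 are assumptions based on the single-host setup described above:

    volume client1
      type protocol/client
      option transport-type tcp/client
      option remote-host 127.0.0.1    # single-host setup
      option remote-port 6996
      option remote-subvolume brick1
    end-volume

    volume client2
      type protocol/client
      option transport-type tcp/client
      option remote-host 127.0.0.1
      option remote-port 6997
      option remote-subvolume brick2
    end-volume

    volume client3
      type protocol/client
      option transport-type tcp/client
      option remote-host 127.0.0.1
      option remote-port 6998
      option remote-subvolume brick3
    end-volume

    volume afr0
      type cluster/afr                # replicate files across all three bricks
      subvolumes client1 client2 client3
    end-volume

It would then be mounted with something along the lines of:

    glusterfs -f /etc/glusterfs/glusterfs-client.vol /mnt/glusterfs
)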