[Gluster-users] simple afr client setup

Adrian Terranova aterranova at gmail.com
Sun May 3 23:46:06 UTC 2009


ok - I figured it out (at least for the version 1.3 that ships with Ubuntu):
as long as the files exist (even at 0 length) on the new volume, it will
start to sync them. I'll move to the latest release and try out a few more
scenarios.
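
For reference, the rough sequence that worked on my test box - the export
paths and mount point are just my local examples, and the exact heal trigger
is from memory, so treat it as a sketch rather than gospel:

# recreate the missing tree on the emptied export as 0-length placeholders,
# copying the names from the surviving export (repeat for /home/export2)
cd /home/export3 && find . -type d -exec mkdir -p /home/export1/{} \;
cd /home/export3 && find . -type f -exec touch /home/export1/{} \;

# then read everything back through the client mount so afr notices the
# stale copies and fills them in
find /mnt/glusterfs -type f -exec head -c 1 {} \; > /dev/null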

(That's good enough for me - I'm not looking for perfect, just something
reasonable.)

-Adrian

On Sun, May 3, 2009 at 12:48 AM, Adrian Terranova <aterranova at gmail.com> wrote:

> crap - just realized I cut and pasted the server config twice (sorry about
> that). The client-side pieces that should have been there:
>
> volume client3
>   type protocol/client
>   option transport-type tcp/client     # for TCP/IP transport
>   option remote-host 127.0.0.1      # IP address of the remote brick
>   option remote-port 6998              # default server port is 6996
>   option remote-subvolume brick3        # name of the remote volume
> end-volume
>
> ## Add AFR (Automatic File Replication) feature.
> volume afr
>   type cluster/afr
>   subvolumes client3 client1 client2
> #  option replicate *:3
> end-volume
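>
> (client1 and client2 fell victim to the snip below - they should just be
> client3 with the port and subvolume changed to match the other two server
> volumes, roughly as follows; in the actual file they sit above the afr
> volume, since subvolumes have to be defined before they are referenced:)
>
> volume client1
>   type protocol/client
>   option transport-type tcp/client     # for TCP/IP transport
>   option remote-host 127.0.0.1      # IP address of the remote brick
>   option remote-port 6996              # matches the brick1 server volume
>   option remote-subvolume brick1        # name of the remote volume
> end-volume
>
> volume client2
>   type protocol/client
>   option transport-type tcp/client     # for TCP/IP transport
>   option remote-host 127.0.0.1      # IP address of the remote brick
>   option remote-port 6997              # matches the brick2 server volume
>   option remote-subvolume brick2        # name of the remote volume
> end-volume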
>
>
> [snip]
>
> On Sun, May 3, 2009 at 12:42 AM, Adrian Terranova <aterranova at gmail.com> wrote:
>
>> Hello all,
>>
>> I've set up AFR - and am very impressed with the product - however, when I
>> delete /home/export1 and /home/export2, what needs to happen for autoheal
>> to kick in? I'd like to understand this in some detail before using it for
>> my home directory data; mostly I'm just trying to work out the procedure
>> for adding or replacing a volume. I tried remounting the client and
>> restarting the server, along with a couple of find variations, but none of
>> them seemed to work. Is this an artifact of my one-host setup, or something
>> else?
>>
>> New files seem to show up, but the existing files / directories don't seem
>> to come back when I read them.
>>
>> How would I get my files back onto replaced subvolumes?
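>>
>> (For reference, the find variations I tried were along these lines - the
>> mount point is just my local example:)
>>
>> find /mnt/glusterfs -type f -exec head -c 1 {} \; > /dev/null
>> find /mnt/glusterfs -type f -print0 | xargs -0 head -c 1 > /dev/null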
>>
>> --Adrian
>>
>>
>>
>> XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX
>> [snip]server
>> peril at mythbuntufe-desktop:/etc/glusterfs$ grep -v \^#
>> glusterfs-server.vol |more
>>
>>
>>
>> volume posix1
>>   type storage/posix                    # POSIX FS translator
>>   option directory /home/export1        # Export this directory
>> end-volume
>>
>> volume brick1
>>   type features/posix-locks
>>   option mandatory on          # enables mandatory locking on all files
>>   subvolumes posix1
>> end-volume
>>
>> volume server
>>   type protocol/server
>>   option transport-type tcp/server     # For TCP/IP transport
>>   option listen-port 6996              # Default is 6996
>>   subvolumes brick1
>>   option auth.ip.brick1.allow *         # access to "brick" volume
>> end-volume
>>
>>
>>
>> volume posix2
>>   type storage/posix                    # POSIX FS translator
>>   option directory /home/export2        # Export this directory
>> end-volume
>>
>> volume brick2
>>   type features/posix-locks
>>   option mandatory on          # enables mandatory locking on all files
>>   subvolumes posix2
>> end-volume
>>
>> volume server
>>   type protocol/server
>>   option transport-type tcp/server     # For TCP/IP transport
>>   option listen-port 6997              # Default is 6996
>>   subvolumes brick2
>>   option auth.ip.brick2.allow * # Allow access to "brick" volume
>> end-volume
>>
>>
>>
>>
>> volume posix3
>>   type storage/posix                    # POSIX FS translator
>>   option directory /home/export3        # Export this directory
>> end-volume
>>
>> volume brick3
>>   type features/posix-locks
>>   option mandatory on          # enables mandatory locking on all files
>>   subvolumes posix3
>> end-volume
>>
>> volume server
>>   type protocol/server
>>   option transport-type tcp/server     # For TCP/IP transport
>>   option listen-port 6998              # Default is 6996
>>   subvolumes brick3
>>   option auth.ip.brick3.allow *         # access to "brick" volume
>> end-volume
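>>
>> (In case it matters, the daemon is started straight from this spec file
>> and the client is mounted from its own spec - the spec file names and
>> mount point below are just my local examples:)
>>
>> glusterfsd -f /etc/glusterfs/glusterfs-server.vol
>> glusterfs -f /etc/glusterfs/glusterfs-client.vol /mnt/glusterfs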
>>
>>
>> XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX
>>
>> [snip]client
>> peril at mythbuntufe-desktop:/etc/glusterfs$ grep -v \^#
>> glusterfs-server.vol |more
>>
>>
>>
>> [this pasted spec was an exact repeat of the server spec above rather than
>> the client spec - see the corrected client volumes at the top of this
>> thread]
>>
>>
>>
>>
>