[Gluster-users] answer Re: AFR recovery question

Keith Freedman freedman at FreeFormIT.com
Sat Oct 11 07:57:38 UTC 2008


2 servers which AFR each other, each acting as both server & client.
I was pretty happy with it; I think if it could just not care about 
those silly symlink issues it'd be great.  Nice to know I can just 
add a brick and it'll just work.  I didn't mind the bit of manual 
intervention since it was minimal and ultimately got the job done.  But 
here's my config to help you understand.
The configs on both servers are identical.


volume home1
   type storage/posix                   # POSIX FS translator
   option directory /gluster/home        # Export this directory
end-volume

volume posix-locks-home1
   type features/posix-locks
   option mandatory on
   subvolumes home1
end-volume

## Reference volume "home2" from remote server
volume home2
   type protocol/client                   # client-side protocol translator
   option transport-type tcp/client
   option remote-host ##.##.##.##       # IP address of remote host
   option remote-subvolume posix-locks-home1     # use home1 on remote host
   option transport-timeout 10           # value in seconds; it should be set relatively low
end-volume

volume server
   type protocol/server
   option transport-type tcp/server     # For TCP/IP transport
   subvolumes posix-locks-home1
   option auth.addr.posix-locks-home1.allow ##.##.##.##,127.0.0.1  # Allow access to "home1" volume
end-volume

### Create automatic file replication
volume home
   type cluster/afr
   option read-subvolume posix-locks-home1
   subvolumes posix-locks-home1 home2
#  subvolumes posix-locks-home1
end-volume
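For reference, the self-heal trigger used later in this thread can be sketched as a small script.  Reading the first byte of every regular file forces AFR to open each one, which is what kicks off self-heal in this version.  The TARGET directory and sample file here are stand-ins so the sketch is runnable anywhere; on a real setup TARGET would be the AFR mount point (e.g. /home, or the /gluster/home export above):

```shell
#!/bin/sh
# Stand-in target directory; on a real node this would be the AFR mount.
TARGET=${TARGET:-/tmp/afr-heal-demo}
mkdir -p "$TARGET"
printf 'sample' > "$TARGET/example.txt"   # placeholder for replicated data

# -print0 / -0 keep filenames with spaces or newlines intact; head's
# output is discarded -- only the read itself matters for self-heal.
find "$TARGET" -type f -print0 | xargs -0 head -c1 > /dev/null
echo "read first byte of every file under $TARGET"
```

The redirect to /dev/null is deliberate: the content is irrelevant, the open/read on each file is what triggers the heal.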


At 09:19 PM 10/10/2008, Krishna Srinivas wrote:
>Keith,
>
>It was difficult to follow your mail. What is your setup? I understand
>that when the destinations of symlinks don't exist it causes problems
>during self-heal, is that right? Do you have server-side AFR or
>client-side?
>
>Krishna
>
>On Fri, Oct 10, 2008 at 4:16 PM, Keith Freedman 
><freedman at freeformit.com> wrote:
> > just a quick status for anyone who cares.
> >
> > once the find seemed to work properly for a few directories, I
> > remounted both servers with the proper AFR config and they seem to be
> > working just fine: auto-healing serverB as appropriate, and server1
> > getting updated when a file on serverB is updated.
> >
> > :)
> >
> >
> > At 03:22 AM 10/10/2008, Keith Freedman wrote:
> >>no one answered me so I'll just report my findings:
> >>
> >>Server1 and ServerB, both full of data.  Gluster 1.4pre5.  Fedora Core 9.
> >>The disk on ServerB crashed; lost everything.  Re-installed and
> >>copied over my AFR config from Server1, changing IP addresses as appropriate.
> >>
> >>I'm using find /home/XXXX -type f -print0 | xargs -0 head -c1 > /dev/null
> >>to auto-heal the files, and it seems to be going just fine.
> >>
> >>Once that's finished, I'll re-add ServerB to Server1's AFR config and
> >>I presume it'll be fine.
> >>
> >>Anyway, it was a minor irritation, and overall the auto-healing, once
> >>going, has been a lifesaver.
> >>
> >>Keith
> >>
> >>At 06:21 AM 10/9/2008, Keith Freedman wrote:
> >> >I have 2 servers that AFR each other.
> >> >
> >> >one of them suffered a drive failure and is being rebuilt.
> >> >
> >> >The question is: what will happen if I just mount the empty drive
> >> >back as the AFR node?
> >> >
> >> >Will it just start grabbing the data from the other server (which is
> >> >exactly what I want), OR
> >> >will it start deleting the data from the other server (which is
> >> >terribly bad)?
> >> >
> >> >Another thought was: on the current working server with good data,
> >> >disable the remote AFR node in its config (so it's only using AFR
> >> >on itself), then leave the other machine's config as-is and
> >> >turn it on.
> >> >This way I can be sure that the node with the data won't go nuts and
> >> >start deleting, but that updates to it will get replicated to the
> >> >other machine.
> >> >
> >> >This particular setup is running 1.4pre5, if that changes the answer.
> >> >
> >> >Thanks,
> >> >Keith
> >> >
> >> >
> >> >_______________________________________________
> >> >Gluster-users mailing list
> >> >Gluster-users at gluster.org
> >> >http://zresearch.com/cgi-bin/mailman/listinfo/gluster-users
> >>
> >>
> >
> >
> >




