[Gluster-devel] replicate data between 2 servers and 1 client

Alain Gonzalez alaingonza at gmail.com
Fri Feb 13 11:16:52 UTC 2009


Hi,

With your help I have these results:

1- Changed data on server1: data changed on server2 and the client. OK
2- Changed data on the client: data changed on server1 and server2. OK
3- Changed data on server2: data *not* changed on server1 and the client. :(

config:

#server1

volume ser01
 type storage/posix
 option directory /home/export/
end-volume

volume ser011
 type features/locks
 subvolumes ser01
end-volume

### Add network serving capability to above brick.
volume server
 type protocol/server
 option transport-type tcp
 subvolumes ser011
 option auth.addr.ser01.allow * # Allow access to "ser01" volume
 option auth.addr.ser011.allow * # Allow access to "ser011" volume
end-volume

#server2

volume ser02
 type storage/posix
 option directory /home/export/
end-volume

volume ser022
 type features/locks
 subvolumes ser02
end-volume

### Add network serving capability to above brick.
volume server
 type protocol/server
 option transport-type tcp
 subvolumes ser022
 option auth.addr.ser02.allow * # Allow access to "ser02" volume
 option auth.addr.ser022.allow * # Allow access to "ser022" volume
end-volume
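
For reference, a minimal way to start each server with the volfile above (the
path /etc/glusterfs/glusterfsd.vol is only an assumption; use wherever you
saved the file):

# on server1 and on server2
glusterfsd -f /etc/glusterfs/glusterfsd.vol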

#client

### Add client feature and attach to remote subvolume of server1
volume cli01
 type protocol/client
 option transport-type tcp
 option remote-host 192.168.240.227      # IP address of the remote brick
 option remote-subvolume ser011          # name of the remote volume
end-volume

### Add client feature and attach to remote subvolume of server2
volume cli02
 type protocol/client
 option transport-type tcp
 option remote-host 192.168.240.228      # IP address of the remote brick
 option remote-subvolume ser022          # name of the remote volume
end-volume

volume afr
 type cluster/afr
 subvolumes cli01 cli02
end-volume
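
The afr volume above is where the replication actually happens, so the client
volfile has to be mounted; a sketch, assuming it is saved as
/etc/glusterfs/glusterfs.vol and /mnt/glusterfs already exists:

glusterfs -f /etc/glusterfs/glusterfs.vol /mnt/glusterfs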

Regards

2009/2/13 Krishna Srinivas <krishna at zresearch.com>

> On Fri, Feb 13, 2009 at 1:50 PM, Alain Gonzalez <alaingonza at gmail.com>
> wrote:
> > I changed the vol files because I need data replicated on three machines
> > (two servers and one client). If I change data on one machine, the data
> > must be changed on the other two machines...
> >
> > My actual vol file:
> >
> > Server1:
> >
> > volume brick
> >  type storage/posix
> >  option directory /home/export/
> > end-volume
> >
> >
> > ### Add network serving capability to above brick.
> > volume server
> >  type protocol/server
> >  option transport-type tcp
> >  subvolumes brick
> >  option auth.addr.brick.allow * # Allow access to "brick" volume
> > end-volume
> >
> > Server2
> >
> > volume brick
> >  type storage/posix
> >  option directory /home/export/
> > end-volume
> >
> > ### Add network serving capability to above brick.
> > volume server
> >  type protocol/server
> >  option transport-type tcp
> >  subvolumes brick
> >  option auth.addr.brick.allow * # Allow access to "brick" volume
> > end-volume
> >
> > Client:
> >
> > ### Add client feature and attach to remote subvolume of server1
> > volume brick1
> >  type protocol/client
> >  option transport-type tcp
> >  option remote-host 192.168.240.227      # IP address of the remote brick
> >  option remote-subvolume brick           # name of the remote volume
> > end-volume
> >
> > ### Add client feature and attach to remote subvolume of server2
> > volume brick2
> >  type protocol/client
> >  option transport-type tcp
> >  option remote-host 192.168.240.228      # IP address of the remote brick
> >  option remote-subvolume brick           # name of the remote volume
> > end-volume
> >
> > volume afr
> >  type cluster/afr
> >  subvolumes brick1 brick2
> > end-volume
> >
> > Raghavendra G told me that GlusterFS 2.0 requires posix-locks. I tried
> > changing "type storage/posix" to "type features/posix-locks", but it did
> > not work correctly.
> >
>
>
> You need to have a separate "features/locks" translator between
> "storage/posix" and "protocol/server" volumes.
>
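
For reference, a minimal sketch of the layering described above (volume names
and the backend path are placeholders):

volume posix
 type storage/posix
 option directory /home/export/
end-volume

volume locks
 type features/locks
 subvolumes posix
end-volume

volume server
 type protocol/server
 option transport-type tcp
 subvolumes locks
 option auth.addr.locks.allow *   # export the "locks" volume, not the bare posix one
end-volume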



-- 
Alain Gonzalez