[Gluster-devel] replicate data between 2 servers and 1 client

Alain Gonzalez alaingonza at gmail.com
Thu Feb 12 13:49:44 UTC 2009


I configured it with the afr option, but when I create, modify, or delete a file
on server2, the change is not replicated to the other server or to the client.
Replication only works between server1 and the client.

server1:

volume brick
 type storage/posix
 option directory /home/export/
end-volume

### Add network serving capability to above brick.
volume server
 type protocol/server
 option transport-type tcp
 subvolumes brick
 option auth.addr.brick.allow * # Allow access to "brick" volume
end-volume

server2:

volume brick
 type storage/posix
 option directory /home/export/
end-volume

### Add network serving capability to above brick.
volume server
 type protocol/server
 option transport-type tcp
 subvolumes brick
 option auth.addr.brick.allow * # Allow access to "brick" volume
end-volume

client:

### Add client feature and attach to remote subvolume of server1
volume brick1
 type protocol/client
 option transport-type tcp
 option remote-host 192.168.240.227      # IP address of the remote brick
 option remote-subvolume brick        # name of the remote volume
end-volume

### Add client feature and attach to remote subvolume of server2
volume brick2
 type protocol/client
 option transport-type tcp
 option remote-host 192.168.240.228      # IP address of the remote brick
 option remote-subvolume brick        # name of the remote volume
end-volume

volume afr
 type cluster/afr
 subvolumes brick1 brick2
end-volume

What am I doing wrong?
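For reference, the volume files above would typically be started and checked along these lines (the volfile paths and mount point are taken from the commands quoted later in this thread; the test filename is hypothetical):

```
# on server1 and server2: start the server daemon with its volfile
glusterfsd -f /usr/local/etc/glusterfs/glusterfs-server.vol

# on the client: mount the afr volume
glusterfs -f /usr/local/etc/glusterfs/glusterfs-client.vol /mnt/glusterfs

# quick replication check: create a file through the mount point,
# then look for it in /home/export/ on both servers
touch /mnt/glusterfs/replication-test
ls /home/export/   # run on each server
```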

2009/2/12 Raghavendra G <raghavendra at zresearch.com>

> Hi Alain,
>
>
> On Thu, Feb 12, 2009 at 3:57 PM, Alain Gonzalez <alaingonza at gmail.com>wrote:
>
>> OK. I installed glusterfs-2.0.0rc on the servers and the client, but when I
>> change data on the client, it only changes on server1, not on both servers.
>> I need changes made on the client to appear on both servers, and changes
>> made on server1 or server2 to appear on the other server and on the client.
>
>
> you need to use replicate (afr) to achieve this functionality.
>
>
>>
>>
>> My configuration:
>>
>> ###server1:
>> #execution command:  glusterfsd -f
>> /usr/local/etc/glusterfs/glusterfs-server.vol
>> #file config:
>>
>> volume brick
>>     type storage/posix
>>     option directory /home/export
>>  end-volume
>>
>> volume server
>>     type protocol/server
>>     option transport-type tcp/server
>>     subvolumes brick
>>     option auth.ip.brick.allow * # Allow access to brick
>> end-volume
>>
>> ###server2:
>> #execution command:  glusterfsd -f
>> /usr/local/etc/glusterfs/glusterfs-server.vol
>> #file config:
>>
>> volume brick
>>     type storage/posix
>>     option directory /home/export
>> end-volume
>>
>> volume server
>>     type protocol/server
>>     option transport-type tcp/server
>>     subvolumes brick
>>     option auth.ip.brick.allow * # Allow access to brick
>> end-volume
>>
>> ###client:
>> #execution command:  glusterfs -f
>> /usr/local/etc/glusterfs/glusterfs-client.vol  /mnt/glusterfs
>> #config file:
>>
>> volume brick1
>>     type protocol/client
>>     option transport-type tcp/client # for TCP/IP transport
>>     option remote-host 192.168.240.227   # IP address of server1
>>     option remote-subvolume brick    # name of the remote volume on
>> server1
>> end-volume
>>
>> volume brick2
>>     type protocol/client
>>     option transport-type tcp/client # for TCP/IP transport
>>     option remote-host 192.168.240.228   # IP address of server2
>>     option remote-subvolume brick    # name of the remote volume on
>> server2
>> end-volume
>>
>> volume ha
>>    type cluster/ha
>>    subvolumes brick1 brick2
>> end-volume
>>
>> Regards and thanks for help ;)
>>
>> 2009/2/12 Amar Tumballi (bulde) <amar at gluster.com>
>>
>> You need to install glusterfs-2.0.0rc1; glusterfs v1.3.7 doesn't have HA
>>>
>>> Regards,
>>> Amar
>>>
>>> 2009/2/12 Alain Gonzalez <alaingonza at gmail.com>
>>>
>>> glusterfs send me this error:
>>>>
>>>> 2009-02-12 10:00:23 E [xlator.c:117:xlator_set_type]
>>>> libglusterfs/xlator: dlopen(/usr/lib/glusterfs/1.3.7/xlator/cluster/ha.so):
>>>> /usr/lib/glusterfs/1.3.7/xlator/cluster/ha.so: cannot open shared object
>>>> file: No such file or directory
>>>>
>>>> What packages should I install to get ha.so?
>>>>
>>>> Thanks for your help. Forgive me for my bad English
>>>>
>>>> 2009/2/11 Raghavendra G <raghavendra at zresearch.com>
>>>>
>>>>> As a side note, you can avoid ha if you're using replicate on the client
>>>>> side.
>>>>>
>>>>>
>>>>> On Wed, Feb 11, 2009 at 9:51 PM, Raghavendra G <
>>>>> raghavendra at zresearch.com> wrote:
>>>>>
>>>>>> Hi Alain,
>>>>>>
>>>>>> In the configuration you are using, replicate runs on the server side,
>>>>>> which means the servers replicate among themselves. If the server the
>>>>>> client is currently communicating with goes down, the client has to
>>>>>> 'switch' to the other server. This functionality can be provided by
>>>>>> using High Availability (HA) on the client side.
>>>>>>
>>>>>> On the client, the configuration can be:
>>>>>>
>>>>>> volume client1
>>>>>>   type protocol/client
>>>>>>   .
>>>>>>   .
>>>>>>   option remote-host server1
>>>>>> end-volume
>>>>>>
>>>>>>
>>>>>> volume client2
>>>>>>   type protocol/client
>>>>>>   .
>>>>>>   .
>>>>>>   option remote-host server2
>>>>>> end-volume
>>>>>>
>>>>>> volume ha
>>>>>>   type cluster/ha
>>>>>>   subvolumes client1 client2
>>>>>> end-volume
>>>>>>
>>>>>> regards,
>>>>>>
>>>>>>
>>>>>> On Wed, Feb 11, 2009 at 4:34 PM, Alain Gonzalez <alaingonza at gmail.com
>>>>>> > wrote:
>>>>>>
>>>>>>> Hi,
>>>>>>>
>>>>>>> I have a problem: I need to replicate data between three machines.
>>>>>>> Two of the machines are servers and one is a client.
>>>>>>>
>>>>>>> When I change data on the client, it should also change on the two
>>>>>>> servers; and when I change data on server1, it should also change on
>>>>>>> server2 and on the client.
>>>>>>>
>>>>>>> I have run tests with the glusterfs tutorials, but without good
>>>>>>> results.
>>>>>>>
>>>>>>> Someone who can help me?
>>>>>>>
>>>>>>> #server1
>>>>>>>
>>>>>>> volume brick1
>>>>>>>    type storage/posix
>>>>>>>    option directory /home/export #created
>>>>>>> end-volume
>>>>>>>
>>>>>>> volume brick2
>>>>>>>    type protocol/client
>>>>>>>    option transport-type tcp/client
>>>>>>>    option remote-host 192.168.x.x   # IP address of server2
>>>>>>>    option remote-subvolume brick1   # use brick1 on server2
>>>>>>> end-volume
>>>>>>>
>>>>>>> volume afr
>>>>>>>    type cluster/afr
>>>>>>>    subvolumes brick1 brick2
>>>>>>> end-volume
>>>>>>>
>>>>>>> volume server
>>>>>>>    type protocol/server
>>>>>>>    option transport-type tcp/server
>>>>>>>    subvolumes brick1 afr
>>>>>>>    option auth.ip.brick1.allow * # all
>>>>>>>    option auth.ip.afr.allow * # all
>>>>>>> end-volume
>>>>>>>
>>>>>>> #server2
>>>>>>>
>>>>>>> volume brick1
>>>>>>>    type storage/posix
>>>>>>>    option directory /home/export #created
>>>>>>> end-volume
>>>>>>>
>>>>>>> volume brick2
>>>>>>>    type protocol/client
>>>>>>>    option transport-type tcp/client
>>>>>>>    option remote-host 192.168.x.x   # IP address of server1
>>>>>>>    option remote-subvolume brick1   # use brick1 on server1
>>>>>>> end-volume
>>>>>>>
>>>>>>> volume afr
>>>>>>>    type cluster/afr
>>>>>>>    subvolumes brick2 brick1
>>>>>>> end-volume
>>>>>>>
>>>>>>> volume server
>>>>>>>    type protocol/server
>>>>>>>    option transport-type tcp/server
>>>>>>>    subvolumes brick1 afr
>>>>>>>    option auth.ip.brick1.allow * #all
>>>>>>>    option auth.ip.afr.allow * #all
>>>>>>> end-volume
>>>>>>>
>>>>>>>
>>>>>>> #client
>>>>>>>
>>>>>>> volume brick
>>>>>>>    type protocol/client
>>>>>>>    option transport-type tcp/client # for TCP/IP transport
>>>>>>>    option remote-host 192.168.x.x   # IP address of server1
>>>>>>>    option remote-subvolume afr      # name of the remote volume
>>>>>>> end-volume
>>>>>>>
>>>>>>> Best Regards
>>>>>>>
>>>>>>> --
>>>>>>> Alain Gonzalez
>>>>>>>
>>>>>>> _______________________________________________
>>>>>>> Gluster-devel mailing list
>>>>>>> Gluster-devel at nongnu.org
>>>>>>> http://lists.nongnu.org/mailman/listinfo/gluster-devel
>>>>>>>
>>>>>>>
>>>>>>
>>>>>>
>>>>>> --
>>>>>> Raghavendra G
>>>>>>
>>>>>>
>>>>>
>>>>>
>>>>> --
>>>>> Raghavendra G
>>>>>
>>>>>
>>>>
>>>>
>>>> --
>>>> Alain Gonzalez
>>>>
>>>> _______________________________________________
>>>> Gluster-devel mailing list
>>>> Gluster-devel at nongnu.org
>>>> http://lists.nongnu.org/mailman/listinfo/gluster-devel
>>>>
>>>>
>>>
>>>
>>> --
>>> Amar Tumballi
>>> Gluster/GlusterFS Hacker
>>> [bulde on #gluster/irc.gnu.org]
>>> http://www.zresearch.com - Commoditizing Super Storage!
>>>
>>
>>
>>
>> --
>> Alain Gonzalez
>>
>
>
> regards,
> --
> Raghavendra G
>
>


-- 
Alain Gonzalez

