[Gluster-devel] replicate data between 2 servers and 1 client

Alain Gonzalez alaingonza at gmail.com
Thu Feb 12 09:00:34 UTC 2009


glusterfs gives me this error:

2009-02-12 10:00:23 E [xlator.c:117:xlator_set_type] libglusterfs/xlator:
dlopen(/usr/lib/glusterfs/1.3.7/xlator/cluster/ha.so):
/usr/lib/glusterfs/1.3.7/xlator/cluster/ha.so: cannot open shared object
file: No such file or directory

What packages should I install to get ha.so?

Thanks for your help, and please forgive my bad English.

2009/2/11 Raghavendra G <raghavendra at zresearch.com>

> As a side note, you can avoid ha entirely if you use replicate on the client side.
>
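> A minimal client-side replicate spec might look roughly like the sketch below (the hostnames and the exported subvolume name brick1 just follow the earlier examples and should be adjusted to your setup); the client then writes to both servers itself, so no ha translator is needed:
>
> volume remote1
>   type protocol/client
>   option transport-type tcp/client
>   option remote-host server1
>   option remote-subvolume brick1
> end-volume
>
> volume remote2
>   type protocol/client
>   option transport-type tcp/client
>   option remote-host server2
>   option remote-subvolume brick1
> end-volume
>
> volume replicate
>   type cluster/afr
>   subvolumes remote1 remote2
> end-volume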
>
> On Wed, Feb 11, 2009 at 9:51 PM, Raghavendra G <raghavendra at zresearch.com> wrote:
>
>> Hi Alain,
>>
>> In the configuration you are using, replicate runs on the server side, which
>> means the servers replicate among themselves. If the server the client is
>> currently talking to goes down, the client has to 'switch' to the other
>> server. That switching can be provided by the High Availability (HA)
>> translator on the client side.
>>
>> On the client, the configuration can be:
>>
>> volume client1
>>   type protocol/client
>>   .
>>   .
>>   option remote-host server1
>> end-volume
>>
>>
>> volume client2
>>   type protocol/client
>>   .
>>   .
>>   option remote-host server2
>> end-volume
>>
>> volume ha
>>   type cluster/ha
>>   subvolumes client1 client2
>> end-volume
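>>
>> With that spec saved on the client (for example as /etc/glusterfs/glusterfs-client.vol; the path and mount point here are only examples), the volume is then mounted roughly like this:
>>
>>   glusterfs -f /etc/glusterfs/glusterfs-client.vol /mnt/glusterfs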
>>
>> regards,
>>
>>
>> On Wed, Feb 11, 2009 at 4:34 PM, Alain Gonzalez <alaingonza at gmail.com> wrote:
>>
>>> Hi,
>>>
>>> I have a problem. I need to replicate data between three machines. Two of
>>> the machines are servers and one is a client.
>>>
>>> If I change data on the client, it should also change on the two servers. And if I
>>> change any data on server1, it should also change on server2 and on the client.
>>>
>>> I have done tests following the glusterfs tutorials, but I don't get good
>>> results.
>>>
>>> Can someone help me?
>>>
>>> #server1
>>>
>>> volume brick1
>>>    type storage/posix
>>>    option directory /home/export #created
>>> end-volume
>>>
>>> volume brick2
>>>    type protocol/client
>>>    option transport-type tcp/client
>>>    option remote-host 192.168.x.x   # IP address of server2
>>>    option remote-subvolume brick1   # use brick1 on server2
>>> end-volume
>>>
>>> volume afr
>>>    type cluster/afr
>>>    subvolumes brick1 brick2
>>> end-volume
>>>
>>> volume server
>>>    type protocol/server
>>>    option transport-type tcp/server
>>>    subvolumes brick1 afr
>>>    option auth.ip.brick1.allow * # all
>>>    option auth.ip.afr.allow * # all
>>> end-volume
>>>
>>> #server2
>>>
>>> volume brick1
>>>    type storage/posix
>>>    option directory /home/export #created
>>> end-volume
>>>
>>> volume brick2
>>>    type protocol/client
>>>    option transport-type tcp/client
>>>    option remote-host 192.168.x.x   # IP address of server1
>>>    option remote-subvolume brick1   # use brick1 on server1
>>> end-volume
>>>
>>> volume afr
>>>    type cluster/afr
>>>    subvolumes brick2 brick1
>>> end-volume
>>>
>>> volume server
>>>    type protocol/server
>>>    option transport-type tcp/server
>>>    subvolumes brick1 afr
>>>    option auth.ip.brick1.allow * #all
>>>    option auth.ip.afr.allow * #all
>>> end-volume
>>>
>>> #client
>>>
>>> volume brick
>>>    type protocol/client
>>>    option transport-type tcp/client # for TCP/IP transport
>>>    option remote-host 192.168.x.x   # IP address of the server (server1 in this case)
>>>    option remote-subvolume afr      # name of the remote volume
>>> end-volume
>>>
>>> Best Regards
>>>
>>> --
>>> Alain Gonzalez
>>>
>>>
>>>
>>
>>
>> --
>> Raghavendra G
>>
>>
>
>
> --
> Raghavendra G
>
>


-- 
Alain Gonzalez