[Gluster-devel] GlusterFS 1.3.0-pre2.3: AFR not working
Gerry Reno
greno at verizon.net
Wed Apr 4 13:30:50 UTC 2007
Avati,
Yes, of course, that works. So it is similar to DRBD, where you must only
interact through the exposed mounts and never directly with the underlying
subsystem.
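
For anyone following along, a minimal check through the mount (using the
mount point and export paths from the config quoted below) would be:

  touch /mnt/glusterfs/file1
  find /root/export*   # file1 should now show up under both export0 and export1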
Gerry
Anand Avati wrote:
> Gerry,
> you have to touch via /mnt/glusterfs, not in the backend directly!
>
> avati
>
>
> On Tue, Apr 03, 2007 at 04:50:22PM -0400, Gerry Reno wrote:
>
>> I have not been successful at getting GlusterFS with the AFR translator
>> working on 2 bricks:
>>
>> =====================
>> test-server0.vol
>> =====================
>> volume brick
>>   type storage/posix # POSIX FS translator
>>   option directory /root/export0 # Export this directory
>> end-volume
>>
>> ### Add network serving capability to above brick.
>> volume server
>>   type protocol/server
>>   option transport-type tcp/server # For TCP/IP transport
>>   option listen-port 6996 # Default is 6996
>>   subvolumes brick
>>   option auth.ip.brick.allow * # Allow full access to "brick" volume
>> end-volume
>>
>> =====================
>> test-server1.vol
>> =====================
>> volume brick
>>   type storage/posix # POSIX FS translator
>>   option directory /root/export1 # Export this directory
>> end-volume
>>
>> ### Add network serving capability to above brick.
>> volume server
>>   type protocol/server
>>   option transport-type tcp/server # For TCP/IP transport
>>   option listen-port 6997 # Default is 6996
>>   subvolumes brick
>>   option auth.ip.brick.allow * # Allow full access to "brick" volume
>> end-volume
>>
>> =====================
>> test-client.vol
>> =====================
>> ### Add client feature and declare local subvolume
>> volume client1-local
>>   type storage/posix
>>   option directory /root/export0
>> end-volume
>>
>> volume client2-local
>>   type storage/posix
>>   option directory /root/export1
>> end-volume
>>
>> ### Add client feature and attach to remote subvolume
>> volume client1
>>   type protocol/client
>>   option transport-type tcp/client # for TCP/IP transport
>>   option remote-host 192.168.1.25 # IP address of the remote brick
>>   option remote-port 6996 # default server port is 6996
>>   option remote-subvolume brick # name of the remote volume
>> end-volume
>>
>> volume client2
>>   type protocol/client
>>   option transport-type tcp/client
>>   option remote-host 192.168.1.25
>>   option remote-port 6997
>>   option remote-subvolume brick
>> end-volume
>>
>> ### Add automatic file replication (AFR) feature
>> volume afr
>>   type cluster/afr
>>   subvolumes client1 client2
>>   option replicate *:2
>> end-volume
>>
>> =====================
>> Servers are started like this:
>> glusterfsd --spec-file=/usr/local/etc/glusterfs/test-server0.vol
>> glusterfsd --spec-file=/usr/local/etc/glusterfs/test-server1.vol
>>
>> Client is started like this:
>> glusterfs --spec-file=./test-client.vol /mnt/glusterfs/
>> =====================
>>
>> [root@grp-01-30-01 glusterfs]# touch /root/export0/file1
>> wait...
>> [root@grp-01-30-01 glusterfs]# find /root/export*
>> /root/export0
>> /root/export0/file1
>> /root/export1
>>
>> =====================
>> I do not see any replication.
>> What am I missing?
>>
>>
>>
>>
>>
>> _______________________________________________
>> Gluster-devel mailing list
>> Gluster-devel at nongnu.org
>> http://lists.nongnu.org/mailman/listinfo/gluster-devel
>>
>>
>
>