[Gluster-devel] AFR + Unify problem
Forcey
forcey at gmail.com
Wed May 28 05:32:26 UTC 2008
On Wed, May 28, 2008 at 1:23 PM, Ben Mok <benmok at powerallnetworks.com> wrote:
> Hi!
>
>
>
> I want to build one large storage pool with redundancy. My test
> environment consists of two controllers and four storage nodes:
>
>
>
> Controllers: controller1, controller2
>
> Storage nodes: 501, 502, 503, 504
>
> afr1: 501 + 502
>
> afr2: 503 + 504
>
> unify: afr1 + afr2
>
>
>
> But I ran into some problems during failover testing.
>
>
>
> Case 1:
>
> Create file1 from controller1 -> file1 lands on 501 and 502 -> take 501
> down -> modify file1 -> bring 501 back up.
>
> Both controllers still read the old data of file1, even after running the
> self-heal script, and file1 now has trailing NUL bytes (^@^@^@...) at the
> end.
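>
> By "self-heal script" I mean something like the find-based trigger from
> the docs, assuming the client mountpoint is /mnt/glusterfs:
>
>     # open the first byte of every file so AFR re-checks each copy
>     find /mnt/glusterfs -type f -print0 | xargs -0 head -c1 > /dev/null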
>
>
>
> Case 2:
>
> file2 is on 503 and 504 -> take 503 down -> delete file2 -> file2 is now
> on 501 and 502 -> bring 503 back up.
>
> One controller reads the new data of file2, the other still reads the old
> data, and the old copy of file2 is still sitting on 503 even after running
> the self-heal script.
>
>
>
> Could you tell me how to solve the above issues? Or is there a wrong
> setting in my configuration?
>
> I AFR the namespace across all four storage nodes. Is that supported?
> The docs I have read either do not AFR the namespace at all, or AFR
> only two namespace bricks.
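>
> For comparison, the two-brick namespace pattern from the docs would look
> roughly like this (just a sketch, reusing my own volume names):
>
>     volume afr-ns
>       type cluster/afr
>       subvolumes 501-ns 502-ns
>     end-volume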
>
>
>
> Thank you so much!
>
>
>
> Ben
>
>
Did you sync up all your bricks' system time?
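As far as I know, AFR's self-heal compares time/version metadata between
the copies to decide which one is fresh, so clock skew between the bricks
can make it pick the stale copy. A one-shot sync on every server (assuming
the bricks can reach a public NTP server) should rule that out:

    # step the clock once on each brick; run ntpd for continuous syncing
    ntpdate -u pool.ntp.org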
- Forcey
>
> -------------
> Server side
> -------------
>
> volume storage-ds
>   type storage/posix
>   option directory /mnt/gluster/storage
> end-volume
>
> volume storage-ns
>   type storage/posix
>   option directory /mnt/gluster/storage-ns
> end-volume
>
> volume server
>   type protocol/server
>   option transport-type tcp/server
>   subvolumes storage-ds storage-ns
>   option auth.ip.storage-ds.allow 192.168.10.*,127.0.0.1
>   option auth.ip.storage-ns.allow 192.168.10.*,127.0.0.1
> end-volume
>
>
>
> ------------
> Client side
> ------------
>
> volume 501
>   type protocol/client
>   option transport-type tcp/client
>   option remote-host 192.168.10.81
>   option transport-timeout 30
>   option remote-subvolume storage-ds
> end-volume
>
> volume 501-ns
>   type protocol/client
>   option transport-type tcp/client
>   option remote-host 192.168.10.81
>   option transport-timeout 30
>   option remote-subvolume storage-ns
> end-volume
>
> volume 502
>   type protocol/client
>   option transport-type tcp/client
>   option remote-host 192.168.10.82
>   option transport-timeout 30
>   option remote-subvolume storage-ds
> end-volume
>
> volume 502-ns
>   type protocol/client
>   option transport-type tcp/client
>   option remote-host 192.168.10.82
>   option transport-timeout 30
>   option remote-subvolume storage-ns
> end-volume
>
> volume 503
>   type protocol/client
>   option transport-type tcp/client
>   option remote-host 192.168.10.83
>   option transport-timeout 30
>   option remote-subvolume storage-ds
> end-volume
>
> volume 503-ns
>   type protocol/client
>   option transport-type tcp/client
>   option remote-host 192.168.10.83
>   option transport-timeout 30
>   option remote-subvolume storage-ns
> end-volume
>
> volume 504
>   type protocol/client
>   option transport-type tcp/client
>   option remote-host 192.168.10.84
>   option transport-timeout 30
>   option remote-subvolume storage-ds
> end-volume
>
> volume 504-ns
>   type protocol/client
>   option transport-type tcp/client
>   option remote-host 192.168.10.84
>   option transport-timeout 30
>   option remote-subvolume storage-ns
> end-volume
>
>
>
> ################################
>
> volume afr-ns
>   type cluster/afr
>   subvolumes 501-ns 502-ns 503-ns 504-ns
> end-volume
>
> volume afr1
>   type cluster/afr
>   subvolumes 501 502
> end-volume
>
> volume afr2
>   type cluster/afr
>   subvolumes 503 504
> end-volume
>
> volume storage-unify
>   type cluster/unify
>   subvolumes afr1 afr2
>   option namespace afr-ns
>   option scheduler rr
>   option rr.limits.min-free-disk 5%
> end-volume
>
>
>
> _______________________________________________
> Gluster-devel mailing list
> Gluster-devel at nongnu.org
> http://lists.nongnu.org/mailman/listinfo/gluster-devel
>