[Gluster-users] AFR questions

Kirby Zhou kirbyzhou at sohu-rd.com
Sun Dec 7 04:05:55 UTC 2008


Yes, of course I had done that between step 7 and step 8, on the client side.
[@123.21 ~]# ll /exports/disk2/xxx
-rw-r--r-- 1 root root 268435456 Dec  7 00:18 /exports/disk2/xxx 

[@123.22 ~]# ll /exports/disk1/xxx
ls: /exports/disk1/xxx: No such file or directory

[@123.25 ~]# md5sum /mnt/xxx
1f5039e50bd66b290c56684d8550c6c2  /mnt/xxx

[@123.22 ~]# ll /exports/disk1/xxx
ls: /exports/disk1/xxx: No such file or directory

[@123.25 ~]# tail -f /var/log/glusterfs/glusterfs.log
!!! nothing more shows up !!!
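
For what it's worth, dumping the trusted.* extended attributes on the backend copies shows the bookkeeping AFR uses when it decides what to heal; the exact attribute names vary between GlusterFS releases, so the commands below are only a sketch using the backend paths from above:

# on the server that still has the copy
getfattr -d -m trusted -e hex /exports/disk2/xxx

# on the server where the file is missing, once it reappears
getfattr -d -m trusted -e hex /exports/disk1/xxx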


-----Original Message-----
From: Keith Freedman [mailto:freedman at FreeFormIT.com] 
Sent: Sunday, December 07, 2008 2:09 AM
To: Kirby Zhou; 'Stas Oskin'; gluster-users at gluster.org
Subject: Re: [Gluster-users] AFR questions

At 09:07 PM 12/5/2008, Kirby Zhou wrote:
>For example:
>
>volume ns-afr0
>   type cluster/afr
>   subvolumes remote-ns1 remote-ns2 remote-ns3 remote-ns4
>end-volume
>
>Anything written to ns-afr0 will be AFRed to all 4 subvolumes.
>However many copies you want, that is how many subvolumes you should configure.
>
>But I failed to activate the auto-healing function.
>
>Step1:  I created a unify volume based on client-side AFR; both ns and storage are AFRed. I
>name the 2 nodes node1 and node2.
>Step2:  glusterfs -s node1 -n unify0 /mnt
>Step3:  cp something /mnt/xxx
>Step4:  Check node1 and node2's storage, found 2 copies of the file xxx.
>Step5:  Stop node2's glusterfsd
>Step6:  cat something else >> /mnt/xxx
>Step7:  Start node2's glusterfsd
>Step8:  Sleep 100
>Step9:  Check node2's storage, found the file xxx with no change through
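
For reference, the remote-ns* subvolumes fed to an AFR volume like the quoted ns-afr0 are normally plain protocol/client volumes, one per server; the host and subvolume names below are assumptions, so treat this as a sketch of the shape rather than the actual configuration:

volume remote-ns1
  type protocol/client
  option transport-type tcp/client   # 1.x-style transport option
  option remote-host node1           # assumed server name
  option remote-subvolume ns1        # assumed name of the exported namespace volume
end-volume

# remote-ns2 .. remote-ns4 would be declared the same way against the other
# servers; AFR then keeps one copy of every file per listed subvolume.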

Did you cat the file through the gluster mount point or on the
underlying filesystem?
The auto-heal is automatically "activated", but it only "heals" on
file access, so if you access the file through the gluster mountpoint
it should find that it's out of date and update from one of the other
servers.
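
A common way to force this access-triggered heal across a whole volume is simply to read every file through the mount point; /mnt is the mount from the steps above, the rest is only a sketch:

# open and read one byte of every file via the glusterfs mount, so AFR
# can notice and repair stale or missing copies
find /mnt -type f -print0 | xargs -0 head -c1 > /dev/null

# a recursive listing refreshes directory entries as well
ls -lR /mnt > /dev/null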

Check your gluster log. Grep for your filename and see what it might
say (on both servers).
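
Something along these lines, with the log path shown earlier in the thread (adjust it if your logs live elsewhere):

# look for self-heal or afr messages that mention the file
grep -n xxx /var/log/glusterfs/glusterfs.log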







