[Gluster-devel] mainline--2.5--patch-258: still no AFR replication

Gerry Reno greno at verizon.net
Mon Jul 2 15:40:59 UTC 2007


Anand Avati wrote:
> Gerry,
>   AFR syncs (self-heals) a file on its next open, so
> 'find /mnt/gluster -type f -exec cat {} \; > /dev/null' should replicate it.
>
> avati
>
> 2007/7/2, Gerry Reno <greno at verizon.net>:
>
>     I compiled the latest tla but still I do not see any AFR replication
>     happening in my setup:
>     ============================================
>     [root@grp-01-20-01 glusterfs]# find /mnt/glusterfs*
>     /mnt/glusterfs0
>     /mnt/glusterfs0/file0
>     /mnt/glusterfs0/file2
>     /mnt/glusterfs0/file1
>     /mnt/glusterfs1
>     /mnt/glusterfs1/file0
>     /mnt/glusterfs1/file2
>     /mnt/glusterfs1/file1
>     /mnt/glusterfs2
>     /mnt/glusterfs2/file0
>     /mnt/glusterfs2/file2
>     /mnt/glusterfs2/file1
>     /mnt/glusterfs3
>     /mnt/glusterfs3/file0
>     /mnt/glusterfs3/file2
>     /mnt/glusterfs3/file1
>     ============================================
>     [root@grp-01-20-01 glusterfs]# service iptables stop
>     Flushing firewall rules:                                   [  OK  ]
>     Setting chains to policy ACCEPT: filter                    [  OK  ]
>     Unloading iptables modules:                                [  OK  ]
>     ============================================
>     [root@grp-01-20-01 glusterfs]# touch /mnt/glusterfs0/file7
>     [root@grp-01-20-01 glusterfs]# find /mnt/glusterfs*
>     /mnt/glusterfs0
>     /mnt/glusterfs0/file7    <----
>     /mnt/glusterfs0/file0
>     /mnt/glusterfs0/file2
>     /mnt/glusterfs0/file1
>     /mnt/glusterfs1
>     /mnt/glusterfs1/file0
>     /mnt/glusterfs1/file2
>     /mnt/glusterfs1/file1
>     /mnt/glusterfs2
>     /mnt/glusterfs2/file0
>     /mnt/glusterfs2/file2
>     /mnt/glusterfs2/file1
>     /mnt/glusterfs3
>     /mnt/glusterfs3/file0
>     /mnt/glusterfs3/file2
>     /mnt/glusterfs3/file1
>     ============================================
>     [root@grp-01-20-01 glusterfs]# ps -e | grep gluster
>     29627 ?        00:00:00 glusterfsd
>     29630 ?        00:00:00 glusterfsd
>     29633 ?        00:00:00 glusterfsd
>     29636 ?        00:00:00 glusterfsd
>     29646 ?        00:00:00 glusterfs
>     29650 ?        00:00:00 glusterfs
>     29654 ?        00:00:00 glusterfs
>     29658 ?        00:00:00 glusterfs
>     ============================================
>     [root@grp-01-20-01 glusterfs]# mount | grep gluster
>     glusterfs on /mnt/glusterfs0 type fuse
>     (rw,nosuid,nodev,allow_other,default_permissions,max_read=1048576)
>     glusterfs on /mnt/glusterfs1 type fuse
>     (rw,nosuid,nodev,allow_other,default_permissions,max_read=1048576)
>     glusterfs on /mnt/glusterfs2 type fuse
>     (rw,nosuid,nodev,allow_other,default_permissions,max_read=1048576)
>     glusterfs on /mnt/glusterfs3 type fuse
>     (rw,nosuid,nodev,allow_other,default_permissions,max_read=1048576)
>     ============================================
>     [root@grp-01-20-01 glusterfs]# for i in 0 1 2 3; do echo; cat test-server$i.vol; done
>
>     volume brick
>       type storage/posix                   # POSIX FS translator
>       option directory /root/export0        # Export this directory
>     end-volume
>
>        ### Add network serving capability to above brick.
>     volume server
>       type protocol/server
>       option transport-type tcp/server     # For TCP/IP transport
>       option listen-port 6996              # Default is 6996
>       subvolumes brick
>       option auth.ip.brick.allow  *  # Allow full access to "brick" volume
>     end-volume
>
>     volume brick
>       type storage/posix                   # POSIX FS translator
>       option directory /root/export1        # Export this directory
>     end-volume
>
>        ### Add network serving capability to above brick.
>     volume server
>       type protocol/server
>       option transport-type tcp/server     # For TCP/IP transport
>       option listen-port 6997              # Default is 6996
>       subvolumes brick
>       option auth.ip.brick.allow  *  # Allow full access to "brick" volume
>     end-volume
>
>     volume brick
>       type storage/posix                   # POSIX FS translator
>       option directory /root/export2        # Export this directory
>     end-volume
>
>        ### Add network serving capability to above brick.
>     volume server
>       type protocol/server
>       option transport-type tcp/server     # For TCP/IP transport
>       option listen-port 6998              # Default is 6996
>       subvolumes brick
>       option auth.ip.brick.allow  *  # Allow full access to "brick" volume
>     end-volume
>
>     volume brick
>       type storage/posix                   # POSIX FS translator
>       option directory /root/export3        # Export this directory
>     end-volume
>
>        ### Add network serving capability to above brick.
>     volume server
>       type protocol/server
>       option transport-type tcp/server     # For TCP/IP transport
>       option listen-port 6999              # Default is 6996
>       subvolumes brick
>       option auth.ip.brick.allow  *  # Allow full access to "brick" volume
>     end-volume
>     ============================================
>     [root@grp-01-20-01 glusterfs]# cat test-client.vol
>        ### Add client feature and declare local subvolume
>
>        ### Add client feature and attach to remote subvolume
>     volume client0
>       type    protocol/client
>       option  transport-type    tcp/client     # for TCP/IP transport
>       option  remote-host       192.168.1.25   # IP address of the remote brick
>       option  remote-port       6996           # default server port is 6996
>       option  remote-subvolume  brick          # name of the remote volume
>     end-volume
>
>     volume client1
>       type    protocol/client
>       option  transport-type    tcp/client
>       option  remote-host       192.168.1.25
>       option  remote-port       6997
>       option  remote-subvolume  brick
>     end-volume
>
>     volume client2
>       type    protocol/client
>       option  transport-type    tcp/client
>       option  remote-host       192.168.1.25
>       option  remote-port       6998
>       option  remote-subvolume  brick
>     end-volume
>
>     volume client3
>       type    protocol/client
>       option  transport-type    tcp/client
>       option  remote-host       192.168.1.25
>       option  remote-port       6999
>       option  remote-subvolume  brick
>     end-volume
>
>        ### Add automatic file replication (AFR) feature
>     volume afr
>       type  cluster/afr
>       subvolumes  client0 client1 client2 client3
>       option  replicate *:4
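>       # pattern:count -- "*:4" replicates every file (pattern "*") to 4 subvolumes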
>     end-volume
>     ============================================
>     [root@grp-01-20-01 glusterfs]#
>


Avati,
  That won't work as true replication.  If an app gets a list of
files in a directory to act upon, it will get a short list and not
even know that some files are missing.  When we put files out on
client0, we need those files to replicate to all nodes.

Gerry
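
For reference, a minimal sketch of the self-heal trigger Avati describes,
assuming the mounts and brick directories shown in the transcript above
(the verification loop is illustrative, not a confirmed fix):

    # Open every file once through one mount; per Avati, AFR self-heals
    # a file on its next open, not on a directory listing.
    find /mnt/glusterfs0 -type f -exec cat {} \; > /dev/null

    # Then compare the backend bricks to check whether the copies converged:
    for i in 0 1 2 3; do echo export$i:; ls /root/export$i; done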
