[Gluster-users] AFR questions

Kirby Zhou kirbyzhou at sohu-rd.com
Sat Dec 6 05:07:45 UTC 2008

For example:

volume ns-afr0
  type cluster/afr
  subvolumes remote-ns1 remote-ns2 remote-ns3 remote-ns4
end-volume

Anything written to ns-afr0 will be AFRed to all 4 subvolumes.
So the number of subvolumes you list is the number of copies you get.
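To make the copies land on separate servers, each remote-ns* subvolume should be a protocol/client pointing at a different machine. A minimal sketch of what that client volfile might look like (the host names and the ns-brick subvolume name are illustrative, not from the original mail; the option syntax is for the 1.3/1.4-era volfiles):

```
# Hypothetical client volfile fragment: each remote-ns* points at a
# different server, so AFR places one copy per machine.
volume remote-ns1
  type protocol/client
  option transport-type tcp/client
  option remote-host server1          # a different physical server per subvolume
  option remote-subvolume ns-brick    # the exported brick on that server
end-volume

volume remote-ns2
  type protocol/client
  option transport-type tcp/client
  option remote-host server2
  option remote-subvolume ns-brick
end-volume

volume ns-afr0
  type cluster/afr
  subvolumes remote-ns1 remote-ns2    # two subvolumes = two copies
end-volume
```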

But I failed to activate the auto-healing function.

Step1:	I created a client-side-AFR-based unify; both the namespace and the
storage volumes are AFRed. I name the two nodes node1 and node2.
Step2:	glusterfs -s node1 -n unify0 /mnt
Step3:	cp something /mnt/xxx
Step4:	Check node1's and node2's storage; found 2 copies of the file xxx.
Step5:	Stop node2's glusterfsd.
Step6:	cat something-else >> /mnt/xxx
Step7:	Start node2's glusterfsd again.
Step8:	Sleep 100 seconds.
Step9:	Check node2's storage; the file xxx is unchanged.
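One likely explanation for Step9: in the AFR of this era, self-heal is not a background process; it is triggered when a file is looked up or opened through the mountpoint. So after restarting node2's glusterfsd, forcing a read of the tree through the mount should propagate the changes. A sketch, assuming the mount is at /mnt (set MNT to your mountpoint):

```shell
#!/bin/sh
# Trigger AFR self-heal by forcing a lookup/open of every file through
# the glusterfs mountpoint (self-heal runs on open, not in the background).
MNT="${MNT:-/mnt}"

# Read the first byte of every file; the open() is what triggers the heal.
find "$MNT" -type f -print0 | xargs -0 -r head -c1 > /dev/null

# A recursive listing triggers lookups for directories and metadata too.
ls -lR "$MNT" > /dev/null
```

Sleeping alone (Step8) would not cause a heal, because nothing opens the file on node2's behalf.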

From: gluster-users-bounces at gluster.org
[mailto:gluster-users-bounces at gluster.org] On Behalf Of Stas Oskin
Sent: Saturday, December 06, 2008 9:53 AM
To: gluster-users at gluster.org
Subject: [Gluster-users] AFR questions


I have some AFR-related questions which I decided to put to separate email:

1) How can I specify that AFR stores files on separate servers (in order to
prevent data loss when server goes down)?

2) How can I specify how many copies of the files to store?

3) Is it true that no rsync-like functionality is supported, meaning the whole
file needs to be replicated? Or is there some kind of delta (changed-bits-only)
replication?

4) How well would unify/DHT work with several disks per server at once (as
discussed in another email, where the disks in a server are kept separate from
one another rather than combined with RAID, JBOD, or LVM)?
What do the developers think about this and the question above?

5) How could this whole setup be managed with the NUFA(?) approach, where each
server is also a client? That is, what is the advised way to control all the
settings until central configuration arrives in version 1.5?

Thanks in advance.
