[Gluster-users] write-behind / write-back caching for replicated storage

Christian gluster at ml.pinet.de
Fri Aug 26 09:40:41 UTC 2011


Hello to all,

I'm currently testing GlusterFS (versions 3.1.4, 3.1.6, 3.2.2, 3.2.3 and 3.3beta) for the following
situation / behavior:
I want to create replicated storage over the internet / a WAN with two storage nodes.
The first node is located in office A and the other one in office B.
If I write a file to the mounted GlusterFS volume (mounted via the glusterfs client or NFS), the write
performance is as poor as the upload speed (~1 Mbit/s, throttled manually using "tc").
I tested several cache options (see below) with the following effect:
the copy of the file itself completes very quickly (~40 MB/s), but the application (rsync, mc copy,
cp) then waits at 100% for the final sync of the storage. The process does not finish before
glusterfs has written the file to the 2nd node.
The behavior I am looking for is to store files locally first and then sync the content to the
second node in the background.
Is there a way to achieve this?
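For reference, the manual upload throttling mentioned above was roughly of this form (a sketch using the standard tbf qdisc; the interface name eth0 and the burst/latency values are assumptions, not taken from my actual setup):

```shell
# limit egress on the WAN-facing interface to ~1 Mbit/s (token bucket filter)
tc qdisc add dev eth0 root tbf rate 1mbit burst 32kbit latency 400ms

# remove the limit again
tc qdisc del dev eth0 root
```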

******************************************************************
volume info:
	Volume Name: gl5
	Type: Replicate
	Status: Started
	Number of Bricks: 2
	Transport-type: tcp
	Bricks:
	Brick1: 192.168.42.130:/gl5
	Brick2: 192.168.42.7:/gl5
	Options Reconfigured:
	nfs.disable: off
	nfs.trusted-sync: on
	nfs.trusted-write: on
	performance.flush-behind: off
	performance.write-behind-window-size: 200MB
	performance.cache-max-file-size: 200MB
******************************************************************
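The "Options Reconfigured" above were applied with the usual `gluster volume set` commands; roughly like this (sketch, assuming the volume name gl5 as shown):

```shell
# write-behind / caching related options tested on the replicated volume
gluster volume set gl5 nfs.disable off
gluster volume set gl5 nfs.trusted-sync on
gluster volume set gl5 nfs.trusted-write on
gluster volume set gl5 performance.flush-behind off
gluster volume set gl5 performance.write-behind-window-size 200MB
gluster volume set gl5 performance.cache-max-file-size 200MB
```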
tested mount options:
	mount.nfs 127.0.0.1:gl5 /mnt/gluster/ -v -o mountproto=tcp -o async
	mount -t glusterfs 127.0.0.1:gl5 /mnt/gluster -o async


Thanks a lot,

Christian
