[Gluster-devel] close() blocks until flush-behind finishes
Harald Stürzebecher
haralds at cs.tu-berlin.de
Sun Oct 9 22:39:33 UTC 2011
Hello!
2011/10/10 Paul van Tilburg <paul at luon.net>:
> Hello again,
>
> On Thu, Sep 15, 2011 at 10:53:35AM +0530, Raghavendra G wrote:
>> The flush-behind option only runs the flush call in the background. However,
>> it still waits for all writes to complete, so that it can return to the
>> application any errors that occurred while syncing them to the server. [...]
>
> Ok, I understand the behavior now, close() returns when the writes to
> all (replicating) servers are complete. I would like to sketch our
> desired setup/situation. Maybe it is something that is already possible
> but we haven't thought of the right solution, or we could work towards it.
>
> We have a client machine and a server/master machine that is connected
> to the client machine via a relatively low-bandwidth line. To prevent
> noticing this low bandwidth on the client-side, we thought of writing
> data fast locally, and getting the data to the server in a flush-behind
> fashion. However, the blocking behavior of close() currently gets in
> the way performance-wise.
>
> Our idea was to have a gluster server with a brick on the client that
> can be fully trusted, and a replicating gluster server with a brick on
> the master. When we write, close() returns once the local client
> gluster server has received all the data and client-side write errors
> can thus still be reported. If flushing to the replicating server fails
> thereafter for whatever reason, self-healing can be applied.
>
> Is this kind of low-bandwidth robust setup already possible? If not,
> are there any pointers to where we could add/improve things?
If the server is only used as a backup for the files on the client, and not
to provide simultaneous write access to the files, geo-replication might be
what you are looking for:
http://www.gluster.com/community/documentation/index.php/Gluster_3.2:_Managing_GlusterFS_Geo-replication
AFAIK it uses rsync to copy the files to the slave volume.
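For reference, starting a geo-replication session in 3.2 looks roughly like
the following (volume and host names are placeholders, and prerequisites such
as passwordless SSH to the slave are covered in the linked documentation):

```shell
# Replicate the local trusted volume "client-vol" to a directory on the
# master host. Both names here are made-up examples.
gluster volume geo-replication client-vol master-host:/data/backup start

# Check whether the session is running and how far it has caught up.
gluster volume geo-replication client-vol master-host:/data/backup status
```

Because the transfer runs asynchronously, close() on the client returns as
soon as the local volume has the data, which matches the low-bandwidth setup
you sketched.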
Kind regards,
Harald