[Gluster-devel] Client side afr, locking, race condition, simultanous writes, out of sync

Martin Fick mogulguy at yahoo.com
Wed May 7 02:40:02 UTC 2008


--- Brandon Lamb <brandonlamb at gmail.com> wrote:
> So a simple 2 server, 2 client, client side afr
> setup.
> 
> The clients at the SAME time do:
> 
> client1 # echo "one" > file.txt
> client2 # echo "two" > file.txt
> 
> Are the threads regarding this and the conclusion
> at this point saying that this is safe or not? Are
> we going to end up with server1 having "one" and
> server2 having "two" (or vice versa), or is there a
> chance of that, because there is no "locking"
> mechanism to know that two writes are happening at
> the same time?

This is NOT safe; this is exactly the scenario my
stress script exercises.  It fails almost 1 out of 10
times.  Try it yourself; it would be nice to have
another report confirming this in case I am somehow
misinterpreting the results.

> Should one or the other happen first, I would think
> either "two" would be written on both and then
> "one", or the other way around? Are we saying this
> is not true?

Sometimes you will get "two" on one subvolume and
"one" on the other subvolume.  If you use the -a "one"
-b "two" options to "stress", you can even echo exactly
those strings.  When I run it with those strings, the
failure rate is slightly worse than 1 in 10, closer to
1 in 7.
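
If you do not have my script handy, the basic idea can
be approximated with something like the shell loop below
(the mount points and backend export paths are just
placeholders for whatever your setup uses; this is only
a rough equivalent of the test, not the actual "stress"
script):

#!/bin/sh
# Rough equivalent of the race test -- not the real "stress" script.
# MNT1/MNT2: two glusterfs client mount points (or run each echo on a
# different client host).  EXP1/EXP2: the backend export directories
# of the two AFR subvolumes, so the copies can be compared directly.
MNT1=/mnt/gluster1
MNT2=/mnt/gluster2
EXP1=/data/export1
EXP2=/data/export2

i=1
while [ $i -le 100 ]; do
    ( echo "one" > "$MNT1/file.txt" ) &
    ( echo "two" > "$MNT2/file.txt" ) &
    wait
    if ! cmp -s "$EXP1/file.txt" "$EXP2/file.txt"; then
        echo "iteration $i: subvolumes disagree"
    fi
    i=$((i + 1))
done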

The good news is that I have not been able to repeat
this for either of the following two server-side AFR
setups: 1) both processes writing to one mounted
client, or 2) each process writing to a separate mount
point on the same host.

However, I was able to make it happen, although less
than once in about 1000 tries, on server-side AFR with
the iothreads translator just above AFR (with two
mount points; never with one so far).
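
To be concrete about what I mean by "just above AFR",
the relevant part of each server volfile is stacked
roughly like this (a sketch of my test setup; volume
names, hostnames, paths and the thread count are only
illustrative):

volume posix
  type storage/posix
  option directory /data/export
end-volume

volume remote
  type protocol/client
  option transport-type tcp/client
  option remote-host other-server
  option remote-subvolume posix
end-volume

volume afr
  type cluster/afr
  subvolumes posix remote
end-volume

# iothreads sits directly above afr; protocol/server
# (not shown) then exports iot instead of afr
volume iot
  type performance/io-threads
  option thread-count 4
  subvolumes afr
end-volume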

How does iothreads work above AFR on the server side?
Does it effectively mean that there are 4 AFR threads
below iothreads then?

-Martin


