[Gluster-users] concurrent writes not all being written

Andrus, Brian Contractor bdandrus at nps.edu
Sun Dec 13 22:14:59 UTC 2015


All,

I have a small gluster filesystem on 3 nodes.
I have a multi-threaded perl program in which each thread writes its output to one of 3 files, depending on some results.
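
Roughly, the pattern looks like the sketch below. This is only a reconstruction for illustration: the paths, the classification test, and the input handling are placeholders, and the real program is larger. The real code does open the three handles up front and share them across the threads, as in the snippet further down.

    #!/usr/bin/perl
    use strict;
    use warnings;
    use threads;
    use Fcntl qw(:flock SEEK_END);

    # Placeholder paths on the gluster mount; the real names and the
    # classification test differ.
    open(GOOD_FILES,  '>>', '/mnt/gluster/good.out')  or die $!;
    open(BAD_FILES,   '>>', '/mnt/gluster/bad.out')   or die $!;
    open(OTHER_FILES, '>>', '/mnt/gluster/other.out') or die $!;

    my @lines_to_process = map { chomp; $_ } <STDIN>;   # the 500 input lines

    sub worker {
        my ($tid) = @_;
        my $line  = $lines_to_process[$tid - 1];
        my $fh    = $line =~ /good/ ? \*GOOD_FILES      # stand-in for the
                  : $line =~ /bad/  ? \*BAD_FILES       # real per-line test
                  :                   \*OTHER_FILES;
        flock($fh, LOCK_EX)    or die $!;               # lock the shared handle
        seek($fh, 0, SEEK_END) or die $!;               # append at end of file
        print {$fh} $line . "\n";
        flock($fh, LOCK_UN)    or die $!;               # release the lock
    }

    my @workers = map { threads->create(\&worker, $_) } 1 .. @lines_to_process;
    $_->join for @workers;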

My trouble is that lines are missing from the output.
The input is a file of 500 lines; depending on its content, each line is written to one of three files, but when I total the lines written out, I am missing anywhere from 4 to 8 of them.
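
For what it's worth, the "totaling" is just a line count across the three output files, roughly like this (the file names here are placeholders):

    #!/usr/bin/perl
    use strict;
    use warnings;

    # Sum the line counts of the three output files and compare to the input.
    my $total = 0;
    for my $file ('/mnt/gluster/good.out', '/mnt/gluster/bad.out', '/mnt/gluster/other.out') {
        open(my $fh, '<', $file) or die "open $file: $!";
        $total++ while <$fh>;
        close $fh;
    }
    print "total lines written: $total of 500 expected\n";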

This happens even when I use an input file whose lines should all go to a single output file.

BUT... when I have it write to /tmp or /dev/shm instead, all of the expected lines are there.
This leads me to think gluster is not handling the concurrent writes properly.

Here is the code for the actual write:

    # LOCK_EX, LOCK_UN and SEEK_END are the usual Fcntl constants
    flock(GOOD_FILES, LOCK_EX)    or die $!;   # exclusive lock on the shared output handle
    seek(GOOD_FILES, 0, SEEK_END) or die $!;   # reposition to end of file before appending
    print GOOD_FILES $lines_to_process[$tid-1] . "\n";
    flock(GOOD_FILES, LOCK_UN)    or die $!;   # release the lock

So I would expect that proper file locking is taking place.
Is it possible that gluster is dropping writes because of a race condition?
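
One variation I have not ruled out yet, in case output buffering interacts badly with the locking, is turning autoflush on for the handle so the print is pushed out before the lock is released. Just a sketch of what I mean:

    use Fcntl qw(:flock SEEK_END);
    use IO::Handle;                 # provides the autoflush method on handles

    GOOD_FILES->autoflush(1);       # set once, right after opening the handle

    flock(GOOD_FILES, LOCK_EX)    or die $!;
    seek(GOOD_FILES, 0, SEEK_END) or die $!;
    print GOOD_FILES $lines_to_process[$tid-1] . "\n";
    flock(GOOD_FILES, LOCK_UN)    or die $!;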

Any insight as to where to look for a solution is appreciated.


Brian Andrus