[Gluster-devel] distributed locking

Székelyi Szabolcs cc at avaxio.hu
Fri Nov 30 18:42:36 UTC 2007


Brian Taber wrote:
> I just ran lock tests I found in the QA section on 2 servers at the same
> time with 500 threads each; they all ran successfully.  I am currently
> running the TLA version of gluster; I don't know if that makes a
> difference...

So am I. Could you try the test in network mode? The source linked from
the GlusterFS QA page doesn't support network-based tests (although its
documentation claims it does). There's an updated version at

http://nfsv4.bullopensource.org/tools/tests/page45.php

that can really do the trick.

After compiling, you run

locktests -f <file> \
          -n <number of processes per node> \
          -c <number of nodes - 1>

on one server and

locktests --server <servernode>

on the other(s). This will test locking the same file from two clients.
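
For example (hypothetical hostnames and mount path), with two nodes and
ten processes per node, you would run on node1

locktests -f /mnt/glusterfs/locktest.file -n 10 -c 1

and on node2

locktests --server node1

Both nodes need the GlusterFS volume mounted at the same path, so that
the file argument names the same underlying file on both.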

I would be interested in your results and your configuration, if you can
make them public.
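
One more note that may help in interpreting the results: locktests
exercises POSIX fcntl() record locks, the same type Dovecot is
configured to use in the quoted thread below, and the only type FUSE
passes through (flock() is not supported). As a rough sketch of what a
single such lock looks like (the path is hypothetical, and this is not
part of the test suite):

/* Sketch only: take a POSIX fcntl() record lock, the lock type
 * Dovecot uses (flock() would not work over FUSE). */
#include <fcntl.h>
#include <stdio.h>
#include <unistd.h>

int main(void)
{
    /* hypothetical path on the GlusterFS mount */
    int fd = open("/mnt/glusterfs/locktest.file", O_RDWR | O_CREAT, 0644);
    if (fd < 0) { perror("open"); return 1; }

    struct flock fl = { 0 };
    fl.l_type   = F_WRLCK;    /* exclusive (write) lock       */
    fl.l_whence = SEEK_SET;
    fl.l_start  = 0;
    fl.l_len    = 0;          /* 0 means "to end of file"     */

    if (fcntl(fd, F_SETLKW, &fl) < 0) {   /* block until granted */
        perror("fcntl(F_SETLKW)");
        return 1;
    }
    puts("lock held; run this on a second node to see it block");
    sleep(30);

    fl.l_type = F_UNLCK;      /* release the lock */
    fcntl(fd, F_SETLK, &fl);
    close(fd);
    return 0;
}

Run on two nodes at once, the second instance should block in F_SETLKW
until the first releases the lock; if it doesn't, lock propagation
between clients is broken.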

Thanks,
--
cc

>> Brian Taber wrote:
>>> I have been running Dovecot and Postfix with a 2-brick setup and POSIX
>>> locking enabled on each brick for a while with no problems.  This is a
>>> Maildir setup...  Dovecot is set up to use fcntl for locking... remember
>>> that gluster does not support flock, as FUSE does not support it...
>> Yes, I'm aware of this. But in your place, I would stop doing this until
>> the GlusterFS QA test runs successfully. If it doesn't, then not losing
>> data with such a setup is just a matter of luck. And for this storage
>> application, I can't accept the risk.
>>
>>>> On Fri, 30 Nov 2007, Székelyi Szabolcs wrote:
>>>>
>>>>> Everything works fine as long as I don't introduce locking, which is
>>>>> essential if one wishes to use the storage eg. as a backend for a mail
>>>>> server.
>>>> I'm sorry that I cannot answer your question, but do you have a choice
>>>> in what mail server to use? Those that use the Maildir format (like,
>>>> for example, Postfix together with Courier-IMAP) do not need locking.




