[Gluster-users] can't use sqlite3 on gluster mounted as NFS
Brandon Simmons
bsimmons at labarchives.com
Tue Sep 27 15:44:38 UTC 2011
On Sat, Sep 24, 2011 at 3:26 PM, Craig Carl <craig at gestas.net> wrote:
> Brandon -
> SQLite uses POSIX locking to implement some of its ACID-compliant
> behavior and requires the filesystem to fully implement POSIX advisory
> locks. Most network filesystems (including Gluster native and NFS) don't
> support everything that SQLite needs, and so using SQLite on a networked
> filesystem isn't recommended by the SQLite team; see this excerpt from the
> link I sent earlier -
> SQLite uses POSIX advisory locks to implement locking on Unix. On Windows it
> uses the LockFile(), LockFileEx(), and UnlockFile() system calls. SQLite
> assumes that these system calls all work as advertised. If that is not the
> case, then database corruption can result. One should note that POSIX
> advisory locking is known to be buggy or even unimplemented on many NFS
> implementations (including recent versions of Mac OS X) and that there are
> reports of locking problems for network filesystems under Windows. Your best
> defense is to not use SQLite for files on a network filesystem.
>
Since we've been using SQLite in a concurrent environment over NFS
fairly successfully for some time, I'm not really satisfied with the
hand-wavy answer. If Gluster is fully POSIX-compliant, then shouldn't
it work with SQLite? If not, in what specific way is Gluster
deficient?
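
Actually, before touching SQLite at all I may try probing the mounts
directly with fcntl(), since POSIX advisory locks are the primitive
SQLite relies on. A minimal sketch (the path below is just illustrative,
not one of the paths from my tests): run one copy on each client against
the same file on the mount; if advisory locking works, the second
F_SETLK should fail cleanly with EAGAIN/EACCES instead of hanging or
wedging the mount.

    /* lockprobe.c - does this mount grant a POSIX advisory write lock?
     * build: gcc -o lockprobe lockprobe.c
     * run:   ./lockprobe /mnt/gluster/lock.test   (illustrative path)
     */
    #include <fcntl.h>
    #include <stdio.h>
    #include <string.h>
    #include <unistd.h>

    int main(int argc, char **argv)
    {
        if (argc != 2) {
            fprintf(stderr, "usage: %s <file-on-mount>\n", argv[0]);
            return 2;
        }

        int fd = open(argv[1], O_RDWR | O_CREAT, 0644);
        if (fd < 0) { perror("open"); return 1; }

        struct flock fl;
        memset(&fl, 0, sizeof fl);
        fl.l_type   = F_WRLCK;    /* exclusive (write) advisory lock */
        fl.l_whence = SEEK_SET;
        fl.l_start  = 0;
        fl.l_len    = 0;          /* 0 = lock the whole file */

        /* F_SETLK is non-blocking: it fails right away instead of waiting */
        if (fcntl(fd, F_SETLK, &fl) < 0) {
            perror("fcntl(F_SETLK)");
            close(fd);
            return 1;
        }

        printf("got the lock; holding it for 30s, run a second copy now\n");
        sleep(30);

        fl.l_type = F_UNLCK;
        fcntl(fd, F_SETLK, &fl);
        close(fd);
        return 0;
    }
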
I may play with compiling SQLite to use an alternative locking style,
as per this discussion:
http://stackoverflow.com/questions/5468349/sqlite-compatible-with-openafs-locking
Perhaps that will solve some issues.
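
A cheaper experiment than a full rebuild with SQLITE_ENABLE_LOCKING_STYLE
might be the alternative VFS names a stock unix build already registers
(assuming the build is new enough to have them): "unix-dotfile" does its
locking with lock files instead of fcntl(), and "unix-none" skips locking
entirely, with the obvious corruption risk under concurrent writers.
Rough sketch, reusing the table from my earlier test; the database path
is illustrative:

    /* dotfile_open.c - open a database through the "unix-dotfile" VFS,
     * which locks via lock files rather than POSIX advisory locks.
     * build: gcc -o dotfile_open dotfile_open.c -lsqlite3
     */
    #include <sqlite3.h>
    #include <stdio.h>

    int main(void)
    {
        sqlite3 *db = NULL;
        int rc = sqlite3_open_v2("/mnt/gluster/new.sqlite3", &db,
                                 SQLITE_OPEN_READWRITE | SQLITE_OPEN_CREATE,
                                 "unix-dotfile");  /* or "unix-none" */
        if (rc != SQLITE_OK) {
            fprintf(stderr, "open failed: %s\n", sqlite3_errmsg(db));
            sqlite3_close(db);
            return 1;
        }

        char *err = NULL;
        rc = sqlite3_exec(db,
                          "CREATE TABLE IF NOT EXISTS memos(text, priority INTEGER);",
                          NULL, NULL, &err);
        if (rc != SQLITE_OK) {
            fprintf(stderr, "exec failed: %s\n", err);
            sqlite3_free(err);
        }

        sqlite3_close(db);
        return 0;
    }

Newer versions of the sqlite3 shell also seem to take a -vfs argument,
which would allow the same experiment without compiling anything.
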
Brandon
> Craig
> Sent from a mobile device, please excuse my typos.
> On Sep 24, 2011, at 0:19, Brandon Simmons <bsimmons at labarchives.com> wrote:
>
> On Fri, Sep 23, 2011 at 4:11 PM, Anand Babu Periasamy <ab at gluster.com>
> wrote:
>
> This is a known issue. Gluster NFS doesn't support NLM (locking) yet. 3.4
> may implement this. Did you try on a GlusterFS native mount?
>
> Thanks for that information.
>
> I did test with the native fuse mount, but the results were difficult
> to interpret. We have a rails application that writes to multiple
> sqlite databases, and a test script that simulates a bunch of random
> writes to a specified DB, retrying if it fails.
>
> On NFS this test runs reasonably well: both clients take turns, a
> couple retries, all writes complete without failures.
>
> But mounted over gluster (same machines and underlying disks as above),
> one client always runs while the other gets locked out (which client is
> locked out depends on which was started first). At some point during
> this test the client that was locked out from writing to the DB
> actually gets disconnected from gluster and I have to remount:
>
> $ ls /mnt/gluster
> ls: cannot access /websites/: Transport endpoint is not connected
>
> One client is consistently locked out even if they are writing to
> DIFFERENT DBs altogether.
>
> The breakage of the mountpoint happened every time the test was run
> concurrently against the SAME DB, but did not seem to occur when
> clients were running against different DBs.
>
> But like I said, this was a very high level test with many moving
> parts so I'm not sure how useful the above details are for you to
> know.
>
> Happy to hear any ideas for testing,
> Brandon
>
> /var/log/glusterfs/etc-glusterfs-glusterd.vol.log:
> [2011-09-16 19:32:38.122196] W
> [socket.c:1494:__socket_proto_state_machine] 0-socket.management:
> reading from socket failed. Error (Transport endpoint is not
> connected), peer (127.0.0.1:1017)
>
>
> --AB
>
> On Sep 23, 2011 10:00 AM, "Brandon Simmons" <bsimmons at labarchives.com>
> wrote:
>
> I am able to successfully mount a gluster volume using the NFS client
> on my test servers. Simple reading and writing seems to work, but
> trying to work with sqlite databases seems to cause the sqlite client
> and libraries to freeze. I have to send KILL to stop the process.
>
> Here is an example; server 1 and 2 are clients mounting the gluster
> volume over NFS:
>
> server1# echo "working" > /mnt/gluster/test_simple
> server2# echo "working" >> /mnt/gluster/test_simple
> server1# cat /mnt/gluster/test_simple
> working
> working
>
> server1# sqlite3 /websites/new.sqlite3
> SQLite version 3.6.10
> Enter ".help" for instructions
> Enter SQL statements terminated with a ";"
> sqlite> create table memos(text, priority INTEGER);
> (...hangs forever, have to detach screen and do kill -9)
>
> The gluster volume was created and NFS-mounted as per the instructions
> here:
> http://www.gluster.com/community/documentation/index.php/Gluster_3.2_Filesystem_Administration_Guide
>
> If I mount the volume using the nolock option, then things work:
>
> mount -t nfs -o nolock server:/test-vol /mnt/gluster
>
> So I assume this has something to do with the locking RPC service
> stuff, which I don't know much about. Here's the output from rpcinfo:
>
> server# rpcinfo -p
>    program vers proto   port
>     100000    2   tcp    111  portmapper
>     100000    2   udp    111  portmapper
>     100024    1   udp  56286  status
>     100024    1   tcp  40356  status
>     100005    3   tcp  38465  mountd
>     100005    1   tcp  38466  mountd
>     100003    3   tcp  38467  nfs
>
> client1# rpcinfo -p server
>    program vers proto   port
>     100000    2   tcp    111  portmapper
>     100000    2   udp    111  portmapper
>     100024    1   udp  56286  status
>     100024    1   tcp  40356  status
>     100005    3   tcp  38465  mountd
>     100005    1   tcp  38466  mountd
>     100003    3   tcp  38467  nfs
>
> client1# rpcinfo -p
>    program vers proto   port
>     100000    2   tcp    111  portmapper
>     100000    2   udp    111  portmapper
>     100024    1   udp  32768  status
>     100024    1   tcp  58368  status
>
> Thanks for any help,
> Brandon
>