[Gluster-users] GlusterFS and MySQL Innodb Locking Issue
Anand Avati
anand.avati at gmail.com
Sat Feb 5 00:23:22 UTC 2011
Locking is more robust in the 3.1.x releases. Please upgrade.
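In case it helps while you plan the upgrade: in 3.1.x you no longer
hand-edit glusterfsd.vol; glusterd generates the volfiles (including the
features/locks translator on each brick) from the CLI. As a rough sketch
only, with the volume name, hostname and mount point below as
placeholders for your own, the single-brick mysql volume would look
something like:

  # on the storage server (node1), after installing the 3.1.x packages
  gluster volume create mysql-vol transport tcp node1:/export/mysql
  gluster volume start mysql-vol

  # on each MySQL client, mount with the native FUSE client
  mount -t glusterfs node1:/mysql-vol /var/lib/mysql

IP-based access control moves to a volume option as well, e.g.
"gluster volume set mysql-vol auth.allow aa.bb.cc.23,aa.bb.cc.51", so
the auth.addr.* lines from the old volfile are no longer needed.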
Avati
On Fri, Feb 4, 2011 at 12:13 PM, Ken S. <shawing at gmail.com> wrote:
> I'm having some problems getting two nodes to mount a shared gluster
> volume where I have the MySQL data files stored. The databases are
> InnoDB. Creating the volume on the master server works fine and it
> mounts, and mounting it on the first MySQL node works fine, too.
> However, when I try to mount it on the second node I get this
> error:
>
> InnoDB: Unable to lock ./ibdata1, error: 11
> InnoDB: Check that you do not already have another mysqld process
> InnoDB: using the same InnoDB data or log files.
>
> This obviously is some sort of locking issue.
>
> I've found a few posts where people have said to change "locks" to
> "plocks", which I have tried without success. I also saw a post where
> someone said "plocks" was just a symlink to "locks".
>
> My confusion is that I don't know if this is an issue with the way the
> gluster volume is mounted or if it is a limitation with mysql. If
> anyone is successfully doing this, I would appreciate a gentle nudge
> in the right direction.
>
> Here are some details:
>
> Ubuntu 10.10 (Rackspace Cloud virtual server)
> root at node1:~# dpkg -l | grep -i gluster
> ii  glusterfs-client  3.0.4-1         clustered file-system (client package)
> ii  glusterfs-server  3.0.4-1         clustered file-system (server package)
> ii  libglusterfs0     3.0.4-1         GlusterFS libraries and translator modules
> root at node1:~# dpkg -l | grep -i fuse
> ii  fuse-utils        2.8.4-1ubuntu1  Filesystem in USErspace (utilities)
> ii  libfuse2          2.8.4-1ubuntu1  Filesystem in USErspace library
> root at node1:~#
>
> root at node1:~# cat /etc/glusterfs/glusterfsd.vol
> volume posix
>   type storage/posix                 # POSIX FS translator
>   option directory /export/apache    # Export this directory
> end-volume
> volume locks
>   type features/locks
>   subvolumes posix
> end-volume
> volume brick
>   type performance/io-threads
>   option thread-count 8
>   subvolumes locks
> end-volume
> # Configuration for the mysql server volume
> volume posix-mysql
>   type storage/posix
>   option directory /export/mysql
>   option background-unlink yes
> end-volume
> volume locks-mysql
>   type features/posix-locks
>   #type features/locks
>   # option mandatory-locks on  # [2011-01-28 20:47:12] W [posix.c:1477:init]
>   #   locks-mysql: mandatory locks not supported in this minor release.
>   subvolumes posix-mysql
> end-volume
> volume brick-mysql
>   type performance/io-threads
>   option thread-count 8
>   subvolumes locks-mysql
> end-volume
>
> ### Add network serving capability to above brick.
> volume server
>   type protocol/server
>   option transport-type tcp
>   subvolumes brick brick-mysql
>   option auth.addr.brick.allow aa.bb.cc.190,aa.bb.cc.23   # Allow access to "brick" volume
>   option auth.addr.brick-mysql.allow aa.bb.cc.23,aa.bb.cc.51
>   option auth.login.brick-mysql.allow user-mysql
>   option auth.login.user-mysql.password *********
> end-volume
> root at node1:~#
>
> Thanks for any help you can give.
> -ken
> --
> Have a nice day ... unless you've made other plans.
> _______________________________________________
> Gluster-users mailing list
> Gluster-users at gluster.org
> http://gluster.org/cgi-bin/mailman/listinfo/gluster-users
>