[Gluster-users] mysql replication between two nodes

Richard Shade rshade at rightscale.com
Fri Oct 15 19:33:34 UTC 2010


You need to look at the 3.1 release. It is supposed to get rid of this
problem.

On Fri, Oct 15, 2010 at 7:26 AM, Richard de Vries <rdevries1000 at gmail.com> wrote:

> The MySQL database only runs on one node at a time.
> I still find it hard to understand why you need to restart the service
> if a brick goes down and comes back again.
>
> These are the volume files I'm using:
>
>
> glusterfsd.vol server file:
>
>
> volume posix1
>  type storage/posix
>  option directory /export/database
> end-volume
>
> volume locks1
>  type features/locks
>  subvolumes posix1
> end-volume
>
> volume database
>  type performance/io-threads
>  option thread-count 8
>  subvolumes locks1
> end-volume
>
> volume server
>  type protocol/server
>  option transport-type tcp/server
>  option auth.addr.database.allow *
>  option transport.socket.listen-port 6996
>  option transport.socket.nodelay on
>  subvolumes database
> end-volume
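>
> Each node starts its brick daemon from this file with something along
> the lines of the following (the volfile path here is just an example):
>
>  glusterfsd -f /etc/glusterfs/glusterfsd.vol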
>
>
>
>
> database.vol file:
>
> volume databasenode1
>  type protocol/client
>  option transport-type tcp
>  option transport.socket.nodelay on
>  option remote-port 6996
>  option ping-timeout 2
>  option remote-host node1
>  option remote-subvolume database
> end-volume
>
> volume databasenode2
>  type protocol/client
>  option transport-type tcp
>  option transport.socket.nodelay on
>  option remote-port 6996
>  option ping-timeout 2
>  option remote-host node2
>  option remote-subvolume database
> end-volume
>
> volume replicate
>  type cluster/replicate
>  subvolumes databasenode1 databasenode2
> end-volume
>
> volume stat-performance
>  type performance/stat-prefetch
>  subvolumes replicate
> end-volume
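>
> The volume is mounted on the node running MySQL with something like
> this (again, the volfile path is just an example):
>
>  glusterfs -f /etc/glusterfs/database.vol /opt/test/database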
>
> Maybe the stat-performance translator has an influence on this stat
> output.
>
> I'll try to disable this and test again.
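>
> Disabling it should just be a matter of commenting the translator out
> of database.vol, so that replicate becomes the top-most volume:
>
> # volume stat-performance
> #  type performance/stat-prefetch
> #  subvolumes replicate
> # end-volume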
>
> Regards,
> Richard
>
> On Fri, Oct 15, 2010 at 12:44 PM, Deadpan110 <deadpan110 at gmail.com> wrote:
> > I am very new to this list, but here is my 2 cents...
> >
> > In the past I used DRBD between 2 nodes to provide a master/slave
> > setup, with MySQL data stored on the filesystem.
> >
> > In a failover situation, MySQL would start on the remaining server
> > and pick up where things left off.
> >
> > DRBD (8.0+) now supports master/master, but it would be unwise to run
> > MySQL live on both servers in such a setup.
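> >
> > The resource definition for that setup looked roughly like this (node
> > names, disks and addresses are placeholders rather than my real
> > config):
> >
> > resource r0 {
> >   protocol C;               # synchronous replication
> >   on node1 {
> >     device    /dev/drbd0;   # the device the MySQL datadir lived on
> >     disk      /dev/sdb1;
> >     address   192.168.0.1:7788;
> >     meta-disk internal;
> >   }
> >   on node2 {
> >     device    /dev/drbd0;
> >     disk      /dev/sdb1;
> >     address   192.168.0.2:7788;
> >     meta-disk internal;
> >   }
> > }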
> >
> > MySQL has also advanced, and replication is no longer restricted to
> > master/slave.
> >
> > I use (and am loving) glusterfs in various guises on my 3-node
> > cluster for my client filesystems.
> >
> > For MySQL I use master/master/master circular replication without
> > depending on any clustered filesystem (the data is local on each
> > node). Some people have frowned on such a setup, but things have
> > advanced with the latest stable MySQL versions, and I have been
> > using it successfully in a clustered environment.
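> >
> > The key settings are roughly these (server ids, offsets and the
> > replication user are placeholders; each node gets a unique server-id
> > and auto_increment_offset):
> >
> > # my.cnf on node1 (node2 and node3 use server-id/offset 2 and 3)
> > [mysqld]
> > server-id                = 1
> > log-bin                  = mysql-bin
> > log-slave-updates              # pass writes on around the ring
> > auto_increment_increment = 3   # one slot per node, avoids key clashes
> > auto_increment_offset    = 1
> >
> > Each node is then made a slave of the previous one in the ring
> > (node1 -> node2 -> node3 -> node1), e.g. on node2:
> >
> > CHANGE MASTER TO MASTER_HOST='node1', MASTER_USER='repl',
> >   MASTER_PASSWORD='secret', MASTER_LOG_FILE='mysql-bin.000001',
> >   MASTER_LOG_POS=4;
> > START SLAVE;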
> >
> > Martin
> >
> > On 15 October 2010 20:33, Richard de Vries <rdevries1000 at gmail.com> wrote:
> >> Hello Beat,
> >>
> >> That is a pity, because stopping the service just to resync the
> >> standby node is not so nice...
> >>
> >> After a reboot of node 2, a stat of the database file in
> >> /opt/test/database shows different output: one time from node 1 and
> >> another time from node 2.
> >>
> >> What is the role of self heal in this? It is noticeable (via stat)
> >> that the files are not equal.
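> >>
> >> As far as I understand, self heal is only triggered by a lookup on
> >> the file, so walking the tree through the mount point should force
> >> it, e.g.:
> >>
> >>  find /opt/test/database -print0 | xargs -0 stat >/dev/null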
> >>
> >> Would you see the same behaviour with, for example, qemu-kvm, which
> >> also keeps files open?
> >>
> >> Regards,
> >> Richard
> _______________________________________________
> Gluster-users mailing list
> Gluster-users at gluster.org
> http://gluster.org/cgi-bin/mailman/listinfo/gluster-users
>



-- 
Thanks,

Richard Shade
Integration Engineer
RightScale - http://www.rightscale.com/
phone: 8055004164x1018

