[Gluster-users] Gluster Failover Mount

James purpleidea at gmail.com
Fri Jan 17 11:58:17 UTC 2014


On Fri, Jan 17, 2014 at 6:46 AM,  <Mike.Peters at opengi.co.uk> wrote:
> Hi,
>
> I am currently testing GlusterFS and am looking for some advice. My setup uses the latest 3.4.2 packages from www.gluster.org on SLES11-SP3.
>
> I currently have a storage pool shared read-write across 2 gluster server nodes. This seems to work fine. However, I would also like to mount this pool on 4 further client machines running a legacy web application. Because of some limitations in the application, I would like to tell these clients to mount the storage pool from one particular gluster server node, but to fail over to the second node if and only if the first node becomes unavailable. I can mount the storage on the client nodes with both gluster nodes specified, or with only one node specified, but I cannot see a way in the documentation to prefer one particular node and have the second configured as a failover. Is this possible? What am I missing?

You do realize that the server named at mount time is only used to
retrieve the volfile, and that the client then connects to all of the
bricks directly, right? If so, carrying on:
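
To illustrate (assuming your volume is named gv0, going by the
/data/gv0/brick1 brick path):

    # the volfile is fetched from GLUSTER-01, but the client then
    # talks to the bricks on both GLUSTER-01 and GLUSTER-02
    mount -t glusterfs GLUSTER-01:/gv0 /opt/shared_files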

You can use VRRP and a VIP to control which host clients mount from.
There's an example of this in my Puppet-Gluster setup:
https://ttboj.wordpress.com/2014/01/08/automatically-deploying-glusterfs-with-puppet-gluster-vagrant/

You can control which node holds the VIP with the priority setting in
keepalived, for example:
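
A minimal keepalived sketch (the interface, router ID, and VIP here
are placeholders; GLUSTER-02 would carry the same block with state
BACKUP and a lower priority):

    # /etc/keepalived/keepalived.conf on GLUSTER-01
    vrrp_instance gluster_vip {
        state MASTER
        interface eth0            # replace with your NIC
        virtual_router_id 51      # must match on both nodes
        priority 150              # GLUSTER-02 gets e.g. 100
        advert_int 1
        virtual_ipaddress {
            192.168.1.100/24      # the VIP your clients mount from
        }
    }

Clients then mount from the VIP (192.168.1.100 here), so whichever
node currently holds it answers the volfile fetch.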

You can also specify more than one server on the mount command for
glusterfs. I forget the syntax for that, but it's easy to google.
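
If I recall correctly it's the backupvolfile-server mount option
(check the mount.glusterfs man page for your 3.4.x release); a sketch
from memory, again assuming the volume is named gv0:

    # fetch the volfile from GLUSTER-01, falling back to GLUSTER-02
    # only if GLUSTER-01 is unreachable at mount time
    mount -t glusterfs -o backupvolfile-server=GLUSTER-02 \
        GLUSTER-01:/gv0 /opt/shared_files

or the fstab equivalent:

    GLUSTER-01:/gv0 /opt/shared_files glusterfs defaults,backupvolfile-server=GLUSTER-02 0 0

Note that this mounts by volume name rather than the local volfile you
are using below; with a local volfile the client never contacts a
volfile server at mount time at all.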

I hope this answers your questions!

James



> My current gluster vol file on my clients is as follows:
>
> volume remote1
>   type protocol/client
>   option transport-type tcp
>   option remote-host GLUSTER-01
>   option remote-subvolume /data/gv0/brick1
> end-volume
>
> volume remote2
>   type protocol/client
>   option transport-type tcp
>   option remote-host GLUSTER-02
>   option remote-subvolume /data/gv0/brick1
> end-volume
>
> volume replicate
>   type cluster/replicate
>   subvolumes remote1 remote2
> end-volume
>
> volume writebehind
>   type performance/write-behind
>   option window-size 1MB
>   subvolumes replicate
> end-volume
>
> volume cache
>   type performance/io-cache
>   option cache-size 512MB
>   subvolumes writebehind
> end-volume
>
> And I have the following in the client's fstab:
>
> /etc/glusterfs/glusterfs.vol /opt/shared_files glusterfs rw,allow_other,default_permissions,max_read=131072 0 0
>
>
> Thanks in advance for any help,
>
> Mike Peters
> _______________________________________________
> Gluster-users mailing list
> Gluster-users at gluster.org
> http://supercolony.gluster.org/mailman/listinfo/gluster-users


