[Gluster-users] glusterfs as centralized and high-availability storage for virtualization

Chad ccolumbu at hotmail.com
Fri Mar 19 18:56:40 UTC 2010


I believe Arian is reporting the same behavior and issues we have discussed here before:
1. A way to rebuild a gluster server off-line so it does not affect clients.
2. The 3-5 second fail-over delay that breaks clients (more on this below).

So I guess add his vote to the list :)
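On point 2, the only client-side knob I know of is the ping timeout on the protocol/client volumes, which controls how long a client waits before declaring a server dead. A sketch (the option name is from the 3.0.x protocol/client translator docs - verify it against your build, and the host/subvolume names are just examples):

---------------------
volume remote1
  type protocol/client
  option transport-type tcp/client
  option remote-host server1            # example hostname
  option remote-subvolume brick1-locks  # example subvolume
  # assumed option: seconds to wait for a ping reply before
  # marking the server down; lower values shrink the stall
  option ping-timeout 5
end-volume
---------------------

That only shrinks the fail-over window, though; it does not make it seamless.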




admin at proxydienst.de wrote:
> hello.
> 
> I am using glusterfs 3.0.3 and I think it is a really cool project. 
> thanks a lot to all developers.
> 
> my idea was to use glusterfs as a centralized and high-availability 
> backend storage for virtual xen domUs, but I ran into different errors 
> in my tests. what I have done:
> 
> 2x server as glusterfs-server - called server[1,2]
> 1x server as glusterfs-client - called client1
> 
> example gluster-config on the servers:
> ---------------------
> volume brick1
>   type storage/posix
>   option directory /test
> end-volume
> 
> # posix locks
> volume brick1-locks
>   type features/posix-locks
>   subvolumes brick1
> end-volume
> 
> volume brick2
>   type protocol/client
>   option transport-type tcp/client
>   option remote-host 192.168.11.214        # IP address of server2
>   option remote-subvolume brick1-locks     # use brick1-locks on server2
> end-volume
> 
> volume replicate
>   type cluster/replicate
>   subvolumes brick2 brick1-locks
> end-volume
> 
> volume server
>   type protocol/server
>   option transport-type tcp/server
>   subvolumes brick1-locks replicate
>   option auth.ip.brick1-locks.allow 192.168.11*,127.0.0.1
>   option auth.ip.replicate.allow 192.168.11*,127.0.0.1
> end-volume
> ---------------------
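> 
> for the client-side-replication test, the volfile on client1 is the 
> usual two-clients-plus-replicate stack, roughly like this (a sketch; 
> server1's address is assumed, adjust to your network):
> 
> ---------------------
> volume remote1
>   type protocol/client
>   option transport-type tcp/client
>   option remote-host 192.168.11.213   # IP address of server1 (assumed)
>   option remote-subvolume brick1-locks
> end-volume
> 
> volume remote2
>   type protocol/client
>   option transport-type tcp/client
>   option remote-host 192.168.11.214   # IP address of server2
>   option remote-subvolume brick1-locks
> end-volume
> 
> # client-side AFR: the client itself writes to both servers
> volume replicate
>   type cluster/replicate
>   subvolumes remote1 remote2
> end-volume
> ---------------------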
> 
> 
> 
> 
> on client1
> * I run xen 3.4.1
> * I have mounted the glusterfs volume from server[1,2] -> my xen image 
> lives inside this mount point (mount command below)
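> 
> (the mount itself is the plain 3.0 client invocation, something like 
> the following - the volfile path and the mount point are examples:)
> 
> ---------------------
> # mount the volume described by client.vol under /vm
> glusterfs -f /etc/glusterfs/client.vol /vm
> ---------------------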
> 
> 
> so my idea is the following:
> if server1 has a problem or has to be shut down -> the xen image must 
> keep running via server2 without interruption and errors
> if server2 has a problem or has to be shut down -> the xen image must 
> keep running via server1 without interruption and errors
> 
> 
> 
> so I tried both server-side replication and client-side replication. 
> here are my test results:
> 
> 
> client-side-replication
> ---------------------
> * I shut down (killed) server1 -> the xen domU keeps running on server2 
> without any interruption
> * after server1 comes up again -> self-healing takes place -> the 
> virtual xen domU is not reachable until self-healing ends
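> 
> (as far as I understand, self-heal can also be triggered by hand with 
> the find/stat walk from the gluster docs, see below - but since the 
> xen image is one single big file, healing it is still one long 
> operation during which the domU stalls:)
> 
> ---------------------
> # read the attributes of every file on the mount so AFR heals them
> find /vm -noleaf -print0 | xargs --null stat > /dev/null
> ---------------------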
> 
> 
> server-side-replication
> ---------------------
> with server-side replication I tried 2 different configurations, one 
> with HA and one with round-robin dns -> both give the same result.
> -> HA: the client has not mounted the glusterfs volume via the real 
> address of server1 or server2 -> it has mounted it via an HA IP 
> address floating between server1 and server2 -> so if one of the 
> servers has a problem, the HA IP is switched over
> -> rrdns: as explained here: 
> http://gluster.com/community/documentation/index.php/High-availability_storage_using_server-side_AFR 
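> 
> (rrdns here means one name with two A records that the resolver hands 
> out in turn - in bind zone-file syntax something like this; the name 
> and the addresses are examples:)
> 
> ---------------------
> ; one name for both servers, resolved round-robin
> storage  IN  A  192.168.11.213
> storage  IN  A  192.168.11.214
> ---------------------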
> 
> 
> the result of both approaches:
> * I shut down (killed) server1 -> the xen domU keeps running on server2, 
> but there are many errors -> I have to stop the xen domU and start it 
> again -> then everything seems to be fine
> * after server1 comes up again -> self-healing takes place -> the 
> virtual xen domU is not reachable until self-healing ends (same problem 
> as with client-side replication)
> 
> 
> 
> 
> so my questions are:
> is it possible to have a centralized, high-availability glusterfs 
> storage for virtual images?
> if so, what am I doing wrong?
> if not, why not?
> 
> 
> if I forgot to mention something or you need another config file, 
> please tell me and I will provide it.
> 
> 
> thank you very very much!!
> 
> 
> arian
> 


