[Gluster-users] Re: about HA infrastructure for hypervisors

Nicolas Sebrecht nsebrecht at piing.fr
Thu Jun 28 09:25:20 UTC 2012


The 27/06/12, Nathan Stratton wrote:

> What is considered half-decent? I have a 8 cluster
> distribute+replicate setup and I am getting about 65MB/s and about
> 1.5K IOPS. Considering that I am only using a single two disk SAS
> strip in each host I think that is not bad.

Hum, looking at your latter mail I would have expected better
performance, too.

> Also check out oVirt, it integrates with Gluster and provides HA.

I already know of oVirt's existence. I'll take a closer look at it.

> >>2. We still didn't decide what physical network to choose between FC, FCoE
> >>and Infiniband.
> >
> >Have you ruled out 10G ethernet? If so, why?
> 
> I agree, we went all 10GBase-T.

We ruled out Ethernet based on research on the web. It appeared that
Ethernet has worse latency than the alternatives.

> >>3. Would it be better to split the Glusterfs namespace into two gluster
> >>volumes (one for each hypervisor), each running on a Glusterfs server
> >>(for the normal case where all servers are running)?
> >
> >I don't see how that would help - I expect you would mount both volumes on
> >both KVM nodes anyway, to allow you to do live migration.
> 
> Yep

In the usual case, correct me if I'm wrong, the traffic from all the
Glusterfs clients (the KVM hosts, here) goes to the same Glusterfs server:

  KVM 1 <----> Glusterfs server A
  KVM 2 <----> Glusterfs server A

If I split out the cluster in two parts, I would expect to be able to
distribute the network traffic like this:

  KVM 1 <----> Glusterfs server A
  KVM 2 <----> Glusterfs server B

And so, get better performance while still being HA-compliant.

In this split mode I think it would be possible to:
- handle hypervisor workload (mostly CPU) by live-migrating VMs.
- handle Glusterfs server workload (mostly disk I/O) by exporting a VM
  from one side and importing it on the other Glusterfs server.
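For what it's worth, a minimal sketch of how such a split setup could be
created with the Gluster CLI. Hostnames (serverA, serverB), volume names
and brick paths below are my own illustrative assumptions, not anything
from this thread:

```shell
# Two replicated volumes, each with bricks on both servers so either
# server can fail without losing a volume. The only "split" is which
# server each KVM host mounts from, spreading client traffic.

# Volume normally served to KVM 1:
gluster volume create vmvol1 replica 2 \
    serverA:/bricks/vmvol1 serverB:/bricks/vmvol1
gluster volume start vmvol1

# Volume normally served to KVM 2:
gluster volume create vmvol2 replica 2 \
    serverB:/bricks/vmvol2 serverA:/bricks/vmvol2
gluster volume start vmvol2

# Each KVM host mounts BOTH volumes (required for live migration),
# but points its mount at its "preferred" server, e.g. on KVM 1:
mount -t glusterfs serverA:/vmvol1 /var/lib/libvirt/images/vmvol1
mount -t glusterfs serverB:/vmvol2 /var/lib/libvirt/images/vmvol2
```

Note that with the native FUSE client the mount server is only used to
fetch the volume layout; data traffic then goes to the bricks directly,
so the balancing effect comes from where each volume's files live rather
than from the mount address alone.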

-- 
Nicolas Sebrecht


