[Gluster-users] gluster and multipath

Lindsay Mathieson lindsay.mathieson at gmail.com
Tue Jan 24 11:09:22 UTC 2017


On 24/01/2017 6:33 PM, Alessandro Briosi wrote:
> I'm in the process of creating a 3 server cluster, and want to use gluster
> as shared storage between the 3.

Exactly what I run - my three gluster nodes are also VM servers (Proxmox 
cluster).


> I have 2 switches and each server has 4 ethernet cards which I'd like
> to dedicate to the storage.
>
> For redundancy I thought I could use multipath with gluster (like with
> iscsi), but am not sure it can be done.


I don't think so, and there isn't really a need for it. Every node in a 
gluster cluster is an active server; there is no single point of failure. 
When a gluster client (fuse or gfapi) connects to the cluster it downloads 
the list of all servers, and if the server it is connected to dies it 
fails over to another one. I have done this many times with rolling live 
upgrades. You can also give the client a list of fallback servers for the 
initial connection.
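As a rough sketch of that last point, a fuse mount can list backup volfile 
servers in fstab - the hosts "srv1/srv2/srv3" and the volume name 
"datastore" below are placeholders, adjust them to your own setup:

    # hypothetical hosts and volume name - adjust to your cluster
    srv1:/datastore  /mnt/gluster  glusterfs  defaults,_netdev,backup-volfile-servers=srv2:srv3  0 0

The backup list is only used to fetch the volume file at mount time; once 
mounted, the client talks to all the bricks directly.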


> So the question is:
> can I use dm-multipath with gluster

Probably not.

> If not should I use nic bonding?

Yes, balance-alb is the recommended mode. With three servers, 2 dedicated 
nics per server is optimal; I doubt you would get much benefit from 3 or 4 
nics beyond the extra redundancy. With 2 x 1G nics I get a reliable 
120 MB/s on sequential writes.
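For reference, the bonding setup I mean looks something like this on 
Debian/Proxmox with ifupdown (sketch only - the interface names and the 
address are placeholders, and the ifenslave package needs to be installed):

    # /etc/network/interfaces - dedicated storage bond
    auto bond0
    iface bond0 inet static
        address  10.10.10.1
        netmask  255.255.255.0
        bond-slaves eth2 eth3
        bond-mode balance-alb
        bond-miimon 100

balance-alb needs no special switch configuration, which is part of why I 
like it.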

I experimented with balance-rr and got somewhat erratic results.

> Is there a way to have it use 2 bonded interfaces (so if 1 switch goes
> down, the other takes over, or better, use both for maximal throughput)?

I'm pretty sure you could bond 4 nics with 2 going through one switch and 
2 through the other. That should keep working if a switch goes down.
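If you do split the bond across the two switches, the kernel will tell you 
which slave links are up, so it is easy to check that failover actually 
happens when you pull a switch:

    # shows the bond mode, MII status and per-slave link state
    cat /proc/net/bonding/bond0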

> Which then is multipath:)
> I could also use something like keepalived for the master IP to switch
> between the interfaces. though I'd like multipath more.
>

No need for either.


Cheers,

-- 
Lindsay Mathieson


