<html>
<head>
<meta content="text/html; charset=windows-1252"
http-equiv="Content-Type">
</head>
<body bgcolor="#FFFFFF" text="#000000">
<div class="moz-cite-prefix">Il 24/01/2017 12:09, Lindsay Mathieson
ha scritto:<br>
</div>
<blockquote
cite="mid:32e02017-b36a-c874-f098-5c8e3c74b41d@gmail.com"
type="cite">On 24/01/2017 6:33 PM, Alessandro Briosi wrote:
<br>
<blockquote type="cite" style="color: #000000;">I'm in the process
of creating a 3 server cluster, and use gluster as a
<br>
shared storage between the 3.
<br>
</blockquote>
<br>
Exactly what I run - my three gluster nodes are also VM servers
(Proxmox cluster).
<br>
<br>
<br>
</blockquote>
Ok, I am also going to use Proxmox. Any advice on how to configure
the bricks?<br>
I plan to have a 2-node replica. I would appreciate you sharing your
full setup :-)<br>
<br>
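For reference, this is roughly what I had in mind for the volume
(hostnames and brick paths are just placeholders, I have not tested
this yet):<br>
<pre>
# hypothetical setup from one node, after the peers are probed
gluster peer probe node2
gluster volume create gv0 replica 2 node1:/data/brick1/gv0 node2:/data/brick1/gv0
gluster volume start gv0
</pre>
<br>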
<blockquote
cite="mid:32e02017-b36a-c874-f098-5c8e3c74b41d@gmail.com"
type="cite">
<blockquote type="cite" style="color: #000000;">I have 2 switches
and each server has a 4 ethernet card which I'd like
<br>
to dedicate to the storage.
<br>
<br>
For redundancy I thought I could use multipath with gluster
(like with
<br>
iscsi), but am not sure it can be done.
<br>
</blockquote>
<br>
<br>
I don't think so, and there isn't really a need for it. Each node
in a gluster cluster is an active server; there is no SPOF. A
gluster client (fuse or gfapi), when connecting to the cluster, will
download the list of all servers. If the server it is connected to
dies, it will fail over to another server. I have done this many
times with rolling live upgrades. Additionally, you can specify a
list of servers for the initial connection.
<br>
<br>
<br>
</blockquote>
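If I understand correctly, specifying that list would be the
backup-volfile-servers option of the fuse mount - something along
these lines (hostnames are placeholders, I have not tried it yet):<br>
<pre>
# fuse mount with fallback servers for fetching the volfile (my assumption)
mount -t glusterfs -o backup-volfile-servers=node2:node3 node1:/gv0 /mnt/gv0
</pre>
In Proxmox I guess the equivalent would be the "server" and "server2"
fields of the GlusterFS storage definition.<br>
<br>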
Ok, the only thing I want to avoid is the switch going down - that
is a SPOF.<br>
Having 2 switches would allow me to do maintenance on one switch while
letting the other handle the cluster.<br>
<br>
<blockquote
cite="mid:32e02017-b36a-c874-f098-5c8e3c74b41d@gmail.com"
type="cite">
<blockquote type="cite" style="color: #000000;">So the question
is:
<br>
can I use dm-multipath with gluster
<br>
</blockquote>
<br>
Probably not.
<br>
<br>
<blockquote type="cite" style="color: #000000;">If not should I
use nic bonding?
<br>
</blockquote>
<br>
Yes, balance-alb is recommenced. With three servers 2 dedicated
nics per server is optimal, I doubt you would get much benefit
from 3 or 4 nics except redundancy. With 2*1G nics I get a
reliable 120 MB/s seq writes.
<br>
</blockquote>
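Just to check I understand the balance-alb suggestion, on
Proxmox/Debian I assume the bond would be defined roughly like this in
/etc/network/interfaces (interface names and addresses are
placeholders, untested on my side):<br>
<pre>
# hypothetical storage bond in balance-alb mode (ifupdown/ifenslave syntax)
auto bond0
iface bond0 inet static
    address 10.10.10.1
    netmask 255.255.255.0
    bond-slaves eno1 eno2
    bond-mode balance-alb
    bond-miimon 100
</pre>
<br>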
Ok, so having 2 bonds, one attached to each switch, would work. Though I
still cannot figure out how to make gluster use both links (or at least
one of them, active/passive).<br>
Should I work with RRDNS and keepalived? Or bond the two bonds
across the 2 switches with balance-rr in this case?<br>
How do others implement this?<br>
<br>
<blockquote
cite="mid:32e02017-b36a-c874-f098-5c8e3c74b41d@gmail.com"
type="cite">
<br>
I experimented with balance-rr and got somewhat erratic results.
<br>
<br>
<blockquote type="cite" style="color: #000000;">Is there a way to
have it use 2 bonded interfaces (so if 1 switch goes
<br>
down, the other takes up or better use both for maximal
throughput)?
<br>
</blockquote>
<br>
I'm pretty sure you could bond 4 NICs with 2 through one switch and
2 through the other. That should keep working if a switch goes
down.
<br>
<br>
</blockquote>
Well, I was going to use LACP, which seems to be the best option once
configured; it needs switch support, but that's not a problem.<br>
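In that case I suppose the bond stanza would only change the mode,
something like this (again untested, and from what I read, spanning two
switches with LACP would need MLAG/stacking support on the switch
side):<br>
<pre>
    bond-mode 802.3ad
    bond-xmit-hash-policy layer3+4
    bond-lacp-rate fast
</pre>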
<br>
Thanks.<br>
Alessandro<br>
<div class="moz-signature">
</div>
</body>
</html>