[Gluster-users] Gluster and balance-alb

Alessandro Briosi ab1 at metalit.com
Tue Feb 14 09:33:50 UTC 2017


Hi all,
I'd like to have a clarification on bonding with gluster.

I have a Gluster deployment which uses a bond of 4 Ethernet interfaces.

The bond is configured with balance-alb, since two of the NICs are connected
to one switch and the other two to the other switch.
This is for load balancing and redundancy.
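
For reference, this is roughly how the bond is defined (Debian-style ifupdown
syntax as an example; the exact stanza depends on the distribution, so take it
as a sketch only -- the values match the output further below):

auto bond2
iface bond2 inet static
    address 192.168.102.1
    netmask 255.255.255.0
    mtu 9000
    bond-slaves eth4 eth5 eth6 eth7
    bond-mode balance-alb
    bond-miimon 100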

The switches are stacked with a 10Gbit cable. They are managed.

The same connection is used for both server and client traffic (the servers
are also clients of themselves).

From what I understand, balance-alb balances individual connections across
the slaves, so a single connection can reach at most 1 Gbit/s.
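
A quick way to check how connections are actually being spread across the
slaves is to sample the per-slave counters twice over a short window and
compare the deltas, e.g.:

# print the TX byte counter of each slave, wait, then print again
for i in eth4 eth5 eth6 eth7; do
    echo "$i $(cat /sys/class/net/$i/statistics/tx_bytes)"
done
sleep 10
for i in eth4 eth5 eth6 eth7; do
    echo "$i $(cat /sys/class/net/$i/statistics/tx_bytes)"
done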

It seems, though, that mostly only one interface is being used.

This is the output for the relevant interfaces (the same basically applies
on the other servers):

bond2     Link encap:Ethernet  HWaddr 00:0a:f7:a5:ec:5c 
          inet addr:192.168.102.1  Bcast:192.168.102.255  Mask:255.255.255.0
          inet6 addr: fe80::20a:f7ff:fea5:ec5c/64 Scope:Link
          UP BROADCAST RUNNING MASTER MULTICAST  MTU:9000  Metric:1
          RX packets:195041678 errors:0 dropped:4795 overruns:0 frame:0
          TX packets:244194369 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000
          RX bytes:346742936782 (322.9 GiB)  TX bytes:1202018794556 (1.0 TiB)

eth4      Link encap:Ethernet  HWaddr 00:0a:f7:a5:ec:5c 
          UP BROADCAST RUNNING SLAVE MULTICAST  MTU:9000  Metric:1
          RX packets:194076526 errors:0 dropped:0 overruns:0 frame:0
          TX packets:239094839 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000
          RX bytes:346669905046 (322.8 GiB)  TX bytes:1185779765214 (1.0 TiB)
          Interrupt:88

eth5      Link encap:Ethernet  HWaddr 00:0a:f7:a5:ec:5d 
          UP BROADCAST RUNNING SLAVE MULTICAST  MTU:9000  Metric:1
          RX packets:317620 errors:0 dropped:1597 overruns:0 frame:0
          TX packets:3969287 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000
          RX bytes:21155944 (20.1 MiB)  TX bytes:16107271750 (15.0 GiB)
          Interrupt:84

eth6      Link encap:Ethernet  HWaddr 00:0a:f7:a5:ec:5e 
          UP BROADCAST RUNNING SLAVE MULTICAST  MTU:9000  Metric:1
          RX packets:317620 errors:0 dropped:1596 overruns:0 frame:0
          TX packets:557634 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000
          RX bytes:21155972 (20.1 MiB)  TX bytes:35688576 (34.0 MiB)
          Interrupt:88

eth7      Link encap:Ethernet  HWaddr 00:0a:f7:a5:ec:5f 
          UP BROADCAST RUNNING SLAVE MULTICAST  MTU:9000  Metric:1
          RX packets:317618 errors:0 dropped:1596 overruns:0 frame:0
          TX packets:557633 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000
          RX bytes:21155816 (20.1 MiB)  TX bytes:35688512 (34.0 MiB)
          Interrupt:84

#cat /proc/net/bonding/bond2

Ethernet Channel Bonding Driver: v3.7.1 (April 27, 2011)

Bonding Mode: adaptive load balancing
Primary Slave: None
Currently Active Slave: eth4
MII Status: up
MII Polling Interval (ms): 100
Up Delay (ms): 0
Down Delay (ms): 0

Slave Interface: eth4
MII Status: up
Speed: 1000 Mbps
Duplex: full
Link Failure Count: 0
Permanent HW addr: 00:0a:f7:a5:ec:5c
Slave queue ID: 0

Slave Interface: eth5
MII Status: up
Speed: 1000 Mbps
Duplex: full
Link Failure Count: 0
Permanent HW addr: 00:0a:f7:a5:ec:5d
Slave queue ID: 0

Slave Interface: eth6
MII Status: up
Speed: 1000 Mbps
Duplex: full
Link Failure Count: 0
Permanent HW addr: 00:0a:f7:a5:ec:5e
Slave queue ID: 0

Slave Interface: eth7
MII Status: up
Speed: 1000 Mbps
Duplex: full
Link Failure Count: 0
Permanent HW addr: 00:0a:f7:a5:ec:5f
Slave queue ID: 0

Is this normal?

I could use LACP, but that would require two bonds (one per switch), and I
have no idea how to configure a "failover" between them.
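
Since the two switches are stacked, maybe a single 802.3ad bond across all
four ports would also be an option, assuming the stack supports a LAG that
spans both members (untested on my side, so again just a sketch in the same
ifupdown style):

auto bond2
iface bond2 inet static
    address 192.168.102.1
    netmask 255.255.255.0
    mtu 9000
    bond-slaves eth4 eth5 eth6 eth7
    bond-mode 802.3ad
    bond-miimon 100
    bond-lacp-rate fast
    bond-xmit-hash-policy layer3+4

A single connection would still be limited to 1 Gbit/s, but different
connections should be hashed across the four links, and link failover is
handled by LACP itself.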

Any hint would be appreciated.

Alessandro

