[Gluster-devel] Performance problems in our web server setup

Hans Einar Gautun einar.gautun at statkart.no
Wed Jul 25 12:06:40 UTC 2007


On Wed, 2007-07-25 at 14:25 +0530, Anand Avati wrote:
> Hans,
> 
>         You can bundle several TCP connections outside of glusterfs, and
>         choose between different types of load-balance and failover
>         setups. This is done directly in the OS.
> 
> Can you please elaborate?
> 
> thanks,
> avati
>  

Of course!

Let's say you have 2 Gigabit NICs on the server (you can use as many as
you like, and mix different brands and chipsets).
Task: more throughput / failover / both
Solution: bond the NICs into one fat bondX device.
Example: one of our servers; this is the output of ifconfig:

bond0     Link encap:Ethernet  HWaddr 00:30:48:88:66:0E
          inet addr:159.162.84.8  Bcast:159.162.84.255
Mask:255.255.255.0
          UP BROADCAST RUNNING MASTER MULTICAST  MTU:1500  Metric:1
          RX packets:882207013 errors:0 dropped:37 overruns:0 frame:13
          TX packets:2133653812 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0
          RX bytes:3690441678 (3.4 GiB)  TX bytes:282774 (276.1 KiB)

eth0      Link encap:Ethernet  HWaddr 00:30:48:88:66:0F
          UP BROADCAST RUNNING SLAVE MULTICAST  MTU:1500  Metric:1
          RX packets:728847196 errors:0 dropped:37 overruns:0 frame:13
          TX packets:1964318832 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000
          RX bytes:4201174916 (3.9 GiB)  TX bytes:3417094326 (3.1 GiB)
          Interrupt:169

eth1      Link encap:Ethernet  HWaddr 00:30:48:88:66:0E
          UP BROADCAST RUNNING SLAVE MULTICAST  MTU:1500  Metric:1
          RX packets:153359817 errors:0 dropped:0 overruns:0 frame:0
          TX packets:169334980 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000
          RX bytes:3784234058 (3.5 GiB)  TX bytes:878155744 (837.4 MiB)
          Interrupt:177

The corresponding output from /proc/net/bonding/bond0 shows:

Ethernet Channel Bonding Driver: v3.0.3 (March 23, 2006)

Bonding Mode: adaptive load balancing
Primary Slave: None
Currently Active Slave: eth1
MII Status: up
MII Polling Interval (ms): 100
Up Delay (ms): 200
Down Delay (ms): 200

Slave Interface: eth0
MII Status: up
Link Failure Count: 1
Permanent HW addr: 00:30:48:88:66:0e

Slave Interface: eth1
MII Status: up
Link Failure Count: 1
Permanent HW addr: 00:30:48:88:66:0f
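
For completeness, this is roughly how such a bond gets its parameters at
module load time - a minimal sketch, assuming a 2.6 kernel with bonding
built as a module and a classic /etc/modprobe.conf (newer distributions
use files under /etc/modprobe.d/ instead):

# /etc/modprobe.conf
alias bond0 bonding
# balance-alb = adaptive load balancing, as shown above; miimon, updelay
# and downdelay match the 100/200/200 ms values in /proc/net/bonding/bond0
options bond0 mode=balance-alb miimon=100 updelay=200 downdelay=200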


You need the bonding module in the kernel, and an ifenslave binary that
corresponds to the kernel - i.e. you get the right ifenslave through
apt-get if you use the Ubuntu kernel package. When compiling a vanilla
kernel, ifenslave.c is included in the kernel source
under /path_to_kernel_source/Documentation/networking/, and it is built
and installed with this command:
gcc -Wall -O -I/path_to_kernel_source/include \
/path_to_kernel_source/Documentation/networking/ifenslave.c \
-o /sbin/ifenslave
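
With ifenslave in place, bringing the bond up by hand looks roughly like
this (just a sketch, reusing the address and interfaces from the ifconfig
output above; the module options can also live in modprobe.conf as shown
earlier):

# load the bonding driver with the desired mode and link monitoring
modprobe bonding mode=balance-alb miimon=100 updelay=200 downdelay=200
# configure and bring up the bond device itself
ifconfig bond0 159.162.84.8 netmask 255.255.255.0 up
# attach the physical NICs as slaves
ifenslave bond0 eth0 eth1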
 
A link for setting this up on Ubuntu:
http://www.howtoforge.com/network_bonding_ubuntu_6.10
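
On Ubuntu/Debian the same thing is usually done through ifupdown - a
sketch only, your addresses and interface names will of course differ,
and the guide above may use a slightly different form:

# /etc/network/interfaces
auto bond0
iface bond0 inet static
        address 159.162.84.8
        netmask 255.255.255.0
        up ifenslave bond0 eth0 eth1
        down ifenslave -d bond0 eth0 eth1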

A generally good and informative link describing the different bonding
modes you can use:
http://linux-net.osdl.org/index.php/Bonding

I have been using this for some time, and will use it even more heavily
in the time to come. One example I'm going to set up: dual Gigabit NICs
on the clients, and a quad-port card plus 2 onboard Gigabit NICs (6 NICs
in one bond0) on a new fileserver running glusterfs ;)

Then you can use 2 switches and put half of the NICs on each switch for
redundancy, and so on....


Regards,

-- 
Einar Gautun                           einar.gautun at statkart.no

Statens kartverk            | Norwegian Mapping Authority
3507 Hønefoss               |    NO-3507 Hønefoss, Norway

Ph +47 32118372   Fax +47 32118101       Mob +47 92692662




