[Gluster-users] Gluster 3.2.5 nic bond problem

Homer Li 01jay.ly at gmail.com
Wed Dec 7 04:13:15 UTC 2011


Hi All,

    I created a distributed Gluster volume across 4 servers.
    Some of the servers use a NIC bond; the bond mode is Adaptive load
balancing (balance-alb). When I run a benchmark from the clients against
the servers, all of the traffic lands on eth0 of the bond. If I ifdown
eth0, all of the bandwidth moves over to eth1.
   When I remove the HP brick (single 10Gb port), the NIC bond
load-balances correctly.
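
   For reference, the bonded servers were configured with ifcfg files
along these lines (a sketch; the IP address and the miimon value here
are placeholders, not the exact values from my setup):

# /etc/sysconfig/network-scripts/ifcfg-bond0
DEVICE=bond0
ONBOOT=yes
BOOTPROTO=none
IPADDR=10.7.60.x
NETMASK=255.255.255.0
USERCTL=no
BONDING_OPTS="mode=balance-alb miimon=100"

# /etc/sysconfig/network-scripts/ifcfg-eth0 (same for eth1, with DEVICE=eth1)
DEVICE=eth0
ONBOOT=yes
BOOTPROTO=none
MASTER=bond0
SLAVE=yes
USERCTL=no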

   All clients mount the volume with the native GlusterFS (FUSE) client;
the command is "mount -t glusterfs HP-server-ip:/test-volume /mnt -o noatime".
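
   For completeness, the equivalent /etc/fstab entry on the clients
would look like this (a sketch; _netdev is my addition so the mount
waits for the network to come up):

HP-server-ip:/test-volume  /mnt  glusterfs  noatime,_netdev  0 0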

   Now I have added the HP brick back and changed the bond mode to IEEE
802.3ad Dynamic link aggregation; the bond load-balances correctly as well.
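
   On the server side this is just a one-line change in the ifcfg-bond0
sketch above; note that 802.3ad also requires a matching LACP link
aggregation group on the switch ports:

BONDING_OPTS="mode=802.3ad miimon=100"

The active bonding mode and the slave states can be checked with:

# cat /proc/net/bonding/bond0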

    1 x HP 380 G6, NIC: 1 x NetXen NX3031, dual-port 10Gb, with only a
single 10Gb port up (eth0).
    3 x Dell R410, NIC: 2 x Broadcom BCM5716; on these I created bond0
with bond mode Adaptive load balancing (config sketch above).


Server OS: Scientific Linux release 6.1 (Carbon), kernel 2.6.32-131.0.15.el6.x86_64
# rpm -qa | grep gluster
glusterfs-fuse-3.2.5-2.el6.x86_64
glusterfs-core-3.2.5-2.el6.x86_64
glusterfs-geo-replication-3.2.5-2.el6.x86_64



Client OS: CentOS release 5.6 (Final), kernel 2.6.18-238.19.1.el5 x86_64
# rpm -qa | grep gluster
glusterfs-core-3.2.5-1
glusterfs-fuse-3.2.5-1
glusterfs-geo-replication-3.2.5-1


The benchmark software is iozone 3.397.
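
The runs were along these lines (a sketch; the record size, file size
and thread count are examples, not the exact parameters I used): -i 0
-i 1 selects the write and read tests, and -t 4 runs four parallel
streams against the Gluster mount.

# iozone -i 0 -i 1 -r 128k -s 4g -t 4 -F /mnt/f1 /mnt/f2 /mnt/f3 /mnt/f4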

# gluster volume info

Volume Name: test-volume
Type: Distribute
Status: Started
Number of Bricks: 5
Transport-type: tcp
Bricks:
Brick1: 10.7.60.247:/export/data1
Brick2: 10.7.60.247:/export/data2
Brick3: 10.7.60.104:/gluster104
Brick4: 10.7.60.117:/gluster117
Brick5: 10.7.60.119:/gluster119
Options Reconfigured:
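
For reference, the volume would have been created with commands along
these lines (a sketch reconstructed from the volume info above;
distribute is the default type when no replica or stripe count is
given). The HP bricks were removed and re-added during the tests with
"gluster volume remove-brick" / "gluster volume add-brick".

# gluster volume create test-volume transport tcp \
    10.7.60.247:/export/data1 10.7.60.247:/export/data2 \
    10.7.60.104:/gluster104 10.7.60.117:/gluster117 \
    10.7.60.119:/gluster119
# gluster volume start test-volume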


Does anybody know what causes this? Thanks very much.




Best Regards
Homer Li


