[Gluster-users] Gluster and bonding
Jim Kinney
jim.kinney at gmail.com
Mon Feb 25 12:17:30 UTC 2019
Unless the link between the two switches is set up as a dedicated management link, won't that link create a problem? On the dual-switch setup I have, there's a dedicated connection that handles inter-switch data. I'm not using bonding or teaming at the servers, as I have 40Gb Ethernet NICs. Gluster is fine across this.
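
When gluster reports "transport endpoint is not connected", it's worth checking what the bond itself thinks before digging into gluster. Something along these lines on each server (bond0 and the volume name here are only examples, substitute your own):

    # per-slave link state, and which slave is currently active
    cat /proc/net/bonding/bond0
    # can the nodes still see each other at the gluster layer?
    gluster peer status
    gluster volume status myvol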
On February 25, 2019 5:43:24 AM EST, Alex K <rightkicktech at gmail.com> wrote:
>Hi All,
>
>I was asking whether it is possible to have the two separate cables
>connected to two different physical switches. When trying mode 6 or
>mode 1 in this setup, gluster was refusing to start the volumes,
>giving me "transport endpoint is not connected".
>
>server1: cable1 ---------- switch1 ---------- server2: cable1
>                              |
>server1: cable2 ---------- switch2 ---------- server2: cable2
>
>Both switches are also connected to each other; this is done to
>achieve redundancy for the switches.
>When disconnecting cable2 from both servers, gluster was happy again.
>What could be the problem?
>
>Thanx,
>Alex
>
>
>On Mon, Feb 25, 2019 at 11:32 AM Jorick Astrego <jorick at netbulae.eu> wrote:
>
>> Hi,
>>
>> We use bonding mode 6 (balance-alb) for GlusterFS traffic:
>>
>>
>>
>> https://access.redhat.com/documentation/en-us/red_hat_gluster_storage/3.4/html/administration_guide/network4
>>
>> The preferred bonding mode for Red Hat Gluster Storage clients is
>> mode 6 (balance-alb); this allows a client to transmit writes in
>> parallel on separate NICs much of the time.
>>
>> Regards,
>>
>> Jorick Astrego
>> On 2/25/19 5:41 AM, Dmitry Melekhov wrote:
>>
>> On 23.02.2019 19:54, Alex K wrote:
>>
>> Hi all,
>>
>> I have a replica 3 setup where each server was configured with dual
>> interfaces in mode 6 bonding. All cables were connected to one common
>> network switch.
>>
>> To add redundancy and avoid the switch being a single point of
>> failure, I connected the second cable of each server to a second
>> switch. This turned out not to work: gluster was refusing to start
>> the volume, logging "transport endpoint is not connected", although
>> all nodes were able to reach each other (ping) on the storage
>> network. I switched to mode 1 (active/passive) and initially it
>> worked, but after a reboot of the whole cluster the same issue
>> appeared. Gluster is not starting the volumes.
>>
>> Isn't active/passive supposed to handle this? Can one have such a
>> redundant network setup, or are there other recommended approaches?
>>
>>
>> Yes, we use LACP; I guess this is mode 4 (we use teamd). It is, no
>> doubt, the best way.
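>>
>> For us that is just a small teamd config, something like this (NIC
>> names are examples):
>>
>>     # /etc/teamd/team0.conf
>>     {
>>         "device": "team0",
>>         "runner": { "name": "lacp", "active": true, "fast_rate": true },
>>         "link_watch": { "name": "ethtool" },
>>         "ports": { "eth1": {}, "eth2": {} }
>>     }
>>
>> Keep in mind that mode 4 / LACP also needs the switch side configured
>> to match, and with two separate switches both must support
>> MLAG/stacking so that the LAG can span them.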
>>
>>
>> Thanx,
>> Alex
>>
>>
>> Met vriendelijke groet, With kind regards,
>>
>> Jorick Astrego
>>
>> Netbulae Virtualization Experts
>> Tel: 053 20 30 270 | Fax: 053 20 30 271 | www.netbulae.eu
>> Staalsteden 4-3A, 7547 TA Enschede | KvK 08198180 | BTW NL821234584B01
>>
>> _______________________________________________
>> Gluster-users mailing list
>> Gluster-users at gluster.org
>> https://lists.gluster.org/mailman/listinfo/gluster-users
--
Sent from my Android device with K-9 Mail. All tyopes are thumb related and reflect authenticity.