<p dir="ltr">Hi Alex,</p>
<p dir="ltr">Did you setup LACP using links to both switches ?</p>
<p dir="ltr">Best Regards,<br>
Strahil Nikolov<br>
</p>
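<p dir="ltr">Regarding the "Staging failed" errors quoted below, a minimal troubleshooting sketch (the volume name "data" is a placeholder and the glusterd log path can differ between gluster versions): check the peer and volume state on both nodes, then read the glusterd log on the node named in the error:</p>
<p dir="ltr"><font size="1"><span style="font-family:monospace , monospace">gluster peer status<br />gluster volume status<br />gluster volume start data        # "data" is a placeholder volume name<br /># on gluster2, the node named in the error; log path varies by version<br />tail -n 100 /var/log/glusterfs/glusterd.log</span></font></p>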
<div class="quote">On Mar 22, 2019 18:42, Alex K <rightkicktech@gmail.com> wrote:<br type='attribution'><blockquote class="quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex"><div dir="ltr"><div dir="ltr"><div>Hi all,</div><div><br /></div><div>I had the opportunity to test the setup on actual hardware, as I managed to arrange for a downtime at customer. <br /></div><div><br /></div><div>The results were that, when cables were split between two switches, even though servers were able to ping each other, gluster was not able to start the volumes and the only relevant log I noticed was: <br /></div><div><br /></div><div><font size="1"><span style="font-family:monospace , monospace">[2019-03-21 14:16:15.043714] E [MSGID: 106153] [glusterd-syncop.c:113:gd_collate_errors] 0-glusterd: <b>Staging failed</b> on gluster2. Please check log file for details.<br />[2019-03-21 14:16:15.044034] E [MSGID: 106153] [glusterd-syncop.c:113:gd_collate_errors] 0-glusterd: Staging failed on gluster2. Please check log file for details.<br />[2019-03-21 14:16:15.044292] E [MSGID: 106153] [glusterd-syncop.c:113:gd_collate_errors] 0-glusterd: Staging failed on gluster2. Please check log file for details.<br />[2019-03-21 14:49:11.278724] E [MSGID: 106153] [glusterd-syncop.c:113:gd_collate_errors] 0-glusterd: Staging failed on gluster2. Please check log file for details.<br />[2019-03-21 14:49:40.904596] E [MSGID: 106153] [glusterd-syncop.c:113:gd_collate_errors] 0-glusterd: Staging failed on gluster1. Please check log file for details.</span></font></div><div><br /></div><div>Does anyone has any idea what does this staging error mean?</div><div>I don't have the hardware anymore available for testing and I will try to reproduce on virtual env. <br /></div><div><br /></div><div>Thanx</div><div>Alex<br /></div></div></div><br /><div class="elided-text"><div dir="ltr">On Mon, Mar 18, 2019 at 12:52 PM Alex K <<a href="mailto:rightkicktech@gmail.com">rightkicktech@gmail.com</a>> wrote:<br /></div><blockquote style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb( 204 , 204 , 204 );padding-left:1ex"><div dir="ltr"><div>Performed some tests simulating the setup on OVS. <br /></div><div>When using mode 6 I had mixed results for both scenarios (see below): <br /></div><div><br /></div><div><div><img src="cid:ii_jte7vulc0" alt="image.png" width="566" height="388" /><br /></div></div><div><br /></div><div>There were times that hosts were not able to reach each other (simple
ping tests), and other times when hosts were able to reach each other
with ping but the gluster volumes were down because connectivity issues were
being reported (endpoint is not connected). Running systemctl restart network usually
resolved the gluster connectivity issue. This was regardless of the
scenario (interlink or not). I will need to do some more tests.</div></div><br /><div class="elided-text"><div dir="ltr">On Tue, Feb 26, 2019 at 4:14 PM Alex K <<a href="mailto:rightkicktech@gmail.com">rightkicktech@gmail.com</a>> wrote:<br /></div><blockquote style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb( 204 , 204 , 204 );padding-left:1ex"><div dir="ltr"><div dir="ltr"><br /><div><div>Thank you to all for your suggestions. <br /></div><div><br /></div><div>I
came here since only gluster was having issues starting. Ping and other
networking services showed everything as fine, so I guess there is
something in gluster that does not like what I tried to do. <br /></div><div>Unfortunately,
I have this system in production and I cannot experiment. It was a
customer request to add redundancy to the switch and I went with what I
assumed would work. <br /></div><div>I guess I have to have the switches stacked, but the current ones do not support this. They are just simple managed switches. <br /></div><div><br /></div><div>Multiple IPs per peer could be a solution. <br /></div><div>I will search a little more and if I find something I will get back. </div></div></div><br /><div class="elided-text"><div dir="ltr">On Tue, Feb 26, 2019 at 6:52 AM Strahil <<a href="mailto:hunter86_bg@yahoo.com">hunter86_bg@yahoo.com</a>> wrote:<br /></div><blockquote style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb( 204 , 204 , 204 );padding-left:1ex"><p dir="ltr">Hi Alex,</p>
<p dir="ltr">As per the following ( ttps://<a href="http://community.cisco.com/t5/switching/lacp-load-balancing-in-2-switches-part-of-3750-stack-switch/td-p/2268111">community.cisco.com/t5/switching/lacp-load-balancing-in-2-switches-part-of-3750-stack-switch/td-p/2268111</a> ) your switches need to be stacked in order to support lacp with your setup.<br />
Yet, I'm not sure whether balance-alb will work with 2 separate switches - maybe some special configuration is needed?<br />
As far as I know, gluster can have multiple IPs mapped to a single peer, but I'm not sure whether 2 separate networks would be used as active-backup or active-active.</p>
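<p dir="ltr">For what it's worth, a minimal sketch of how an additional address could be associated with an existing peer (gluster2-net2 is a placeholder for the peer's address on the second network, and the exact behavior may depend on the gluster version):</p>
<p dir="ltr"><font size="1"><span style="font-family:monospace , monospace"># probing an extra hostname/IP of a node that is already in the cluster<br /># should add that address to the existing peer rather than create a new one<br />gluster peer probe gluster2-net2<br />gluster peer status</span></font></p>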
<p></p></blockquote></div></div></blockquote></div></blockquote></div></blockquote></div>