<div dir="ltr"><div>Hi Strahil,</div><div><br></div><div>Thanks for that. We do have one backup server specified, but will add the second backup as well.</div><div><br></div></div><br><div class="gmail_quote"><div dir="ltr" class="gmail_attr">On Sat, 21 Dec 2019 at 11:26, Strahil <<a href="mailto:hunter86_bg@yahoo.com">hunter86_bg@yahoo.com</a>> wrote:<br></div><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex"><p dir="ltr">Hi David,</p>
<p dir="ltr">Also consider using the mount option to specify backup server via 'backupvolfile-server=server2:server3' (you can define more but I don't thing replica volumes greater that 3 are usefull (maybe in some special cases).</p>
<p dir="ltr">In such way, when the primary is lost, your client can reach a backup one without disruption.</p>
<p dir="ltr">P.S.: Client may 'hang' - if the primary server got rebooted ungracefully - as the communication must timeout before FUSE addresses the next server. There is a special script for killing gluster processes in '/usr/share/gluster/scripts' which can be used for setting up a systemd service to do that for you on shutdown.</p>
<p dir="ltr">Best Regards,<br>
Strahil Nikolov</p>
<div>On Dec 20, 2019 23:49, David Cunningham <<a href="mailto:dcunningham@voisonics.com" target="_blank">dcunningham@voisonics.com</a>> wrote:<br type="attribution"><blockquote style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex"><div dir="ltr"><div>Hi Strahil,</div><div><br></div><div>Ah, that is an important point. One of the nodes is not accessible from the client, and we assumed that it only needed to reach the GFS node that was mounted, so we didn't think anything of it.</div><div><br></div><div>We will try making all nodes accessible, as well as "direct-io-mode=disable".</div><div><br></div><div>Thank you.</div><div><br></div></div><br><div><div dir="ltr">On Sat, 21 Dec 2019 at 10:29, Strahil Nikolov <<a href="mailto:hunter86_bg@yahoo.com" target="_blank">hunter86_bg@yahoo.com</a>> wrote:<br></div><blockquote style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex"><div><div style="font-family:"courier new","courier","monaco",monospace,sans-serif;font-size:16px"><div></div>
<div dir="ltr">Actually I haven't clarified myself.</div><div dir="ltr">FUSE mounts on the client side is connecting directly to all bricks consisted of the volume.</div><div dir="ltr">If for some reason (bad routing, firewall blocked) there could be cases where the client can reach 2 out of 3 bricks and this can constantly cause healing to happen (as one of the bricks is never updated) which will degrade the performance and cause excessive network usage.</div><div dir="ltr">As your attachment is from one of the gluster nodes, this could be the case.</div><div dir="ltr"><br></div><div dir="ltr">Best Regards,</div><div dir="ltr">Strahil Nikolov</div><div><br></div>
</div><div>
<div style="font-family:"helvetica neue","helvetica","arial",sans-serif;font-size:13px;color:rgb(38,40,42)">
<div>
On Friday, 20 December 2019 at 01:49:56 GMT+2, David Cunningham <<a href="mailto:dcunningham@voisonics.com" target="_blank">dcunningham@voisonics.com</a>> wrote:
</div>
<div><br></div>
<div><br></div>
<div><div><div><div dir="ltr"><div>Hi Strahil,</div><div><br clear="none"></div><div>The chart attached to my original email is taken from the GFS server.</div><div><br clear="none"></div><div>I'm not sure what you mean by accessing all bricks simultaneously. We've mounted it from the client like this:</div><div>gfs1:/gvol0 /mnt/glusterfs/ glusterfs defaults,direct-io-mode=disable,_netdev,backupvolfile-server=gfs2,fetch-attempts=10 0 0</div><div><br clear="none"></div><div>Should we do something different to access all bricks simultaneously?</div><div><br clear="none"></div><div>Thanks for your help!</div><div><br clear="none"></div></div><br clear="none"><div><div><div dir="ltr">On Fri, 20 Dec 2019 at 11:47, Strahil Nikolov <<a shape="rect" href="mailto:hunter86_bg@yahoo.com" target="_blank">hunter86_bg@yahoo.com</a>> wrote:<br clear="none"></div><blockquote style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex"><div><div style="font-family:"courier new","courier","monaco",monospace,sans-serif;font-size:16px"><div></div>
<div dir="ltr">I'm not sure if you did measure the traffic from client side (tcpdump on a client machine) or from Server side.</div><div dir="ltr"><br clear="none"></div><div dir="ltr">In both cases , please verify that the client accesses all bricks simultaneously, as this can cause unnecessary heals.</div><div dir="ltr"><br clear="none"></div><div dir="ltr">Have you thought about upgrading to v6? There are some enhancements in v6 which could be beneficial.</div><div dir="ltr"><br clear="none"></div><div dir="ltr">Yet, it is indeed strange that so much traffic is generated with FUSE.</div><div dir="ltr"><br clear="none"></div><div dir="ltr">Another aproach is to test with NFSGanesha which suports pNFS and can natively speak with Gluster, which cant bring you closer to the previous setup and also provide some extra performance.</div><div dir="ltr"><br clear="none"></div><div dir="ltr"><br clear="none"></div><div dir="ltr">Best Regards,</div><div dir="ltr">Strahil Nikolov</div><div dir="ltr"><br clear="none"></div><div dir="ltr"><br clear="none"></div><div><br clear="none"></div>
</div><div>
<div>
<div>
</div></div></div></div></blockquote></div></div></div></div></div></div></div></div></blockquote></div></blockquote></div></blockquote></div><br clear="all"><br>-- <br><div dir="ltr" class="gmail_signature"><div dir="ltr"><div><div dir="ltr"><div><div dir="ltr"><div><div dir="ltr"><div><div dir="ltr"><div>David Cunningham, Voisonics Limited<br><a href="http://voisonics.com/" target="_blank">http://voisonics.com/</a><br>USA: +1 213 221 1092<br>New Zealand: +64 (0)28 2558 3782</div></div></div></div></div></div></div></div></div></div></div>