<div dir="auto">Hi,<div dir="auto"><br></div><div dir="auto">I understand something now:</div><div dir="auto">I think glusterd should not be restarted on all nodes at once. If this is true, can anyone provide a technical explanation of how it affects the checksum?</div><div dir="auto">Also, it seems that to fix the rejected hosts, I need to clear /var/lib/glusterd except glusterd.info, start glusterd, and peer probe again.</div><div dir="auto"><br></div><div dir="auto">Regards,</div><div dir="auto">Jeevan</div><div dir="auto"><br></div><div dir="auto"><br></div></div><br><div class="gmail_quote"><div dir="ltr">On Sun, Nov 25, 2018, 4:25 PM Jeevan Patnaik <<a href="mailto:g1patnaik@gmail.com">g1patnaik@gmail.com</a> wrote:<br></div><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex"><div dir="auto">Hi,<div dir="auto"><br></div><div dir="auto">I have a few different issues:<br><div dir="auto"><br></div><div dir="auto">I restarted the glusterd service on my 72 nodes almost in parallel with Ansible while the Gluster NFS clients were in a mounted state.</div><div dir="auto"><br></div><div dir="auto">After that, many of the Gluster peers went into the rejected state. In the logs, I see msg id 106010 stating that the checksum doesn't match.</div><div dir="auto"><br></div><div dir="auto">I'm confused about which checksum that is and how it changed after I restarted.</div><div dir="auto"><br></div><div dir="auto">I restarted because gluster volume status commands time out. I have tiering enabled on the volume and was trying to detach the tier, but that never completed either.
The status only shows "in progress", even though the tiered volume contains only a few hundred 8 MB files that I created for testing.</div><div dir="auto"><br></div><div dir="auto">My overall experience with Gluster tiering is really bad :(</div><div dir="auto"><br></div><div dir="auto">Besides, what's the best way to restore the old state if something goes wrong?</div><div dir="auto">So far, I have not been using a volfile at all; I only use volume status commands to configure my cluster. Do I need to use a volfile in order to restore something?<br></div><div dir="auto"><br></div><div dir="auto">The Gluster version is 3.12.15.</div><div dir="auto">I have checked the op-version on all nodes and they are all the same.</div><div dir="auto"><br></div><div dir="auto"><br></div><div dir="auto">Regards</div><div dir="auto">Jeevan</div><div dir="auto"><br></div></div></div>
</blockquote></div>
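On the point above that glusterd should not be restarted on all nodes at once: a rolling restart can be expressed in Ansible with the `serial` play keyword instead of the default fully parallel execution. A minimal sketch, assuming a hypothetical inventory group named `gluster_nodes`:

```yaml
# Hypothetical rolling restart: one node per batch instead of all 72
# at once. "gluster_nodes" is a placeholder inventory group name.
- hosts: gluster_nodes
  serial: 1              # act on one host at a time
  any_errors_fatal: true # stop the rollout if any node fails
  tasks:
    - name: Restart glusterd on this node only
      service:
        name: glusterd
        state: restarted
```

With `serial: 1`, each node's glusterd is back up before the next one is touched, which avoids restarting the whole pool simultaneously.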
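For reference, the "clear /var/lib/glusterd except glusterd.info, start glusterd, and peer probe again" recovery mentioned above can be sketched roughly as below. This is a hedged sketch, not a verified procedure: it assumes systemd, and `good-node.example.com` is a placeholder for a healthy peer already in the pool. It should be run on one rejected node at a time, not on all nodes at once.

```shell
# Hypothetical recovery sketch for a peer in "Rejected" state.
# Run on the rejected node only; good-node.example.com is a placeholder.
systemctl stop glusterd

# Wipe glusterd's state directory but keep glusterd.info, which stores
# this node's UUID and operating version. find -delete works depth-first,
# so subdirectory contents are removed before the directories themselves.
find /var/lib/glusterd -mindepth 1 ! -name 'glusterd.info' -delete

systemctl start glusterd

# Re-probe so the node pulls a fresh copy of the peer/volume configuration.
gluster peer probe good-node.example.com
```

The key detail is the `! -name 'glusterd.info'` exclusion, which preserves the node's identity file while everything else is regenerated from the probed peer.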