<div dir="ltr"><div dir="ltr"><br clear="all"><div><div dir="ltr" class="gmail_signature" data-smartmail="gmail_signature"><div dir="ltr"><div><br></div></div></div></div></div><br><div class="gmail_quote"><div dir="ltr" class="gmail_attr">On Mon, Dec 27, 2021 at 8:32 PM Ingo Fischer <<a href="mailto:ingo@fischer-ka.de">ingo@fischer-ka.de</a>> wrote:<br></div><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex">Hi,<br>
>
> In former releases like 9.x there was always a "major issues" section
> in the release notes (especially in ".0" releases), like
> https://docs.gluster.org/en/latest/release-notes/9.0/#major-issues.
> Shouldn't such a section be added to the 10.0 release notes, to inform
> users who want to upgrade?

There should be; I think it's just that no such major issues were found
earlier which could have been added there.

>
> Ingo
>
> On 27.12.21 at 14:03, Nikhil Ladha wrote:
> > Hi Michael,
> >
> > I think you are hitting an issue similar to this one:
> > https://github.com/gluster/glusterfs/issues/3066
> > If so, the fix for it is under review and could be available in
> > the next release.
> >
> > --
> > Thanks and Regards,
> > Nikhil Ladha
> >
> >
> > On Mon, Dec 27, 2021 at 6:25 PM Michael Böhm
> > <dudleyperkins@gmail.com> wrote:
> >
> >     Hey guys,
> >
> >     I have a problem upgrading our nodes from 8.3 to 10.0. I just
> >     upgraded the first node and ran into the "cksums mismatch"
> >     problem: on the upgraded v10 node the checksums for all volumes
> >     are different from those on the other v8 nodes. That leads to the
> >     node starting in a peer-rejected state. I can only resolve this by
> >     following the steps suggested here:
> >     https://staged-gluster-docs.readthedocs.io/en/release3.7.0beta1/Administrator%20Guide/Resolving%20Peer%20Rejected/
> >     (stopping glusterd, deleting /var/lib/glusterd/* except
> >     glusterd.info, starting glusterd, probing a v8 peer, then
> >     restarting glusterd again).
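> >
> >     In shell terms, assuming systemd and with the peer hostname as a
> >     placeholder, that is roughly:
> >
> >         systemctl stop glusterd
> >         # keep only glusterd.info, wipe the rest of the local state
> >         find /var/lib/glusterd -mindepth 1 -maxdepth 1 \
> >             ! -name glusterd.info -exec rm -rf {} +
> >         systemctl start glusterd
> >         gluster peer probe <v8-peer>
> >         systemctl restart glusterd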
> >
> >     The cluster seems healthy again, self-healing starts and
> >     everything looks fine. Only the newly created cksums are still
> >     different from those on the other nodes, which means this healthy
> >     state only lasts until I reboot the node. Then it all begins from
> >     the start: the node comes up as peer rejected.
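> >
> >     (Both are easy to see: "gluster peer status" on a v8 node shows
> >     the v10 node as rejected, and the per-volume checksum files differ
> >     between nodes; <volname> is a placeholder here:
> >
> >         gluster peer status
> >         cat /var/lib/glusterd/vols/<volname>/cksum
> >     )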
> >
> >     Now I've read about the problem here:
> >     https://github.com/gluster/glusterfs/issues/1332
> >     (even though that says the problem should only occur when
> >     upgrading from earlier than v7)
> >     and also here on the mailing list:
> >     https://lists.gluster.org/pipermail/gluster-users/2021-November/039679.html
> >     (I think I have the same problem, but unfortunately no solution is
> >     given there).
> >
> >     The solutions seem to require upgrading all nodes, and the problem
> >     should be resolved when finally bumping op.version. But I don't
> >     think this approach can be done online, and there's not really a
> >     way for me to do it offline.
> >
> >     Why is this happening now, and not when I upgraded from pre-7 to
> >     7? All my nodes are on 8.3 and op.version is 8000.
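> >
> >     (At least that's what this reports on every node:
> >
> >         gluster volume get all cluster.op-version
> >     )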
> >
> >     One thing I might have done "wrong": when I upgraded to v8 I
> >     didn't run "gluster volume set <volname> fips-mode-rchecksum on"
> >     on the volumes; I think I just overlooked it in the docs. I only
> >     have this option set on 2 volumes I created after upgrading to v8.
> >     But even on those 2 the cksums differ, so I guess it wouldn't help
> >     a lot if I set the option on all the other volumes?
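> >
> >     (For what it's worth, I check the current value per volume with
> >     what I believe is the full option name, <volname> again being a
> >     placeholder:
> >
> >         gluster volume get <volname> storage.fips-mode-rchecksum
> >     )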
> >
> >     I really don't know what to do now. I kind of understand the
> >     problem, but I don't know why it's happening on an all-v8 cluster.
> >     I can't take all 9 nodes down, upgrade them all to v10, and rely
> >     on "it's all good" after the final bump of op.version.
> >
> >     Can someone point me in a safe direction?
> >
> >     Regards
> >
> >     Mika
> >
> > ________
> >
> > Community Meeting Calendar:
> >
> > Schedule -
> > Every 2nd and 4th Tuesday at 14:30 IST / 09:00 UTC
> > Bridge: https://meet.google.com/cpu-eiue-hvk
> > Gluster-users mailing list
> > Gluster-users@gluster.org
> > https://lists.gluster.org/mailman/listinfo/gluster-users