<div dir="ltr"><span style="font-size:12.8px">Hi Diego,</span><div style="font-size:12.8px"><br></div><div style="font-size:12.8px">Thanks for the information. I tried setting only 'server.allow-insecure on', but nada.</div><div style="font-size:12.8px">The sentence "If you are using GlusterFS version 3.4.x or below, you can upgrade it to following" in the documentation is certainly misleading.</div><div style="font-size:12.8px">So would you suggest creating a new 3.10 cluster from scratch and then rsyncing the data from the old cluster to the new one?</div><div class="gmail_extra"><br><div class="gmail_quote">On Fri, Aug 25, 2017 at 7:53 PM, Diego Remolina <span dir="ltr"><<a href="mailto:dijuremo@gmail.com" target="_blank">dijuremo@gmail.com</a>></span> wrote:<br><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex">You cannot do a rolling upgrade from 3.6.x to 3.10.x; you will need downtime.<br>
<br>
Even 3.6 to 3.7 was not possible... see some references to it below:<br>
<br>
<a href="https://marc.info/?l=gluster-users&m=145136214452772&w=2" rel="noreferrer" target="_blank">https://marc.info/?l=gluster-users&m=145136214452772&w=2</a><br>
<a href="https://gluster.readthedocs.io/en/latest/release-notes/3.7.1/" rel="noreferrer" target="_blank">https://gluster.readthedocs.io/en/latest/release-notes/3.7.1/</a><br>
<br>
# gluster volume set <volname> server.allow-insecure on<br>
<br>
Edit /etc/glusterfs/glusterd.vol to contain this line:<br>
option rpc-auth-allow-insecure on<br>
<br>
After the first change (the volume set), restarting the volume is necessary:<br>
<br>
# gluster volume stop <volname><br>
# gluster volume start <volname><br>
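Putting it together for your volume, the full sequence would look roughly like this (a sketch only -- I'm assuming your volume is named gsnfs as in your status output below, and that glusterd itself has to be restarted to pick up the glusterd.vol change, since that file is only read at startup):<br>
<br>
# gluster volume set gsnfs server.allow-insecure on<br>
# vi /etc/glusterfs/glusterd.vol   (add: option rpc-auth-allow-insecure on)<br>
# service glusterd restart<br>
# gluster volume stop gsnfs<br>
# gluster volume start gsnfs<br>
<br>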
<br>
<br>
HTH,<br>
<br>
Diego<br>
<div><div class="gmail-h5"><br>
On Fri, Aug 25, 2017 at 7:46 AM, Yong Tseng <<a href="mailto:yongtw123@gmail.com">yongtw123@gmail.com</a>> wrote:<br>
> Hi all,<br>
><br>
> I'm currently in the process of upgrading a replicated cluster (1 x 4) from<br>
> 3.6.3 to 3.10.5. The nodes run CentOS 6. However, after upgrading the first<br>
> node, that node fails to connect to the other peers (as seen via 'gluster<br>
> peer status'), yet somehow the non-upgraded peers still see the<br>
> upgraded peer as connected.<br>
><br>
> Writes to the Gluster volume via local mounts on non-upgraded peers are<br>
> replicated to the upgraded peer, but I can't write via the upgraded peer, as<br>
> its local mount appears to be forced read-only.<br>
><br>
> Launching heal operations from non-upgraded peers will output 'Commit failed<br>
> on <upgraded peer IP>. Please check log for details'.<br>
><br>
> In addition, during the upgrade process there were warning messages about my<br>
> old vol files being renamed with a .rpmsave extension. I tried starting<br>
> Gluster with my old vol files, but the problem persisted. I also tried<br>
> generating new vol files with 'glusterd --xlator-option "*.upgrade=on" -N',<br>
> to no avail.<br>
><br>
> Also, when I checked the brick log, it had several messages about "failed to<br>
> get client opversion". I don't know whether this is pertinent. Could it be<br>
> that the upgraded node cannot connect to the older nodes but can still<br>
> receive instructions from them?<br>
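One quick way to compare op-versions on both sides (a sketch; note that 'gluster volume get all cluster.op-version' only exists in newer releases such as 3.10, so on the 3.6 nodes check glusterd's local state file instead):<br>
<br>
On the upgraded 3.10 node:<br>
# gluster volume get all cluster.op-version<br>
On any node (including the 3.6 ones):<br>
# grep operating-version /var/lib/glusterd/glusterd.info<br>
<br>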
><br>
> Below are command outputs; some data are masked.<br>
> I'd provide more information if required.<br>
> Thanks in advance.<br>
><br>
> ===> 'gluster volume status' ran on non-upgraded peers<br>
><br>
> Status of volume: gsnfs<br>
> Gluster process Port Online Pid<br>
> ------------------------------------------------------------------------------<br>
> Brick gs-nfs01:/ftpdata 49154 Y 2931<br>
> Brick gs-nfs02:/ftpdata 49152 Y 29875<br>
> Brick gs-nfs03:/ftpdata 49153 Y 6987<br>
> Brick gs-nfs04:/ftpdata 49153 Y 24768<br>
> Self-heal Daemon on localhost N/A Y 2938<br>
> Self-heal Daemon on gs-nfs04 N/A Y 24788<br>
> Self-heal Daemon on gs-nfs03 N/A Y 7007<br>
> Self-heal Daemon on <IP> N/A Y 29866<br>
><br>
> Task Status of Volume gsnfs<br>
> ------------------------------------------------------------------------------<br>
> There are no active volume tasks<br>
><br>
><br>
><br>
> ===> 'gluster volume status' on upgraded peer<br>
><br>
> Gluster process TCP Port RDMA Port Online Pid<br>
> ------------------------------------------------------------------------------<br>
> Brick gs-nfs02:/ftpdata 49152 0 Y 29875<br>
> Self-heal Daemon on localhost N/A N/A Y 29866<br>
><br>
> Task Status of Volume gsnfs<br>
> ------------------------------------------------------------------------------<br>
> There are no active volume tasks<br>
><br>
><br>
><br>
> ===> 'gluster peer status' on non-upgraded peer<br>
><br>
> Number of Peers: 3<br>
><br>
> Hostname: gs-nfs03<br>
> Uuid: 4c1544e6-550d-481a-95af-2a1da32d10ad<br>
> State: Peer in Cluster (Connected)<br>
><br>
> Hostname: <IP><br>
> Uuid: 17d554fd-9181-4b53-9521-55acf69ac35f<br>
> State: Peer in Cluster (Connected)<br>
> Other names:<br>
> gs-nfs02<br>
><br>
> Hostname: gs-nfs04<br>
> Uuid: c6d165e6-d222-414c-b57a-97c64f06c5e9<br>
> State: Peer in Cluster (Connected)<br>
><br>
><br>
><br>
> ===> 'gluster peer status' on upgraded peer<br>
><br>
> Number of Peers: 3<br>
><br>
> Hostname: gs-nfs03<br>
> Uuid: 4c1544e6-550d-481a-95af-2a1da32d10ad<br>
> State: Peer in Cluster (Disconnected)<br>
><br>
> Hostname: gs-nfs01<br>
> Uuid: 90d3ed27-61ac-4ad3-93a9-3c2b68f41ecf<br>
> State: Peer in Cluster (Disconnected)<br>
> Other names:<br>
> <IP><br>
><br>
> Hostname: gs-nfs04<br>
> Uuid: c6d165e6-d222-414c-b57a-97c64f06c5e9<br>
> State: Peer in Cluster (Disconnected)<br>
><br>
><br>
> --<br>
> - Yong<br>
><br>
</div></div>> _______________________________________________<br>
> Gluster-users mailing list<br>
> <a href="mailto:Gluster-users@gluster.org">Gluster-users@gluster.org</a><br>
> <a href="http://lists.gluster.org/mailman/listinfo/gluster-users" rel="noreferrer" target="_blank">http://lists.gluster.org/mailman/listinfo/gluster-users</a><br>
</blockquote></div><br><br clear="all"><div><br></div>-- <br><div class="gmail_signature">- Yong<br></div>
</div></div>