<div dir="ltr">Hi Diego,<div><br></div><div>That's valuable information to know. Thanks for the input!</div></div><div class="gmail_extra"><br><div class="gmail_quote">On Fri, Aug 25, 2017 at 9:08 PM, Diego Remolina <span dir="ltr"><<a href="mailto:dijuremo@gmail.com" target="_blank">dijuremo@gmail.com</a>></span> wrote:<br><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex">Yes, I did an offline upgrade.<br>
<br>
1. Stopped all clients using the gluster servers.
2. Stopped glusterfsd and glusterd on both servers.
3. Backed up /var/lib/gluster* on all servers, just to be safe.
4. Upgraded all servers from 3.6.x to 3.10.x (I did not have quotas or
anything that required special steps).
5. Started the gluster daemons again and confirmed everything was fine
before letting clients connect.
6. Ran 3.10.x with the older op-version for a few days to make sure
all was OK (not all was OK for me, but that may be a Samba issue, as I
use it as a file server).
7. Upgraded the op-version to the maximum available (rough commands
sketched below).

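For anyone who wants the mechanics, the per-server sequence might look
roughly like this on an RPM-based distro (a sketch only: the repo
setup, the package glob and the final op-version number are my
assumptions, not copied from my shell history):

# service glusterd stop        (systemctl stop glusterd on EL7)
# pkill glusterfsd
# cp -a /var/lib/glusterd /var/lib/glusterd.bak
# yum upgrade glusterfs\*      (assumes the 3.10 repo is already enabled)
# service glusterd start

And the final op-version bump, run once from any server (31000 should
be the 3.10 value; verify with 'gluster volume get all
cluster.max-op-version' first):

# gluster volume set all cluster.op-version 31000
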
In my case, I have two servers with bricks and one server that acts as
a witness.

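If anyone wants to replicate that layout: the witness is simply a peer
with no bricks, probed in for server-side quorum. Roughly (the
hostname 'gs-witness' is made up, and the quorum option is from
memory, so double-check it):

# gluster peer probe gs-witness
# gluster volume set <volname> cluster.server-quorum-type server
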
HTH,

Diego

On Fri, Aug 25, 2017 at 8:56 AM, Yong Tseng <yongtw123@gmail.com> wrote:
> Hi Diego,
>
> Just to clarify, did you do an offline upgrade with an existing cluster
> (3.6.x => 3.10.x)?
> Thanks.
>
> On Fri, Aug 25, 2017 at 8:42 PM, Diego Remolina <dijuremo@gmail.com> wrote:
>>
>> I was never able to go from 3.6.x to 3.7.x without downtime. Then
>> 3.7.x did not work well for me, so I stuck with 3.6.x until recently.
>> I went from 3.6.x to 3.10.x, but the downtime was scheduled.
>>
>> Diego
>>
>> On Fri, Aug 25, 2017 at 8:25 AM, Yong Tseng <yongtw123@gmail.com> wrote:
>> > Hi Diego,
>> >
>> > Thanks for the information. I tried setting only 'allow-insecure on',
>> > but no luck.
>> > The sentence "If you are using GlusterFS version 3.4.x or below, you can
>> > upgrade it to following" in the documentation is surely misleading.
>> > So would you suggest creating a new 3.10 cluster from scratch and then
>> > rsync(?) the data from the old cluster to the new?
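>> > (If so, I imagine something roughly like the following, run from a
>> > machine that can mount both volumes; the flags, the host 'newserver'
>> > and the new volume name 'gsnfs310' are just my guesses:
>> >
>> > # mount -t glusterfs gs-nfs01:/gsnfs /mnt/old
>> > # mount -t glusterfs newserver:/gsnfs310 /mnt/new
>> > # rsync -aHAX --numeric-ids --progress /mnt/old/ /mnt/new/
>> >
>> > with -a for permissions/times, -H for hard links, -A for ACLs and
>> > -X for xattrs.)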
>> >
>> >
>> > On Fri, Aug 25, 2017 at 7:53 PM, Diego Remolina <dijuremo@gmail.com>
>> > wrote:
>> >>
>> >> You cannot do a rolling upgrade from 3.6.x to 3.10.x. You will need
>> >> downtime.
>> >>
>> >> Even 3.6 to 3.7 was not possible... see some references to it below:
>> >>
>> >> https://marc.info/?l=gluster-users&m=145136214452772&w=2
>> >> https://gluster.readthedocs.io/en/latest/release-notes/3.7.1/
>> >>
>> >> # gluster volume set <volname> server.allow-insecure on
>> >>
>> >> Edit /etc/glusterfs/glusterd.vol to contain this line:
>> >>
>> >> option rpc-auth-allow-insecure on
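>> >>
>> >> With that line added, glusterd.vol ends up looking roughly like this
>> >> (abbreviated, and the stock contents vary a bit between versions):
>> >>
>> >> volume management
>> >>     type mgmt/glusterd
>> >>     option working-directory /var/lib/glusterd
>> >>     ...
>> >>     option rpc-auth-allow-insecure on
>> >> end-volume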
>> >>
>> >> After the first change (the volume set), restarting the volume is
>> >> necessary:
>> >>
>> >> # gluster volume stop <volname>
>> >> # gluster volume start <volname>
>> >>
>> >>
>> >> HTH,
>> >>
>> >> Diego
>> >>
>> >> On Fri, Aug 25, 2017 at 7:46 AM, Yong Tseng <yongtw123@gmail.com>
>> >> wrote:
>> >> > Hi all,
>> >> >
>> >> > I'm currently in the process of upgrading a replicated cluster (1 x 4)
>> >> > from 3.6.3 to 3.10.5. The nodes run CentOS 6. However, after upgrading
>> >> > the first node, that node fails to connect to the other peers (as seen
>> >> > via 'gluster peer status'), but somehow the other, non-upgraded peers
>> >> > can still see the upgraded peer as connected.
>> >> >
>> >> > Writes to the Gluster volume via local mounts on non-upgraded peers
>> >> > are replicated to the upgraded peer, but I can't write via the
>> >> > upgraded peer, as its local mount seems forced to read-only.
>> >> >
>> >> > Launching heal operations from non-upgraded peers outputs 'Commit
>> >> > failed on <upgraded peer IP>. Please check log for details'.
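>> >> >
>> >> > For clarity, the heal operations I mean are of this form ('full'
>> >> > being the heavier full-crawl variant):
>> >> >
>> >> > # gluster volume heal gsnfs
>> >> > # gluster volume heal gsnfs full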
>> >> >
>> >> > In addition, during the upgrade there were warning messages about my
>> >> > old vol files being renamed with an .rpmsave extension. I tried
>> >> > starting Gluster with my old vol files, but the problem persisted. I
>> >> > also tried generating new vol files with
>> >> > 'glusterd --xlator-option "*.upgrade=on" -N', still to no avail.
>> >> >
>> >> > I also checked the brick log; it had several messages about "failed
>> >> > to get client opversion". I don't know if this is pertinent. Could it
>> >> > be that the upgraded node cannot connect to the older nodes but can
>> >> > still receive instructions from them?
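>> >> >
>> >> > In case it matters, I believe the op-versions can be compared across
>> >> > nodes as follows (the 'volume get' form is 3.10-only, as far as I
>> >> > know):
>> >> >
>> >> > # grep operating-version /var/lib/glusterd/glusterd.info
>> >> > # gluster volume get all cluster.op-version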
>> >> >
>> >> > Below are command outputs; some data are masked.
>> >> > I can provide more information if required.
>> >> > Thanks in advance.
>> >> >
>> >> > ===> 'gluster volume status' ran on non-upgraded peers
>> >> >
>> >> > Status of volume: gsnfs
>> >> > Gluster process                              Port    Online  Pid
>> >> > ------------------------------------------------------------------------------
>> >> > Brick gs-nfs01:/ftpdata                      49154   Y       2931
>> >> > Brick gs-nfs02:/ftpdata                      49152   Y       29875
>> >> > Brick gs-nfs03:/ftpdata                      49153   Y       6987
>> >> > Brick gs-nfs04:/ftpdata                      49153   Y       24768
>> >> > Self-heal Daemon on localhost                N/A     Y       2938
>> >> > Self-heal Daemon on gs-nfs04                 N/A     Y       24788
>> >> > Self-heal Daemon on gs-nfs03                 N/A     Y       7007
>> >> > Self-heal Daemon on <IP>                     N/A     Y       29866
>> >> >
>> >> > Task Status of Volume gsnfs
>> >> > ------------------------------------------------------------------------------
>> >> > There are no active volume tasks
>> >> >
>> >> >
>> >> > ===> 'gluster volume status' on upgraded peer
>> >> >
>> >> > Gluster process                    TCP Port  RDMA Port  Online  Pid
>> >> > ------------------------------------------------------------------------------
>> >> > Brick gs-nfs02:/ftpdata            49152     0          Y       29875
>> >> > Self-heal Daemon on localhost      N/A       N/A        Y       29866
>> >> >
>> >> > Task Status of Volume gsnfs
>> >> > ------------------------------------------------------------------------------
>> >> > There are no active volume tasks
>> >> >
>> >> >
>> >> > ===> 'gluster peer status' on non-upgraded peer
>> >> >
>> >> > Number of Peers: 3
>> >> >
>> >> > Hostname: gs-nfs03
>> >> > Uuid: 4c1544e6-550d-481a-95af-2a1da32d10ad
>> >> > State: Peer in Cluster (Connected)
>> >> >
>> >> > Hostname: <IP>
>> >> > Uuid: 17d554fd-9181-4b53-9521-55acf69ac35f
>> >> > State: Peer in Cluster (Connected)
>> >> > Other names:
>> >> > gs-nfs02
>> >> >
>> >> > Hostname: gs-nfs04
>> >> > Uuid: c6d165e6-d222-414c-b57a-97c64f06c5e9
>> >> > State: Peer in Cluster (Connected)
>> >> >
>> >> >
>> >> > ===> 'gluster peer status' on upgraded peer
>> >> >
>> >> > Number of Peers: 3
>> >> >
>> >> > Hostname: gs-nfs03
>> >> > Uuid: 4c1544e6-550d-481a-95af-2a1da32d10ad
>> >> > State: Peer in Cluster (Disconnected)
>> >> >
>> >> > Hostname: gs-nfs01
>> >> > Uuid: 90d3ed27-61ac-4ad3-93a9-3c2b68f41ecf
>> >> > State: Peer in Cluster (Disconnected)
>> >> > Other names:
>> >> > <IP>
>> >> >
>> >> > Hostname: gs-nfs04
>> >> > Uuid: c6d165e6-d222-414c-b57a-97c64f06c5e9
>> >> > State: Peer in Cluster (Disconnected)
>> >> >
>> >> >
>> >> > --
>> >> > - Yong
>> >> >
>> >> > _______________________________________________
>> >> > Gluster-users mailing list
>> >> > Gluster-users@gluster.org
>> >> > http://lists.gluster.org/mailman/listinfo/gluster-users
>> >
>> > --
>> > - Yong
>
>
> --
> - Yong

--
- Yong