[Gluster-users] Rolling upgrade from 3.6.3 to 3.10.5

Yong Tseng yongtw123 at gmail.com
Fri Aug 25 13:42:06 UTC 2017


Hi Diego,

That's valuable information to know. Thanks for the input!

On Fri, Aug 25, 2017 at 9:08 PM, Diego Remolina <dijuremo at gmail.com> wrote:

> Yes, I did an offline upgrade.
>
> 1. Stop all clients using gluster servers.
> 2. Stop glusterfsd and glusterd on both servers.
> 3. Backed up /var/lib/gluster* in all servers just to be safe.
> 4. Upgraded all servers from 3.6.x to 3.10.x (I did not have quotas or
> anything that required special steps)
> 5. Started gluster daemons again and confirmed everything was fine
> prior to letting clients connect.
> 6. Ran 3.10.x with the older op-version for a few days to make sure
> all was OK (not all was OK for me, but that may be a Samba issue, as I
> use it as a file server).
> 7. Upgraded the op-version to the maximum available (a rough command
> sketch of the whole sequence follows below).
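>
> For reference, on each CentOS-style node that sequence might look roughly
> like the sketch below; package names, the backup path and the final
> op-version value are illustrative, so check them against the 3.10 release
> notes:
>
> service glusterd stop                      # stop the management daemon
> pkill glusterfsd                           # stop any remaining brick processes
> cp -a /var/lib/glusterd /var/lib/glusterd.bak    # safety copy (illustrative path)
> yum update "glusterfs*"                    # pull in the 3.10 packages
> service glusterd start
> gluster peer status                        # confirm peers are back before letting clients in
> # ...days later, once everything looks healthy, on any one node:
> grep operating-version /var/lib/glusterd/glusterd.info      # current cluster op-version
> gluster volume set all cluster.op-version <max-op-version>  # bump to the 3.10 maximum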
>
> In my case, I have two servers with bricks and one server that acts as
> a witness.
>
> HTH,
>
> Diego
>
> On Fri, Aug 25, 2017 at 8:56 AM, Yong Tseng <yongtw123 at gmail.com> wrote:
> > Hi Diego,
> >
> > Just to clarify, so did you do an offline upgrade with an existing
> > cluster (3.6.x => 3.10.x)?
> > Thanks.
> >
> > On Fri, Aug 25, 2017 at 8:42 PM, Diego Remolina <dijuremo at gmail.com> wrote:
> >>
> >> I was never able to go from 3.6.x to 3.7.x without downtime. Then
> >> 3.7.x did not work well for me, so I stuck with 3.6.x until recently.
> >> I went from 3.6.x to 3.10.x but downtime was scheduled.
> >>
> >> Diego
> >>
> >> On Fri, Aug 25, 2017 at 8:25 AM, Yong Tseng <yongtw123 at gmail.com> wrote:
> >> > Hi Diego,
> >> >
> >> > Thanks for the information. I tried setting only 'allow-insecure on',
> >> > but no luck.
> >> > The sentence "If you are using GlusterFS version 3.4.x or below, you can
> >> > upgrade it to following" in the documentation is surely misleading.
> >> > So would you suggest creating a new 3.10 cluster from scratch and then
> >> > rsync(?) the data from the old cluster to the new one?
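> >> >
> >> > (If it came to that, the copy itself could be a plain rsync between two
> >> > FUSE mounts on a staging host; the mount points and the new server name
> >> > below are made up for illustration:)
> >> >
> >> > mount -t glusterfs gs-nfs01:/gsnfs /mnt/old-gsnfs
> >> > mount -t glusterfs new-gs01:/gsnfs /mnt/new-gsnfs
> >> > rsync -aH --progress /mnt/old-gsnfs/ /mnt/new-gsnfs/   # -a keeps owners/perms/times, -H keeps hard links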
> >> >
> >> >
> >> > On Fri, Aug 25, 2017 at 7:53 PM, Diego Remolina <dijuremo at gmail.com>
> >> > wrote:
> >> >>
> >> >> You cannot do a rolling upgrade from 3.6.x to 3.10.x You will need
> >> >> downtime.
> >> >>
> >> >> Even 3.6 to 3.7 was not possible... see some references to it below:
> >> >>
> >> >> https://marc.info/?l=gluster-users&m=145136214452772&w=2
> >> >> https://gluster.readthedocs.io/en/latest/release-notes/3.7.1/
> >> >>
> >> >> # gluster volume set <volname> server.allow-insecure on
> >> >>
> >> >> Edit /etc/glusterfs/glusterd.vol to contain this line:
> >> >>
> >> >> option rpc-auth-allow-insecure on
> >> >>
> >> >> After setting the volume option (the first item above), restarting the
> >> >> volume would be necessary:
> >> >>
> >> >> # gluster volume stop <volname>
> >> >> # gluster volume start <volname>
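> >> >>
> >> >> (Assuming the usual paths, something like this should confirm both settings
> >> >> took effect; glusterd itself would also need a restart to pick up the
> >> >> glusterd.vol change:)
> >> >>
> >> >> # gluster volume info <volname> | grep allow-insecure
> >> >> # grep rpc-auth-allow-insecure /etc/glusterfs/glusterd.vol
> >> >> # service glusterd restart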
> >> >>
> >> >>
> >> >> HTH,
> >> >>
> >> >> Diego
> >> >>
> >> >> On Fri, Aug 25, 2017 at 7:46 AM, Yong Tseng <yongtw123 at gmail.com>
> >> >> wrote:
> >> >> > Hi all,
> >> >> >
> >> >> > I'm currently in the process of upgrading a replicated cluster (1 x 4)
> >> >> > from 3.6.3 to 3.10.5. The nodes run CentOS 6. However, after upgrading
> >> >> > the first node, that node fails to connect to the other peers (as seen
> >> >> > via 'gluster peer status'), but somehow the other non-upgraded peers can
> >> >> > still see the upgraded peer as connected.
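> >> >> >
> >> >> > (In case it helps: the peer state each glusterd persists lives under
> >> >> > /var/lib/glusterd/peers/, one small file per peer, so it can be compared
> >> >> > across nodes:)
> >> >> >
> >> >> > cat /var/lib/glusterd/peers/*    # uuid, state and hostname entries per peer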
> >> >> >
> >> >> > Writes to the Gluster volume via local mounts on the non-upgraded peers
> >> >> > are replicated to the upgraded peer, but I can't write via the upgraded
> >> >> > peer, as its local mount seems to be forced into read-only mode.
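> >> >> >
> >> >> > (The client-side log for the local FUSE mount might say why it went
> >> >> > read-only; for a mount at, say, /mnt/gsnfs (an example path), the log
> >> >> > name is derived from the mount path:)
> >> >> >
> >> >> > tail -n 100 /var/log/glusterfs/mnt-gsnfs.log
> >> >> > mount | grep gsnfs               # check whether the mount itself is flagged ro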
> >> >> >
> >> >> > Launching heal operations from non-upgraded peers will output 'Commit
> >> >> > failed on <upgraded peer IP>. Please check log for details'.
> >> >> >
> >> >> > In addition, during the upgrade process there were warning messages about
> >> >> > my old vol files being renamed with a .rpmsave extension. I tried starting
> >> >> > Gluster with my old vol files, but the problem persisted. I also tried
> >> >> > generating new vol files with 'glusterd --xlator-option "*.upgrade=on" -N',
> >> >> > still to no avail.
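> >> >> >
> >> >> > (For completeness, the regeneration was run with glusterd stopped, roughly
> >> >> > like this:)
> >> >> >
> >> >> > service glusterd stop
> >> >> > glusterd --xlator-option "*.upgrade=on" -N    # regenerates the volfiles and exits
> >> >> > service glusterd start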
> >> >> >
> >> >> > Also, I checked the brick log; it had several messages about "failed to
> >> >> > get client opversion". I don't know if this is pertinent. Could it be
> >> >> > that the upgraded node cannot connect to the older nodes but can still
> >> >> > receive instructions from them?
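> >> >> >
> >> >> > (One more data point I can gather if useful: the operating version each
> >> >> > glusterd is running at, plus version/peer errors in the glusterd log; the
> >> >> > log file name differs between releases, hence the glob:)
> >> >> >
> >> >> > grep operating-version /var/lib/glusterd/glusterd.info     # on every node
> >> >> > grep -iE "version|handshake|peer" /var/log/glusterfs/*glusterd*.log | tail -n 50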
> >> >> >
> >> >> > Below are command outputs; some data are masked.
> >> >> > I'd provide more information if required.
> >> >> > Thanks in advance.
> >> >> >
> >> >> > ===> 'gluster volume status' ran on non-upgraded peers
> >> >> >
> >> >> > Status of volume: gsnfs
> >> >> > Gluster process                                 Port    Online  Pid
> >> >> > ------------------------------------------------------------------------------
> >> >> > Brick gs-nfs01:/ftpdata                         49154   Y       2931
> >> >> > Brick gs-nfs02:/ftpdata                         49152   Y       29875
> >> >> > Brick gs-nfs03:/ftpdata                         49153   Y       6987
> >> >> > Brick gs-nfs04:/ftpdata                         49153   Y       24768
> >> >> > Self-heal Daemon on localhost                   N/A     Y       2938
> >> >> > Self-heal Daemon on gs-nfs04                    N/A     Y       24788
> >> >> > Self-heal Daemon on gs-nfs03                    N/A     Y       7007
> >> >> > Self-heal Daemon on <IP>                        N/A     Y       29866
> >> >> >
> >> >> > Task Status of Volume gsnfs
> >> >> > ------------------------------------------------------------------------------
> >> >> > There are no active volume tasks
> >> >> >
> >> >> >
> >> >> >
> >> >> > ===> 'gluster volume status' on upgraded peer
> >> >> >
> >> >> > Gluster process                                 TCP Port  RDMA Port  Online  Pid
> >> >> > ------------------------------------------------------------------------------
> >> >> > Brick gs-nfs02:/ftpdata                         49152     0          Y       29875
> >> >> > Self-heal Daemon on localhost                   N/A       N/A        Y       29866
> >> >> >
> >> >> > Task Status of Volume gsnfs
> >> >> > ------------------------------------------------------------------------------
> >> >> > There are no active volume tasks
> >> >> >
> >> >> >
> >> >> >
> >> >> > ===> 'gluster peer status' on non-upgraded peer
> >> >> >
> >> >> > Number of Peers: 3
> >> >> >
> >> >> > Hostname: gs-nfs03
> >> >> > Uuid: 4c1544e6-550d-481a-95af-2a1da32d10ad
> >> >> > State: Peer in Cluster (Connected)
> >> >> >
> >> >> > Hostname: <IP>
> >> >> > Uuid: 17d554fd-9181-4b53-9521-55acf69ac35f
> >> >> > State: Peer in Cluster (Connected)
> >> >> > Other names:
> >> >> > gs-nfs02
> >> >> >
> >> >> > Hostname: gs-nfs04
> >> >> > Uuid: c6d165e6-d222-414c-b57a-97c64f06c5e9
> >> >> > State: Peer in Cluster (Connected)
> >> >> >
> >> >> >
> >> >> >
> >> >> > ===> 'gluster peer status' on upgraded peer
> >> >> >
> >> >> > Number of Peers: 3
> >> >> >
> >> >> > Hostname: gs-nfs03
> >> >> > Uuid: 4c1544e6-550d-481a-95af-2a1da32d10ad
> >> >> > State: Peer in Cluster (Disconnected)
> >> >> >
> >> >> > Hostname: gs-nfs01
> >> >> > Uuid: 90d3ed27-61ac-4ad3-93a9-3c2b68f41ecf
> >> >> > State: Peer in Cluster (Disconnected)
> >> >> > Other names:
> >> >> > <IP>
> >> >> >
> >> >> > Hostname: gs-nfs04
> >> >> > Uuid: c6d165e6-d222-414c-b57a-97c64f06c5e9
> >> >> > State: Peer in Cluster (Disconnected)
> >> >> >
> >> >> >
> >> >> > --
> >> >> > - Yong
> >> >> >
> >> >> > _______________________________________________
> >> >> > Gluster-users mailing list
> >> >> > Gluster-users at gluster.org
> >> >> > http://lists.gluster.org/mailman/listinfo/gluster-users
> >> >
> >> >
> >> >
> >> >
> >> > --
> >> > - Yong
> >
> >
> >
> >
> > --
> > - Yong
>



-- 
- Yong