[Gluster-users] Brick offline after upgrade

David Cunningham dcunningham at voisonics.com
Tue Mar 30 01:12:33 UTC 2021


Thank you Strahil. So if we take into account the deprecated options from
all the release notes, then the direct upgrade should be okay.
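
Something like this should show what is explicitly configured on the volume,
so it can be checked against the deprecation notes before the jump (gvol0 is
the volume from further down this thread; the option in the reset example is
only an illustration):

# Options explicitly reconfigured on the volume:
gluster volume info gvol0

# Or dump every option with its current value:
gluster volume get gvol0 all

# Clear a deprecated option before upgrading (example option name):
gluster volume reset gvol0 features.ctr-enabled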


On Fri, 26 Mar 2021 at 02:01, Strahil Nikolov <hunter86_bg at yahoo.com> wrote:

> Hey David,
>
> usually it should work directly, but please take into consideration all the
> release notes (usually the '.0' ones), as some options, like tiering, have
> been deprecated.
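>
> If tiering is attached, it must be detached before leaving the 5.x series,
> since the feature was removed. Roughly like this (the volume name here is
> just an example):
>
> gluster volume tier tieredvol detach start
> gluster volume tier tieredvol detach status
> gluster volume tier tieredvol detach commit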
>
> Best Regards,
> Strahil Nikolov
>
> On Tue, Mar 23, 2021 at 2:47, David Cunningham
> <dcunningham at voisonics.com> wrote:
> Hello,
>
> We ended up restoring the backup since it was easy on a test system.
>
> Does anyone know if you need to upgrade through multiple major versions
> sequentially, or can you jump straight to the highest version? For example,
> to go from GlusterFS 5 to 8, can you upgrade to 8 directly, or must you do 6
> and 7 in between?
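>
> Presumably, whichever path we take, the cluster op-version will also need
> raising once every node runs the new version, along these lines (the exact
> number has to match the installed release, e.g. 80000 for 8.0):
>
> gluster volume get all cluster.op-version
> gluster volume set all cluster.op-version 80000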
>
> Thanks in advance.
>
>
> On Sat, 20 Mar 2021 at 09:58, David Cunningham <dcunningham at voisonics.com>
> wrote:
>
> Hi Strahil,
>
> It's as follows. Do you see anything unusual? Thanks.
>
> root at caes8:~# ls -al /var/lib/glusterd/vols/gvol0/
> total 52
> drwxr-xr-x 3 root root 4096 Mar 18 17:06 .
> drwxr-xr-x 3 root root 4096 Jul 17  2018 ..
> drwxr-xr-x 2 root root 4096 Mar 18 17:06 bricks
> -rw------- 1 root root   16 Mar 18 17:06 cksum
> -rw------- 1 root root 3848 Mar 18 16:52
> gvol0.caes8.nodirectwritedata-gluster-gvol0.vol
> -rw------- 1 root root 2270 Feb 14  2020 gvol0.gfproxyd.vol
> -rw------- 1 root root 1715 Mar 18 16:52 gvol0.tcp-fuse.vol
> -rw------- 1 root root  729 Mar 18 17:06 info
> -rw------- 1 root root    0 Feb 14  2020 marker.tstamp
> -rw------- 1 root root  168 Mar 18 17:06 node_state.info
> -rw------- 1 root root   18 Mar 18 17:06 quota.cksum
> -rw------- 1 root root    0 Jul 17  2018 quota.conf
> -rw------- 1 root root   13 Mar 18 17:06 snapd.info
> -rw------- 1 root root 1829 Mar 18 16:52 trusted-gvol0.tcp-fuse.vol
> -rw------- 1 root root  896 Feb 14  2020 trusted-gvol0.tcp-gfproxy-fuse.vol
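>
> In case it's relevant: the upgrade guides also mention regenerating the
> volfiles via glusterd's upgrade mode when an old volfile no longer parses.
> A sketch, assuming glusterd is stopped first:
>
> systemctl stop glusterd
> glusterd --xlator-option '*.upgrade=on' -N
> systemctl start glusterd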
>
>
> On Fri, 19 Mar 2021 at 17:51, Strahil Nikolov <hunter86_bg at yahoo.com>
> wrote:
>
> [2021-03-18 23:52:52.084754] E [MSGID: 101019] [xlator.c:715:xlator_init] 0-gvol0-server: Initialization of volume 'gvol0-server' failed, review your volfile again
>
> What is the content of /var/lib/glusterd/vols/gvol0?
>
>
> Best Regards,
>
> Strahil Nikolov
>
> On Fri, Mar 19, 2021 at 3:02, David Cunningham
> <dcunningham at voisonics.com> wrote:
> Hello,
>
> We have a single-node, single-brick GlusterFS test system which was
> unfortunately upgraded from GlusterFS 5 to 6 while the GlusterFS processes
> were still running. I know this is not what the "Generic Upgrade procedure"
> recommends.
>
> Following a restart, the brick is not online, and we can't see any error
> message explaining exactly why. Would anyone have an idea of where to look?
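>
> We've been checking glusterd's log and the brick log (default locations
> assumed) for error lines, roughly like this, without finding an obvious
> cause:
>
> gluster volume status gvol0
> grep ' E ' /var/log/glusterfs/glusterd.log
> grep ' E ' /var/log/glusterfs/bricks/*.log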
>
> Since the logs from the time of the upgrade and reboot are a bit lengthy,
> I've attached them in a text file.
>
> Thank you in advance for any advice!
>
> --
> David Cunningham, Voisonics Limited
> http://voisonics.com/
> USA: +1 213 221 1092
> New Zealand: +64 (0)28 2558 3782
>
>
> --
> David Cunningham, Voisonics Limited
> http://voisonics.com/
> USA: +1 213 221 1092
> New Zealand: +64 (0)28 2558 3782
>
>
>
> --
> David Cunningham, Voisonics Limited
> http://voisonics.com/
> USA: +1 213 221 1092
> New Zealand: +64 (0)28 2558 3782
>
>

-- 
David Cunningham, Voisonics Limited
http://voisonics.com/
USA: +1 213 221 1092
New Zealand: +64 (0)28 2558 3782