[Gluster-users] Upgrade 5.3 -> 5.4 on debian: public IP is used instead of LAN IP
Artem Russakovskii
archon810 at gmail.com
Wed Mar 20 19:57:59 UTC 2019
Amar,
I see debuginfo packages now and have installed them. I'm available via
Skype as before, just ping me there.
Sincerely,
Artem
--
Founder, Android Police <http://www.androidpolice.com>, APK Mirror
<http://www.apkmirror.com/>, Illogical Robot LLC
beerpla.net | +ArtemRussakovskii
<https://plus.google.com/+ArtemRussakovskii> | @ArtemR
<http://twitter.com/ArtemR>
On Tue, Mar 19, 2019 at 10:46 PM Amar Tumballi Suryanarayan
<atumball at redhat.com> wrote:
>
>
> On Wed, Mar 20, 2019 at 9:52 AM Artem Russakovskii <archon810 at gmail.com>
> wrote:
>
>> Can I roll back performance.write-behind: off and lru-limit=0 then? I'm
>> waiting for the debug packages to be available for OpenSUSE, then I can
>> help Amar with another debug session.
>>
>>
> Yes, the write-behind issue is now fixed, so you can re-enable write-behind.
> Also remove lru-limit=0, so you can benefit from the garbage collection
> introduced in 5.4.
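(As a hedged illustration only: re-enabling write-behind and dropping the lru-limit mount option could look roughly like the sketch below. The volume name "myvol", the mount point, and the server name are assumptions, not from this thread.)

    # Re-enable write-behind on the volume ("myvol" is an assumed name)
    gluster volume set myvol performance.write-behind on

    # lru-limit=0 was passed as a fuse mount option; remounting without it
    # restores the default inode garbage-collection limit (paths and hosts
    # below are assumptions)
    umount /mnt/myvol
    mount -t glusterfs server1:/myvol /mnt/myvol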
>
> Let's get to fixing the problem once the debuginfo packages are available.
>
>
>
>> In the meantime, have you had time to set up 1x4 replicate testing? I was
>> told you were only testing 1x3, and it's the 4th brick that may be causing
>> the crash, which is consistent with only 1 of the 4 bricks constantly
>> crashing this whole time. The other 3 have been rock solid. I'm hoping you
>> could find the issue without a debug session this way.
>>
>>
> That is still my gut feeling. I added a basic test case with 4 bricks,
> https://review.gluster.org/#/c/glusterfs/+/22328/. But I think this
> particular issue happens only with a certain access pattern on a 1x4
> setup. Let's get to the root of it once we have debuginfo packages for the
> SUSE builds.
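(A hedged aside: for anyone wanting to try to reproduce the access pattern, a minimal 1x4 replica test volume might be created along these lines; the hostnames and brick paths are made up for illustration.)

    # Create and start a 1x4 (replica 4) volume across four nodes
    # (server1..server4 and /data/brick1 are assumed names)
    gluster volume create testvol replica 4 \
        server1:/data/brick1/testvol server2:/data/brick1/testvol \
        server3:/data/brick1/testvol server4:/data/brick1/testvol
    gluster volume start testvol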
>
> -Amar
>
> Sincerely,
>> Artem
>>
>> --
>> Founder, Android Police <http://www.androidpolice.com>, APK Mirror
>> <http://www.apkmirror.com/>, Illogical Robot LLC
>> beerpla.net | +ArtemRussakovskii
>> <https://plus.google.com/+ArtemRussakovskii> | @ArtemR
>> <http://twitter.com/ArtemR>
>>
>>
>> On Tue, Mar 19, 2019 at 8:27 PM Nithya Balachandran <nbalacha at redhat.com>
>> wrote:
>>
>> > Hi Artem,
>> >
>> > I think you are running into a different crash. The crashes reported
>> > earlier, which were prevented by turning off write-behind, are now fixed.
>> > We will need to look into the one you are seeing to find out why it is
>> > happening.
>> >
>> > Regards,
>> > Nithya
>> >
>> >
>> > On Tue, 19 Mar 2019 at 20:25, Artem Russakovskii <archon810 at gmail.com>
>> > wrote:
>> >
>> >> The flood is indeed fixed for us on 5.5. However, the crashes are not.
>> >>
>> >> Sincerely,
>> >> Artem
>> >>
>> >> --
>> >> Founder, Android Police <http://www.androidpolice.com>, APK Mirror
>> >> <http://www.apkmirror.com/>, Illogical Robot LLC
>> >> beerpla.net | +ArtemRussakovskii
>> >> <https://plus.google.com/+ArtemRussakovskii> | @ArtemR
>> >> <http://twitter.com/ArtemR>
>> >>
>> >>
>> >> On Mon, Mar 18, 2019 at 5:41 AM Hu Bert <revirii at googlemail.com> wrote:
>> >>
>> >>> Hi Amar,
>> >>>
>> >>> If you refer to this bug:
>> >>> https://bugzilla.redhat.com/show_bug.cgi?id=1674225 : in the test
>> >>> setup I haven't seen those entries while copying & deleting a few GBs
>> >>> of data. For a final statement we have to wait until I have updated our
>> >>> live gluster servers - that could take place on Tuesday or Wednesday.
>> >>>
>> >>> Maybe other users can do an update to 5.4 as well and report back here.
>> >>>
>> >>>
>> >>> Hubert
>> >>>
>> >>>
>> >>>
>> >>> On Mon, Mar 18, 2019 at 11:36 Amar Tumballi Suryanarayan
>> >>> <atumball at redhat.com> wrote:
>> >>> >
>> >>> > Hi Hu Bert,
>> >>> >
>> >>> > Appreciate the feedback. Also, are the other pressing issues related
>> >>> > to logs fixed now?
>> >>> >
>> >>> > -Amar
>> >>> >
>> >>> > On Mon, Mar 18, 2019 at 3:54 PM Hu Bert <revirii at googlemail.com> wrote:
>> >>> >>
>> >>> >> Update: the upgrade from 5.3 -> 5.5 in a replica 3 test setup with 2
>> >>> >> volumes is done. In 'gluster peer status' the peers stay connected
>> >>> >> during the upgrade, with no 'peer rejected' messages and no cksum
>> >>> >> mismatches in the logs. Looks good :-)
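(A hedged sketch of the kind of post-upgrade checks described above; "myvol" and the log path are assumptions.)

    # All peers should report "Connected" and all bricks should be online
    gluster peer status
    gluster volume status myvol

    # Look for cksum-mismatch or peer-rejected messages in the glusterd log
    grep -iE 'cksum|rejected' /var/log/glusterfs/glusterd.log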
>> >>> >>
>> >>> >> > On Mon, Mar 18, 2019 at 09:54 Hu Bert <revirii at googlemail.com> wrote:
>> >>> >> >
>> >>> >> > Good morning :-)
>> >>> >> >
>> >>> >> > For Debian the packages are there:
>> >>> >> > https://download.gluster.org/pub/gluster/glusterfs/5/5.5/Debian/stretch/amd64/apt/pool/main/g/glusterfs/
>> >>> >> >
>> >>> >> > I'll do an upgrade of a test installation 5.3 -> 5.5, see if there
>> >>> >> > are any errors etc., and report back.
>> >>> >> >
>> >>> >> > Btw: no release notes for 5.4 and 5.5 so far?
>> >>> >> > https://docs.gluster.org/en/latest/release-notes/ ?
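(For reference, a hedged sketch of pointing a Debian stretch node at that repo and upgrading; the key URL and list file name are assumptions based on the usual download.gluster.org layout.)

    # Add the GlusterFS 5.x repo key and apt source (URLs assumed), then upgrade
    wget -O - https://download.gluster.org/pub/gluster/glusterfs/5/rsa.pub | apt-key add -
    echo "deb https://download.gluster.org/pub/gluster/glusterfs/5/5.5/Debian/stretch/amd64/apt stretch main" \
        > /etc/apt/sources.list.d/gluster.list
    apt-get update && apt-get install glusterfs-server glusterfs-client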
>> >>> >> >
>> >>> >> > On Fri, Mar 15, 2019 at 14:28 Shyam Ranganathan
>> >>> >> > <srangana at redhat.com> wrote:
>> >>> >> > >
>> >>> >> > > We created a 5.5 release tag, and it is under packaging now. It
>> >>> >> > > should be packaged and ready for testing early next week and
>> >>> >> > > should be released close to mid-week next week.
>> >>> >> > >
>> >>> >> > > Thanks,
>> >>> >> > > Shyam
>> >>> >> > > On 3/13/19 12:34 PM, Artem Russakovskii wrote:
>> >>> >> > > > Wednesday now with no update :-/
>> >>> >> > > >
>> >>> >> > > > Sincerely,
>> >>> >> > > > Artem
>> >>> >> > > >
>> >>> >> > > > --
>> >>> >> > > > Founder, Android Police <http://www.androidpolice.com>, APK Mirror
>> >>> >> > > > <http://www.apkmirror.com/>, Illogical Robot LLC
>> >>> >> > > > beerpla.net <http://beerpla.net/> | +ArtemRussakovskii
>> >>> >> > > > <https://plus.google.com/+ArtemRussakovskii> | @ArtemR
>> >>> >> > > > <http://twitter.com/ArtemR>
>> >>> >> > > >
>> >>> >> > > >
>> >>> >> > > > On Tue, Mar 12, 2019 at 10:28 AM Artem Russakovskii
>> >>> >> > > > <archon810 at gmail.com> wrote:
>> >>> >> > > >
>> >>> >> > > > Hi Amar,
>> >>> >> > > >
>> >>> >> > > > Any updates on this? I'm still not seeing it in OpenSUSE
>> >>> >> > > > build repos. Maybe later today?
>> >>> >> > > >
>> >>> >> > > > Thanks.
>> >>> >> > > >
>> >>> >> > > > Sincerely,
>> >>> >> > > > Artem
>> >>> >> > > >
>> >>> >> > > > --
>> >>> >> > > > Founder, Android Police <http://www.androidpolice.com>, APK Mirror
>> >>> >> > > > <http://www.apkmirror.com/>, Illogical Robot LLC
>> >>> >> > > > beerpla.net <http://beerpla.net/> | +ArtemRussakovskii
>> >>> >> > > > <https://plus.google.com/+ArtemRussakovskii> | @ArtemR
>> >>> >> > > > <http://twitter.com/ArtemR>
>> >>> >> > > >
>> >>> >> > > >
>> >>> >> > > > On Wed, Mar 6, 2019 at 10:30 PM Amar Tumballi Suryanarayan
>> >>> >> > > > <atumball at redhat.com> wrote:
>> >>> >> > > >
>> >>> >> > > > We are talking days, not weeks. Considering it is already
>> >>> >> > > > Thursday here: 1 more day for tagging and packaging, so it may
>> >>> >> > > > be OK to expect it on Monday.
>> >>> >> > > >
>> >>> >> > > > -Amar
>> >>> >> > > >
>> >>> >> > > > On Thu, Mar 7, 2019 at 11:54 AM Artem Russakovskii
>> >>> >> > > > <archon810 at gmail.com> wrote:
>> >>> >> > > >
>> >>> >> > > > Is the next release going to be an imminent hotfix, i.e.
>> >>> >> > > > something like today/tomorrow, or are we talking weeks?
>> >>> >> > > >
>> >>> >> > > > Sincerely,
>> >>> >> > > > Artem
>> >>> >> > > >
>> >>> >> > > > --
>> >>> >> > > > Founder, Android Police <http://www.androidpolice.com>, APK
>> >>> >> > > > Mirror <http://www.apkmirror.com/>, Illogical Robot LLC
>> >>> >> > > > beerpla.net <http://beerpla.net/> | +ArtemRussakovskii
>> >>> >> > > > <https://plus.google.com/+ArtemRussakovskii> | @ArtemR
>> >>> >> > > > <http://twitter.com/ArtemR>
>> >>> >> > > >
>> >>> >> > > >
>> >>> >> > > > On Tue, Mar 5, 2019 at 11:09 AM Artem Russakovskii
>> >>> >> > > > <archon810 at gmail.com> wrote:
>> >>> >> > > >
>> >>> >> > > > Ended up downgrading to 5.3 just in case. Peer status
>> >>> >> > > > and volume status are OK now.
>> >>> >> > > >
>> >>> >> > > > zypper install --oldpackage glusterfs-5.3-lp150.100.1
>> >>> >> > > > Loading repository data...
>> >>> >> > > > Reading installed packages...
>> >>> >> > > > Resolving package dependencies...
>> >>> >> > > >
>> >>> >> > > > Problem: glusterfs-5.3-lp150.100.1.x86_64 requires
>> >>> >> > > > libgfapi0 = 5.3, but this requirement cannot be provided
>> >>> >> > > > not installable providers:
>> >>> >> > > > libgfapi0-5.3-lp150.100.1.x86_64[glusterfs]
>> >>> >> > > > Solution 1: Following actions will be done:
>> >>> >> > > > downgrade of libgfapi0-5.4-lp150.100.1.x86_64 to
>> >>> >> > > > libgfapi0-5.3-lp150.100.1.x86_64
>> >>> >> > > > downgrade of libgfchangelog0-5.4-lp150.100.1.x86_64 to
>> >>> >> > > > libgfchangelog0-5.3-lp150.100.1.x86_64
>> >>> >> > > > downgrade of libgfrpc0-5.4-lp150.100.1.x86_64 to
>> >>> >> > > > libgfrpc0-5.3-lp150.100.1.x86_64
>> >>> >> > > > downgrade of libgfxdr0-5.4-lp150.100.1.x86_64 to
>> >>> >> > > > libgfxdr0-5.3-lp150.100.1.x86_64
>> >>> >> > > > downgrade of libglusterfs0-5.4-lp150.100.1.x86_64 to
>> >>> >> > > > libglusterfs0-5.3-lp150.100.1.x86_64
>> >>> >> > > > Solution 2: do not install glusterfs-5.3-lp150.100.1.x86_64
>> >>> >> > > > Solution 3: break glusterfs-5.3-lp150.100.1.x86_64 by
>> >>> >> > > > ignoring some of its dependencies
>> >>> >> > > >
>> >>> >> > > > Choose from above solutions by number or cancel
>> >>> >> > > > [1/2/3/c] (c): 1
>> >>> >> > > > Resolving dependencies...
>> >>> >> > > > Resolving package dependencies...
>> >>> >> > > >
>> >>> >> > > > The following 6 packages are going to be downgraded:
>> >>> >> > > > glusterfs libgfapi0 libgfchangelog0 libgfrpc0
>> >>> >> > > > libgfxdr0 libglusterfs0
>> >>> >> > > >
>> >>> >> > > > 6 packages to downgrade.
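(A hedged follow-up: if the boxes should stay on 5.3 until a fixed build is out, zypper package locks can hold the downgraded packages in place; the glob patterns below are assumptions.)

    # Hold the downgraded packages so a routine 'zypper up' does not pull
    # 5.4 back in; drop the locks once a fixed release is available
    zypper addlock 'glusterfs' 'libgf*' 'libglusterfs0'
    zypper locks                                   # list active locks
    zypper removelock 'glusterfs' 'libgf*' 'libglusterfs0'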
>> >>> >> > > >
>> >>> >> > > > Sincerely,
>> >>> >> > > > Artem
>> >>> >> > > >
>> >>> >> > > > --
>> >>> >> > > > Founder, Android Police
>> >>> >> > > > <http://www.androidpolice.com>, APK Mirror
>> >>> >> > > > <http://www.apkmirror.com/>, Illogical Robot
>> >>> LLC
>> >>> >> > > > beerpla.net <http://beerpla.net/> |
>> >>> +ArtemRussakovskii
>> >>> >> > > > <https://plus.google.com/+ArtemRussakovskii>
>> |
>> >>> @ArtemR
>> >>> >> > > > <http://twitter.com/ArtemR>
>> >>> >> > > >
>> >>> >> > > >
>> >>> >> > > > On Tue, Mar 5, 2019 at 10:57 AM Artem Russakovskii
>> >>> >> > > > <archon810 at gmail.com> wrote:
>> >>> >> > > >
>> >>> >> > > > Noticed the same when upgrading from 5.3 to 5.4, as
>> >>> >> > > > mentioned.
>> >>> >> > > >
>> >>> >> > > > I'm confused though. Is actual replication affected? The 5.4
>> >>> >> > > > server and the 3x 5.3 servers still show heal info with all 4
>> >>> >> > > > bricks connected, and the files seem to be replicating
>> >>> >> > > > correctly as well.
>> >>> >> > > >
>> >>> >> > > > So what's actually affected - just the status command, or is
>> >>> >> > > > leaving 5.4 on one of the nodes doing some damage to the
>> >>> >> > > > underlying fs? Is it fixable by tweaking
>> >>> >> > > > transport.socket.ssl-enabled? Does upgrading all servers to 5.4
>> >>> >> > > > resolve it, or should we revert back to 5.3?
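(A hedged way to check whether replication itself is healthy, independent of the status command; "myvol" is an assumed volume name.)

    # Per-brick connectivity and pending-heal entries; a healthy 1x4 replica
    # shows all four bricks connected and 0 entries on each
    gluster volume heal myvol info
    gluster volume heal myvol info summary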
>> >>> >> > > >
>> >>> >> > > > Sincerely,
>> >>> >> > > > Artem
>> >>> >> > > >
>> >>> >> > > > --
>> >>> >> > > > Founder, Android Police <http://www.androidpolice.com>, APK
>> >>> >> > > > Mirror <http://www.apkmirror.com/>, Illogical Robot LLC
>> >>> >> > > > beerpla.net <http://beerpla.net/> | +ArtemRussakovskii
>> >>> >> > > > <https://plus.google.com/+ArtemRussakovskii> | @ArtemR
>> >>> >> > > > <http://twitter.com/ArtemR>
>> >>> >> > > >
>> >>> >> > > >
>> >>> >> > > > On Tue, Mar 5, 2019 at 2:02 AM Hu Bert
>> >>> >> > > > <revirii at googlemail.com> wrote:
>> >>> >> > > >
>> >>> >> > > > FYI: did a downgrade 5.4 -> 5.3 and it worked.
>> >>> >> > > > All replicas are up and running. Awaiting updated v5.4.
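(A hedged sketch of the equivalent downgrade on Debian via explicit version pinning; the exact 5.3 version string still offered by the repo is an assumption.)

    # See which 5.3 build the repo still carries, then install it explicitly
    apt-cache policy glusterfs-server
    apt-get install --allow-downgrades glusterfs-server=5.3-1 \
        glusterfs-client=5.3-1 glusterfs-common=5.3-1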
>> >>> >> > > >
>> >>> >> > > > thx :-)
>> >>> >> > > >
>> >>> >> > > > On Tue, Mar 5, 2019 at 09:26 Hari Gowtham
>> >>> >> > > > <hgowtham at redhat.com> wrote:
>> >>> >> > > > >
>> >>> >> > > > > There are plans to revert the patch causing this error and
>> >>> >> > > > > rebuild 5.4. This should happen faster. The rebuilt 5.4
>> >>> >> > > > > should be free of this upgrade issue.
>> >>> >> > > > >
>> >>> >> > > > > In the meantime, you can use 5.3 for this cluster.
>> >>> >> > > > > Downgrading to 5.3 will work if it was just one node that was
>> >>> >> > > > > upgraded to 5.4 and the other nodes are still on 5.3
>
>
>
> --
> Amar Tumballi (amarts)
>