[Gluster-Maintainers] Upgrade issue when new mem type is added in libglusterfs
Atin Mukherjee
amukherj at redhat.com
Sat Jul 23 05:11:34 UTC 2016
Kotresh,
Can you please take care of capturing this in both 3.7.13 & 3.8.1 release
notes? Content is already available in the mail thread.
Thanks,
Atin
On Wed, Jul 20, 2016 at 8:10 PM, Niels de Vos <ndevos at redhat.com> wrote:
> On Tue, Jul 12, 2016 at 11:26:46AM +0530, Atin Mukherjee wrote:
> > I still see the release notes for 3.8.1 & 3.7.13 not reflecting this
> > change.
> >
> > Niels, Kaushal,
> >
> > Shouldn't we highlight this to the users as early as possible, given that the
> > release notes are the best possible medium to capture all the known issues and
> > the workarounds?
>
> You can send a patch to the release notes:
>
> https://github.com/gluster/glusterfs/blob/release-3.8/doc/release-notes/3.8.1.md
>
> The release notes of the previous version are normally used/copied and
> modified for the new version. A new section "known issues" before the
> "Bugs addressed" would be good.
>
> Thanks,
> Niels
>
>
> > ~Atin
> >
> >
> > > On Sat, Jul 9, 2016 at 10:02 PM, Atin Mukherjee <amukherj at redhat.com> wrote:
> >
> > > We have hit bug 1347250 downstream (applicable upstream too) where glusterd
> > > did not regenerate the volfiles when it was temporarily brought up in upgrade
> > > mode by yum. The log file showed that 'gsyncd --version' failed to execute,
> > > so glusterd init could not proceed to the volfile regeneration. Since the
> > > return code is not handled in the spec file, users would not come to know
> > > about this, and going forward it is going to cause major issues with healing
> > > and greatly increases the possibility of split-brains.
> > >
> > > Further analysis by Kotresh & Raghavendra Talur reveals that gsyncd failed
> > > here because of a compatibility issue: gsyncd was not yet upgraded whereas
> > > glusterfs-server was, and the failure was mainly because of a change in the
> > > mem type enum. We have seen a similar issue for RDMA as well (probably a
> > > year back). So, to be very generic, this can happen on any upgrade path
> > > from one version to another where a new mem type is introduced. We have
> > > seen this from 3.7.8 to 3.7.12 and 3.8. People upgrading from 3.6 to
> > > 3.7/3.8 will also experience this issue.
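> > >
> > > To make the failure mode concrete, here is a minimal sketch (the type names
> > > below are made up for illustration; the real list lives in the mem-types
> > > header in libglusterfs): inserting a new member into the mem type enum moves
> > > the enum's end marker, so a component still built against the old layout and
> > > an upgraded one disagree about how many mem types exist.
> > >
> > >   /* memtype-sketch.c -- illustrative only, not the actual glusterfs code */
> > >   #include <stdio.h>
> > >
> > >   /* layout an old, not-yet-upgraded component was built against */
> > >   enum old_mem_types { OLD_MT_POOL, OLD_MT_IOBUF, OLD_MT_END };
> > >
> > >   /* the same list after a release inserts a new mem type */
> > >   enum new_mem_types { NEW_MT_POOL, NEW_MT_IOBUF, NEW_MT_RDMA, NEW_MT_END };
> > >
> > >   int main(void)
> > >   {
> > >       /* anything sized or bounds-checked by the *_END marker now differs
> > >        * between the two sides */
> > >       printf("old count = %d, new count = %d\n", OLD_MT_END, NEW_MT_END);
> > >       return 0;
> > >   }
> > >
> > > In this upgrade window gsyncd is exactly such a not-yet-upgraded component,
> > > which is why 'gsyncd --version' fails and glusterd never reaches the
> > > volfile regeneration.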
> > >
> > > Till we work on this fix, I suggest all the release managers highlight this
> > > in the release notes of the latest releases with the following workaround
> > > after yum update:
> > >
> > > 1. grep -irns "geo-replication module not working as desired" /var/log/glusterfs/etc-glusterfs-glusterd.vol.log | wc -l
> > >
> > > If the output is non-zero, go to step 2; otherwise follow the rest of the steps as per the guide.
> > >
> > > 2. Check whether a glusterd instance is running with 'ps aux | grep glusterd'; if it is, stop the glusterd service.
> > >
> > > 3. glusterd --xlator-option *.upgrade=on -N
> > >
> > > and then proceed with the rest of the steps as per the guide.
> > >
> > > Thoughts?
> > >
> > > P.S.: this email is limited to maintainers till we decide on the approach
> > > to highlight this issue to the users
> > >
> > >
> > > --
> > > Atin
> > > Sent from iPhone
> > >
>
> > _______________________________________________
> > maintainers mailing list
> > maintainers at gluster.org
> > http://www.gluster.org/mailman/listinfo/maintainers
>
>
--
Atin
Sent from iPhone