[Gluster-Maintainers] [gluster-packaging] glusterfs-6.0rc1 released
Guillaume Pavese
guillaume.pavese at interactiv-group.com
Thu Mar 14 02:04:11 UTC 2019
Hi, I am testing gluster6-rc1 on a replica 3 oVirt cluster (the engine volume
is full replica 3; the two other volumes are replica + arbiter). The nodes
were on Gluster6-rc0. I upgraded one host that was hitting the "0-epoll:
Failed to dispatch handler" bug on one of its volumes, but now all three
volumes are down! "gluster peer status" on the upgraded node now shows its
two other peers as connected but rejected. Should I upgrade the other nodes?
They are still on Gluster6-rc0.
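
For reference, a rough sketch of the checks that seem relevant on the
upgraded node; the log path and the op-version queries assume a default
install, and the op-version mismatch between rc0 and rc1 nodes is only a
guess:

  # Installed version on the upgraded node
  gluster --version

  # Peer and volume state as seen from the upgraded node
  gluster peer status
  gluster volume status

  # Cluster operating version vs. the maximum this node supports
  gluster volume get all cluster.op-version
  gluster volume get all cluster.max-op-version

  # glusterd log, default location
  tail -n 100 /var/log/glusterfs/glusterd.log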
Guillaume Pavese
System and Network Engineer
Interactiv-Group
On Wed, Mar 13, 2019 at 6:38 PM Niels de Vos <ndevos at redhat.com> wrote:
> On Wed, Mar 13, 2019 at 02:24:44AM +0000, jenkins at build.gluster.org wrote:
> > SRC:
> https://build.gluster.org/job/release-new/81/artifact/glusterfs-6.0rc1.tar.gz
> > HASH:
> https://build.gluster.org/job/release-new/81/artifact/glusterfs-6.0rc1.sha512sum
>
> Packages from the CentOS Storage SIG will become available shortly in
> the testing repository. Please install these release packages to enable the
> repo, and then install the glusterfs components in a second step.
>
> el7:
> https://cbs.centos.org/kojifiles/work/tasks/3263/723263/centos-release-gluster6-0.9-1.el7.centos.noarch.rpm
> el6:
>
> https://cbs.centos.org/kojifiles/work/tasks/3265/723265/centos-release-gluster6-0.9-1.el6.centos.noarch.rpm
>
> Once installed, the testing repo is enabled. Everything should be
> available.
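
For el7, the two-step install described above would look roughly like this;
the release rpm URL is the one posted above, and glusterfs-server is only my
assumption for a server node:

  # Step 1: install the release package, which should enable the testing repo
  yum install https://cbs.centos.org/kojifiles/work/tasks/3263/723263/centos-release-gluster6-0.9-1.el7.centos.noarch.rpm

  # Step 2: install the gluster components from the now-enabled repo
  # (glusterfs-server for server nodes; clients may only need glusterfs-fuse)
  yum install glusterfs-server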
>
> It would be highly appreciated if you could let me know the results of your testing!
>
> Thanks,
> Niels
> _______________________________________________
> packaging mailing list
> packaging at gluster.org
> https://lists.gluster.org/mailman/listinfo/packaging
>