[Gluster-Maintainers] [gluster-packaging] glusterfs-6.0rc1 released
amukherj at redhat.com
Thu Mar 14 02:44:38 UTC 2019
If you were on rc0 and upgraded to rc1, then I believe you are hitting BZ
1684029. Can you please upgrade all the nodes to rc1, bump the op-version
to 60000 (if not already done), and then restart the glusterd services to
see if the peer rejection goes away?
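A minimal sketch of that procedure, run on each node after the glusterfs
packages have been upgraded to rc1 (assuming a systemd-based distro; adjust
the service restart for your init system):

    # check the cluster-wide op-version currently in effect
    gluster volume get all cluster.op-version
    # bump it to the glusterfs 6 op-version
    gluster volume set all cluster.op-version 60000
    # restart the management daemon on this node
    systemctl restart glusterd
    # peers should no longer show "Rejected"
    gluster peer status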
On Thu, Mar 14, 2019 at 7:51 AM Guillaume Pavese <
guillaume.pavese at interactiv-group.com> wrote:
> putting users at gluster.org in the loop
> Guillaume Pavese
> Systems and Network Engineer
> On Thu, Mar 14, 2019 at 11:04 AM Guillaume Pavese <
> guillaume.pavese at interactiv-group.com> wrote:
>> Hi, I am testing gluster6-rc1 on a replica 3 oVirt cluster (the engine
>> volume is full replica 3, and the 2 other volumes are replica + arbiter).
>> All nodes were on Gluster6-rc0.
>> I upgraded one host that was having the "0-epoll: Failed to dispatch
>> handler" bug for one of its volumes, but now all three volumes are down!
>> "gluster peer status" now shows its 2 other peers as connected but
>> rejected. Should I upgrade the other nodes? They are still on Gluster6-rc0
>> Guillaume Pavese
>> Systems and Network Engineer
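For reference, peer rejection after a mixed-version upgrade typically
surfaces in the glusterd log as a version/cksum mismatch for a volume. A
quick way to check (assuming the default log location):

    # look for peers in "State: Peer Rejected"
    gluster peer status
    # a cksum mismatch here points at divergent volume metadata between nodes
    grep -i cksum /var/log/glusterfs/glusterd.log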
>> On Wed, Mar 13, 2019 at 6:38 PM Niels de Vos <ndevos at redhat.com> wrote:
>>> On Wed, Mar 13, 2019 at 02:24:44AM +0000, jenkins at build.gluster.org wrote:
>>> > SRC:
>>> > HASH:
>>> Packages from the CentOS Storage SIG will become available shortly in
>>> the testing repository. To use these packages, first enable the repo and
>>> then install the glusterfs components in a 2nd step.
>>> Once installed, the testing repo is enabled and everything should be in
>>> place for testing. It would be highly appreciated if you could let me
>>> know the results of your testing!
>>> packaging mailing list
>>> packaging at gluster.org
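The two-step install Niels describes could look like this on CentOS 7 (the
centos-release-gluster6 package name and centos-gluster6-test repo id are
assumptions based on the Storage SIG's usual naming, not confirmed by this
thread):

    # 1st step: install the SIG release package to enable the repo
    yum install centos-release-gluster6
    # 2nd step: install the glusterfs components from the testing repo
    yum install --enablerepo=centos-gluster6-test glusterfs-server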
> maintainers mailing list
> maintainers at gluster.org