[Bugs] [Bug 1544461] 3.8 -> 3.10 rolling upgrade fails (same for 3.12 or 3.13) on Ubuntu 14

bugzilla at redhat.com bugzilla at redhat.com
Tue Feb 13 08:37:48 UTC 2018


https://bugzilla.redhat.com/show_bug.cgi?id=1544461

Marc <alexandrumarcu at gmail.com> changed:

           What    |Removed                     |Added
----------------------------------------------------------------------------
              Flags|needinfo?(alexandrumarcu at gmail.com)|



--- Comment #7 from Marc <alexandrumarcu at gmail.com> ---
(In reply to Atin Mukherjee from comment #6)
> (In reply to Marc from comment #5)
> > Hi Atin,Hari,
> > 
> > I have deleted the "tier-enabled=0" line from the upgraded server, but it
> > still does not work. If I restart the Gluster service, the "info" file is
> > regenerated and the "tier-enabled=0" line is added back.
> > 
> > If I delete the line and do not restart, I get the same output:
> > 
> > root@2-gls-dus21-ci-efood-real-de:/var/lib/glusterd/vols/gluster_volume# gluster volume status
> > Status of volume: gluster_volume
> > Gluster process                             TCP Port  RDMA Port  Online  Pid
> > ------------------------------------------------------------------------------
> > Brick 2-gls-dus21-ci-efood-real-de.openstac
> > klocal:/export_vdb                          N/A       N/A        N       N/A
> > NFS Server on localhost                     N/A       N/A        N       N/A
> > 
> > Task Status of Volume gluster_volume
> > ------------------------------------------------------------------------------
> > There are no active volume tasks
> > 
> > 
> > If I delete the line and restart, the file gets the line back and the output is:
> > 
> > root@2-gls-dus21-ci-efood-real-de:/var/lib/glusterd/vols/gluster_volume# gluster volume status
> > Status of volume: gluster_volume
> > Gluster process                             TCP Port  RDMA Port  Online  Pid
> > ------------------------------------------------------------------------------
> > Brick 2-gls-dus21-ci-efood-real-de.openstac
> > klocal:/export_vdb                          49152     0          Y       26586
> > Self-heal Daemon on localhost               N/A       N/A        Y       26568
> > 
> > Task Status of Volume gluster_volume
> > ------------------------------------------------------------------------------
> > There are no active volume tasks
> 
> I see that your brick is up here. What's the output of peer status? If all
> the peers are in the befriended and connected state, we should be good. What's
> the difference between the last step and the first step you mentioned?

Peer status after the restart, as seen from a 3.8.15 server:
root@1-gls-dus10-ci-efood-real-de:/home/ubuntu# gluster peer status
Number of Peers: 4

Hostname: 3-gls-dus10-ci-efood-real-de.openstack.local
Uuid: 3d141235-9b93-4798-8e03-82a758216b0b
State: Peer in Cluster (Connected)

Hostname: 1-gls-dus21-ci-efood-real-de.openstacklocal
Uuid: 7488286f-6bfa-46f8-bc50-9ee815e96c66
State: Peer in Cluster (Connected)

Hostname: 2-gls-dus10-ci-efood-real-de.openstack.local
Uuid: 1617cd54-9b2a-439e-9aa6-30d4ecf303f8
State: Peer in Cluster (Connected)

Hostname: 2-gls-dus21-ci-efood-real-de.openstacklocal
Uuid: 0c698b11-9078-441a-9e7f-442befeef7a9
State: Peer Rejected (Connected)

Peer status after the restart, as seen from the 3.10.10 server:

root@2-gls-dus21-ci-efood-real-de:/home/ubuntu# gluster peer status
Number of Peers: 4

Hostname: 3-gls-dus10-ci-efood-real-de.openstack.local
Uuid: 3d141235-9b93-4798-8e03-82a758216b0b
State: Peer Rejected (Connected)

Hostname: 1-gls-dus21-ci-efood-real-de.openstacklocal
Uuid: 7488286f-6bfa-46f8-bc50-9ee815e96c66
State: Peer Rejected (Connected)

Hostname: 1-gls-dus10-ci-efood-real-de.openstack.local
Uuid: 00839049-2ade-48f8-b5f3-66db0e2b9377
State: Peer Rejected (Connected)

Hostname: 2-gls-dus10-ci-efood-real-de.openstack.local
Uuid: 1617cd54-9b2a-439e-9aa6-30d4ecf303f8
State: Peer Rejected (Connected)

I have deleted the "tier-enabled=0" line, and without a restart "gluster peer
status" output is the same as above. I also tried to re-peer the upgraded
server, but got:

root@1-gls-dus10-ci-efood-real-de:/home/ubuntu# gluster peer detach 2-gls-dus21-ci-efood-real-de.openstacklocal
peer detach: failed: Brick(s) with the peer 2-gls-dus21-ci-efood-real-de.openstacklocal exist in cluster
root@1-gls-dus10-ci-efood-real-de:/home/ubuntu# gluster peer probe 2-gls-dus21-ci-efood-real-de.openstacklocal
peer probe: success. Host 2-gls-dus21-ci-efood-real-de.openstacklocal port 24007 already in peer list
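For what it's worth, the generic recovery for a node stuck in "Peer Rejected"
state (as described in the Gluster troubleshooting documentation) is to clear
the rejected node's local state, keeping only its UUID file, and let it
re-sync the configuration from a healthy peer. A sketch, assuming the default
/var/lib/glusterd layout and that the Ubuntu service is named
glusterfs-server (adjust to your init system):

```shell
# Run on the rejected node (here 2-gls-dus21-ci-efood-real-de).
# glusterd.info holds this node's UUID and must survive; everything
# else under /var/lib/glusterd is volume/peer metadata that will be
# re-synced from the cluster after a re-probe.
service glusterfs-server stop

cd /var/lib/glusterd
# Delete every top-level entry except glusterd.info.
find . -mindepth 1 -maxdepth 1 ! -name glusterd.info -exec rm -rf {} +

service glusterfs-server start

# Re-probe any healthy peer so the volume configuration is pulled back,
# then restart once more and verify the peer state:
gluster peer probe 1-gls-dus10-ci-efood-real-de.openstack.local
service glusterfs-server restart
gluster peer status
```

Whether this helps here depends on whether the regenerated info file on the
3.10 node (with its "tier-enabled=0" line) keeps producing a checksum mismatch
against the 3.8.15 peers.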


