[Bugs] [Bug 1687051] gluster volume heal failed when online upgrading from 3.12 to 5.x and when rolling back online upgrade from 4.1.4 to 3.12.15
bugzilla at redhat.com
Tue Mar 19 13:47:50 UTC 2019
https://bugzilla.redhat.com/show_bug.cgi?id=1687051
--- Comment #25 from Amgad <amgad.saleh at nokia.com> ---
Is there any update or feedback, or any investigation going on?
Any idea about the root cause or fix? Will it be included in 5.4?
I did more testing and realized that "gluster volume status" doesn't report
the correct status after rolling back the first server, "gfs-1", to 3.12.15
following the full upgrade (the other two replicas were still on 4.1.4).
After rolling back gfs-1, I got:
[root@gfs-1 ansible1]# gluster volume status
Status of volume: glustervol1
Gluster process                             TCP Port  RDMA Port  Online  Pid
------------------------------------------------------------------------------
Brick 10.76.153.206:/mnt/data1/1            N/A       N/A        N       N/A

Task Status of Volume glustervol1
------------------------------------------------------------------------------
There are no active volume tasks

Status of volume: glustervol2
Gluster process                             TCP Port  RDMA Port  Online  Pid
------------------------------------------------------------------------------
Brick 10.76.153.206:/mnt/data2/2            N/A       N/A        N       N/A

Task Status of Volume glustervol2
------------------------------------------------------------------------------
There are no active volume tasks

Status of volume: glustervol3
Gluster process                             TCP Port  RDMA Port  Online  Pid
------------------------------------------------------------------------------
Brick 10.76.153.206:/mnt/data3/3            N/A       N/A        N       N/A

Task Status of Volume glustervol3
------------------------------------------------------------------------------
There are no active volume tasks
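
At this point a useful cross-check from gfs-1 itself is whether it still sees its peers and whether heal can even be queried. A minimal check sequence, assuming the host and volume names above (this is only the sequence I would run, not output captured from this test):

gluster --version
gluster peer status
gluster volume heal glustervol1 info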
Then, when I rolled back gfs-2, I got:
======================================
[root@gfs-2 ansible1]# gluster volume status
Status of volume: glustervol1
Gluster process                             TCP Port  RDMA Port  Online  Pid
------------------------------------------------------------------------------
Brick 10.76.153.206:/mnt/data1/1            49152     0          Y       23400
Brick 10.76.153.213:/mnt/data1/1            49152     0          Y       14481
Self-heal Daemon on localhost               N/A       N/A        Y       14472
Self-heal Daemon on 10.76.153.206           N/A       N/A        Y       23390

Task Status of Volume glustervol1
------------------------------------------------------------------------------
There are no active volume tasks

Status of volume: glustervol2
Gluster process                             TCP Port  RDMA Port  Online  Pid
------------------------------------------------------------------------------
Brick 10.76.153.206:/mnt/data2/2            49153     0          Y       23409
Brick 10.76.153.213:/mnt/data2/2            49153     0          Y       14490
Self-heal Daemon on localhost               N/A       N/A        Y       14472
Self-heal Daemon on 10.76.153.206           N/A       N/A        Y       23390

Task Status of Volume glustervol2
------------------------------------------------------------------------------
There are no active volume tasks

Status of volume: glustervol3
Gluster process                             TCP Port  RDMA Port  Online  Pid
------------------------------------------------------------------------------
Brick 10.76.153.206:/mnt/data3/3            49154     0          Y       23418
Brick 10.76.153.213:/mnt/data3/3            49154     0          Y       14499
Self-heal Daemon on localhost               N/A       N/A        Y       14472
Self-heal Daemon on 10.76.153.206           N/A       N/A        Y       23390

Task Status of Volume glustervol3
------------------------------------------------------------------------------
There are no active volume tasks
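
Since the problem only shows while the cluster is mixed (two nodes back on 3.12.15, one still on 4.1.4), a sketch of how the op-version view could be compared across nodes, using the standard cluster.op-version / cluster.max-op-version keys (not output I have attached here):

# run on each of gfs-1, gfs-2 and gfs-3; the reported values should agree
gluster --version
gluster volume get all cluster.op-version
gluster volume get all cluster.max-op-version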
Then, when I rolled back the third replica, I got the full status:
===================================================================
[root@gfs-3new ansible1]# gluster volume status
Status of volume: glustervol1
Gluster process                             TCP Port  RDMA Port  Online  Pid
------------------------------------------------------------------------------
Brick 10.76.153.206:/mnt/data1/1            49152     0          Y       23400
Brick 10.76.153.213:/mnt/data1/1            49152     0          Y       14481
Brick 10.76.153.207:/mnt/data1/1            49152     0          Y       13184
Self-heal Daemon on localhost               N/A       N/A        Y       13174
Self-heal Daemon on 10.76.153.213           N/A       N/A        Y       14472
Self-heal Daemon on 10.76.153.206           N/A       N/A        Y       23390

Task Status of Volume glustervol1
------------------------------------------------------------------------------
There are no active volume tasks

Status of volume: glustervol2
Gluster process                             TCP Port  RDMA Port  Online  Pid
------------------------------------------------------------------------------
Brick 10.76.153.206:/mnt/data2/2            49153     0          Y       23409
Brick 10.76.153.213:/mnt/data2/2            49153     0          Y       14490
Brick 10.76.153.207:/mnt/data2/2            49153     0          Y       13193
Self-heal Daemon on localhost               N/A       N/A        Y       13174
Self-heal Daemon on 10.76.153.206           N/A       N/A        Y       23390
Self-heal Daemon on 10.76.153.213           N/A       N/A        Y       14472

Task Status of Volume glustervol2
------------------------------------------------------------------------------
There are no active volume tasks

Status of volume: glustervol3
Gluster process                             TCP Port  RDMA Port  Online  Pid
------------------------------------------------------------------------------
Brick 10.76.153.206:/mnt/data3/3            49154     0          Y       23418
Brick 10.76.153.213:/mnt/data3/3            49154     0          Y       14499
Brick 10.76.153.207:/mnt/data3/3            49154     0          Y       13202
Self-heal Daemon on localhost               N/A       N/A        Y       13174
Self-heal Daemon on 10.76.153.206           N/A       N/A        Y       23390
Self-heal Daemon on 10.76.153.213           N/A       N/A        Y       14472

Task Status of Volume glustervol3
------------------------------------------------------------------------------
There are no active volume tasks
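
Once all three replicas are back on 3.12.15 and all bricks show online, this is the kind of loop I would use to re-trigger and then verify heal on each volume (a sketch, assuming the three volume names above):

# trigger an index heal, then list any entries still pending heal
for vol in glustervol1 glustervol2 glustervol3; do
    gluster volume heal $vol
    gluster volume heal $vol info
done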