[Bugs] [Bug 1311041] New: Tiering status and rebalance status stops getting updated
bugzilla at redhat.com
Tue Feb 23 09:04:58 UTC 2016
https://bugzilla.redhat.com/show_bug.cgi?id=1311041
Bug ID: 1311041
Summary: Tiering status and rebalance status stops getting
updated
Product: GlusterFS
Version: 3.7.9
Component: glusterd
Keywords: Triaged, ZStream
Severity: medium
Priority: medium
Assignee: bugs at gluster.org
Reporter: rkavunga at redhat.com
CC: amukherj at redhat.com, bugs at gluster.org,
hgowtham at redhat.com, nchilaka at redhat.com,
storage-qa-internal at redhat.com
Depends On: 1302968, 1303028
Blocks: 1303269, 1310972
+++ This bug was initially created as a clone of Bug #1303028 +++
+++ This bug was initially created as a clone of Bug #1302968 +++
On my 16-node setup, after about a day, the rebalance status for 3 nodes showed
the elapsed time reset to zero. After another 4-5 hours, the elapsed time
stopped ticking on all nodes except one, which kept ticking continuously.
As a result, the promote/demote and scanned-files statistics stopped getting updated.
[root at dhcp37-202 ~]# gluster v rebal nagvol status
Node            Rebalanced-files   size     scanned   failures   skipped   status        run time in secs
---------       ----------------   ------   -------   --------   -------   -----------   ----------------
localhost                      2   0Bytes     35287          0         0   in progress           29986.00
10.70.37.195                   0   0Bytes     35281          0         0   in progress           29986.00
10.70.35.155                   0   0Bytes     35003          0         0   in progress           29986.00
10.70.35.222                   0   0Bytes     35002          0         0   in progress           29986.00
10.70.35.108                   0   0Bytes         0          0         0   in progress           29985.00
10.70.35.44                    0   0Bytes         0          0         0   in progress           29986.00
10.70.35.89                    0   0Bytes         0          0         0   in progress          146477.00
10.70.35.231                   0   0Bytes         0          0         0   in progress           29986.00
10.70.35.176                   0   0Bytes     35487          0         0   in progress           29986.00
10.70.35.232                   0   0Bytes         0          0         0   in progress               0.00
10.70.35.173                   0   0Bytes         0          0         0   in progress               0.00
10.70.35.163                   0   0Bytes     35314          0         0   in progress           29986.00
10.70.37.101                   0   0Bytes         0          0         0   in progress               0.00
10.70.37.69                    0   0Bytes     35385          0         0   in progress           29986.00
10.70.37.60                    0   0Bytes     35255          0         0   in progress           29986.00
10.70.37.120                   0   0Bytes     35250          0         0   in progress           29986.00
volume rebalance: nagvol: success
[root at dhcp37-202 ~]#
[root at dhcp37-202 ~]#
[root at dhcp37-202 ~]# gluster v rebal nagvol status
Node            Rebalanced-files   size     scanned   failures   skipped   status        run time in secs
---------       ----------------   ------   -------   --------   -------   -----------   ----------------
localhost                      2   0Bytes     35287          0         0   in progress           29986.00
10.70.37.195                   0   0Bytes     35281          0         0   in progress           29986.00
10.70.35.155                   0   0Bytes     35003          0         0   in progress           29986.00
10.70.35.222                   0   0Bytes     35002          0         0   in progress           29986.00
10.70.35.108                   0   0Bytes         0          0         0   in progress           29985.00
10.70.35.44                    0   0Bytes         0          0         0   in progress           29986.00
10.70.35.89                    0   0Bytes         0          0         0   in progress          146488.00
10.70.35.231                   0   0Bytes         0          0         0   in progress           29986.00
10.70.35.176                   0   0Bytes     35487          0         0   in progress           29986.00
10.70.35.232                   0   0Bytes         0          0         0   in progress               0.00
10.70.35.173                   0   0Bytes         0          0         0   in progress               0.00
10.70.35.163                   0   0Bytes     35314          0         0   in progress           29986.00
10.70.37.101                   0   0Bytes         0          0         0   in progress               0.00
10.70.37.69                    0   0Bytes     35385          0         0   in progress           29986.00
10.70.37.60                    0   0Bytes     35255          0         0   in progress           29986.00
10.70.37.120                   0   0Bytes     35250          0         0   in progress           29986.00
Also, the tier status shows as below:
[root at dhcp37-202 ~]# gluster v tier nagvol status
Node Promoted files Demoted files Status
--------- --------- --------- ---------
localhost 0 0 in progress
10.70.37.195 0 0 in progress
10.70.35.155 0 0 in progress
10.70.35.222 0 0 in progress
10.70.35.108 0 0 in progress
10.70.35.44 0 0 in progress
10.70.35.89 0 0 in progress
10.70.35.231 0 0 in progress
10.70.35.176 0 0 in progress
10.70.35.232 0 0 in progress
10.70.35.173 0 0 in progress
10.70.35.163 0 0 in progress
10.70.37.101 0 0 in progress
10.70.37.69 0 0 in progress
10.70.37.60 0 0 in progress
10.70.37.120 0 0 in progress
Tiering Migration Functionality: nagvol: success
-> I was running some I/O, but nothing very heavy
-> An NFS problem was also reported: music files stopped playing with
"permission denied"
-> I saw file promotes happening
-> Also, glusterd was restarted on only one of the nodes in the last 2
days
glusterfs-client-xlators-3.7.5-17.el7rhgs.x86_64
glusterfs-server-3.7.5-17.el7rhgs.x86_64
gluster-nagios-addons-0.2.5-1.el7rhgs.x86_64
vdsm-gluster-4.16.30-1.3.el7rhgs.noarch
glusterfs-3.7.5-17.el7rhgs.x86_64
glusterfs-api-3.7.5-17.el7rhgs.x86_64
glusterfs-cli-3.7.5-17.el7rhgs.x86_64
glusterfs-geo-replication-3.7.5-17.el7rhgs.x86_64
glusterfs-debuginfo-3.7.5-17.el7rhgs.x86_64
gluster-nagios-common-0.2.3-1.el7rhgs.noarch
python-gluster-3.7.5-16.el7rhgs.noarch
glusterfs-libs-3.7.5-17.el7rhgs.x86_64
glusterfs-fuse-3.7.5-17.el7rhgs.x86_64
glusterfs-rdma-3.7.5-17.el7rhgs.x86_64
sosreports will be attached
--- Additional comment from Red Hat Bugzilla Rules Engine on 2016-01-29 02:45:42 EST ---
This bug is automatically being proposed for the current z-stream release of
Red Hat Gluster Storage 3 by setting the release flag 'rhgs-3.1.z' to '?'.
If this bug should be proposed for a different release, please manually change
the proposed release flag.
--- Additional comment from Vijay Bellur on 2016-01-29 05:57:58 EST ---
REVIEW: http://review.gluster.org/13319 (glusterd/rebalance: initialize defrag
variable after glusterd restart) posted (#1) for review on master by mohammed
rafi kc (rkavunga at redhat.com)
--- Additional comment from Vijay Bellur on 2016-01-29 12:35:09 EST ---
REVIEW: http://review.gluster.org/13319 (glusterd/rebalance: initialize defrag
variable after glusterd restart) posted (#2) for review on master by mohammed
rafi kc (rkavunga at redhat.com)
--- Additional comment from Vijay Bellur on 2016-01-30 03:35:01 EST ---
REVIEW: http://review.gluster.org/13319 (glusterd/rebalance: initialize defrag
variable after glusterd restart) posted (#3) for review on master by mohammed
rafi kc (rkavunga at redhat.com)
--- Additional comment from Vijay Bellur on 2016-01-31 12:51:07 EST ---
REVIEW: http://review.gluster.org/13319 (glusterd/rebalance: initialize defrag
variable after glusterd restart) posted (#4) for review on master by mohammed
rafi kc (rkavunga at redhat.com)
--- Additional comment from Vijay Bellur on 2016-02-22 06:26:39 EST ---
REVIEW: http://review.gluster.org/13319 (glusterd/rebalance: initialize defrag
variable after glusterd restart) posted (#5) for review on master by mohammed
rafi kc (rkavunga at redhat.com)
--- Additional comment from Vijay Bellur on 2016-02-23 00:42:08 EST ---
COMMIT: http://review.gluster.org/13319 committed in master by Atin Mukherjee
(amukherj at redhat.com)
------
commit a67331f3f79e827ffa4f7a547f6898e12407bbf9
Author: Mohammed Rafi KC <rkavunga at redhat.com>
Date: Fri Jan 29 16:24:02 2016 +0530
glusterd/rebalance: initialize defrag variable after glusterd restart
When the rebalance process restarts after glusterd has restarted,
glusterd does not connect to the rebalance process because the
defrag variable in volinfo is null.
Initializing the variable lets the rpc connection be established
Change-Id: Id820cad6a3634a9fc976427fbe1c45844d3d4b9b
BUG: 1303028
Signed-off-by: Mohammed Rafi KC <rkavunga at redhat.com>
Reviewed-on: http://review.gluster.org/13319
Smoke: Gluster Build System <jenkins at build.gluster.com>
NetBSD-regression: NetBSD Build System <jenkins at build.gluster.org>
Reviewed-by: Dan Lambright <dlambrig at redhat.com>
CentOS-regression: Gluster Build System <jenkins at build.gluster.com>
Referenced Bugs:
https://bugzilla.redhat.com/show_bug.cgi?id=1302968
[Bug 1302968] Tiering status and rebalance status stops getting updated
https://bugzilla.redhat.com/show_bug.cgi?id=1303028
[Bug 1303028] Tiering status and rebalance status stops getting updated
https://bugzilla.redhat.com/show_bug.cgi?id=1303269
[Bug 1303269] After GlusterD restart, Remove-brick commit happening even
though data migration not completed.
https://bugzilla.redhat.com/show_bug.cgi?id=1310972
[Bug 1310972] After GlusterD restart, Remove-brick commit happening even
though data migration not completed.