[Bugs] [Bug 1288195] New: log improvements:- enabling quota on a volume reports numerous entries of "contribution node list is empty which is an error" in brick logs
bugzilla at redhat.com
Thu Dec 3 19:18:07 UTC 2015
https://bugzilla.redhat.com/show_bug.cgi?id=1288195
Bug ID: 1288195
Summary: log improvements:- enabling quota on a volume reports
numerous entries of "contribution node list is empty
which is an error" in brick logs
Product: GlusterFS
Version: 3.5.5
Component: quota
Severity: medium
Assignee: bugs at gluster.org
Reporter: nvanlysel at morgridge.org
CC: bugs at gluster.org, gluster-bugs at redhat.com,
nsathyan at redhat.com, rwheeler at redhat.com,
shwetha.h.panduranga at redhat.com
I'm running into this error with 3.5.5.
After enabling quota on the volume, the brick log started showing these messages
over and over:
[2015-11-30 19:59:23.255384] W [marker-quota.c:1298:mq_get_parent_inode_local]
(-->/usr/lib64/glusterfs/3.5.5/xlator/features/locks.so(pl_common_inodelk+0x29f)
[0x7fdb8495fbff]
(-->/usr/lib64/glusterfs/3.5.5/xlator/performance/io-threads.so(iot_inodelk_cbk+0xb9)
[0x7fdb8473a2b9]
(-->/usr/lib64/glusterfs/3.5.5/xlator/features/marker.so(mq_inodelk_cbk+0xcb)
[0x7fdb8431c5cb]))) 0-home-marker: contribution node list is empty which is an
error
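For reference, a hedged way to watch for this warning on one of the brick hosts; the
brick log path shown is an assumption based on the default log naming for the
/brick1/home brick:
[root@storage-1 ~]# gluster volume quota home list
[root@storage-1 ~]# tail -f /var/log/glusterfs/bricks/brick1-home.log | grep "contribution node list is empty"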
Version-Release number of selected component (if applicable):
How reproducible:
often
Steps to Reproduce:
1. Create an 8x2 distributed-replicate volume
2. Mount the volume on a client via FUSE and write data to it using dd
3. Enable quota on the volume
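A rough command sketch of these steps, assuming the volume layout shown under
"Additional info" below and a hypothetical client mount point of /mnt/home (the "..."
stands for the remaining brick pairs listed below):
[root@storage-1 ~]# gluster volume create home replica 2 transport tcp \
    storage-7:/brick1/home storage-8:/brick1/home ... storage-16:/brick1/home
[root@storage-1 ~]# gluster volume start home
[root@client-1 ~]# mount -t glusterfs storage-1:/home /mnt/home
[root@client-1 ~]# dd if=/dev/zero of=/mnt/home/ddfile bs=1M count=1024
[root@storage-1 ~]# gluster volume quota home enable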
Additional info:
[root@storage-1 ~]# gluster volume info
Volume Name: home
Type: Distributed-Replicate
Volume ID: 2694f438-08f6-48fc-a072-324d4701f112
Status: Started
Number of Bricks: 8 x 2 = 16
Transport-type: tcp
Bricks:
Brick1: storage-7:/brick1/home
Brick2: storage-8:/brick1/home
Brick3: storage-9:/brick1/home
Brick4: storage-10:/brick1/home
Brick5: storage-1:/brick1/home
Brick6: storage-2:/brick1/home
Brick7: storage-3:/brick1/home
Brick8: storage-4:/brick1/home
Brick9: storage-5:/brick1/home
Brick10: storage-6:/brick1/home
Brick11: storage-11:/brick1/home
Brick12: storage-12:/brick1/home
Brick13: storage-13:/brick1/home
Brick14: storage-14:/brick1/home
Brick15: storage-15:/brick1/home
Brick16: storage-16:/brick1/home
Options Reconfigured:
performance.cache-size: 100MB
performance.write-behind-window-size: 100MB
nfs.disable: on
features.quota: on
features.default-soft-limit: 90%
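For reference, options like these are normally applied with the gluster CLI, roughly as
follows; the default-soft-limit form shown via the quota subcommand is an assumption
for this release:
[root@storage-1 ~]# gluster volume set home performance.cache-size 100MB
[root@storage-1 ~]# gluster volume set home performance.write-behind-window-size 100MB
[root@storage-1 ~]# gluster volume set home nfs.disable on
[root@storage-1 ~]# gluster volume quota home enable
[root@storage-1 ~]# gluster volume quota home default-soft-limit 90%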
GLUSTER SERVER PACKAGES:
[root@storage-1 ~]# rpm -qa |grep gluster
glusterfs-cli-3.5.5-2.el6.x86_64
glusterfs-server-3.5.5-2.el6.x86_64
glusterfs-libs-3.5.5-2.el6.x86_64
glusterfs-fuse-3.5.5-2.el6.x86_64
glusterfs-3.5.5-2.el6.x86_64
glusterfs-api-3.5.5-2.el6.x86_64
GLUSTER CLIENT PACKAGES:
[root@client-1 ~]# rpm -qa |grep gluster
glusterfs-api-3.5.5-2.el6.x86_64
glusterfs-libs-3.5.5-2.el6.x86_64
glusterfs-fuse-3.5.5-2.el6.x86_64
glusterfs-3.5.5-2.el6.x86_64
+++ This bug was initially created as a clone of Bug #812206 +++
Description of problem:
Enabling quota on a volume reports the following message numerous times:
[2012-04-13 16:29:58.959196] W [marker-quota.c:1284:mq_get_parent_inode_local]
(-->/usr/local/lib/glusterfs/3.3.0qa34/xlator/performance/io-threads.so(iot_inodelk_cbk+
0x158) [0x7fd483df70a7]
(-->/usr/local/lib/libglusterfs.so.0(default_inodelk_cbk+0x158)
[0x7fd48c9fe83e] (-->/usr/local/lib/glusterfs/3.3.0qa34/xlator/features/marker.
so(mq_inodelk_cbk+0x1d0) [0x7fd4839cd912]))) 0-dstore-marker: contribution node
list is empty which is an error
1) 35000 such entries were listed in the brick log
2) CPU usage of the glusterfsd process at that point was more than 100%
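A hedged way to reproduce these two observations on a brick host; the brick log path
is a placeholder based on the default naming for the /export1/dstore1 brick:
[root@APP-SERVER1 ~]# grep -c "contribution node list is empty" /var/log/glusterfs/bricks/export1-dstore1.log
[root@APP-SERVER1 ~]# top -b -n 1 -p $(pgrep -d, glusterfsd)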
Version-Release number of selected component (if applicable):
3.3.0qa34
How reproducible:
often
Steps to Reproduce:
1. Create a 1x3 replicate volume and start the volume
2. Create FUSE and NFS mounts; run "dd" in a loop on both mounts
3. Add bricks to make it a distributed-replicate volume
4. Enable quota on that volume
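A rough command sketch of these steps, using the brick paths from the volume info
below; the client hostname and mount points are hypothetical:
[root@APP-SERVER1 ~]# gluster volume create dstore replica 3 \
    192.168.2.35:/export1/dstore1 192.168.2.36:/export1/dstore1 192.168.2.37:/export1/dstore2
[root@APP-SERVER1 ~]# gluster volume start dstore
[root@client ~]# mount -t glusterfs 192.168.2.35:/dstore /mnt/fuse
[root@client ~]# mount -t nfs -o vers=3 192.168.2.35:/dstore /mnt/nfs
[root@client ~]# while true; do dd if=/dev/zero of=/mnt/fuse/f1 bs=1M count=100; done &
[root@client ~]# while true; do dd if=/dev/zero of=/mnt/nfs/f2 bs=1M count=100; done &
[root@APP-SERVER1 ~]# gluster volume add-brick dstore \
    192.168.2.35:/export2/dstore2 192.168.2.36:/export2/dstore2 192.168.2.37:/export2/dstore2
[root@APP-SERVER1 ~]# gluster volume quota dstore enable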
Additional info:
------------------
[04/13/12 - 16:36:55 root@APP-SERVER1 ~]# gluster volume info
Volume Name: dstore
Type: Distributed-Replicate
Volume ID: 3ff32886-6fd9-4fb3-95f7-ae5fd7e09b24
Status: Started
Number of Bricks: 2 x 3 = 6
Transport-type: tcp
Bricks:
Brick1: 192.168.2.35:/export1/dstore1
Brick2: 192.168.2.36:/export1/dstore1
Brick3: 192.168.2.37:/export1/dstore2
Brick4: 192.168.2.35:/export2/dstore2
Brick5: 192.168.2.36:/export2/dstore2
Brick6: 192.168.2.37:/export2/dstore2
Options Reconfigured:
features.quota: on
Top command output:-
---------------------
PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND
1286 root 20 0 772m 20m 1728 R 187.8 1.0 12:28.26 glusterfsd
1163 root 20 0 772m 26m 1732 R 9.0 1.3 21:13.84 glusterfsd
1380 root 20 0 303m 36m 1548 S 1.7 1.8 1:28.73 glusterfs
PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND
1163 root 20 0 772m 26m 1732 S 143.8 1.3 22:39.23 glusterfsd
1380 root 20 0 303m 36m 1548 R 7.6 1.8 1:37.00 glusterfs
1286 root 20 0 772m 20m 1732 S 6.6 1.0 12:54.41 glusterfsd
--- Additional comment from vpshastry on 2013-02-28 08:36:35 EST ---
I could not observe the logs. I think http://review.gluster.org/3935 has solved
the issue. Can you confirm whether it's still occurring?
--- Additional comment from Kaleb KEITHLEY on 2015-10-22 11:46:38 EDT ---
Because of the large number of bugs filed against it, the "mainline" version is
ambiguous and about to be removed as a choice.
If you believe this is still a bug, please change the status back to NEW and
choose the appropriate, applicable version for it.
--
You are receiving this mail because:
You are on the CC list for the bug.
You are the assignee for the bug.