[Bugs] [Bug 1302310] New: log improvements:- enabling quota on a volume reports numerous entries of "contribution node list is empty which is an error" in brick logs

bugzilla at redhat.com bugzilla at redhat.com
Wed Jan 27 13:01:54 UTC 2016


https://bugzilla.redhat.com/show_bug.cgi?id=1302310

            Bug ID: 1302310
           Summary: log improvements:- enabling quota on a volume reports
                    numerous entries of "contribution node list is empty
                    which is an error" in brick logs
           Product: GlusterFS
           Version: 3.6.8
         Component: quota
          Keywords: Triaged
          Severity: medium
          Priority: high
          Assignee: bugs at gluster.org
          Reporter: mselvaga at redhat.com
                CC: bugs at gluster.org, mselvaga at redhat.com,
                    nsathyan at redhat.com, nvanlysel at morgridge.org,
                    rwheeler at redhat.com, shwetha.h.panduranga at redhat.com,
                    vmallika at redhat.com
        Depends On: 1288195



+++ This bug was initially created as a clone of Bug #1288195 +++

I'm running into this error with 3.5.5.
After enabling quota on the volume the brick log started showing these messages
over and over:
[2015-11-30 19:59:23.255384] W [marker-quota.c:1298:mq_get_parent_inode_local]
(-->/usr/lib64/glusterfs/3.5.5/xlator/features/locks.so(pl_common_inodelk+0x29f)
[0x7fdb8495fbff]
(-->/usr/lib64/glusterfs/3.5.5/xlator/performance/io-threads.so(iot_inodelk_cbk+0xb9)
[0x7fdb8473a2b9]
(-->/usr/lib64/glusterfs/3.5.5/xlator/features/marker.so(mq_inodelk_cbk+0xcb)
[0x7fdb8431c5cb]))) 0-home-marker: contribution node list is empty which is an
error


Version-Release number of selected component (if applicable):

How reproducible:
often

Steps to Reproduce:
1. Create an 8x2 distributed-replicate volume
2. Mount the volume on a client via FUSE and write data to it using dd
3. Enable quota on the volume (see the command sketch below)
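
A minimal reproduction sketch, assuming the hostnames and brick paths from
the volume info below; the mount point and dd parameters are illustrative
and not the reporter's exact commands:

gluster volume create home replica 2 \
    storage-7:/brick1/home storage-8:/brick1/home \
    storage-9:/brick1/home storage-10:/brick1/home \
    storage-1:/brick1/home storage-2:/brick1/home \
    storage-3:/brick1/home storage-4:/brick1/home \
    storage-5:/brick1/home storage-6:/brick1/home \
    storage-11:/brick1/home storage-12:/brick1/home \
    storage-13:/brick1/home storage-14:/brick1/home \
    storage-15:/brick1/home storage-16:/brick1/home
gluster volume start home

# on the client: mount via FUSE and write some data
mount -t glusterfs storage-1:/home /mnt/home
dd if=/dev/zero of=/mnt/home/testfile bs=1M count=1024

# enable quota; the "contribution node list is empty" warnings then start
# appearing in the brick logs
gluster volume quota home enable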

Additional info:
[root at storage-1 ~]# gluster volume info
Volume Name: home
Type: Distributed-Replicate
Volume ID: 2694f438-08f6-48fc-a072-324d4701f112
Status: Started
Number of Bricks: 8 x 2 = 16
Transport-type: tcp
Bricks:
Brick1: storage-7:/brick1/home
Brick2: storage-8:/brick1/home
Brick3: storage-9:/brick1/home
Brick4: storage-10:/brick1/home
Brick5: storage-1:/brick1/home
Brick6: storage-2:/brick1/home
Brick7: storage-3:/brick1/home
Brick8: storage-4:/brick1/home
Brick9: storage-5:/brick1/home
Brick10: storage-6:/brick1/home
Brick11: storage-11:/brick1/home
Brick12: storage-12:/brick1/home
Brick13: storage-13:/brick1/home
Brick14: storage-14:/brick1/home
Brick15: storage-15:/brick1/home
Brick16: storage-16:/brick1/home
Options Reconfigured:
performance.cache-size: 100MB
performance.write-behind-window-size: 100MB
nfs.disable: on
features.quota: on
features.default-soft-limit: 90%
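
For reference, a sketch of how the non-default options above could have been
applied. This is an assumption about the commands used, not taken from the
report; features.quota is normally toggled through the quota sub-command
rather than volume set:

gluster volume set home performance.cache-size 100MB
gluster volume set home performance.write-behind-window-size 100MB
gluster volume set home nfs.disable on
gluster volume set home features.default-soft-limit 90%
# enabling quota is what sets features.quota to "on"
gluster volume quota home enable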


GLUSTER SERVER PACKAGES:
[root at storage-1 ~]# rpm -qa |grep gluster
glusterfs-cli-3.5.5-2.el6.x86_64
glusterfs-server-3.5.5-2.el6.x86_64
glusterfs-libs-3.5.5-2.el6.x86_64
glusterfs-fuse-3.5.5-2.el6.x86_64
glusterfs-3.5.5-2.el6.x86_64
glusterfs-api-3.5.5-2.el6.x86_64


GLUSTER CLIENT PACKAGES:
[root at client-1 ~]# rpm -qa |grep gluster
glusterfs-api-3.5.5-2.el6.x86_64
glusterfs-libs-3.5.5-2.el6.x86_64
glusterfs-fuse-3.5.5-2.el6.x86_64
glusterfs-3.5.5-2.el6.x86_64







+++ This bug was initially created as a clone of Bug #812206 +++

Description of problem:
Enabling quota on a volume reports the following message numerous times:

[2012-04-13 16:29:58.959196] W [marker-quota.c:1284:mq_get_parent_inode_local]
(-->/usr/local/lib/glusterfs/3.3.0qa34/xlator/performance/io-threads.so(iot_inodelk_cbk+
0x158) [0x7fd483df70a7]
(-->/usr/local/lib/libglusterfs.so.0(default_inodelk_cbk+0x158)
[0x7fd48c9fe83e] (-->/usr/local/lib/glusterfs/3.3.0qa34/xlator/features/marker.
so(mq_inodelk_cbk+0x1d0) [0x7fd4839cd912]))) 0-dstore-marker: contribution node
list is empty which is an error

1) 35000 such entries were listed in the brick log
2) CPU usage of the glusterfsd process at that time was more than 100%


Version-Release number of selected component (if applicable):
3.3.0qa34

How reproducible:
often

Steps to Reproduce:
1. Create a 1x3 replicate volume and start it
2. Create FUSE and NFS mounts; run "dd" in a loop on both mounts
3. Add bricks to make it a distributed-replicate volume
4. Enable quota on that volume (see the sketch below)
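
A sketch of steps 3 and 4 only, assuming the brick paths shown in the volume
info below; which bricks were added is an assumption based on that output:

# grow the 1x3 replica volume into a 2x3 distributed-replicate volume by
# adding bricks in a multiple of the replica count
gluster volume add-brick dstore \
    192.168.2.35:/export2/dstore2 \
    192.168.2.36:/export2/dstore2 \
    192.168.2.37:/export2/dstore2

# enable quota while dd is still running on the fuse and nfs mounts
gluster volume quota dstore enable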

Additional info:
------------------
[04/13/12 - 16:36:55 root at APP-SERVER1 ~]# gluster volume info

Volume Name: dstore
Type: Distributed-Replicate
Volume ID: 3ff32886-6fd9-4fb3-95f7-ae5fd7e09b24
Status: Started
Number of Bricks: 2 x 3 = 6
Transport-type: tcp
Bricks:
Brick1: 192.168.2.35:/export1/dstore1
Brick2: 192.168.2.36:/export1/dstore1
Brick3: 192.168.2.37:/export1/dstore2
Brick4: 192.168.2.35:/export2/dstore2
Brick5: 192.168.2.36:/export2/dstore2
Brick6: 192.168.2.37:/export2/dstore2
Options Reconfigured:
features.quota: on

Top command output:-
---------------------

PID USER      PR  NI  VIRT  RES  SHR S %CPU %MEM    TIME+  COMMAND              
 1286 root      20   0  772m  20m 1728 R 187.8  1.0  12:28.26 glusterfsd        
 1163 root      20   0  772m  26m 1732 R  9.0  1.3  21:13.84 glusterfsd         
 1380 root      20   0  303m  36m 1548 S  1.7  1.8   1:28.73 glusterfs         

 PID USER      PR  NI  VIRT  RES  SHR S %CPU %MEM    TIME+  COMMAND             
 1163 root      20   0  772m  26m 1732 S 143.8  1.3  22:39.23 glusterfsd        
 1380 root      20   0  303m  36m 1548 R  7.6  1.8   1:37.00 glusterfs          
 1286 root      20   0  772m  20m 1732 S  6.6  1.0  12:54.41 glusterfsd

--- Additional comment from vpshastry on 2013-02-28 08:36:35 EST ---

I could not observe these log messages. I think http://review.gluster.org/3935
has solved the issue. Can you confirm whether it's still occurring?

--- Additional comment from Kaleb KEITHLEY on 2015-10-22 11:46:38 EDT ---

Because of the large number of bugs filed against it, the "mainline" version
is ambiguous and is about to be removed as a choice.

If you believe this is still a bug, please change the status back to NEW and
choose the appropriate, applicable version for it.

--- Additional comment from Vijaikumar Mallikarjuna on 2015-12-04 01:39:16 EST
---

Hi Neil Van,

This issue has been fixed in 3.7. Do you have plans to upgrade to 3.7?

Thanks,
Vijay

--- Additional comment from Neil Van Lysel on 2015-12-04 10:08:05 EST ---

Hi Vijay,

Thanks for the quick response. I do not plan on upgrading to 3.7. Is it
possible to backport this fix into the 3.5 branch?

Thanks,
Neil

--- Additional comment from Manikandan on 2015-12-04 10:29:50 EST ---

Hi Neil,

Thanks for your quick response too :-)

Since 3.5 is an older version, we need to check for regressions that the patch
could cause when backported. We will certainly look into this soon and,
depending on the result, backport it so that the fix becomes available in one
of the next minor releases of 3.5.


--
Thanks & Regards,
Manikandan Selvaganesh.

--- Additional comment from Neil Van Lysel on 2015-12-04 10:36:25 EST ---

Cool! Thank you very much!!

Neil

--- Additional comment from Vijay Bellur on 2015-12-17 00:31:53 EST ---

REVIEW: http://review.gluster.org/12990 (quota : avoid "contribution node is
empty" error logs) posted (#1) for review on release-3.5 by Manikandan
Selvaganesh (mselvaga at redhat.com)

--- Additional comment from Manikandan on 2015-12-17 00:38:32 EST ---

Hi Neil,

I have backported a patch to 3.5 that fixes the issue you have reported. Since
the entire marker and quota code has since been almost completely refactored,
it is very hard for us to backport all of the fixes, and the issue could not
be completely fixed with the older approach. It would be better if you could
upgrade to the latest version. You can most likely expect this fix in the next
minor release of 3.5.


--
Thanks & Regards,
Manikandan Selvaganesh.

--- Additional comment from Neil Van Lysel on 2015-12-17 10:08:00 EST ---

Thanks!

Neil


Referenced Bugs:

https://bugzilla.redhat.com/show_bug.cgi?id=1288195
[Bug 1288195] log improvements:- enabling quota on a volume reports
numerous entries of "contribution node list is empty which is an error" in
brick logs

