[Gluster-users] Quota going crazy

Vijaikumar M vmallika at redhat.com
Fri Aug 28 12:18:49 UTC 2015



On Friday 28 August 2015 05:33 PM, Jonathan MICHALON wrote:
> Thanks, good catch. I didn't find anything in quota*.log but didn't have a look in the bricks subdir…
>
> [2015-08-27 04:53:01.628979] W [marker-quota.c:1417:mq_release_parent_lock] (--> /usr/lib/x86_64-linux-gnu/libglusterfs.so.0(_gf_log_callingfn+0x186)[0x7fcfbeeb5da6] (--> /usr/lib/x86_64-linux-gnu/glusterfs/3.6.4/xlator/features/marker.so(mq_release_parent_lock+0x271)[0x7fcfb93a5691] (--> /usr/lib/x86_64-linux-gnu/glusterfs/3.6.4/xlator/features/marker.so(mq_update_inode_contribution+0x3cc)[0x7fcfb93a631c] (--> /usr/lib/x86_64-linux-gnu/libglusterfs.so.0(default_lookup_cbk+0xc0)[0x7fcfbeebb9c0] (--> /usr/lib/x86_64-linux-gnu/libglusterfs.so.0(default_lookup_cbk+0xc0)[0x7fcfbeebb9c0] ))))) 0-img-data-marker: An operation during quota updation of path (/zone/programs/ProgsRX/Schrodinger2009/mmshare-v18212/lib/Linux-x86_64/lib/python2.6/site-packages/pytz/zoneinfo/Antarctica/Davis) failed (Invalid argument)
> [2015-08-27 04:53:02.240268] E [marker-quota.c:1186:mq_get_xattr] (--> /usr/lib/x86_64-linux-gnu/libglusterfs.so.0(_gf_log_callingfn+0x186)[0x7fcfbeeb5da6] (--> /usr/lib/x86_64-linux-gnu/glusterfs/3.6.4/xlator/features/marker.so(mq_fetch_child_size_and_contri+0x516)[0x7fcfb93a6a96] (--> /usr/lib/x86_64-linux-gnu/libglusterfs.so.0(default_setxattr_cbk+0xa3)[0x7fcfbeebd363] (--> /usr/lib/x86_64-linux-gnu/glusterfs/3.6.4/xlator/features/access-control.so(posix_acl_setxattr_cbk+0xa3)[0x7fcfb9dec003] (--> /usr/lib/x86_64-linux-gnu/glusterfs/3.6.4/xlator/features/changelog.so(changelog_setxattr_cbk+0xe5)[0x7fcfb9ffc675] ))))) 0-: Assertion failed: !"uuid null"
> [2015-08-27 04:53:02.240352] E [marker-quota.c:1831:mq_fetch_child_size_and_contri] (--> /usr/lib/x86_64-linux-gnu/libglusterfs.so.0(_gf_log_callingfn+0x186)[0x7fcfbeeb5da6] (--> /usr/lib/x86_64-linux-gnu/glusterfs/3.6.4/xlator/features/marker.so(mq_fetch_child_size_and_contri+0x516)[0x7fcfb93a6a96] (--> /usr/lib/x86_64-linux-gnu/libglusterfs.so.0(default_setxattr_cbk+0xa3)[0x7fcfbeebd363] (--> /usr/lib/x86_64-linux-gnu/glusterfs/3.6.4/xlator/features/access-control.so(posix_acl_setxattr_cbk+0xa3)[0x7fcfb9dec003] (--> /usr/lib/x86_64-linux-gnu/glusterfs/3.6.4/xlator/features/changelog.so(changelog_setxattr_cbk+0xe5)[0x7fcfb9ffc675] ))))) 0-: Assertion failed: !"uuid null"
> [2015-08-27 04:53:02.240452] E [posix.c:150:posix_lookup] 0-img-data-posix: lstat on (null) failed: Invalid argument
>
> Looks like the problem is about a null gfid. Now I have no idea how a file can end up with a null gfid… :)
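
One quick way to check that theory would be to read the gfid xattr directly
on the brick backend for one of the files named in the log above (the brick
path below is only a placeholder):

    getfattr -n trusted.gfid -e hex --absolute-names /export/brick1/<path from the log>

An all-zero or missing trusted.gfid there would at least be consistent with
the "uuid null" assertion.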

I will try to re-create this problem in glusterfs-3.6.4 and 
glusterfs-3.6.5, and I will let you know the root cause soon.

Thanks,
Vijay


> Searching for nulls I found one in some xattrs:
> trusted.glusterfs.quota.00000000-0000-0000-0000-000000000001.contri=0x00000a1bafc7f200
> This looks rather strange too, maybe it's related?
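
(For reference, all of the marker/quota xattrs on a directory can be dumped
straight from a brick with something like the following; the brick path is
only an example. The .contri keys are named after the gfid of the parent
directory, and 00000000-0000-0000-0000-000000000001 is the gfid GlusterFS
uses for the volume root, so that particular key name is normal on
top-level directories:)

    getfattr -d -e hex --absolute-names -m '^trusted\.glusterfs\.quota' /export/brick1/zone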
>
> --
> Jonathan Michalon
> P.S. Sorry for the bad formatting, but I have to use OWA…
>
> ________________________________________
> From: Vijaikumar M <vmallika at redhat.com>
> Sent: Friday, 28 August 2015 11:11
> To: Jonathan MICHALON; gluster-users at gluster.org
> Subject: Re: [Gluster-users] Quota going crazy
>
> Hi Jonathan,
>
> Are there any errors related to quota in the brick logs?
>
> Thanks,
> Vijay
>
>
> On Friday 28 August 2015 12:22 PM, Jonathan MICHALON wrote:
>> Hi,
>>
>> I'm experiencing strange quota mismatches (too much/too little) with 3.6.4 on a setup that is already an upgrade from the 3.4 series.
>>
>> In an attempt to reset quota and check from scratch without breaking service, I disabled quota and reset the quota-related xattrs on every file on every brick (this is a 3×2 setup on 6 bricks of 40 TB each).
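
(For anyone following along: that reset step is typically done directly on
each brick backend, with something along these lines; the brick path is just
an example, and it simply strips every trusted.glusterfs.quota.* xattr while
the quota feature is disabled:)

    # run as root on each brick; skip the internal .glusterfs directory
    find /export/brick1 -path /export/brick1/.glusterfs -prune -o -print |
    while read -r f; do
        getfattr --absolute-names -m '^trusted\.glusterfs\.quota' "$f" 2>/dev/null |
        grep '^trusted' |
        while read -r xa; do
            setfattr -x "$xa" "$f"
        done
    done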
>> I then re-enabled the quotas, waited a bit for the quota daemons to wake up, and then launched a `find` on one of the quota-limited subdirectories. It computed the right size.
>> But on another (bigger) directory, the size was a little too small. I restarted the same `find`, and the final size was much, much greater than the real size (provided by `du`). It should be around 4.1 TB and it showed something like 5.4 TB!
>> I relaunched the same `find` again and again, but the reported size kept growing, up to around 12.6 TB. Next I ran the `find` from another client and… it grew again.
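
(For anyone hitting the same mismatch, the usual way to compare the two
views is roughly the following; the volume name is taken from the log above,
while the mount point and directory are only examples:)

    # from a client mount: force a lookup on every file so the quota accounting is refreshed
    find /mnt/img-data/zone/programs -exec stat {} \; > /dev/null

    # compare what the quota xlator reports with the on-disk usage
    gluster volume quota img-data list /zone/programs
    du -sh /mnt/img-data/zone/programs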
>>
>> I'm running out of ideas right now. If any of you have an idea about what I could do… thanks in advance.
>>
>> --
>> Jonathan Michalon
>> _______________________________________________
>> Gluster-users mailing list
>> Gluster-users at gluster.org
>> http://www.gluster.org/mailman/listinfo/gluster-users


