[Gluster-users] Gluster 3.12.14: wrong quota in Distributed Dispersed Volume

Gudrun Mareike Amedick g.amedick at uni-luebeck.de
Tue Nov 20 10:23:50 UTC 2018


Hi,

I think I know what happened. According to the logs, the crawlers received a SIGTERM (signal 15). They seem to have died before finishing, probably
because too much was running simultaneously. I have disabled and re-enabled quota and will set the quotas again, this time with more time between them.
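The disable/re-enable cycle described above looks roughly like this. A sketch only: $VOLUME and the directory/limit values are placeholders from this thread, and the commands need a live Gluster cluster to run.

```shell
# Drop the (possibly inconsistent) accounting by disabling quota,
# then re-enable it, which triggers a fresh crawl of the whole filesystem.
gluster volume quota $VOLUME disable
gluster volume quota $VOLUME enable

# Re-apply the limits one at a time, leaving the crawlers room to work
# instead of setting everything at once.
gluster volume quota $VOLUME limit-usage /$DIRECTORY 170TB
```

While quota is disabled no limits are enforced, as Hari notes in the quoted message.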

Is there a way to restart a crawler that was killed too soon? 

If I restart a server while a crawler is running, will the crawler be restarted too? We'll need to do some hardware maintenance on one of the servers soon,
and I need to know whether I have to check the crawlers before shutting it down.

Thanks for the pointers

Gudrun Amedick
Am Dienstag, den 20.11.2018, 11:38 +0530 schrieb Hari Gowtham:
> Hi,
> 
> Can you check if the quota crawl finished? Without it having finished
> the quota list will show incorrect values.
> Looking at the under accounting, it looks like the crawl is not yet
> finished ( it does take a lot of time as it has to crawl the whole
> filesystem).
> 
> If the crawl has finished and the usage is still showing wrong values
> then there should be an accounting issue.
> The easy way to fix this is to restart quota. This will not
> cause any problems. The only downside is that the limits won't be
> enforced while quota is disabled, until it is re-enabled and the
> crawl finishes.
> Or you can try using the quota fsck script
> https://review.gluster.org/#/c/glusterfs/+/19179/ to fix your
> accounting issue.
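The quota fsck alternative mentioned above would be invoked against a brick path on one of the servers. This is a sketch under assumptions: the script name (quota_fsck.py) and its options are taken from the linked review and may differ per version, so check the script's --help before running anything.

```shell
# Run the accounting checker against one brick's backend path.
# --sub-dir limits the check to one directory; --fix-issues takes the
# FUSE mount point and repairs the on-disk accounting it finds wrong.
python quota_fsck.py --sub-dir $DIRECTORY --fix-issues /mnt/$VOLUME /srv/brick1
```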
> 
> Regards,
> Hari.
> On Mon, Nov 19, 2018 at 10:05 PM Frank Ruehlemann
> <f.ruehlemann at uni-luebeck.de> wrote:
> > 
> > 
> > Hi,
> > 
> > we're running a Distributed Dispersed volume with Gluster 3.12.14 at
> > Debian 9.6 (Stretch).
> > 
> > We migrated our data (>300TB) from a pure Distributed volume into this
> > Dispersed volume with cp, followed by multiple rsyncs.
> > After the migration was successful we enabled quotas again with "gluster
> > volume quota $VOLUME enable", which finished successfully.
> > And we set our required quotas with "gluster volume quota $VOLUME
> > limit-usage $PATH $QUOTA", which finished without errors too.
> > 
> > But our "gluster volume quota $VOLUME list" shows wrong values.
> > For example:
> > A directory with ~170TB of data shows only 40.8TB Used.
> > When we sum up all quoted directories we're way under the ~310TB that
> > "df -h /$volume" shows.
> > And "df -h /$volume/$directory" shows wrong values for nearly all
> > directories.
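The suspected under-accounting can be cross-checked independently of quotad. A minimal sketch, assuming the volume is FUSE-mounted at /mnt/$VOLUME; du over >300TB is slow, but it bypasses the quota accounting entirely:

```shell
# Quota's view of one directory's usage...
gluster volume quota $VOLUME list /$DIRECTORY

# ...versus an independent walk over the mount (slow on large trees).
du -sh /mnt/$VOLUME/$DIRECTORY
```

If du reports ~170TB where quota list shows 40.8TB, that confirms an accounting problem rather than missing data.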
> > 
> > All 72 8TB-bricks and all quota daemons of the 6 servers are visible and
> > online in "gluster volume status $VOLUME".
> > 
> > 
> > In quotad.log I found multiple warnings like this:
> > > 
> > > [2018-11-16 09:21:25.738901] W [dict.c:636:dict_unref] (-->/usr/lib/x86_64-linux-gnu/glusterfs/3.12.14/xlator/features/quotad.so(+0x1d58)
> > > [0x7f6844be7d58] -->/usr/lib/x86_64-linux-gnu/glusterfs/3.12.14/xlator/features/quotad.so(+0x2b92) [0x7f6844be8b92] -->/usr/lib/x86_64-linux-
> > > gnu/libglusterfs.so.0(dict_unref+0xc0) [0x7f684b0db640] ) 0-dict: dict is NULL [Invalid argument]
> > In some brick logs I found those:
> > > 
> > > [2018-11-19 07:23:30.932327] I [MSGID: 120020] [quota.c:2198:quota_unlink_cbk] 0-$VOLUME-quota: quota context not set inode (gfid:f100f7a9-0779-
> > > 4b4c-880f-c8b3b4bdc49d) [Invalid argument]
> > and (replaced the volume name with "$VOLUME") those:
> > > 
> > > The message "W [MSGID: 120003] [quota.c:821:quota_build_ancestry_cbk] 0-$VOLUME-quota: parent is NULL [Invalid argument]" repeated 13 times
> > > between [2018-11-19 15:28:54.089404] and [2018-11-19 15:30:12.792175]
> > > [2018-11-19 15:31:34.559348] W [MSGID: 120003] [quota.c:821:quota_build_ancestry_cbk] 0-$VOLUME-quota: parent is NULL [Invalid argument]
> > I already found that setting the flag "trusted.glusterfs.quota.dirty" might help, but I'm unsure about the consequences it would trigger.
> > And I'm unsure about the necessary version flag.
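Before setting the dirty flag by hand, the quota xattrs can at least be inspected directly on a brick. This only reads state, so it is safe; the brick path below is a placeholder, and the exact xattr key names (including the version suffix the poster mentions) vary between Gluster versions:

```shell
# On a brick server: dump all quota-related xattrs for one directory, hex-encoded.
# Keys such as the size/contri entries carry a version suffix, which is why
# the "necessary version flag" matters when writing these xattrs manually.
getfattr -d -m 'trusted.glusterfs.quota' -e hex /srv/brick1/$DIRECTORY
```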
> > 
> > Has anyone an idea how to fix this?
> > 
> > Best Regards,
> > --
> > Frank Rühlemann
> >    IT-Systemtechnik
> > 
> > UNIVERSITÄT ZU LÜBECK
> >     IT-Service-Center
> > 
> >     Ratzeburger Allee 160
> >     23562 Lübeck
> >     Tel +49 451 3101 2034
> >     Fax +49 451 3101 2004
> >     ruehlemann at itsc.uni-luebeck.de
> >     www.itsc.uni-luebeck.de
> > 
> > 
> > 
> > 
> > _______________________________________________
> > Gluster-users mailing list
> > Gluster-users at gluster.org
> > https://lists.gluster.org/mailman/listinfo/gluster-users
> 
> 