[Gluster-users] Usage monitoring per user
kashif.alig at gmail.com
Wed May 2 08:45:41 UTC 2018
Hi Alex, John
Thanks for confirming my suspicion that there is no getting away from POSIX
tree traversal. I was aware of agedu but not robinhood.
On Wed, May 2, 2018 at 8:57 AM, JOHE (John Hearns) <JOHE at novozymes.com> wrote:
> I rather like agedu. It probably does what you want.
> But as Mohammad says you do have to traverse your filesystem.
> agedu: track down wasted disk space - chiark home page
> agedu: a Unix utility for tracking down wasted disk space.
> Suppose you're running low on disk space. You need to free some up, by
> finding something that's a waste of space and deleting it (or moving it to
> an archive medium).
> *From:* gluster-users-bounces at gluster.org <gluster-users-bounces at gluster.org> on behalf of Alex Chekholko <alex at calicolabs.com>
> *Sent:* 01 May 2018 18:45
> *To:* mohammad kashif
> *Cc:* gluster-users
> *Subject:* Re: [Gluster-users] Usage monitoring per user
> There are several programs that will basically take the outputs of your
> scans and store the results in a database. If you size the database
> appropriately, then querying that database will be much quicker than
> querying the filesystem. But of course the results will be a little bit
> out of date.
> One such project is robinhood. https://github.com/cea-hpc/robinhood/wiki
> A simpler way might be to just have daily/weekly cron jobs that output
> text reports, without maintaining a separate database.
> But there is no way to avoid doing a recursive POSIX tree traversal, since
> that is how you get your info out of your filesystem.
> On Tue, May 1, 2018 at 5:30 AM, mohammad kashif <kashif.alig at gmail.com> wrote:
> Is there any easy way to find usage per user in Gluster? We have 300TB of
> storage with almost 100 million files. Running du takes too much time. Are
> people aware of any other tool which can be used to break up storage per
> user?
> Gluster-users mailing list
> Gluster-users at gluster.org
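For reference, the recursive scan the thread keeps coming back to is straightforward to script. Below is a minimal Python sketch (not from the thread, and every name in it is an assumption) that walks a tree once and sums apparent file sizes per owning UID; the per-user totals could then be dumped to a text report from a cron job, as Alex suggests. On a 100-million-file volume it will of course still take a long time, since it is exactly the POSIX traversal being discussed.

```python
import os
import pwd
from collections import defaultdict

def usage_per_user(root):
    """Walk `root` recursively and sum apparent file sizes per owning UID.

    Minimal sketch of the POSIX tree traversal discussed in the thread;
    errors (vanished files, permission denied) are silently skipped.
    """
    totals = defaultdict(int)
    for dirpath, _dirnames, filenames in os.walk(root, onerror=lambda e: None):
        for name in filenames:
            try:
                st = os.lstat(os.path.join(dirpath, name))
            except OSError:
                continue  # file vanished or unreadable; skip it
            totals[st.st_uid] += st.st_size
    return totals

def format_report(totals):
    """Render per-UID byte totals as a username/GiB table, largest first."""
    lines = []
    for uid, nbytes in sorted(totals.items(), key=lambda kv: -kv[1]):
        try:
            name = pwd.getpwuid(uid).pw_name
        except KeyError:
            name = str(uid)  # UID with no local passwd entry
        lines.append(f"{name}\t{nbytes / 2**30:.1f} GiB")
    return "\n".join(lines)

if __name__ == "__main__":
    import sys
    print(format_report(usage_per_user(sys.argv[1])))
```

This only reports apparent size (`st_size`), not allocated blocks; switching to `st.st_blocks * 512` would count actual disk usage, which matters for sparse files.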