[Gluster-users] Fwd: really large number of skipped files after a scrub

cYuSeDfZfb cYuSeDfZfb cyusedfzfb at gmail.com
Tue Dec 13 14:24:32 UTC 2022


Hi,

Apologies for sending the same post twice. Thanks for your answer, and the
interesting links you sent as well.

Output of 'ps aux | grep bitd':
root       11365  0.0  0.6 898788 26628 ?        Ssl  Dec09   2:34
/usr/sbin/glusterfs -s localhost --volfile-id gluster/bitd -p
/var/run/gluster/bitd/bitd.pid -l /var/log/glusterfs/bitd.log -S
/var/run/gluster/15a9f68ce9a0ac37.socket --global-timer-wheel
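
The scrubber itself runs as a separate glusterfs daemon, by the way, so I
assume it can be checked the same way:

ps aux | grep scrub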

I just started another manual scrub, and the number of scrubbed files is now
higher.
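
For the record, a manual scrub can be triggered on demand with something
like

gluster volume bitrot gv0 scrub ondemand

(gv0 being my test volume).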

I still wonder why the scheduled daily scrub is not happening...
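
In case it helps: the schedule can be re-applied and checked with something
like

gluster volume bitrot gv0 scrub-frequency daily
gluster volume get gv0 features.scrub-freq

and the logs listed in the scrub status output (/var/log/glusterfs/bitd.log
and /var/log/glusterfs/scrub.log) should show whether a scheduled run is ever
attempted.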

Thanks again!
MJ


On Tue, 13 Dec 2022 at 14:48, Strahil Nikolov <hunter86_bg at yahoo.com> wrote:

> By the way, what is the output of 'ps aux | grep bitd' ?
>
> Best Regards,
> Strahil Nikolov
>
> On Tue, Dec 13, 2022 at 15:45, Strahil Nikolov
> <hunter86_bg at yahoo.com> wrote:
> Based on https://bugzilla.redhat.com/show_bug.cgi?id=1299737#c12 , the
> previous name was 'number of unsigned files'.
>
> Signing seems to be a very complex process (see http://goo.gl/Mjy4mD ) and,
> as far as I understand, those 'skipped' files were too new to be signed.
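>
> To check whether a particular file has already been signed, one can look at
> the bit-rot xattrs directly on a brick, something like:
>
> getfattr -d -m . -e hex /data/brick1/gv0/path/to/some/file
>
> A signed file should show a trusted.bit-rot.signature attribute (the path
> above is just an example based on your brick layout).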
>
> If you do have RAID5/6, I think that bitrot detection is unnecessary.
>
> Best Regards,
> Strahil Nikolov
>
> On Tue, Dec 13, 2022 at 12:33, cYuSeDfZfb cYuSeDfZfb
> <cyusedfzfb at gmail.com> wrote:
> Hi,
>
> I am running a PoC with gluster, and, as one does, I am trying to break
> and heal it.
>
> One of the things I am testing is scrubbing / healing.
>
> My cluster is created on ubuntu 20.04 with stock glusterfs 7.2, and my
> test volume info:
>
> Volume Name: gv0
> Type: Replicate
> Volume ID: 7c09100b-8095-4062-971f-2cea9fa8c2bc
> Status: Started
> Snapshot Count: 0
> Number of Bricks: 1 x 3 = 3
> Transport-type: tcp
> Bricks:
> Brick1: gluster1:/data/brick1/gv0
> Brick2:  gluster2:/data/brick1/gv0
> Brick3:  gluster3:/data/brick1/gv0
> Options Reconfigured:
> features.scrub-freq: daily
> auth.allow: x.y.z.q
> transport.address-family: inet
> storage.fips-mode-rchecksum: on
> nfs.disable: on
> performance.client-io-threads: off
> features.bitrot: on
> features.scrub: Active
> features.scrub-throttle: aggressive
> storage.build-pgfid: on
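>
> For completeness: bitrot and the scrub settings above were enabled with
> commands along the lines of
>
> gluster volume bitrot gv0 enable
> gluster volume bitrot gv0 scrub-throttle aggressive
> gluster volume bitrot gv0 scrub-frequency daily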
>
> I have two issues:
>
> 1) scrubs are configured to run daily (see above) but they don't
> automatically happen. Do I need to configure something to actually get
> daily automatic scrubs?
>
> 2) A "scrub status" reports *many* skipped files, and only very few files
> that have actually been scrubbed. Why are so many files skipped?
>
> See:
>
> gluster volume bitrot gv0 scrub status
>
> Volume name : gv0
>
> State of scrub: Active (Idle)
>
> Scrub impact: aggressive
>
> Scrub frequency: daily
>
> Bitrot error log location: /var/log/glusterfs/bitd.log
>
> Scrubber error log location: /var/log/glusterfs/scrub.log
>
>
> =========================================================
>
> Node: localhost
>
> Number of Scrubbed files: 8112
>
> Number of Skipped files: 51209
>
> Last completed scrub time: 2022-12-10 04:36:55
>
> Duration of last scrub (D:M:H:M:S): 0:16:58:53
>
> Error count: 0
>
>
> =========================================================
>
> Node:  gluster3
>
> Number of Scrubbed files: 42
>
> Number of Skipped files: 59282
>
> Last completed scrub time: 2022-12-10 02:24:42
>
> Duration of last scrub (D:M:H:M:S): 0:16:58:15
>
> Error count: 0
>
>
> =========================================================
>
> Node:  gluster2
>
> Number of Scrubbed files: 42
>
> Number of Skipped files: 59282
>
> Last completed scrub time: 2022-12-10 02:24:29
>
> Duration of last scrub (D:M:H:M:S): 0:16:58:2
>
> Error count: 0
>
> =========================================================
>
> Thanks!
> MJ