By the way, what is the output of 'ps aux | grep bitd'?

Best Regards,
Strahil Nikolov


On Tue, Dec 13, 2022 at 15:45, Strahil Nikolov <hunter86_bg@yahoo.com> wrote:

Based on https://bugzilla.redhat.com/show_bug.cgi?id=1299737#c12 , the previous name of this counter was 'number of unsigned files'.

Signing seems to be a very complex process (see http://goo.gl/Mjy4mD ), and as far as I understand, those 'skipped' files were simply too new to have been signed yet.

If you do have RAID5/6, I think that bitrot detection is unnecessary.

Best Regards,
Strahil Nikolov
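A minimal sketch of those checks, assuming stock Gluster 7.x, where the bitrot daemon shows up in the process list as 'bitd' and the scrubber as 'scrub', and where signed files carry 'trusted.bit-rot.*' extended attributes on the brick (not on the client mount). <FILE> is a placeholder for any file under the brick path:

    # Check that the bitrot daemon and the scrubber are running on this node.
    # The [b]/[s] character classes keep grep from matching its own process.
    ps aux | grep -E '[b]itd|[s]crub'

    # As root on the brick host, dump a file's bit-rot xattrs. A file the
    # signer has processed shows trusted.bit-rot.signature; a file that is
    # still too new to be signed shows no such attribute.
    getfattr -d -m 'trusted.bit-rot' -e hex /data/brick1/gv0/<FILE>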
On Tue, Dec 13, 2022 at 12:33, cYuSeDfZfb cYuSeDfZfb <cyusedfzfb@gmail.com> wrote:

Hi,

I am running a PoC with a Gluster cluster and, as one does, I am trying to break and heal it.

One of the things I am testing is scrubbing / healing.

My cluster was created on Ubuntu 20.04 with stock GlusterFS 7.2, and my test volume info:

Volume Name: gv0
Type: Replicate
Volume ID: 7c09100b-8095-4062-971f-2cea9fa8c2bc
Status: Started
Snapshot Count: 0
Number of Bricks: 1 x 3 = 3
Transport-type: tcp
Bricks:
Brick1: gluster1:/data/brick1/gv0
Brick2: gluster2:/data/brick1/gv0
Brick3: gluster3:/data/brick1/gv0
Options Reconfigured:
features.scrub-freq: daily
auth.allow: x.y.z.q
transport.address-family: inet
storage.fips-mode-rchecksum: on
nfs.disable: on
performance.client-io-threads: off
features.bitrot: on
features.scrub: Active
features.scrub-throttle: aggressive
storage.build-pgfid: on

I have two issues:

1) Scrubs are configured to run daily (see above), but they don't happen automatically. Do I need to configure something else to actually get daily automatic scrubs?

2) A "scrub status" reports *many* skipped files and only very few files that have actually been scrubbed. Why are so many files skipped?

See:
gluster volume bitrot gv0 scrub status

Volume name : gv0
State of scrub: Active (Idle)
Scrub impact: aggressive
Scrub frequency: daily
Bitrot error log location: /var/log/glusterfs/bitd.log
Scrubber error log location: /var/log/glusterfs/scrub.log

=========================================================
Node: localhost
Number of Scrubbed files: 8112
Number of Skipped files: 51209
Last completed scrub time: 2022-12-10 04:36:55
Duration of last scrub (D:M:H:M:S): 0:16:58:53
Error count: 0

=========================================================
Node: gluster3
Number of Scrubbed files: 42
Number of Skipped files: 59282
Last completed scrub time: 2022-12-10 02:24:42
Duration of last scrub (D:M:H:M:S): 0:16:58:15
Error count: 0

=========================================================
Node: gluster2
Number of Scrubbed files: 42
Number of Skipped files: 59282
Last completed scrub time: 2022-12-10 02:24:29
Duration of last scrub (D:M:H:M:S): 0:16:58:2
Error count: 0

=========================================================

Thanks!
MJ