<div dir="ltr"><div>Hi,</div><div><br></div><div>Apologies for sending the same post twice. Thanks for your answer, and the interesting links you sent as well.<br></div><div><br></div><div>Output
I still wonder why the scheduled daily scrub is not happening...

Thanks again!
MJ

On Tue, 13 Dec 2022 at 14:48, Strahil Nikolov <hunter86_bg@yahoo.com> wrote:

By the way, what is the output of 'ps aux | grep bitd'?

Best Regards,
Strahil Nikolov

On Tue, Dec 13, 2022 at 15:45, Strahil Nikolov <hunter86_bg@yahoo.com> wrote:

Based on https://bugzilla.redhat.com/show_bug.cgi?id=1299737#c12 , the counter's previous name was 'number of unsigned files'.

Signing seems to be a very complex process (see http://goo.gl/Mjy4mD ) and, as far as I understand, those 'skipped' files were simply too new to be signed.
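If you want to verify that for a given file, you can inspect the bit-rot xattrs directly on a brick. Something like this, run as root on one of the brick hosts ('some/file' is just an example path under your brick):

  getfattr -d -m . -e hex /data/brick1/gv0/some/file

A file that has already been signed will show a 'trusted.bit-rot.signature' attribute; a file that is still too new won't have it yet.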
If you do have RAID5/6, I think that bitrot detection is unnecessary.

Best Regards,
Strahil Nikolov

On Tue, Dec 13, 2022 at 12:33, cYuSeDfZfb cYuSeDfZfb <cyusedfzfb@gmail.com> wrote:

Hi,

I am running a PoC with Gluster and, as one does, I am trying to break and heal it.

One of the things I am testing is scrubbing / healing.

My cluster is created on Ubuntu 20.04 with stock glusterfs 7.2, and my test volume info:

Volume Name: gv0
Type: Replicate
Volume ID: 7c09100b-8095-4062-971f-2cea9fa8c2bc
Status: Started
Snapshot Count: 0
Number of Bricks: 1 x 3 = 3
Transport-type: tcp
Bricks:
Brick1: gluster1:/data/brick1/gv0
Brick2: gluster2:/data/brick1/gv0
Brick3: gluster3:/data/brick1/gv0
Options Reconfigured:
features.scrub-freq: daily
auth.allow: x.y.z.q
transport.address-family: inet
storage.fips-mode-rchecksum: on
nfs.disable: on
performance.client-io-threads: off
features.bitrot: on
features.scrub: Active
features.scrub-throttle: aggressive
storage.build-pgfid: on
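For completeness, bitrot was enabled and tuned with commands along these lines (matching the options above):

  gluster volume bitrot gv0 enable
  gluster volume bitrot gv0 scrub-throttle aggressive
  gluster volume bitrot gv0 scrub-frequency daily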
I have two issues:

1) Scrubs are configured to run daily (see above) but they don't automatically happen. Do I need to configure something to actually get daily automatic scrubs?

2) A "scrub status" reports *many* skipped files, and only very few files that have actually been scrubbed. Why are so many files skipped?

See:

gluster volume bitrot gv0 scrub status

Volume name : gv0
State of scrub: Active (Idle)
Scrub impact: aggressive
Scrub frequency: daily
Bitrot error log location: /var/log/glusterfs/bitd.log
Scrubber error log location: /var/log/glusterfs/scrub.log

=========================================================
Node: localhost
Number of Scrubbed files: 8112
Number of Skipped files: 51209
Last completed scrub time: 2022-12-10 04:36:55
Duration of last scrub (D:M:H:M:S): 0:16:58:53
Error count: 0

=========================================================
Node: gluster3
Number of Scrubbed files: 42
Number of Skipped files: 59282
Last completed scrub time: 2022-12-10 02:24:42
Duration of last scrub (D:M:H:M:S): 0:16:58:15
Error count: 0

=========================================================
Node: gluster2
Number of Scrubbed files: 42
Number of Skipped files: 59282
Last completed scrub time: 2022-12-10 02:24:29
Duration of last scrub (D:M:H:M:S): 0:16:58:2
Error count: 0

=========================================================

Thanks!
MJ