75GB -> that's definitely a memory leak.
What version do you use?

If latest - open a GitHub issue.

Best Regards,
Strahil Nikolov

On Thu, Aug 11, 2022 at 10:06, Diego Zuccato <diego.zuccato@unibo.it> wrote:

Yup.

Seems the /etc/sysconfig/glusterd setting finally got applied and I now
have a process like this:
root     4107315  0.0  0.0 529244 40124 ?        Ssl  ago08   2:44 /usr/sbin/glusterd -p /var/run/glusterd.pid --log-level ERROR
but bitd still spits out (some) 'I' lines:
[2022-08-11 07:02:21.072943 +0000] I [MSGID: 118016] [bit-rot.c:1052:bitd_oneshot_crawl] 0-cluster_data-bit-rot-0: Triggering signing [{path=/extra/some/other/dirs/file.dat}, {gfid=3e35b158-35a6-4e63-adbd-41075a11022e}, {Brick-path=/srv/bricks/00/d}]

Moreover, I've had to disable quota, since the quota processes were eating
more than *75GB* of RAM on each storage node! :(

On 11/08/2022 07:12, Strahil Nikolov wrote:
> Have you decreased the glusterd log level via:
> glusterd --log-level WARNING|ERROR
>
> It seems that bitrot doesn't have its own log level.
>
> As a workaround, you can configure syslog to ship the logs only to a
> remote host, thus preventing /var from filling up.
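>
> Something along these lines in rsyslog should do it (a rough, untested
> sketch - the collector host and port are placeholders, and note that
> Gluster also writes its own files under /var/log/glusterfs directly, so
> this only covers what actually goes through syslog):
>
> -8<--
> # /etc/rsyslog.d/30-remote.conf - forward everything to a remote
> # collector over TCP (@@ = TCP, @ = UDP)
> *.* @@loghost.example.com:514
> # ...and stop local processing, so the local copies don't grow
> & stop
> -8<--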
>
> Best Regards,
> Strahil Nikolov
>
> On Wed, Aug 10, 2022 at 7:52, Diego Zuccato <diego.zuccato@unibo.it> wrote:
> Hi Strahil.
>
> Sure. Luckily I didn't delete 'em all :)
>
> From bitd.log:
> -8<--
> [2022-08-09 05:58:12.075999 +0000] I [MSGID: 118016] [bit-rot.c:1052:bitd_oneshot_crawl] 0-cluster_data-bit-rot-0: Triggering signing [{path=/astro/...omisis.../file.dat}, {gfid=5956af24-5efc-496c-8d7e-ea6656f298de}, {Brick-path=/srv/bricks/10/d}]
> [2022-08-09 05:58:12.082264 +0000] I [MSGID: 118016] [bit-rot.c:1052:bitd_oneshot_crawl] 0-cluster_data-bit-rot-0: Triggering signing [{path=/astro/...omisis.../file.txt}, {gfid=afb75c03-0d29-414e-917a-ff718982c849}, {Brick-path=/srv/bricks/13/d}]
> [2022-08-09 05:58:12.082267 +0000] I [MSGID: 118016] [bit-rot.c:1052:bitd_oneshot_crawl] 0-cluster_data-bit-rot-0: Triggering signing [{path=/astro/...omisis.../file.dat}, {gfid=982bc7a8-d4ba-45d7-9104-044e5d446802}, {Brick-path=/srv/bricks/06/d}]
> [2022-08-09 05:58:12.084960 +0000] I [MSGID: 118016] [bit-rot.c:1052:bitd_oneshot_crawl] 0-cluster_data-bit-rot-0: Triggering signing [{path=/atmos/...omisis.../file}, {gfid=17e4dfb0-1f64-47a3-9aa8-b3fa05b7cd4e}, {Brick-path=/srv/bricks/15/d}]
> [2022-08-09 05:58:12.089357 +0000] I [MSGID: 118016] [bit-rot.c:1052:bitd_oneshot_crawl] 0-cluster_data-bit-rot-0: Triggering signing [{path=/astro/...omisis.../file.txt}, {gfid=e70bf289-5aeb-43c2-aadd-d18979cf62b5}, {Brick-path=/srv/bricks/00/d}]
> [2022-08-09 05:58:12.094440 +0000] I [MSGID: 100011] [glusterfsd.c:1511:reincarnate] 0-glusterfsd: Fetching the volume file from server... []
> [2022-08-09 05:58:12.096299 +0000] I [glusterfsd-mgmt.c:2170:mgmt_getspec_cbk] 0-glusterfs: Received list of available volfile servers: clustor00:24007 clustor02:24007
> [2022-08-09 05:58:12.096653 +0000] I [MSGID: 101221] [common-utils.c:3851:gf_set_volfile_server_common] 0-gluster: duplicate entry for volfile-server [{errno=17}, {error=File già esistente}]
> [2022-08-09 05:58:12.096853 +0000] I [glusterfsd-mgmt.c:2203:mgmt_getspec_cbk] 0-glusterfs: No change in volfile,continuing
> [2022-08-09 05:58:12.096702 +0000] I [MSGID: 101221] [common-utils.c:3851:gf_set_volfile_server_common] 0-gluster: duplicate entry for volfile-server [{errno=17}, {error=File già esistente}]
> [2022-08-09 05:58:12.102176 +0000] I [MSGID: 118016] [bit-rot.c:1052:bitd_oneshot_crawl] 0-cluster_data-bit-rot-0: Triggering signing [{path=/astro/...omisis.../file.dat}, {gfid=45f59e3f-eef4-4ccf-baac-bc8bf10c5ced}, {Brick-path=/srv/bricks/09/d}]
> [2022-08-09 05:58:12.106120 +0000] I [MSGID: 118016] [bit-rot.c:1052:bitd_oneshot_crawl] 0-cluster_data-bit-rot-0: Triggering signing [{path=/astro/...omisis.../file.txt}, {gfid=216832dd-0a1c-4593-8a9e-f54d70efc637}, {Brick-path=/srv/bricks/13/d}]
> -8<--
>
> And from quotad.log:
> -8<--
> [2022-08-09 05:58:12.291030 +0000] I [glusterfsd-mgmt.c:2170:mgmt_getspec_cbk] 0-glusterfs: Received list of available volfile servers: clustor00:24007 clustor02:24007
> [2022-08-09 05:58:12.291143 +0000] I [MSGID: 101221] [common-utils.c:3851:gf_set_volfile_server_common] 0-gluster: duplicate entry for volfile-server [{errno=17}, {error=File già esistente}]
> [2022-08-09 05:58:12.291653 +0000] I [glusterfsd-mgmt.c:2203:mgmt_getspec_cbk] 0-glusterfs: No change in volfile,continuing
> [2022-08-09 05:58:12.292990 +0000] I [glusterfsd-mgmt.c:2170:mgmt_getspec_cbk] 0-glusterfs: Received list of available volfile servers: clustor00:24007 clustor02:24007
> [2022-08-09 05:58:12.293204 +0000] I [glusterfsd-mgmt.c:2170:mgmt_getspec_cbk] 0-glusterfs: Received list of available volfile servers: clustor00:24007 clustor02:24007
> [2022-08-09 05:58:12.293500 +0000] I [glusterfsd-mgmt.c:2203:mgmt_getspec_cbk] 0-glusterfs: No change in volfile,continuing
> [2022-08-09 05:58:12.293663 +0000] I [glusterfsd-mgmt.c:2203:mgmt_getspec_cbk] 0-glusterfs: No change in volfile,continuing
> The message "I [MSGID: 100011] [glusterfsd.c:1511:reincarnate] 0-glusterfsd: Fetching the volume file from server... []" repeated 2 times between [2022-08-09 05:58:12.094470 +0000] and [2022-08-09 05:58:12.291149 +0000]
[]" repeated 2<br clear="none">>     times between [2022-08-09 05:58:12.094470 +0000] and [2022-08-09<br clear="none">>     05:58:12.291149 +0000]<br clear="none">>     The message "I [MSGID: 101221]<br clear="none">>     [common-utils.c:3851:gf_set_volfile_server_common] 0-gluster: duplicate<br clear="none">>     entry for volfile-server [{errno=17}, {error=File già esistente}]"<br clear="none">>     repeated 5 times between [2022-08-09 05:58:12.291143 +0000] and<br clear="none">>     [2022-08-09 05:58:12.293234 +0000]<br clear="none">>     [2022-08-09 06:00:23.180856 +0000] I<br clear="none">>     [glusterfsd-mgmt.c:77:mgmt_cbk_spec] 0-mgmt: Volume file changed<br clear="none">>     [2022-08-09 06:00:23.324981 +0000] I<br clear="none">>     [glusterfsd-mgmt.c:2170:mgmt_getspec_cbk] 0-glusterfs: Received list of<br clear="none">>     available volfile servers: clustor00:24007 clustor02:24007<br clear="none">>     [2022-08-09 06:00:23.325025 +0000] I [MSGID: 101221]<br clear="none">>     [common-utils.c:3851:gf_set_volfile_server_common] 0-gluster: duplicate<br clear="none">>     entry for volfile-server [{errno=17}, {error=File già esistente}]<br clear="none">>     [2022-08-09 06:00:23.325498 +0000] I<br clear="none">>     [glusterfsd-mgmt.c:2203:mgmt_getspec_cbk] 0-glusterfs: No change in<br clear="none">>     volfile,continuing<br clear="none">>     [2022-08-09 06:00:23.325046 +0000] I [MSGID: 101221]<br clear="none">>     [common-utils.c:3851:gf_set_volfile_server_common] 0-gluster: duplicate<br clear="none">>     entry for volfile-server [{errno=17}, {error=File già esistente}]<br clear="none">>     [2022-08-09 22:00:07.364719 +0000] I [MSGID: 100011]<br clear="none">>     [glusterfsd.c:1511:reincarnate] 0-glusterfsd: Fetching the volume file<br clear="none">>     from server... 
> [2022-08-09 22:00:07.374040 +0000] I [glusterfsd-mgmt.c:2170:mgmt_getspec_cbk] 0-glusterfs: Received list of available volfile servers: clustor00:24007 clustor02:24007
> [2022-08-09 22:00:07.374099 +0000] I [MSGID: 101221] [common-utils.c:3851:gf_set_volfile_server_common] 0-gluster: duplicate entry for volfile-server [{errno=17}, {error=File già esistente}]
> [2022-08-09 22:00:07.374569 +0000] I [glusterfsd-mgmt.c:2203:mgmt_getspec_cbk] 0-glusterfs: No change in volfile,continuing
> [2022-08-09 22:00:07.385610 +0000] I [glusterfsd-mgmt.c:2170:mgmt_getspec_cbk] 0-glusterfs: Received list of available volfile servers: clustor00:24007 clustor02:24007
> [2022-08-09 22:00:07.386119 +0000] I [glusterfsd-mgmt.c:2203:mgmt_getspec_cbk] 0-glusterfs: No change in volfile,continuing
> -8<--
>
> I've now used
>    gluster v set cluster_data diagnostics.brick-sys-log-level CRITICAL
> and the rate of filling has decreased, but I still see many 'I' lines :(
>
> Using Gluster 9.5 packages from
>    deb [arch=amd64] https://download.gluster.org/pub/gluster/glusterfs/9/LATEST/Debian/bullseye/amd64/apt bullseye main
>
> Tks,
>    Diego
>
> On 09/08/2022 22:08, Strahil Nikolov wrote:
>> Hey Diego,
>>
>> can you show a sample of such Info entries?
>>
>> Best Regards,
>> Strahil Nikolov
>>
>> On Mon, Aug 8, 2022 at 15:59, Diego Zuccato <diego.zuccato@unibo.it> wrote:
>> Hello all.
>>
>> Lately, I noticed some hiccups in our Gluster volume.
>> It's a "replica 3 arbiter 1" with many bricks (currently 90 data bricks over 3 servers).
>>
>> I tried to reduce the log level by setting
>>    diagnostics.brick-log-level: ERROR
>>    diagnostics.client-log-level: ERROR
>> and creating /etc/default/glusterd containing "LOG_LEVEL=ERROR".
>> But I still see a lot of 'I' lines in the logs and have to manually run
>> logrotate way too often or /var gets too full.
>>
>> Any hints? What did I forget?
>>
>> Tks.
>>
>> --
>> Diego Zuccato
>> DIFA - Dip. di Fisica e Astronomia
>> Servizi Informatici
>> Alma Mater Studiorum - Università di Bologna
>> V.le Berti-Pichat 6/2 - 40127 Bologna - Italy
>> tel.: +39 051 20 95786
>> ________
>>
>> Community Meeting Calendar:
>>
>> Schedule -
>> Every 2nd and 4th Tuesday at 14:30 IST / 09:00 UTC
>> Bridge: https://meet.google.com/cpu-eiue-hvk
>> Gluster-users mailing list
>> Gluster-users@gluster.org
>> https://lists.gluster.org/mailman/listinfo/gluster-users
clear="none">>      ><br clear="none">> <br clear="none">>     -- <br clear="none">>     Diego Zuccato<br clear="none">>     DIFA - Dip. di Fisica e Astronomia<br clear="none">>     Servizi Informatici<br clear="none">>     Alma Mater Studiorum - Università di Bologna<br clear="none">>     V.le Berti-Pichat 6/2 - 40127 Bologna - Italy<br clear="none">>     tel.: +39 051 20 95786<br clear="none">> <br clear="none"><br clear="none">-- <br clear="none">Diego Zuccato<br clear="none">DIFA - Dip. di Fisica e Astronomia<br clear="none">Servizi Informatici<br clear="none">Alma Mater Studiorum - Università di Bologna<br clear="none">V.le Berti-Pichat 6/2 - 40127 Bologna - Italy<br clear="none">tel.: +39 051 20 95786<br clear="none"></div> </div> </blockquote></div></div>