[Gluster-users] bitd.log and quotad.log flooding /var
Diego Zuccato
diego.zuccato at unibo.it
Thu Oct 27 10:31:14 UTC 2022
It seems the memory is accumulating again. At the moment the quotad
process looks like this:
root 2134553 2.1 11.2 23071940 22091644 ? Ssl set23 1059:58
/usr/sbin/glusterfs -s localhost --volfile-id gluster/quotad -p
/var/run/gluster/quotad/quotad.pid -l /var/log/glusterfs/quotad.log -S
/var/run/gluster/321cad6822171c64.socket --process-name quotad
Uptime is 77d.
The other 2 nodes are in the same situation.
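
To put some numbers on the growth before filing anything, I'm thinking of
snapshotting the daemons' RSS from cron and taking a statedump of quotad.
Just a sketch (volume name cluster_data as in the logs below; paths and
exact statedump syntax may differ on your setup):
-8<--
# append a timestamped RSS snapshot of the gluster daemons, e.g. hourly
(date; ps -o pid,rss,etime,args -C glusterd,glusterfs,glusterfsd) \
  >> /root/gluster-rss.log

# dump quotad's memory accounting (lands under /var/run/gluster by default),
# useful as an attachment if this turns into a github issue
gluster volume statedump cluster_data quotad
-8<--
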
Gluster is 9.5-1 amd64. Is that recent enough, or should I plan a
migration to 10?
Hints?
Diego
On 12/08/2022 22:18, Strahil Nikolov wrote:
> 75GB -> that's definitely a memory leak.
> What version do you use ?
>
> If latest - open a github issue.
>
> Best Regards,
> Strahil Nikolov
>
> On Thu, Aug 11, 2022 at 10:06, Diego Zuccato
> <diego.zuccato at unibo.it> wrote:
> Yup.
>
> It seems the /etc/sysconfig/glusterd setting finally got applied, and I
> now have a process like this:
> root 4107315 0.0 0.0 529244 40124 ? Ssl ago08 2:44
> /usr/sbin/glusterd -p /var/run/glusterd.pid --log-level ERROR
> but bitd still spits out (some) 'I' lines
> [2022-08-11 07:02:21.072943 +0000] I [MSGID: 118016]
> [bit-rot.c:1052:bitd_oneshot_crawl] 0-cluster_data-bit-rot-0:
> Triggering
> signing [{path=/extra/some/other/dirs/file.dat},
> {gfid=3e35b158-35a6-4e63-adbd-41075a11022e},
> {Brick-path=/srv/bricks/00/d}]
>
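> Not sure it helps with the signing flood, but the bitrot scrub status at
> least shows whether the scrubber is still busy on these bricks (a sketch,
> untested here):
> -8<--
> # per-node scrubber stats: files scrubbed/skipped, last scrub time, errors
> gluster volume bitrot cluster_data scrub status
> -8<--
>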
> Moreover I've had to disable quota, since quota processes were eating
> more than *75GB* RAM on each storage node! :(
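>
> For reference, switching quota off per volume is just the usual CLI (a
> sketch; I'd save the configured limits first, since disabling should drop
> them):
> -8<--
> # keep a copy of the current limits, then turn quota off volume-wide
> gluster volume quota cluster_data list > /root/quota-limits.txt
> gluster volume quota cluster_data disable
> -8<--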
>
> On 11/08/2022 07:12, Strahil Nikolov wrote:
> > Have you decreased the glusterd log level via:
> > glusterd --log-level WARNING|ERROR
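> >
> > On Debian the same thing can apparently be made persistent through the
> > defaults file the service reads (a sketch; the path is
> > /etc/sysconfig/glusterd on EL-based distros):
> > -8<--
> > # /etc/default/glusterd
> > LOG_LEVEL=ERROR
> > -8<--
> > followed by a restart of glusterd (systemctl restart glusterd), which
> > should not touch the running brick processes.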
> >
> > It seems that bitrot doesn't have its own log level.
> >
> > As a workaround, you can configure syslog to send the logs only to a
> > remote host and thus prevent /var from filling up.
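> >
> > Something like this in rsyslog should do it, assuming the gluster
> > daemons are logging to syslog and you have a collector reachable as
> > "loghost" (both hypothetical here):
> > -8<--
> > # /etc/rsyslog.d/30-gluster-remote.conf (name is just an example)
> > # forward messages from the gluster daemons to the remote collector
> > # and stop local processing, so nothing lands in /var
> > if $programname startswith 'gluster' then @@loghost:514
> > & stop
> > -8<--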
> >
> >
> > Best Regards,
> > Strahil Nikolov
> >
> > On Wed, Aug 10, 2022 at 7:52, Diego Zuccato
> > <diego.zuccato at unibo.it> wrote:
> > Hi Strahil.
> >
> > Sure. Luckily I didn't delete 'em all :)
> >
> > From bitd.log:
> > -8<--
> > [2022-08-09 05:58:12.075999 +0000] I [MSGID: 118016]
> > [bit-rot.c:1052:bitd_oneshot_crawl] 0-cluster_data-bit-rot-0:
> > Triggering
> > signing [{path=/astro/...omisis.../file.dat},
> > {gfid=5956af24-5efc-496c-8d7e-ea6656f298de},
> > {Brick-path=/srv/bricks/10/d}]
> > [2022-08-09 05:58:12.082264 +0000] I [MSGID: 118016]
> > [bit-rot.c:1052:bitd_oneshot_crawl] 0-cluster_data-bit-rot-0:
> > Triggering
> > signing [{path=/astro/...omisis.../file.txt},
> > {gfid=afb75c03-0d29-414e-917a-ff718982c849},
> > {Brick-path=/srv/bricks/13/d}]
> > [2022-08-09 05:58:12.082267 +0000] I [MSGID: 118016]
> > [bit-rot.c:1052:bitd_oneshot_crawl] 0-cluster_data-bit-rot-0:
> > Triggering
> > signing [{path=/astro/...omisis.../file.dat},
> > {gfid=982bc7a8-d4ba-45d7-9104-044e5d446802},
> > {Brick-path=/srv/bricks/06/d}]
> > [2022-08-09 05:58:12.084960 +0000] I [MSGID: 118016]
> > [bit-rot.c:1052:bitd_oneshot_crawl] 0-cluster_data-bit-rot-0:
> > Triggering
> > signing [{path=/atmos/...omisis.../file},
> > {gfid=17e4dfb0-1f64-47a3-9aa8-b3fa05b7cd4e},
> > {Brick-path=/srv/bricks/15/d}]
> > [2022-08-09 05:58:12.089357 +0000] I [MSGID: 118016]
> > [bit-rot.c:1052:bitd_oneshot_crawl] 0-cluster_data-bit-rot-0:
> > Triggering
> > signing [{path=/astro/...omisis.../file.txt},
> > {gfid=e70bf289-5aeb-43c2-aadd-d18979cf62b5},
> > {Brick-path=/srv/bricks/00/d}]
> > [2022-08-09 05:58:12.094440 +0000] I [MSGID: 100011]
> > [glusterfsd.c:1511:reincarnate] 0-glusterfsd: Fetching the volume file
> > from server... []
> > [2022-08-09 05:58:12.096299 +0000] I
> > [glusterfsd-mgmt.c:2170:mgmt_getspec_cbk] 0-glusterfs: Received list of
> > available volfile servers: clustor00:24007 clustor02:24007
> > [2022-08-09 05:58:12.096653 +0000] I [MSGID: 101221]
> > [common-utils.c:3851:gf_set_volfile_server_common] 0-gluster: duplicate
> > entry for volfile-server [{errno=17}, {error=File già esistente}]
> > [2022-08-09 05:58:12.096853 +0000] I
> > [glusterfsd-mgmt.c:2203:mgmt_getspec_cbk] 0-glusterfs: No change in
> > volfile,continuing
> > [2022-08-09 05:58:12.096702 +0000] I [MSGID: 101221]
> > [common-utils.c:3851:gf_set_volfile_server_common] 0-gluster: duplicate
> > entry for volfile-server [{errno=17}, {error=File già esistente}]
> > [2022-08-09 05:58:12.102176 +0000] I [MSGID: 118016]
> > [bit-rot.c:1052:bitd_oneshot_crawl] 0-cluster_data-bit-rot-0:
> > Triggering
> > signing [{path=/astro/...omisis.../file.dat},
> > {gfid=45f59e3f-eef4-4ccf-baac-bc8bf10c5ced},
> > {Brick-path=/srv/bricks/09/d}]
> > [2022-08-09 05:58:12.106120 +0000] I [MSGID: 118016]
> > [bit-rot.c:1052:bitd_oneshot_crawl] 0-cluster_data-bit-rot-0:
> > Triggering
> > signing [{path=/astro/...omisis.../file.txt},
> > {gfid=216832dd-0a1c-4593-8a9e-f54d70efc637},
> > {Brick-path=/srv/bricks/13/d}]
> > -8<--
> >
> > And from quotad.log:
> > -8<--
> > [2022-08-09 05:58:12.291030 +0000] I
> > [glusterfsd-mgmt.c:2170:mgmt_getspec_cbk] 0-glusterfs: Received list of
> > available volfile servers: clustor00:24007 clustor02:24007
> > [2022-08-09 05:58:12.291143 +0000] I [MSGID: 101221]
> > [common-utils.c:3851:gf_set_volfile_server_common] 0-gluster: duplicate
> > entry for volfile-server [{errno=17}, {error=File già esistente}]
> > [2022-08-09 05:58:12.291653 +0000] I
> > [glusterfsd-mgmt.c:2203:mgmt_getspec_cbk] 0-glusterfs: No change in
> > volfile,continuing
> > [2022-08-09 05:58:12.292990 +0000] I
> > [glusterfsd-mgmt.c:2170:mgmt_getspec_cbk] 0-glusterfs: Received list of
> > available volfile servers: clustor00:24007 clustor02:24007
> > [2022-08-09 05:58:12.293204 +0000] I
> > [glusterfsd-mgmt.c:2170:mgmt_getspec_cbk] 0-glusterfs: Received list of
> > available volfile servers: clustor00:24007 clustor02:24007
> > [2022-08-09 05:58:12.293500 +0000] I
> > [glusterfsd-mgmt.c:2203:mgmt_getspec_cbk] 0-glusterfs: No change in
> > volfile,continuing
> > [2022-08-09 05:58:12.293663 +0000] I
> > [glusterfsd-mgmt.c:2203:mgmt_getspec_cbk] 0-glusterfs: No change in
> > volfile,continuing
> > The message "I [MSGID: 100011] [glusterfsd.c:1511:reincarnate]
> > 0-glusterfsd: Fetching the volume file from server... []" repeated 2
> > times between [2022-08-09 05:58:12.094470 +0000] and [2022-08-09
> > 05:58:12.291149 +0000]
> > The message "I [MSGID: 101221]
> > [common-utils.c:3851:gf_set_volfile_server_common] 0-gluster: duplicate
> > entry for volfile-server [{errno=17}, {error=File già esistente}]"
> > repeated 5 times between [2022-08-09 05:58:12.291143 +0000] and
> > [2022-08-09 05:58:12.293234 +0000]
> > [2022-08-09 06:00:23.180856 +0000] I
> > [glusterfsd-mgmt.c:77:mgmt_cbk_spec] 0-mgmt: Volume file changed
> > [2022-08-09 06:00:23.324981 +0000] I
> > [glusterfsd-mgmt.c:2170:mgmt_getspec_cbk] 0-glusterfs: Received list of
> > available volfile servers: clustor00:24007 clustor02:24007
> > [2022-08-09 06:00:23.325025 +0000] I [MSGID: 101221]
> > [common-utils.c:3851:gf_set_volfile_server_common] 0-gluster: duplicate
> > entry for volfile-server [{errno=17}, {error=File già esistente}]
> > [2022-08-09 06:00:23.325498 +0000] I
> > [glusterfsd-mgmt.c:2203:mgmt_getspec_cbk] 0-glusterfs: No change in
> > volfile,continuing
> > [2022-08-09 06:00:23.325046 +0000] I [MSGID: 101221]
> > [common-utils.c:3851:gf_set_volfile_server_common] 0-gluster: duplicate
> > entry for volfile-server [{errno=17}, {error=File già esistente}]
> > [2022-08-09 22:00:07.364719 +0000] I [MSGID: 100011]
> > [glusterfsd.c:1511:reincarnate] 0-glusterfsd: Fetching the volume file
> > from server... []
> > [2022-08-09 22:00:07.374040 +0000] I
> > [glusterfsd-mgmt.c:2170:mgmt_getspec_cbk] 0-glusterfs: Received list of
> > available volfile servers: clustor00:24007 clustor02:24007
> > [2022-08-09 22:00:07.374099 +0000] I [MSGID: 101221]
> > [common-utils.c:3851:gf_set_volfile_server_common] 0-gluster: duplicate
> > entry for volfile-server [{errno=17}, {error=File già esistente}]
> > [2022-08-09 22:00:07.374569 +0000] I
> > [glusterfsd-mgmt.c:2203:mgmt_getspec_cbk] 0-glusterfs: No change in
> > volfile,continuing
> > [2022-08-09 22:00:07.385610 +0000] I
> > [glusterfsd-mgmt.c:2170:mgmt_getspec_cbk] 0-glusterfs: Received list of
> > available volfile servers: clustor00:24007 clustor02:24007
> > [2022-08-09 22:00:07.386119 +0000] I
> > [glusterfsd-mgmt.c:2203:mgmt_getspec_cbk] 0-glusterfs: No change in
> > volfile,continuing
> > -8<--
> >
> > I've now used
> > gluster v set cluster_data diagnostics.brick-sys-log-level CRITICAL
> > and the rate of filling decreased, but I still see many 'I' lines :(
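> >
> > For completeness, these are the log-related knobs I'm touching (values
> > are just what I'm experimenting with, not a recommendation):
> > -8<--
> > # verbosity of the brick/client log files
> > gluster volume set cluster_data diagnostics.brick-log-level ERROR
> > gluster volume set cluster_data diagnostics.client-log-level ERROR
> > # verbosity of what gets forwarded to syslog
> > gluster volume set cluster_data diagnostics.brick-sys-log-level CRITICAL
> > # not tried yet, but there should be a matching client-side option
> > gluster volume set cluster_data diagnostics.client-sys-log-level CRITICAL
> > -8<--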
> >
> > Using Gluster 9.5 packages from
> > deb [arch=amd64] https://download.gluster.org/pub/gluster/glusterfs/9/LATEST/Debian/bullseye/amd64/apt bullseye main
> >
> > Tks,
> > Diego
> >
> > On 09/08/2022 22:08, Strahil Nikolov wrote:
> > > Hey Diego,
> > >
> > > can you show a sample of such Info entries?
> > >
> > > Best Regards,
> > > Strahil Nikolov
> > >
> > > On Mon, Aug 8, 2022 at 15:59, Diego Zuccato
> > > <diego.zuccato at unibo.it> wrote:
> > > Hello all.
> > >
> > > Lately, I noticed some hiccups in our Gluster volume. It's a
> > > "replica 3 arbiter 1" with many bricks (currently 90 data bricks
> > > over 3 servers).
> > >
> > > I tried to reduce the log level by setting
> > > diagnostics.brick-log-level: ERROR
> > > diagnostics.client-log-level: ERROR
> > > and creating /etc/default/glusterd containing "LOG_LEVEL=ERROR".
> > > But I still see a lot of 'I' lines in the logs and have to manually
> > > run logrotate way too often or /var gets too full.
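> > >
> > > As a stopgap I could tighten rotation instead of running logrotate by
> > > hand; something along these lines (a sketch to be adapted, the gluster
> > > packages already ship a similar logrotate snippet):
> > > -8<--
> > > # /etc/logrotate.d/glusterfs-local (example name)
> > > /var/log/glusterfs/*.log /var/log/glusterfs/bricks/*.log {
> > >     daily
> > >     rotate 7
> > >     compress
> > >     delaycompress
> > >     missingok
> > >     notifempty
> > >     sharedscripts
> > >     postrotate
> > >         # glusterfs processes reopen their logs on SIGHUP
> > >         /usr/bin/killall -HUP glusterfs glusterfsd glusterd > /dev/null 2>&1 || true
> > >     endscript
> > > }
> > > -8<--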
> > >
> > > Any hints? What did I forget?
> > >
> > > Tks.
> > >
> > > --
> > > Diego Zuccato
> > > DIFA - Dip. di Fisica e Astronomia
> > > Servizi Informatici
> > > Alma Mater Studiorum - Università di Bologna
> > > V.le Berti-Pichat 6/2 - 40127 Bologna - Italy
> > > tel.: +39 051 20 95786
> > > ________
> > >
> > >
> > >
> > > Community Meeting Calendar:
> > >
> > > Schedule -
> > > Every 2nd and 4th Tuesday at 14:30 IST / 09:00 UTC
> > > Bridge: https://meet.google.com/cpu-eiue-hvk
> > > Gluster-users mailing list
> > > Gluster-users at gluster.org
> > > https://lists.gluster.org/mailman/listinfo/gluster-users
> >
> > --
> > Diego Zuccato
> > DIFA - Dip. di Fisica e Astronomia
> > Servizi Informatici
> > Alma Mater Studiorum - Università di Bologna
> > V.le Berti-Pichat 6/2 - 40127 Bologna - Italy
> > tel.: +39 051 20 95786
> >
>
> --
> Diego Zuccato
> DIFA - Dip. di Fisica e Astronomia
> Servizi Informatici
> Alma Mater Studiorum - Università di Bologna
> V.le Berti-Pichat 6/2 - 40127 Bologna - Italy
> tel.: +39 051 20 95786
>
--
Diego Zuccato
DIFA - Dip. di Fisica e Astronomia
Servizi Informatici
Alma Mater Studiorum - Università di Bologna
V.le Berti-Pichat 6/2 - 40127 Bologna - Italy
tel.: +39 051 20 95786