<div dir="ltr"><pre class="gmail-screen gmail-language-none gmail-codeblock gmail-codeblock--processed" style="box-sizing:border-box;overflow:visible;font-family:RedHatMono,"Red Hat Mono",Consolas,monospace;font-size:0.875rem;margin-top:0px;margin-bottom:0px;padding:1.25em 0px 1.25em 1em;line-height:1.6667;word-break:normal;color:rgb(21,21,21);background:rgb(248,248,248);border:0px;border-radius:0.25rem;max-height:max-content;max-width:99999em">gluster volume set testvol diagnostics.brick-log-level WARNING
gluster volume set testvol diagnostics.brick-sys-log-level WARNING
gluster volume set testvol diagnostics.client-log-level ERROR
gluster --log-level=ERROR volume status</pre><div><div dir="ltr" class="gmail_signature" data-smartmail="gmail_signature"><div dir="ltr"><div dir="ltr"><div dir="ltr"><div dir="ltr"><div dir="ltr"><div>---</div><div><div><div>Gilberto Nunes Ferreira</div></div><div><br></div><div><p style="font-size:12.8px;margin:0px"></p><p style="font-size:12.8px;margin:0px"><br></p><p style="font-size:12.8px;margin:0px"><br></p></div></div><div><br></div></div></div></div></div></div></div></div><br></div><br><div class="gmail_quote"><div dir="ltr" class="gmail_attr">Em sex., 19 de jan. de 2024 às 05:49, Hu Bert <<a href="mailto:revirii@googlemail.com">revirii@googlemail.com</a>> escreveu:<br></div><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex">Hi Strahil,<br>
hm, don't get me wrong, it may sound a bit stupid, but... where do I
set the log level? I'm using Debian...

https://access.redhat.com/documentation/de-de/red_hat_gluster_storage/3/html/administration_guide/configuring_the_log_level

ls /etc/glusterfs/
eventsconfig.json         glusterfs-georep-logrotate  gluster-rsyslog-5.8.conf
glusterd.vol              glusterfs-logrotate         gluster-rsyslog-7.2.conf
group-db-workload         group-distributed-virt      group-gluster-block
group-metadata-cache      group-nl-cache              group-samba
group-virt.example        gsyncd.conf                 logger.conf.example
thin-arbiter.vol

I checked /etc/glusterfs/logger.conf.example:

# To enable enhanced logging capabilities,
#
# 1. rename this file to /etc/glusterfs/logger.conf
#
# 2. rename /etc/rsyslog.d/gluster.conf.example to
#    /etc/rsyslog.d/gluster.conf
#
# This change requires restart of all gluster services/volumes and
# rsyslog.
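
If I read that correctly, the steps boil down to something like this
(untested sketch; note that the rsyslog example file is missing on my
system, see below):

cp /etc/glusterfs/logger.conf.example /etc/glusterfs/logger.conf
cp /etc/rsyslog.d/gluster.conf.example /etc/rsyslog.d/gluster.conf
systemctl restart rsyslog
systemctl restart glusterd    # plus the volumes/bricks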

To test this I created /etc/glusterfs/logger.conf containing LOG_LEVEL='WARNING'

and restarted glusterd on that node, but this doesn't work: the log level
stays at INFO. /etc/rsyslog.d/gluster.conf.example does not exist; on Debian
it's probably /etc/rsyslog.conf. But first it would be better to know where
to set the log level for glusterd itself.
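
What I'd try for glusterd itself (a sketch, not verified on Debian):

# variant 1: pass the level on the command line, e.g. via the systemd unit
glusterd --log-level DEBUG

# variant 2: add this to the "volume management" block in
# /etc/glusterfs/glusterd.vol, then restart glusterd
option log-level DEBUG

# per-volume levels can be read back with e.g.:
gluster volume get workdata diagnostics.brick-log-level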

Depending on how talkative the DEBUG log level is ;-) I could assign up
to 100G to /var.
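
If DEBUG turns out to be too chatty, a tighter logrotate stanza might be
enough instead (sketch; the package ships /etc/glusterfs/glusterfs-logrotate,
see the ls above, which looks like the place for these values):

/var/log/glusterfs/*.log {
    daily
    rotate 7
    compress
    missingok
    notifempty
}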


Thx & best regards,
Hubert


On Thu, Jan 18, 2024 at 22:58, Strahil Nikolov
<hunter86_bg@yahoo.com> wrote:
>
> Are you able to set the logs to debug level?
> It might provide a clue about what is going on.
>
> Best Regards,
> Strahil Nikolov
>
> On Thu, Jan 18, 2024 at 13:08, Diego Zuccato
> <diego.zuccato@unibo.it> wrote:
> Those are the same kind of errors I keep seeing on my 2 clusters,
> regenerated some months ago. It seems to be a pseudo-split-brain that
> should be impossible on a replica 3 cluster, but it keeps happening.
> Sadly, I'm going to ditch Gluster ASAP.
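>
> For the record, the standard check for real split-brain entries would
> be (volume name is a placeholder):
>
> gluster volume heal <volname> info split-brain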
>
> Diego
>
> On 18/01/2024 07:11, Hu Bert wrote:
> > Good morning,
> > the heal is still not running. Pending heals now sum up to 60K per brick.
> > The heal used to start instantly, e.g. after a server reboot with version
> > 10.4, but it doesn't with version 11. What could be wrong?
> >
> > I only see these errors on one of the "good" servers in glustershd.log:
> >
> > [2024-01-18 06:08:57.328480 +0000] W [MSGID: 114031]
> > [client-rpc-fops_v2.c:2561:client4_0_lookup_cbk] 0-workdata-client-0:
> > remote operation failed.
> > [{path=<gfid:cb39a1e4-2a4c-4727-861d-3ed9ef00681b>},
> > {gfid=cb39a1e4-2a4c-4727-861d-3ed9ef00681b},
> > {errno=2}, {error=No such file or directory}]
> > [2024-01-18 06:08:57.594051 +0000] W [MSGID: 114031]
> > [client-rpc-fops_v2.c:2561:client4_0_lookup_cbk] 0-workdata-client-1:
> > remote operation failed.
> > [{path=<gfid:3e9b178c-ae1f-4d85-ae47-fc539d94dd11>},
> > {gfid=3e9b178c-ae1f-4d85-ae47-fc539d94dd11},
> > {errno=2}, {error=No such file or directory}]
> >
> > About 7K today. Any ideas? Anyone?
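> >
> > (Btw. if someone wants to inspect one of those gfids directly: on a
> > brick it should map into the .glusterfs tree via its first two byte
> > pairs, e.g. for the first gfid above:
> >
> > ls -l /gluster/md3/workdata/.glusterfs/cb/39/cb39a1e4-2a4c-4727-861d-3ed9ef00681b )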
> >
> >
> > Best regards,
> > Hubert
> >
> > On Wed, Jan 17, 2024 at 11:24, Hu Bert <revirii@googlemail.com> wrote:
> >>
> >> OK, I finally managed to get all servers, volumes etc. running, but it
> >> took a couple of restarts, cksum checks etc.
> >>
> >> One problem: a volume doesn't heal automatically, or doesn't heal at all.
> >>
> >> gluster volume status
> >> Status of volume: workdata
> >> Gluster process                            TCP Port  RDMA Port  Online  Pid
> >> ------------------------------------------------------------------------------
> >> Brick glusterpub1:/gluster/md3/workdata    58832     0          Y       3436
> >> Brick glusterpub2:/gluster/md3/workdata    59315     0          Y       1526
> >> Brick glusterpub3:/gluster/md3/workdata    56917     0          Y       1952
> >> Brick glusterpub1:/gluster/md4/workdata    59688     0          Y       3755
> >> Brick glusterpub2:/gluster/md4/workdata    60271     0          Y       2271
> >> Brick glusterpub3:/gluster/md4/workdata    49461     0          Y       2399
> >> Brick glusterpub1:/gluster/md5/workdata    54651     0          Y       4208
> >> Brick glusterpub2:/gluster/md5/workdata    49685     0          Y       2751
> >> Brick glusterpub3:/gluster/md5/workdata    59202     0          Y       2803
> >> Brick glusterpub1:/gluster/md6/workdata    55829     0          Y       4583
> >> Brick glusterpub2:/gluster/md6/workdata    50455     0          Y       3296
> >> Brick glusterpub3:/gluster/md6/workdata    50262     0          Y       3237
> >> Brick glusterpub1:/gluster/md7/workdata    52238     0          Y       5014
> >> Brick glusterpub2:/gluster/md7/workdata    52474     0          Y       3673
> >> Brick glusterpub3:/gluster/md7/workdata    57966     0          Y       3653
> >> Self-heal Daemon on localhost              N/A       N/A        Y       4141
> >> Self-heal Daemon on glusterpub1            N/A       N/A        Y       5570
> >> Self-heal Daemon on glusterpub2            N/A       N/A        Y       4139
> >>
> >> "gluster volume heal workdata info" lists a lot of files per brick.
> >> "gluster volume heal workdata statistics heal-count" shows thousands
> >> of files per brick.
> >> "gluster volume heal workdata enable" has no effect.
> >>
> >> gluster volume heal workdata full
> >> Launching heal operation to perform full self heal on volume workdata
> >> has been successful
> >> Use heal info commands to check status.
> >>
> >> -> not doing anything at all, and nothing happening on the 2 "good"
> >> servers in e.g. glustershd.log. The heal was working as expected on
> >> version 10.4, but here... silence. Does anyone have an idea?
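> >>
> >> Two things that might be worth trying (standard commands, sketch):
> >>
> >> gluster volume start workdata force    # should respawn missing shd processes
> >> gluster volume heal workdata info summary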
> >>
> >>
> >> Best regards,
> >> Hubert
> >>
> >> On Tue, Jan 16, 2024 at 13:44, Gilberto Ferreira
> >> <gilberto.nunes32@gmail.com> wrote:
> >>>
> >>> Ah! Indeed! You need to perform the upgrade on the clients as well.
> >>>
> >>> On Tue, Jan 16, 2024 at 03:12, Hu Bert <revirii@googlemail.com> wrote:
> >>>>
> >>>> Good morning to those still reading :-)
> >>>>
> >>>> I found this: https://docs.gluster.org/en/main/Troubleshooting/troubleshooting-glusterd/#common-issues-and-how-to-resolve-them
> >>>>
> >>>> There's a paragraph about "peer rejected" with the same error message,
> >>>> telling me: "Update the cluster.op-version". I had only updated the
> >>>> server nodes, but not the clients, so upgrading the cluster.op-version
> >>>> wasn't possible at that time. So... upgrading the clients to version
> >>>> 11.1 and then bumping the op-version should solve the problem?
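> >>>>
> >>>> If so, the op-version part should be (standard commands; take the
> >>>> number from the first one rather than guessing):
> >>>>
> >>>> gluster volume get all cluster.max-op-version   # highest version the cluster supports
> >>>> gluster volume get all cluster.op-version       # currently active version
> >>>> gluster volume set all cluster.op-version <max-op-version from above>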
> >>>>
> >>>>
> >>>> Thx,
> >>>> Hubert
> >>>>
> >>>> On Mon, Jan 15, 2024 at 09:16, Hu Bert <revirii@googlemail.com> wrote:
> >>>>>
> >>>>> Hi,
> >>>>> I just upgraded some gluster servers from version 10.4 to version 11.1
> >>>>> (Debian bullseye & bookworm). When only installing the packages: good,
> >>>>> servers, volumes etc. work as expected.
> >>>>>
> >>>>> But one needs to test whether the systems still work after a daemon
> >>>>> and/or server restart. Well, I did a reboot, and after that the
> >>>>> rebooted/restarted system is "out". Log messages from a working node:
> >>>>>
> >>>>> [2024-01-15 08:02:21.585694 +0000] I [MSGID: 106163]
> >>>>> [glusterd-handshake.c:1501:__glusterd_mgmt_hndsk_versions_ack]
> >>>>> 0-management: using the op-version 100000
> >>>>> [2024-01-15 08:02:21.589601 +0000] I [MSGID: 106490]
> >>>>> [glusterd-handler.c:2546:__glusterd_handle_incoming_friend_req]
> >>>>> 0-glusterd: Received probe from uuid: b71401c3-512a-47cb-ac18-473c4ba7776e
> >>>>> [2024-01-15 08:02:23.608349 +0000] E [MSGID: 106010]
> >>>>> [glusterd-utils.c:3824:glusterd_compare_friend_volume] 0-management:
> >>>>> Version of Cksums sourceimages differ. local cksum = 2204642525,
> >>>>> remote cksum = 1931483801 on peer gluster190
> >>>>> [2024-01-15 08:02:23.608584 +0000] I [MSGID: 106493]
> >>>>> [glusterd-handler.c:3819:glusterd_xfer_friend_add_resp] 0-glusterd:
> >>>>> Responded to gluster190 (0), ret: 0, op_ret: -1
> >>>>> [2024-01-15 08:02:23.613553 +0000] I [MSGID: 106493]
> >>>>> [glusterd-rpc-ops.c:467:__glusterd_friend_add_cbk] 0-glusterd:
> >>>>> Received RJT from uuid: b71401c3-512a-47cb-ac18-473c4ba7776e, host:
> >>>>> gluster190, port: 0
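> >>>>>
> >>>>> (The cksum from that error should be comparable directly on disk,
> >>>>> run on gluster190 and on a good node, e.g.:
> >>>>>
> >>>>> cat /var/lib/glusterd/vols/sourceimages/cksum )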
> >>>>>
> >>>>> Peer status from the rebooted node:
> >>>>>
> >>>>> root@gluster190 ~ # gluster peer status
> >>>>> Number of Peers: 2
> >>>>>
> >>>>> Hostname: gluster189
> >>>>> Uuid: 50dc8288-aa49-4ea8-9c6c-9a9a926c67a7
> >>>>> State: Peer Rejected (Connected)
> >>>>>
> >>>>> Hostname: gluster188
> >>>>> Uuid: e15a33fe-e2f7-47cf-ac53-a3b34136555d
> >>>>> State: Peer Rejected (Connected)
> >>>>>
> >>>>> So the rebooted gluster190 is not accepted anymore, and thus does not
> >>>>> appear in "gluster volume status". I then followed this guide:
> >>>>>
> >>>>> https://gluster-documentations.readthedocs.io/en/latest/Administrator%20Guide/Resolving%20Peer%20Rejected/
> >>>>>
> >>>>> Remove everything under /var/lib/glusterd/ (except glusterd.info) and
> >>>>> restart the glusterd service etc. The data gets copied from the other
> >>>>> nodes and 'gluster peer status' is ok again - but the volume info is
> >>>>> missing: /var/lib/glusterd/vols is empty. After syncing this directory
> >>>>> from another node, the volume is available again and heals start etc.
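> >>>>>
> >>>>> Condensed, that procedure looks roughly like this (sketch - it wipes
> >>>>> the local config, so double-check before running):
> >>>>>
> >>>>> systemctl stop glusterd
> >>>>> cd /var/lib/glusterd && find . -mindepth 1 -maxdepth 1 ! -name glusterd.info -exec rm -rf {} +
> >>>>> systemctl start glusterd
> >>>>> gluster peer probe gluster189    # any good node
> >>>>> systemctl restart glusterd
> >>>>> # then sync /var/lib/glusterd/vols from a good node if it stays empty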
> >>>>>
> >>>>> Well, and just to be sure that everything's working as it should, I
> >>>>> rebooted that node again - and the rebooted node gets kicked out again,
> >>>>> so you have to start bringing it back all over.
> >>>>>
> >>>>> Sorry, but did I miss anything? Has someone experienced similar
> >>>>> problems? I'll probably downgrade to 10.4 again; that version was
> >>>>> working...
> >>>>>
> >>>>>
> >>>>> Thx,
> >>>>> Hubert
>
> --
> Diego Zuccato
> DIFA - Dip. di Fisica e Astronomia
> Servizi Informatici
> Alma Mater Studiorum - Università di Bologna
> V.le Berti-Pichat 6/2 - 40127 Bologna - Italy
> tel.: +39 051 20 95786
>
________


Community Meeting Calendar:

Schedule -
Every 2nd and 4th Tuesday at 14:30 IST / 09:00 UTC
Bridge: https://meet.google.com/cpu-eiue-hvk
Gluster-users mailing list
Gluster-users@gluster.org
https://lists.gluster.org/mailman/listinfo/gluster-users