<div dir="ltr"><div>Thank you, we will check inode usage. Are there any other suggestions? We've put geo-replication in production and would like to avoid anything like what fortunately happened first on the test system.</div><div><br></div></div><br><div class="gmail_quote"><div dir="ltr" class="gmail_attr">On Sat, 7 Mar 2020 at 00:06, Strahil Nikolov <<a href="mailto:hunter86_bg@yahoo.com">hunter86_bg@yahoo.com</a>> wrote:<br></div><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex">On March 6, 2020 10:19:55 AM GMT+02:00, David Cunningham <<a href="mailto:dcunningham@voisonics.com" target="_blank">dcunningham@voisonics.com</a>> wrote:<br>
>Hi Hubert.<br>
><br>
>Just to clarify, what should we be looking for with "df -i"?<br>
><br>
><br>
>On Fri, 6 Mar 2020 at 18:51, Hu Bert <<a href="mailto:revirii@googlemail.com" target="_blank">revirii@googlemail.com</a>> wrote:<br>
><br>
>> Hi,<br>
>><br>
>> just a guess and easy to test/try: inodes? df -i?<br>
>><br>
>> regards,<br>
>> Hubert<br>
>><br>
>> On Fri, 6 Mar 2020 at 04:42, David Cunningham<br>
>> <<a href="mailto:dcunningham@voisonics.com" target="_blank">dcunningham@voisonics.com</a>> wrote:<br>
>> ><br>
>> > Hi Aravinda,<br>
>> ><br>
>> > That's what was reporting 54% used, at the same time that GlusterFS was<br>
>> > giving "no space left on device" errors. It's a bit worrying that<br>
>> > they're not reporting the same thing.<br>
>> ><br>
>> > Thank you.<br>
>> ><br>
>> ><br>
>> > On Fri, 6 Mar 2020 at 16:33, Aravinda VK <<a href="mailto:aravinda@kadalu.io" target="_blank">aravinda@kadalu.io</a>> wrote:<br>
>> >><br>
>> >> Hi David,<br>
>> >><br>
>> >> What is it reporting for brick’s `df` output?<br>
>> >><br>
>> >> ```<br>
>> >> df /nodirectwritedata/gluster/gvol0<br>
>> >> ```<br>
>> >><br>
>> >> —<br>
>> >> regards<br>
>> >> Aravinda Vishwanathapura<br>
>> >> <a href="https://kadalu.io" rel="noreferrer" target="_blank">https://kadalu.io</a><br>
>> >><br>
>> >> On 06-Mar-2020, at 2:52 AM, David Cunningham<br>
>> >> <<a href="mailto:dcunningham@voisonics.com" target="_blank">dcunningham@voisonics.com</a>> wrote:<br>
>> >><br>
>> >> Hello,<br>
>> >><br>
>> >> A major concern we have is that "df" was reporting only 54% used and<br>
>> >> yet GlusterFS was giving "No space left on device" errors. We rely on<br>
>> >> "df" to report the correct result to monitor the system and ensure<br>
>> >> stability. Does anyone know what might have been going on here?<br>
>> >><br>
>> >> Thanks in advance.<br>
>> >><br>
>> >><br>
>> >> On Thu, 5 Mar 2020 at 21:35, David Cunningham<br>
>> >> <<a href="mailto:dcunningham@voisonics.com" target="_blank">dcunningham@voisonics.com</a>> wrote:<br>
>> >>><br>
>> >>> Hi Aravinda,<br>
>> >>><br>
>> >>> Thanks for the reply. This test server is indeed the master server<br>
>> >>> for geo-replication to a slave.<br>
>> >>><br>
>> >>> I'm really surprised that geo-replication simply keeps writing logs<br>
>> >>> until all space is consumed, without cleaning them up itself. I<br>
>> >>> didn't see any warning about this in the geo-replication install<br>
>> >>> documentation, which is unfortunate. We'll come up with a solution<br>
>> >>> to delete changelog files older than the LAST_SYNCED time in the<br>
>> >>> geo-replication status (see the sketch below). Is anyone aware of<br>
>> >>> any other potential gotchas like this?<br>
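>> >>><br>
>> >>> A rough sketch of what we have in mind -- the changelog path is an<br>
>> >>> assumption based on our brick layout, and -mtime is only an<br>
>> >>> approximation of LAST_SYNCED, so we'd verify the listing before<br>
>> >>> adding -delete:<br>
>> >>><br>
>> >>> ```<br>
>> >>> # Show geo-replication status; it includes a LAST_SYNCED column<br>
>> >>> gluster volume geo-replication gvol0 status<br>
>> >>><br>
>> >>> # List changelogs on the brick older than (for example) 7 days<br>
>> >>> find /nodirectwritedata/gluster/gvol0/.glusterfs/changelogs \<br>
>> >>>     -name 'CHANGELOG.*' -mtime +7 -print<br>
>> >>> ```<br>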
>> >>><br>
>> >>> Does anyone have an idea why, in my previous note, some space in the<br>
>> >>> 2GB GlusterFS partition apparently went missing? We had 0.47GB of<br>
>> >>> data and 1GB reported used by .glusterfs, which even if they were<br>
>> >>> separate files would only add up to 1.47GB used, meaning 0.53GB<br>
>> >>> should have been left in the partition. If less space is actually<br>
>> >>> being used because of the hard links then it's even harder to<br>
>> >>> understand where the other 1.53GB went. So why would GlusterFS<br>
>> >>> report "No space left on device"?<br>
>> >>><br>
>> >>> Thanks again for any assistance.<br>
>> >>><br>
>> >>><br>
>> >>> On Thu, 5 Mar 2020 at 17:31, Aravinda VK <<a href="mailto:aravinda@kadalu.io" target="_blank">aravinda@kadalu.io</a>> wrote:<br>
>> >>>><br>
>> >>>> Hi David,<br>
>> >>>><br>
>> >>>> Does this volume use geo-replication? The geo-replication feature<br>
>> >>>> enables the changelog to identify the latest changes happening in<br>
>> >>>> the GlusterFS volume.<br>
>> >>>><br>
>> >>>> The content of the .glusterfs directory also includes hardlinks to<br>
>> >>>> the actual data, so the size shown for .glusterfs includes data.<br>
>> >>>> Please refer to the comment by Xavi:<br>
>> >>>> <a href="https://github.com/gluster/glusterfs/issues/833#issuecomment-594436009" rel="noreferrer" target="_blank">https://github.com/gluster/glusterfs/issues/833#issuecomment-594436009</a><br>
>> >>>><br>
>> >>>> If changelog files are causing the issue, you can use the archival<br>
>> >>>> tool to remove processed changelogs:<br>
>> >>>> <a href="https://github.com/aravindavk/archive_gluster_changelogs" rel="noreferrer" target="_blank">https://github.com/aravindavk/archive_gluster_changelogs</a><br>
>> >>>><br>
>> >>>> —<br>
>> >>>> regards<br>
>> >>>> Aravinda Vishwanathapura<br>
>> >>>> <a href="https://kadalu.io" rel="noreferrer" target="_blank">https://kadalu.io</a><br>
>> >>>><br>
>> >>>><br>
>> >>>> On 05-Mar-2020, at 9:02 AM, David Cunningham<br>
>> >>>> <<a href="mailto:dcunningham@voisonics.com" target="_blank">dcunningham@voisonics.com</a>> wrote:<br>
>> >>>><br>
>> >>>> Hello,<br>
>> >>>><br>
>> >>>> We are looking for some advice on disk use. This is on a<br>
>> >>>> single-node GlusterFS test server.<br>
>> >>>><br>
>> >>>> There's a 2GB partition for GlusterFS. Of that, 470MB is used for<br>
>> >>>> actual data, and 1GB is used by the .glusterfs directory. The<br>
>> >>>> .glusterfs directory is mostly used by the two-character directories<br>
>> >>>> and the "changelogs" directory. Why is so much used by .glusterfs,<br>
>> >>>> and can we reduce that overhead?<br>
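>> >>>><br>
>> >>>> For reference, we measured that breakdown with something like:<br>
>> >>>><br>
>> >>>> ```<br>
>> >>>> # Per-entry usage inside .glusterfs, largest last<br>
>> >>>> du -sh /nodirectwritedata/gluster/gvol0/.glusterfs/* | sort -h<br>
>> >>>> ```<br>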
>> >>>><br>
>> >>>> We also have a problem with this test system where GlusterFS is<br>
>> >>>> giving "No space left on device" errors. That's despite "df"<br>
>> >>>> reporting only 54% used, and even if we add the 470MB to the 1GB<br>
>> >>>> used above, that still comes out to less than the 2GB available, so<br>
>> >>>> there should be some spare.<br>
>> >>>><br>
>> >>>> Would anyone be able to advise on these please? Thank you in<br>
>> >>>> advance.<br>
>> >>>><br>
>> >>>> The GlusterFS version is 5.11 and here is the volume information:<br>
>> >>>><br>
>> >>>> Volume Name: gvol0<br>
>> >>>> Type: Distribute<br>
>> >>>> Volume ID: 33ed309b-0e63-4f9a-8132-ab1b0fdcbc36<br>
>> >>>> Status: Started<br>
>> >>>> Snapshot Count: 0<br>
>> >>>> Number of Bricks: 1<br>
>> >>>> Transport-type: tcp<br>
>> >>>> Bricks:<br>
>> >>>> Brick1: myhost:/nodirectwritedata/gluster/gvol0<br>
>> >>>> Options Reconfigured:<br>
>> >>>> transport.address-family: inet<br>
>> >>>> nfs.disable: on<br>
>> >>>> geo-replication.indexing: on<br>
>> >>>> geo-replication.ignore-pid-check: on<br>
>> >>>> changelog.changelog: on<br>
>> >>>><br>
>> >>>> --<br>
>> >>>> David Cunningham, Voisonics Limited<br>
>> >>>> <a href="http://voisonics.com/" rel="noreferrer" target="_blank">http://voisonics.com/</a><br>
>> >>>> USA: +1 213 221 1092<br>
>> >>>> New Zealand: +64 (0)28 2558 3782<br>
>> >>>><br>
>> >>>><br>
>> >>><br>
>> >>><br>
>> >>> --<br>
>> >>> David Cunningham, Voisonics Limited<br>
>> >>> <a href="http://voisonics.com/" rel="noreferrer" target="_blank">http://voisonics.com/</a><br>
>> >>> USA: +1 213 221 1092<br>
>> >>> New Zealand: +64 (0)28 2558 3782<br>
>> >><br>
>> >><br>
>> >><br>
>> >> --<br>
>> >> David Cunningham, Voisonics Limited<br>
>> >> <a href="http://voisonics.com/" rel="noreferrer" target="_blank">http://voisonics.com/</a><br>
>> >> USA: +1 213 221 1092<br>
>> >> New Zealand: +64 (0)28 2558 3782<br>
>> >><br>
>> >><br>
>> >><br>
>> >><br>
>> >><br>
>> >><br>
>> ><br>
>> ><br>
>> > --<br>
>> > David Cunningham, Voisonics Limited<br>
>> > <a href="http://voisonics.com/" rel="noreferrer" target="_blank">http://voisonics.com/</a><br>
>> > USA: +1 213 221 1092<br>
>> > New Zealand: +64 (0)28 2558 3782<br>
>><br>
<br>
Ermm<br>
<br>
Inodes.<br>
If you run out of inodes, you get the error that there is no space even though you do have some space left, because there is nowhere to write the inode metadata.<br>
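<br>
For example, a minimal check (brick path taken from your volume info):<br>
<br>
```<br>
df -h /nodirectwritedata/gluster/gvol0    # block usage<br>
df -i /nodirectwritedata/gluster/gvol0    # inode usage; IUse% at 100% means exhaustion<br>
```<br>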
<br>
A perfectly valid question.<br>
<br>
Best Regards,<br>
Strahil Nikolov<br>
</blockquote></div><br clear="all"><br>-- <br><div dir="ltr" class="gmail_signature"><div dir="ltr"><div><div dir="ltr"><div><div dir="ltr"><div><div dir="ltr"><div><div dir="ltr"><div>David Cunningham, Voisonics Limited<br><a href="http://voisonics.com/" target="_blank">http://voisonics.com/</a><br>USA: +1 213 221 1092<br>New Zealand: +64 (0)28 2558 3782</div></div></div></div></div></div></div></div></div></div></div>