[Gluster-users] Disk use with GlusterFS

David Cunningham dcunningham at voisonics.com
Fri Mar 6 21:07:56 UTC 2020


Thank you, we will check inode usage. Are there any other suggestions?
We've now put geo-replication into production and would like to avoid a
repeat of the problem that, fortunately, happened first on the test system.
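
For reference, the check we're planning to add to our monitoring is roughly
the following (a minimal sketch only; the brick path is taken from the volume
info further down the thread, and `df --output` assumes GNU coreutils):

```
# Check both block and inode usage for the brick filesystem.
BRICK=/nodirectwritedata/gluster/gvol0

df -h "$BRICK"   # block usage -- what we were already watching
df -i "$BRICK"   # inode usage -- can hit 100% while blocks show only 54% used

# Example alert at 90% inode usage; the threshold is arbitrary.
IUSE=$(df --output=ipcent "$BRICK" | tail -n 1 | tr -dc '0-9')
[ "$IUSE" -ge 90 ] && echo "WARNING: inode usage at ${IUSE}% on $BRICK"
```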


On Sat, 7 Mar 2020 at 00:06, Strahil Nikolov <hunter86_bg at yahoo.com> wrote:

> On March 6, 2020 10:19:55 AM GMT+02:00, David Cunningham <
> dcunningham at voisonics.com> wrote:
> >Hi Hu.
> >
> >Just to clarify, what should we be looking for with "df -i"?
> >
> >
> >On Fri, 6 Mar 2020 at 18:51, Hu Bert <revirii at googlemail.com> wrote:
> >
> >> Hi,
> >>
> >> just a guess and easy to test/try: inodes? df -i?
> >>
> >> regards,
> >> Hubert
> >>
> >> On Fri, 6 Mar 2020 at 04:42, David Cunningham <dcunningham at voisonics.com> wrote:
> >> >
> >> > Hi Aravinda,
> >> >
> >> > That's what was reporting 54% used, at the same time that GlusterFS was
> >> > giving no space left on device errors. It's a bit worrying that they're
> >> > not reporting the same thing.
> >> >
> >> > Thank you.
> >> >
> >> >
> >> > On Fri, 6 Mar 2020 at 16:33, Aravinda VK <aravinda at kadalu.io> wrote:
> >> >>
> >> >> Hi David,
> >> >>
> >> >> What is it reporting for the brick's `df` output?
> >> >>
> >> >> ```
> >> >> df /nodirectwritedata/gluster/gvol0
> >> >> ```
> >> >>
> >> >> —
> >> >> regards
> >> >> Aravinda Vishwanathapura
> >> >> https://kadalu.io
> >> >>
> >> >> On 06-Mar-2020, at 2:52 AM, David Cunningham <dcunningham at voisonics.com> wrote:
> >> >>
> >> >> Hello,
> >> >>
> >> >> A major concern we have is that "df" was reporting only 54% used and yet
> >> >> GlusterFS was giving "No space left on device" errors. We rely on "df" to
> >> >> report the correct result to monitor the system and ensure stability. Does
> >> >> anyone know what might have been going on here?
> >> >>
> >> >> Thanks in advance.
> >> >>
> >> >>
> >> >> On Thu, 5 Mar 2020 at 21:35, David Cunningham <dcunningham at voisonics.com> wrote:
> >> >>>
> >> >>> Hi Aravinda,
> >> >>>
> >> >>> Thanks for the reply. This test server is indeed the master server for
> >> >>> geo-replication to a slave.
> >> >>>
> >> >>> I'm really surprised that geo-replication simply keeps writing logs until
> >> >>> all space is consumed, without cleaning them up itself. I didn't see any
> >> >>> warning about it in the geo-replication install documentation, which is
> >> >>> unfortunate. We'll come up with a solution to delete log files older than
> >> >>> the LAST_SYNCED time in the geo-replication status. Is anyone aware of any
> >> >>> other potential gotchas like this?
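> >> >>>
> >> >>> Something along these lines is what we have in mind (a rough sketch only:
> >> >>> the changelog location and the LAST_SYNCED date format are assumptions we
> >> >>> still need to verify, and the archive_gluster_changelogs tool mentioned
> >> >>> below may well be the safer option):
> >> >>>
> >> >>> ```
> >> >>> # Rough sketch: list (not yet delete) changelogs older than LAST_SYNCED.
> >> >>> # Assumed layout: changelogs live under <brick>/.glusterfs/changelogs/ and
> >> >>> # are named CHANGELOG.<unix-timestamp>; LAST_SYNCED is copied manually from
> >> >>> # "gluster volume geo-replication gvol0 <slave>::<slavevol> status detail".
> >> >>> BRICK=/nodirectwritedata/gluster/gvol0
> >> >>> CHANGELOG_DIR="$BRICK/.glusterfs/changelogs"
> >> >>>
> >> >>> LAST_SYNCED="2020-03-05 21:10:43"      # example value only
> >> >>> CUTOFF=$(date -d "$LAST_SYNCED" +%s)   # GNU date
> >> >>>
> >> >>> for f in "$CHANGELOG_DIR"/CHANGELOG.*; do
> >> >>>     ts=${f##*.}
> >> >>>     [ "$ts" -lt "$CUTOFF" ] 2>/dev/null && echo "candidate: $f"
> >> >>> done
> >> >>> ```
> >> >>>
> >> >>> We would only actually delete anything after confirming geo-replication has
> >> >>> processed those changelogs.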
> >> >>>
> >> >>> Does anyone have an idea why, in my previous note, some space in the 2GB
> >> >>> GlusterFS partition apparently went missing? We had 0.47GB of data and 1GB
> >> >>> reported used by .glusterfs, which even if they were separate files would
> >> >>> only add up to 1.47GB used, meaning 0.53GB should have been left in the
> >> >>> partition. If less space is actually being used because of the hard links,
> >> >>> then it's even harder to understand where the other 1.53GB went. So why
> >> >>> would GlusterFS report "No space left on device"?
> >> >>>
> >> >>> Thanks again for any assistance.
> >> >>>
> >> >>>
> >> >>> On Thu, 5 Mar 2020 at 17:31, Aravinda VK <aravinda at kadalu.io> wrote:
> >> >>>>
> >> >>>> Hi David,
> >> >>>>
> >> >>>> Does this volume use geo-replication? The geo-replication feature
> >> >>>> enables the changelog to identify the latest changes happening in the
> >> >>>> GlusterFS volume.
> >> >>>>
> >> >>>> The content of the .glusterfs directory also includes hard links to the
> >> >>>> actual data, so the size shown for .glusterfs includes the data itself.
> >> >>>> Please refer to the comment by Xavi:
> >> >>>> https://github.com/gluster/glusterfs/issues/833#issuecomment-594436009
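> >> >>>>
> >> >>>> For example (an illustration only; a link count of 2 assumes a plain
> >> >>>> distribute brick with no additional hard links):
> >> >>>>
> >> >>>> ```
> >> >>>> # A data file on the brick and its .glusterfs/XX/YY/<gfid> entry share one
> >> >>>> # inode, so du counts the blocks only once.
> >> >>>> BRICK=/nodirectwritedata/gluster/gvol0
> >> >>>>
> >> >>>> # Regular files outside .glusterfs normally show a link count of 2: the
> >> >>>> # user-visible name plus the GFID hard link inside .glusterfs.
> >> >>>> find "$BRICK" -path "$BRICK/.glusterfs" -prune -o -type f -links 2 -print | head
> >> >>>>
> >> >>>> du -sh "$BRICK"
> >> >>>> ```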
> >> >>>>
> >> >>>> If the changelog files are causing the issue, you can use the archival
> >> >>>> tool to remove processed changelogs:
> >> >>>> https://github.com/aravindavk/archive_gluster_changelogs
> >> >>>>
> >> >>>> —
> >> >>>> regards
> >> >>>> Aravinda Vishwanathapura
> >> >>>> https://kadalu.io
> >> >>>>
> >> >>>>
> >> >>>> On 05-Mar-2020, at 9:02 AM, David Cunningham <dcunningham at voisonics.com> wrote:
> >> >>>>
> >> >>>> Hello,
> >> >>>>
> >> >>>> We are looking for some advice on disk use. This is on a single-node
> >> >>>> GlusterFS test server.
> >> >>>>
> >> >>>> There's a 2GB partition for GlusterFS. Of that, 470MB is used for actual
> >> >>>> data, and 1GB is used by the .glusterfs directory. The .glusterfs directory
> >> >>>> is mostly used by the two-character directories and the "changelogs"
> >> >>>> directory. Why is so much used by .glusterfs, and can we reduce that
> >> >>>> overhead?
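> >> >>>>
> >> >>>> For reference, roughly how that usage can be broken down (a sketch only;
> >> >>>> the du/sort flags assume GNU coreutils):
> >> >>>>
> >> >>>> ```
> >> >>>> BRICK=/nodirectwritedata/gluster/gvol0
> >> >>>>
> >> >>>> # Largest entries directly under .glusterfs (the two-character GFID
> >> >>>> # directories largely overlap with the real data via hard links).
> >> >>>> du -sh "$BRICK"/.glusterfs/* 2>/dev/null | sort -rh | head -20
> >> >>>>
> >> >>>> # The changelogs directory is genuinely extra space (and extra inodes).
> >> >>>> du -sh "$BRICK"/.glusterfs/changelogs
> >> >>>> ls "$BRICK"/.glusterfs/changelogs | wc -l
> >> >>>> ```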
> >> >>>>
> >> >>>> We also have a problem with this test system where GlusterFS is giving
> >> >>>> "No space left on device" errors. That's despite "df" reporting only 54%
> >> >>>> used, and even if we add the 470MB to the 1GB used above, that still comes
> >> >>>> out to less than the 2GB available, so there should be some spare.
> >> >>>>
> >> >>>> Would anyone be able to advise on these please? Thank you in advance.
> >> >>>>
> >> >>>> The GlusterFS version is 5.11 and here is the volume information:
> >> >>>>
> >> >>>> Volume Name: gvol0
> >> >>>> Type: Distribute
> >> >>>> Volume ID: 33ed309b-0e63-4f9a-8132-ab1b0fdcbc36
> >> >>>> Status: Started
> >> >>>> Snapshot Count: 0
> >> >>>> Number of Bricks: 1
> >> >>>> Transport-type: tcp
> >> >>>> Bricks:
> >> >>>> Brick1: myhost:/nodirectwritedata/gluster/gvol0
> >> >>>> Options Reconfigured:
> >> >>>> transport.address-family: inet
> >> >>>> nfs.disable: on
> >> >>>> geo-replication.indexing: on
> >> >>>> geo-replication.ignore-pid-check: on
> >> >>>> changelog.changelog: on
> >> >>>>
> >> >>>> --
> >> >>>> David Cunningham, Voisonics Limited
> >> >>>> http://voisonics.com/
> >> >>>> USA: +1 213 221 1092
> >> >>>> New Zealand: +64 (0)28 2558 3782
> >> >>>
> >> >>>
> >> >>> --
> >> >>> David Cunningham, Voisonics Limited
> >> >>> http://voisonics.com/
> >> >>> USA: +1 213 221 1092
> >> >>> New Zealand: +64 (0)28 2558 3782
> >> >>
> >> >>
> >> >>
> >> >> --
> >> >> David Cunningham, Voisonics Limited
> >> >> http://voisonics.com/
> >> >> USA: +1 213 221 1092
> >> >> New Zealand: +64 (0)28 2558 3782
> >> >>
> >> >>
> >> >>
> >> >>
> >> >>
> >> >>
> >> >
> >> >
> >> > --
> >> > David Cunningham, Voisonics Limited
> >> > http://voisonics.com/
> >> > USA: +1 213 221 1092
> >> > New Zealand: +64 (0)28 2558 3782
> >>
>
> Ermm
>
> Inodes.
> If you run out of inodes, you get the error that there is no space left,
> even though you do have some blocks free, because there is nowhere left to
> write the inode metadata for new files.
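>
> Just as an illustration (a throwaway loop device, nothing Gluster-specific):
>
> ```
> # Tiny ext4 filesystem with very few inodes: blocks stay mostly free while
> # file creation fails with "No space left on device".
> truncate -s 100M /tmp/inode-test.img
> mkfs.ext4 -F -N 1024 /tmp/inode-test.img
> mkdir -p /tmp/inode-test
> sudo mount -o loop /tmp/inode-test.img /tmp/inode-test
>
> for i in $(seq 2000); do sudo touch /tmp/inode-test/f$i || break; done
>
> df -h /tmp/inode-test   # plenty of blocks still free
> df -i /tmp/inode-test   # IUse% at 100%
> ```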
>
> It's a perfectly valid question.
>
> Best Regards,
> Strahil Nikolov
>


-- 
David Cunningham, Voisonics Limited
http://voisonics.com/
USA: +1 213 221 1092
New Zealand: +64 (0)28 2558 3782