<html>
  <head>
    <meta http-equiv="Content-Type" content="text/html; charset=UTF-8">
  </head>
  <body>
    <p>Hi,</p>
    <p>If I understand this correctly, to remove the "No space left on
      device" error I either have to free up 10% of the space on each
      brick, or free up a smaller amount and lower
      cluster.min-free-disk.  Is this correct?</p>
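    <p>(If I'm doing the arithmetic right, 10% of a 164T brick is about
      16.4T, and 10% of a 91T brick is about 9.1T, so with only ~320G
      free, brick1, brick2 and brick3 are all far below the current
      threshold.)</p>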
    <p>I have found the following command for setting
      cluster.min-free-disk:</p>
    <ul>
      <li>
        <pre>gluster volume set &lt;volume&gt; cluster.min-free-disk &lt;value&gt;</pre>
      </li>
    </ul>
    <p>Can this be done while the volume is live?  Does the
      &lt;value&gt; need to be an integer?</p>
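    <p>For example, if I'm reading the docs correctly, the value can be
      given either as a percentage or as an absolute size, so I would
      guess something like one of the following (hypothetical values,
      using our volume name data-volume):</p>
    <ul>
      <li>
        <pre># hedged sketch: value given as a percentage
gluster volume set data-volume cluster.min-free-disk 5%

# or, if absolute sizes are accepted as the docs suggest, a size
gluster volume set data-volume cluster.min-free-disk 500GB</pre>
      </li>
    </ul>
    <p>(The 5% and 500GB above are made-up example values.  My
      understanding is that "volume set" takes effect on a started
      volume, but I'd appreciate confirmation.)</p>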
    <p>Thanks</p>
    <p>Pat</p>
    <p><br>
    </p>
    <div class="moz-cite-prefix">On 3/10/20 2:45 PM, Pat Haley wrote:<br>
    </div>
    <blockquote type="cite"
      cite="mid:a2854603-acbc-86af-95f4-826a1f07aece@mit.edu">
      <br>
      Hi,
      <br>
      <br>
      I get the following
      <br>
      <br>
      [root@mseas-data2 bricks]# gluster  volume get data-volume all |
      grep cluster.min-free
      <br>
      cluster.min-free-disk 10%
      <br>
      cluster.min-free-inodes 5%
      <br>
      <br>
      <br>
      On 3/10/20 2:34 PM, Strahil Nikolov wrote:
      <br>
      <blockquote type="cite">On March 10, 2020 8:14:41 PM GMT+02:00,
        Pat Haley <a class="moz-txt-link-rfc2396E" href="mailto:phaley@mit.edu">&lt;phaley@mit.edu&gt;</a> wrote:
        <br>
        <blockquote type="cite">HI,
          <br>
          <br>
          After some more poking around in the logs (specifically the
          brick logs)
          <br>
          <br>
            * brick1 &amp; brick2 have both been recording "No space
          left on device"
          <br>
              messages today (as recently as 15 minutes ago)
          <br>
            * brick3 last recorded a "No space left on device" message
          last night
          <br>
              around 10:30pm
          <br>
            * brick4 has no such messages in its log file
          <br>
          <br>
          Note brick1 &amp; brick2 are on one server, brick3 and brick4
          are on the
          <br>
          second server.
          <br>
          <br>
          Pat
          <br>
          <br>
          <br>
          On 3/10/20 11:51 AM, Pat Haley wrote:
          <br>
          <blockquote type="cite">Hi,
            <br>
            <br>
            We have developed a problem with Gluster reporting "No space
            left on
            <br>
            device." even though "df" of both the gluster filesystem and
            the
            <br>
            underlying bricks show space available (details below).  Our
            inode
            <br>
            usage is between 1-3%.  We are running gluster 3.7.11 in a
            distributed volume across 2 servers (2 bricks
            each). We have followed the thread
            <br>
            <br>
            <a class="moz-txt-link-freetext" href="https://lists.gluster.org/pipermail/gluster-users/2020-March/037821.html">https://lists.gluster.org/pipermail/gluster-users/2020-March/037821.html</a>
            <br>
            <br>
            but haven't found a solution yet.
            <br>
            <br>
            Last night we ran a rebalance which appeared successful (and
            have
            <br>
            since cleared up some more space which seems to have mainly
            been on
            <br>
            one brick).  There were intermittent erroneous "No space..."
            messages
            <br>
            last night, but they have become much more frequent today.
            <br>
            <br>
            Any help would be greatly appreciated.
            <br>
            <br>
            Thanks
            <br>
            <br>
            ---------------------------
            <br>
            [root@mseas-data2 ~]# df -h
            <br>
            ---------------------------
            <br>
            Filesystem      Size  Used Avail Use% Mounted on
            <br>
            /dev/sdb        164T  164T  324G 100% /mnt/brick2
            <br>
            /dev/sda        164T  164T  323G 100% /mnt/brick1
            <br>
            ---------------------------
            <br>
            [root@mseas-data2 ~]# df -i
            <br>
            ---------------------------
            <br>
            Filesystem         Inodes    IUsed      IFree IUse% Mounted
            on
            <br>
            /dev/sdb       1375470800 31207165 1344263635    3%
            /mnt/brick2
            <br>
            /dev/sda       1384781520 28706614 1356074906    3%
            /mnt/brick1
            <br>
            <br>
            ---------------------------
            <br>
            [root@mseas-data3 ~]# df -h
            <br>
            ---------------------------
            <br>
            /dev/sda               91T   91T  323G 100%
            /export/sda/brick3
            <br>
            /dev/mapper/vg_Data4-lv_Data4
            <br>
                                    91T   88T  3.4T  97%
            /export/sdc/brick4
            <br>
            ---------------------------
            <br>
            [root@mseas-data3 ~]# df -i
            <br>
            ---------------------------
            <br>
            /dev/sda              679323496  9822199  669501297    2% /export/sda/brick3
            <br>
            /dev/mapper/vg_Data4-lv_Data4
            <br>
                                  3906272768 11467484 3894805284    1% /export/sdc/brick4
            <br>
            <br>
            <br>
            <br>
            ---------------------------------------
            <br>
            [root@mseas-data2 ~]# gluster --version
            <br>
            ---------------------------------------
            <br>
            glusterfs 3.7.11 built on Apr 27 2016 14:09:22
            <br>
            Repository revision: git://git.gluster.com/glusterfs.git
            <br>
            Copyright (c) 2006-2011 Gluster Inc.
            <a class="moz-txt-link-rfc2396E" href="http://www.gluster.com">&lt;http://www.gluster.com&gt;</a>
            <br>
            GlusterFS comes with ABSOLUTELY NO WARRANTY.
            <br>
            You may redistribute copies of GlusterFS under the terms of
            the GNU
            <br>
            General Public License.
            <br>
            <br>
            <br>
            <br>
            -----------------------------------------
            <br>
            [root@mseas-data2 ~]# gluster volume info
            <br>
            -----------------------------------------
            <br>
            Volume Name: data-volume
            <br>
            Type: Distribute
            <br>
            Volume ID: c162161e-2a2d-4dac-b015-f31fd89ceb18
            <br>
            Status: Started
            <br>
            Number of Bricks: 4
            <br>
            Transport-type: tcp
            <br>
            Bricks:
            <br>
            Brick1: mseas-data2:/mnt/brick1
            <br>
            Brick2: mseas-data2:/mnt/brick2
            <br>
            Brick3: mseas-data3:/export/sda/brick3
            <br>
            Brick4: mseas-data3:/export/sdc/brick4
            <br>
            Options Reconfigured:
            <br>
            nfs.export-volumes: off
            <br>
            nfs.disable: on
            <br>
            performance.readdir-ahead: on
            <br>
            diagnostics.brick-sys-log-level: WARNING
            <br>
            nfs.exports-auth-enable: on
            <br>
            server.allow-insecure: on
            <br>
            auth.allow: *
            <br>
            disperse.eager-lock: off
            <br>
            performance.open-behind: off
            <br>
            performance.md-cache-timeout: 60
            <br>
            network.inode-lru-limit: 50000
            <br>
            diagnostics.client-log-level: ERROR
            <br>
            <br>
            <br>
            <br>
--------------------------------------------------------------
            <br>
            [root@mseas-data2 ~]# gluster volume status data-volume
            detail
            <br>
--------------------------------------------------------------
            <br>
            Status of volume: data-volume
            <br>
            <br>
            ------------------------------------------------------------------------------
            <br>
            <br>
            Brick                : Brick
            mseas-data2:/mnt/brick1
            <br>
            TCP Port             : 49154
            <br>
            RDMA Port            : 0
            <br>
            Online               : Y
            <br>
            Pid                  : 4601
            <br>
            File System          : xfs
            <br>
            Device               : /dev/sda
            <br>
            Mount Options        : rw
            <br>
            Inode Size           : 256
            <br>
            Disk Space Free      : 318.8GB
            <br>
            Total Disk Space     : 163.7TB
            <br>
            Inode Count          : 1365878288
            <br>
            Free Inodes          : 1337173596
            <br>
            <br>
            ------------------------------------------------------------------------------
            <br>
            <br>
            Brick                : Brick
            mseas-data2:/mnt/brick2
            <br>
            TCP Port             : 49155
            <br>
            RDMA Port            : 0
            <br>
            Online               : Y
            <br>
            Pid                  : 7949
            <br>
            File System          : xfs
            <br>
            Device               : /dev/sdb
            <br>
            Mount Options        : rw
            <br>
            Inode Size           : 256
            <br>
            Disk Space Free      : 319.8GB
            <br>
            Total Disk Space     : 163.7TB
            <br>
            Inode Count          : 1372421408
            <br>
            Free Inodes          : 1341219039
            <br>
            <br>
            ------------------------------------------------------------------------------
            <br>
            <br>
            Brick                : Brick
            mseas-data3:/export/sda/brick3
            <br>
            TCP Port             : 49153
            <br>
            RDMA Port            : 0
            <br>
            Online               : Y
            <br>
            Pid                  : 4650
            <br>
            File System          : xfs
            <br>
            Device               : /dev/sda
            <br>
            Mount Options        : rw
            <br>
            Inode Size           : 512
            <br>
            Disk Space Free      : 325.3GB
            <br>
            Total Disk Space     : 91.0TB
            <br>
            Inode Count          : 692001992
            <br>
            Free Inodes          : 682188893
            <br>
            <br>
            ------------------------------------------------------------------------------
            <br>
            <br>
            Brick                : Brick
            mseas-data3:/export/sdc/brick4
            <br>
            TCP Port             : 49154
            <br>
            RDMA Port            : 0
            <br>
            Online               : Y
            <br>
            Pid                  : 23772
            <br>
            File System          : xfs
            <br>
            Device               : /dev/mapper/vg_Data4-lv_Data4
            <br>
            Mount Options        : rw
            <br>
            Inode Size           : 256
            <br>
            Disk Space Free      : 3.4TB
            <br>
            Total Disk Space     : 90.9TB
            <br>
            Inode Count          : 3906272768
            <br>
            Free Inodes          : 3894809903
            <br>
            <br>
          </blockquote>
        </blockquote>
        Hi Pat,
        <br>
        <br>
        What is the output of:
        <br>
        gluster  volume get data-volume all | grep cluster.min-free
        <br>
        <br>
        1% of 164T is 1640G, but in your case you have only 324G,
        which is way lower.
        <br>
        <br>
        Best Regards,
        <br>
        Strahil Nikolov
        <br>
      </blockquote>
      <br>
    </blockquote>
    <pre class="moz-signature" cols="72">-- 

-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-
Pat Haley                          Email:  <a class="moz-txt-link-abbreviated" href="mailto:phaley@mit.edu">phaley@mit.edu</a>
Center for Ocean Engineering       Phone:  (617) 253-6824
Dept. of Mechanical Engineering    Fax:    (617) 253-8125
MIT, Room 5-213                    <a class="moz-txt-link-freetext" href="http://web.mit.edu/phaley/www/">http://web.mit.edu/phaley/www/</a>
77 Massachusetts Avenue
Cambridge, MA  02139-4301
</pre>
  </body>
</html>