Hi,

After some more poking around in the logs (specifically the brick
logs):

  * brick1 & brick2 have both been recording "No space left on device"
    messages today (as recently as 15 minutes ago)
  * brick3 last recorded a "No space left on device" message last
    night around 10:30pm
  * brick4 has no such messages in its log file

Note brick1 & brick2 are on one server; brick3 and brick4 are on the
second server. (A sketch of one way to tally these messages follows
below.)
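
For reference, a minimal way to tally the messages per brick log; this
is only a sketch, and it assumes the brick logs live in the default
/var/log/glusterfs/bricks/ directory:

    # Count "No space left on device" entries in each brick log and
    # report the timestamp of the most recent hit (run on each server).
    for log in /var/log/glusterfs/bricks/*.log; do
        n=$(grep -c "No space left on device" "$log")
        last=$(grep "No space left on device" "$log" | tail -n 1 |
               awk '{print $1, $2}')
        echo "$log: $n hits, last at ${last:-n/a}"
    done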

Pat

On 3/10/20 11:51 AM, Pat Haley wrote:

Hi,

We have developed a problem with Gluster reporting "No space left on
device", even though "df" of both the gluster filesystem and the
underlying bricks shows space available (details below). Our inode
usage is between 1% and 3%. We are running gluster 3.7.11 in a
distributed volume across 2 servers (2 bricks each). We have followed
the thread
https://lists.gluster.org/pipermail/gluster-users/2020-March/037821.html
but haven't found a solution yet.
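
As a quick probe of the failure from the client side, something like
the following can be used (/gdata here is a hypothetical stand-in for
the volume's actual FUSE mount point):

    df -h /gdata    # client-side view of data-volume (hypothetical path)
    dd if=/dev/zero of=/gdata/enospc-probe bs=1M count=10 &&
        rm -f /gdata/enospc-probe

If the dd fails with "No space left on device" while df still shows
free space, that reproduces the mismatch described above.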

Last night we ran a rebalance, which appeared successful (and we have
since cleared up some more space, which seems to have been mainly on
one brick). There were intermittent erroneous "No space..." messages
last night, but they have become much more frequent today.
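
For reference, the per-node outcome of that rebalance can be
re-checked at any time with the standard status subcommand:

    gluster volume rebalance data-volume status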

Any help would be greatly appreciated.

Thanks

---------------------------
[root@mseas-data2 ~]# df -h
---------------------------
Filesystem            Size  Used  Avail  Use%  Mounted on
/dev/sdb              164T  164T  324G   100%  /mnt/brick2
/dev/sda              164T  164T  323G   100%  /mnt/brick1

---------------------------
[root@mseas-data2 ~]# df -i
---------------------------
Filesystem            Inodes      IUsed     IFree       IUse%  Mounted on
/dev/sdb              1375470800  31207165  1344263635  3%     /mnt/brick2
/dev/sda              1384781520  28706614  1356074906  3%     /mnt/brick1

---------------------------
[root@mseas-data3 ~]# df -h
---------------------------
/dev/sda                       91T  91T  323G  100%  /export/sda/brick3
/dev/mapper/vg_Data4-lv_Data4  91T  88T  3.4T   97%  /export/sdc/brick4

---------------------------
[root@mseas-data3 ~]# df -i
---------------------------
/dev/sda                        679323496   9822199   669501297  2%  /export/sda/brick3
/dev/mapper/vg_Data4-lv_Data4  3906272768  11467484  3894805284  1%  /export/sdc/brick4

---------------------------------------
[root@mseas-data2 ~]# gluster --version
---------------------------------------
glusterfs 3.7.11 built on Apr 27 2016 14:09:22
Repository revision: git://git.gluster.com/glusterfs.git
Copyright (c) 2006-2011 Gluster Inc. <http://www.gluster.com>
GlusterFS comes with ABSOLUTELY NO WARRANTY.
You may redistribute copies of GlusterFS under the terms of the GNU
General Public License.

-----------------------------------------
[root@mseas-data2 ~]# gluster volume info
-----------------------------------------
Volume Name: data-volume
Type: Distribute
Volume ID: c162161e-2a2d-4dac-b015-f31fd89ceb18
Status: Started
Number of Bricks: 4
Transport-type: tcp
Bricks:
Brick1: mseas-data2:/mnt/brick1
Brick2: mseas-data2:/mnt/brick2
Brick3: mseas-data3:/export/sda/brick3
Brick4: mseas-data3:/export/sdc/brick4
Options Reconfigured:
nfs.export-volumes: off
nfs.disable: on
performance.readdir-ahead: on
diagnostics.brick-sys-log-level: WARNING
nfs.exports-auth-enable: on
server.allow-insecure: on
auth.allow: *
disperse.eager-lock: off
performance.open-behind: off
performance.md-cache-timeout: 60
network.inode-lru-limit: 50000
diagnostics.client-log-level: ERROR
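
Aside: cluster.min-free-disk is not among the reconfigured options
above, so it should still be at its default. Assuming the "volume get"
subcommand is available in this 3.7.11 install, the effective value
can be checked with:

    gluster volume get data-volume cluster.min-free-disk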

--------------------------------------------------------------
[root@mseas-data2 ~]# gluster volume status data-volume detail
--------------------------------------------------------------
Status of volume: data-volume
------------------------------------------------------------------------------
Brick                : Brick mseas-data2:/mnt/brick1
TCP Port             : 49154
RDMA Port            : 0
Online               : Y
Pid                  : 4601
File System          : xfs
Device               : /dev/sda
Mount Options        : rw
Inode Size           : 256
Disk Space Free      : 318.8GB
Total Disk Space     : 163.7TB
Inode Count          : 1365878288
Free Inodes          : 1337173596
------------------------------------------------------------------------------
Brick                : Brick mseas-data2:/mnt/brick2
TCP Port             : 49155
RDMA Port            : 0
Online               : Y
Pid                  : 7949
File System          : xfs
Device               : /dev/sdb
Mount Options        : rw
Inode Size           : 256
Disk Space Free      : 319.8GB
Total Disk Space     : 163.7TB
Inode Count          : 1372421408
Free Inodes          : 1341219039
------------------------------------------------------------------------------
Brick                : Brick mseas-data3:/export/sda/brick3
TCP Port             : 49153
RDMA Port            : 0
Online               : Y
Pid                  : 4650
File System          : xfs
Device               : /dev/sda
Mount Options        : rw
Inode Size           : 512
Disk Space Free      : 325.3GB
Total Disk Space     : 91.0TB
Inode Count          : 692001992
Free Inodes          : 682188893
------------------------------------------------------------------------------
Brick                : Brick mseas-data3:/export/sdc/brick4
TCP Port             : 49154
RDMA Port            : 0
Online               : Y
Pid                  : 23772
File System          : xfs
Device               : /dev/mapper/vg_Data4-lv_Data4
Mount Options        : rw
Inode Size           : 256
Disk Space Free      : 3.4TB
Total Disk Space     : 90.9TB
Inode Count          : 3906272768
Free Inodes          : 3894809903
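
As a convenience, just the per-brick free-space figures can be pulled
out of that detail output with something like:

    gluster volume status data-volume detail |
        grep -E "^(Brick |Disk Space Free)"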

--
-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-
Pat Haley                          Email:  phaley@mit.edu
Center for Ocean Engineering       Phone:  (617) 253-6824
Dept. of Mechanical Engineering    Fax:    (617) 253-8125
MIT, Room 5-213                    http://web.mit.edu/phaley/www/
77 Massachusetts Avenue
Cambridge, MA 02139-4301