<p dir="ltr">That seems a gluster control.</p>
<p dir="ltr">Still, for me the issue is quite obvious - you are at 100% (or almost)&nbsp; storage and you should rebalance your VMs.</p>
<p dir="ltr">Can you do a storage migration from the storage using gluster volumes at 100% to another storage whose gluster volumes are not so full (for example those that are 88% full)?</p>
<p dir="ltr">Best Regards,<br>
Strahil Nikolov</p>
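To see how much space the individual bricks really have left before picking a migration target, something like the following could be used (the volume name and brick path are placeholders):

    gluster volume status myvol detail   # reports total and free disk space per brick
    df -h /path/to/brick                 # the same view from the brick file system itself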
<div class="quote">On Sep 19, 2019 11:43, Stefan Solbrig &lt;stefan.solbrig@ur.de&gt; wrote:<br type='attribution'><blockquote class="quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex"><div style="word-wrap:break-word"><div>Dear all,</div><div><br /></div><div>I have a situation where &#34;mkdir&#34; on a client produces stale file handles.</div><div>This happened after upgrading from 3.12 to 6.5</div><div><br /></div><div>I believe I found the reason for it:</div><div>6.5 (but not 3.12) checks if there is space left on the device before doing a &#34;mkdir&#34;, but calculates the &#34;fullness&#34; in percent.   In my situations I have bricks that seem 100% full although there is plenty space left on the device (several GBytes, see listing below).  In this situation, a &#34;mkdir&#34; is not performed on bricks that are 100% full, but the &#34;mkdir&#34; succeeds from a user perspective.  Then, doing a &#34;ls&#34; on the recently created directory leads to the message &#34;stale file handle&#34;.</div><div><br /></div><div>I believe the call sequence is more or less this:</div><div><br /></div><div>server-rpc-fops.c:539:server_mkdir_cbk</div><div>server-rpc-fops.c:2666:server_mkdir_resume</div><div>server-rpc-fops.c:5242:server3_3_mkdir</div><div>posix-entry-ops.c:625:posix_mkdir</div><div>posix-helpers.c:2271</div><div><br /></div><div>My questions are:</div><div>* is it meant to operate in this way?</div><div>* is there a built-in way to fix the inconsistent directories? </div><div>(I tried creating the missing directories on the bricks by hand, which seemed to fix the issue, but I&#39;m not sure if this will introduce other problems.)</div><div><br /></div><div><br /></div><div>The obvious (good) fix would be to redistribute the data such that the 100% full bricks will have enough free space. However, if a user writes a really large file, the problem can re-occur any time... </div><div><br /></div><div>best wishes,</div><div>Stefan</div><div><br /></div><div><br /></div><div>PS:</div><div>File system listing.  
Each file system is served as a brick, in a distribute-only system.</div><div><br /></div><div><div style="margin:0px;font-size:11px;line-height:normal;font-family:&#39;menlo&#39;;background-color:rgb( 255 , 248 , 232 )">Filesystem                                       Size  Used Avail Use% Mounted on</div><div style="margin:0px;font-size:11px;line-height:normal;font-family:&#39;menlo&#39;;background-color:rgb( 255 , 248 , 232 )">/dev/mapper/vgosb03pool06vd03-lvosb03pool06vd03   30T   27T  3.8T  88% /gl/lvosb03pool06vd03</div><div style="margin:0px;font-size:11px;line-height:normal;font-family:&#39;menlo&#39;;background-color:rgb( 255 , 248 , 232 )">/dev/mapper/vgosb03pool06vd02-lvosb03pool06vd02   30T   27T  3.8T  88% /gl/lvosb03pool06vd02</div><div style="margin:0px;font-size:11px;line-height:normal;font-family:&#39;menlo&#39;;background-color:rgb( 255 , 248 , 232 )">/dev/mapper/vgosb03pool06vd01-lvosb03pool06vd01   30T   27T  3.7T  88% /gl/lvosb03pool06vd01</div><div style="margin:0px;font-size:11px;line-height:normal;font-family:&#39;menlo&#39;;background-color:rgb( 255 , 248 , 232 )">/dev/mapper/vgosb03pool01vd01-lvosb03pool01vd01   30T   30T  7.8G 100% /gl/lvosb03pool01vd01</div><div style="margin:0px;font-size:11px;line-height:normal;font-family:&#39;menlo&#39;;background-color:rgb( 255 , 248 , 232 )">/dev/mapper/vgosb03pool01vd02-lvosb03pool01vd02   30T   30T   41G 100% /gl/lvosb03pool01vd02</div><div style="margin:0px;font-size:11px;line-height:normal;font-family:&#39;menlo&#39;;background-color:rgb( 255 , 248 , 232 )">/dev/mapper/vgosb03pool01vd03-lvosb03pool01vd03   30T   29T  1.5T  96% /gl/lvosb03pool01vd03</div><div style="margin:0px;font-size:11px;line-height:normal;font-family:&#39;menlo&#39;;background-color:rgb( 255 , 248 , 232 )">/dev/mapper/vgosb03pool01vd04-lvosb03pool01vd04   30T   30T   17G 100% /gl/lvosb03pool01vd04</div><div style="margin:0px;font-size:11px;line-height:normal;font-family:&#39;menlo&#39;;background-color:rgb( 255 , 248 , 232 )">/dev/mapper/vgosb03pool02vd01-lvosb03pool02vd01   30T   30T   57G 100% /gl/lvosb03pool02vd01</div><div style="margin:0px;font-size:11px;line-height:normal;font-family:&#39;menlo&#39;;background-color:rgb( 255 , 248 , 232 )">/dev/mapper/vgosb03pool02vd02-lvosb03pool02vd02   30T   30T   29G 100% /gl/lvosb03pool02vd02</div><div style="margin:0px;font-size:11px;line-height:normal;font-family:&#39;menlo&#39;;background-color:rgb( 255 , 248 , 232 )">/dev/mapper/vgosb03pool02vd03-lvosb03pool02vd03   30T   30T   26G 100% /gl/lvosb03pool02vd03</div><div style="margin:0px;font-size:11px;line-height:normal;font-family:&#39;menlo&#39;;background-color:rgb( 255 , 248 , 232 )">/dev/mapper/vgosb03pool02vd04-lvosb03pool02vd04   31T   31T  9.7G 100% /gl/lvosb03pool02vd04</div><div style="margin:0px;font-size:11px;line-height:normal;font-family:&#39;menlo&#39;;background-color:rgb( 255 , 248 , 232 )">/dev/mapper/vgosb03pool03vd01-lvosb03pool03vd01   30T   30T   93G 100% /gl/lvosb03pool03vd01</div><div style="margin:0px;font-size:11px;line-height:normal;font-family:&#39;menlo&#39;;background-color:rgb( 255 , 248 , 232 )">/dev/mapper/vgosb03pool03vd02-lvosb03pool03vd02   30T   30T   23G 100% /gl/lvosb03pool03vd02</div><div style="margin:0px;font-size:11px;line-height:normal;font-family:&#39;menlo&#39;;background-color:rgb( 255 , 248 , 232 )">/dev/mapper/vgosb03pool03vd03-lvosb03pool03vd03   30T   30T  163G 100% /gl/lvosb03pool03vd03</div><div style="margin:0px;font-size:11px;line-height:normal;font-family:&#39;menlo&#39;;background-color:rgb( 255 
, 248 , 232 )">/dev/mapper/vgosb03pool03vd04-lvosb03pool03vd04   31T   30T  384G  99% /gl/lvosb03pool03vd04</div><div style="margin:0px;font-size:11px;line-height:normal;font-family:&#39;menlo&#39;;background-color:rgb( 255 , 248 , 232 )">/dev/mapper/vgosb03pool04vd01-lvosb03pool04vd01   30T   29T  1.1T  97% /gl/lvosb03pool04vd01</div><div style="margin:0px;font-size:11px;line-height:normal;font-family:&#39;menlo&#39;;background-color:rgb( 255 , 248 , 232 )">/dev/mapper/vgosb03pool04vd02-lvosb03pool04vd02   30T   27T  3.9T  88% /gl/lvosb03pool04vd02</div><div style="margin:0px;font-size:11px;line-height:normal;font-family:&#39;menlo&#39;;background-color:rgb( 255 , 248 , 232 )">/dev/mapper/vgosb03pool04vd03-lvosb03pool04vd03   30T   29T  1.9T  94% /gl/lvosb03pool04vd03</div><div style="margin:0px;font-size:11px;line-height:normal;font-family:&#39;menlo&#39;;background-color:rgb( 255 , 248 , 232 )">/dev/mapper/vgosb03pool04vd04-lvosb03pool04vd04   31T   29T  1.9T  94% /gl/lvosb03pool04vd04</div><div style="margin:0px;font-size:11px;line-height:normal;font-family:&#39;menlo&#39;;background-color:rgb( 255 , 248 , 232 )">/dev/mapper/vgosb03pool05vd01-lvosb03pool05vd01   30T   28T  2.3T  93% /gl/lvosb03pool05vd01</div><div style="margin:0px;font-size:11px;line-height:normal;font-family:&#39;menlo&#39;;background-color:rgb( 255 , 248 , 232 )">/dev/mapper/vgosb03pool05vd02-lvosb03pool05vd02   30T   27T  3.9T  88% /gl/lvosb03pool05vd02</div><div style="margin:0px;font-size:11px;line-height:normal;font-family:&#39;menlo&#39;;background-color:rgb( 255 , 248 , 232 )">/dev/mapper/vgosb03pool05vd03-lvosb03pool05vd03   30T   27T  3.9T  88% /gl/lvosb03pool05vd03</div><div style="margin:0px;font-size:11px;line-height:normal;font-family:&#39;menlo&#39;;background-color:rgb( 255 , 248 , 232 )">/dev/mapper/vgosb03pool05vd04-lvosb03pool05vd04   31T   27T  3.9T  88% /gl/lvosb03pool05vd04</div></div><div><br /></div><br /><div>-- <br />Dr. Stefan Solbrig<br />Universität Regensburg, Fakultät für Physik,<br />93040 Regensburg, Germany<br />Tel &#43;49-941-943-2097<br /><br /></div><br /></div></blockquote></div>