<div dir="ltr"><div dir="ltr"><br></div><br><div class="gmail_quote"><div dir="ltr" class="gmail_attr">On Tue, Mar 12, 2019 at 8:48 PM Taste-Of-IT <<a href="mailto:kontakt@taste-of-it.de">kontakt@taste-of-it.de</a>> wrote:<br></div><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex"><div>Hi,<br><br><div>I found a bug report about this in version 3.10; I am running 3.13.2, for your information. As far as I can see, the default 1% rule is still active and does not honor the configured 0 that should disable storage.reserve.<br></div><div><br></div></div></blockquote><div>Let me verify this bug on release 6 and I will update you. (My recommendation, though, is not to disable it, as that could lead to other problems.)</div><div> </div><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex"><div><div></div><div>So what can I do? Finish the brick removal? Upgrade to a newer version and rerun the rebalance?<br></div><div><br></div><div>thx</div><div>Taste<br></div><br>On 12.03.2019 12:45:51, Taste-Of-IT wrote: <br><blockquote class="gmail-m_1135017385456193970felamimail-body-blockquote"><div><span class="gmail-m_1135017385456193970felamimail-body-signature"></span>Hi Susant,</div><div><br></div><div>thanks for your fast reply and for pointing me to that log. I was able to find the problem: "dht-rebalance.c:1052:__dht_check_free_space] 0-vol4-dht: Could not find any subvol with space accomodating the file"</div><div><br></div><div>But volume detail and df -h show xTB of free disk space and also free inodes. 
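A quick way to check what value the cluster actually applied is a sketch like the following; the volume name vol4 is taken from the log line quoted above, everything else is standard gluster CLI:

```shell
# Show the effective value of storage.reserve for the volume
# (volume name vol4 taken from the log line quoted above)
gluster volume get vol4 storage.reserve

# If disabling it caused trouble, revert to the 1% default
gluster volume reset vol4 storage.reserve
```

These commands are administrative and only make sense against a live cluster, so treat them as a sketch rather than a verified recipe.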
<br></div><div><br></div><div>Options Reconfigured:<br>performance.client-io-threads: on<br>storage.reserve: 0<br>performance.parallel-readdir: off<br>performance.readdir-ahead: off<br>auth.allow: 192.168.0.*<br>nfs.disable: off<br>transport.address-family: inet</div><div><br></div><div>OK, since there is enough disk space on the other bricks and I did not actually complete the brick removal, can I rerun remove-brick to migrate the last files and folders?</div><div><br></div><div>Thanks</div><div>Taste<br></div><div><br></div><div><br></div>On 12.03.2019 10:49:13, Susant Palai wrote: <br><blockquote class="gmail-m_1135017385456193970felamimail-body-blockquote"><div dir="ltr"><div>Would it be possible for you to pass along the rebalance log file from the node from which you want to remove the brick? (location: /var/log/glusterfs/<volume-name-rebalance.log>)</div><div><br></div><div>Plus the following information:<br></div><div> 1 - gluster volume info</div><div> 2 - gluster volume status</div><div> 3 - df -h output on all 3 nodes</div><div><br></div><div><br></div><div>Susant</div><br><div class="gmail_quote"><div dir="ltr" class="gmail_attr">On Tue, Mar 12, 2019 at 3:08 PM Taste-Of-IT <<a href="mailto:kontakt@taste-of-it.de" target="_blank">kontakt@taste-of-it.de</a>> wrote:<br></div><blockquote class="gmail-m_1135017385456193970felamimail-body-blockquote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex">Hi,<br>I have a 3-node distributed Gluster setup with one volume spanning all 3 nodes/bricks. I want to remove one brick, so I ran gluster volume remove-brick <vol> <brickname> start. The job completes but shows 11960 failures and transfers only 5TB of the 15TB of data; files and folders remain on the brick being removed. I have not yet run the final command with "commit". 
Both other nodes each have over 6TB of free space, so they could theoretically hold the remaining data from brick 3.<br><br>Need help.<br>thx<br>Taste<br>_______________________________________________<br>Gluster-users mailing list<br><a href="mailto:Gluster-users@gluster.org" target="_blank">Gluster-users@gluster.org</a><br><a href="https://lists.gluster.org/mailman/listinfo/gluster-users" target="_blank">https://lists.gluster.org/mailman/listinfo/gluster-users</a><br></blockquote></div></div>
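For reference, the usual remove-brick flow looks like the sketch below; vol4 and the brick path are placeholders for illustration, and rerunning start after the free-space issue is resolved retries migration of the files left behind:

```shell
# Retry the migration after fixing the free-space issue
# (vol4 and node3:/data/brick are hypothetical placeholders)
gluster volume remove-brick vol4 node3:/data/brick start

# Watch progress and the failure count
gluster volume remove-brick vol4 node3:/data/brick status

# Commit only once status reports completed with 0 failures;
# committing earlier removes the brick with files still on it
gluster volume remove-brick vol4 node3:/data/brick commit
```

As with any destructive operation, commit should wait until status confirms the migration finished cleanly.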
</blockquote></blockquote><br></div></blockquote></div></div>
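To see how widespread the free-space failure was, counting the quoted error in the rebalance log is a quick check. A sketch, assuming the log path follows Susant's note with vol4 as the volume name:

```shell
# Count how many file migrations failed the free-space check
grep -c 'Could not find any subvol with space' \
    /var/log/glusterfs/vol4-rebalance.log
```

If the count is close to the 11960 reported failures, the storage.reserve behavior is very likely the sole cause.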