<div dir="ltr"><div dir="ltr"><br></div><br><div class="gmail_quote"><div dir="ltr" class="gmail_attr">On Wed, Mar 13, 2019 at 2:39 PM Taste-Of-IT <<a href="mailto:kontakt@taste-of-it.de">kontakt@taste-of-it.de</a>> wrote:<br></div><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex"><div>Hi,<br>I stopped the remove-brick operation, then upgraded Debian to Stretch, because the Gluster repository for Jessie and GlusterFS 4.0 latest threw an HTTP 404 error that I could not fix in time. So I upgraded to Stretch and then to the latest GlusterFS 4.0.2. Then I ran remove-brick again, which led to the same error.<br><br>Brick1 and Brick2 have a total disk size of 32.6TB and each now has 3.3TB free. Brick3, the one to be removed, has a total of 16.3TB with 7.7TB free. The files to move range from a few KB to over 40GB, so approximately 7TB has to move. Granted, that cannot all be stored on 3.3TB*2, but as I understand it, rebalance should move files until the free disk space on Brick1 and Brick2 is nearly zero. Right? 
Ok, I will add a temp disk and move xTB out of the volume.<br><br>All in all, I think it's still a bug.<br></div></blockquote><div><br></div><div>Ok, then please file a bug with the details and we can discuss there.</div><div><br></div><div>Susant</div><div> </div><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex"><div>Thx.<span class="gmail-m_3381754288297216648felamimail-body-signature"></span><br><br>On 13.03.2019 08:33:35, Susant Palai wrote: <br><blockquote class="gmail-m_3381754288297216648felamimail-body-blockquote"><div dir="ltr"><div dir="ltr"><br></div><br><div class="gmail_quote"><div dir="ltr" class="gmail_attr">On Tue, Mar 12, 2019 at 5:16 PM Taste-Of-IT <<a href="mailto:kontakt@taste-of-it.de" target="_blank">kontakt@taste-of-it.de</a>> wrote:<br></div><blockquote class="gmail-m_3381754288297216648felamimail-body-blockquote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex"><div><div><span class="gmail-m_3381754288297216648gmail-m_-6632649138106951608felamimail-body-signature"></span>Hi Susant,</div><div><br></div><div>thanks for your fast reply and for pointing me to that log. I was able to find the problem: "dht-rebalance.c:1052:__dht_check_free_space] 0-vol4-dht: Could not find any subvol with space accomodating the file"</div><div><br></div><div>But volume detail and df -h show xTB of free disk space and free inodes as well. 
<br></div><div><br></div><div>Options Reconfigured:<br>performance.client-io-threads: on<br>storage.reserve: 0<br>performance.parallel-readdir: off<br>performance.readdir-ahead: off<br>auth.allow: 192.168.0.*<br>nfs.disable: off<br>transport.address-family: inet</div><div><br></div><div>Ok, since there is enough disk space on the other bricks and I did not actually complete the remove-brick, can I rerun remove-brick to rebalance the remaining files and folders?</div></div></blockquote><div><br></div><div>Ideally, the error should not have been seen with disk space available on the target nodes. You can start remove-brick again and it should move the remaining set of files out to the other bricks.</div><div> </div><blockquote class="gmail-m_3381754288297216648felamimail-body-blockquote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex"><div><div><br></div><div>Thanks</div><div>Taste<br></div><div><br></div><div><br></div>On 12.03.2019 10:49:13, Susant Palai wrote: <br><blockquote class="gmail-m_3381754288297216648felamimail-body-blockquote"><div dir="ltr"><div>Would it be possible for you to pass along the rebalance log file from the node whose brick you want to remove? (location: /var/log/glusterfs/<volume-name-rebalance.log>)</div><div><br></div><div>Plus the following information:<br></div><div> 1 - gluster volume info </div><div> 2 - gluster volume status</div><div> 3 - df -h output on all 3 nodes</div><div><br></div><div><br></div><div>Susant</div><br><div class="gmail_quote"><div dir="ltr" class="gmail_attr">On Tue, Mar 12, 2019 at 3:08 PM Taste-Of-IT <<a href="mailto:kontakt@taste-of-it.de" target="_blank">kontakt@taste-of-it.de</a>> wrote:<br></div><blockquote class="gmail-m_3381754288297216648felamimail-body-blockquote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex">Hi,<br>I have a 3-node distributed Gluster setup with one volume spanning all 3 nodes/bricks. 
I want to remove one brick, so I ran gluster volume remove-brick <vol> <brickname> start. The job completes but shows 11960 failures and transfers only 5TB of the 15TB of data; files and folders are still on the volume on the brick to be removed. I have not yet run the final command with "commit". The other two nodes each have over 6TB of free space, so in theory they can hold the remaining data from Brick3.<br><br>Need help.<br>thx<br>Taste<br>_______________________________________________<br>Gluster-users mailing list<br><a href="mailto:Gluster-users@gluster.org" target="_blank">Gluster-users@gluster.org</a><br><a href="https://lists.gluster.org/mailman/listinfo/gluster-users" target="_blank">https://lists.gluster.org/mailman/listinfo/gluster-users</a><br></blockquote></div></div>
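For reference, the remove-brick workflow described in the original message above has three steps. This is only a sketch; the volume and brick names are placeholders, and "commit" permanently removes the brick, so it should be run only after "status" reports completion with no failures:

```shell
VOL=vol4                      # placeholder volume name
BRICK=node3:/data/brick3      # placeholder brick path

# Begin migrating data off the brick onto the remaining bricks
gluster volume remove-brick "$VOL" "$BRICK" start

# Watch progress; the failure count here is what showed 11960 in this thread
gluster volume remove-brick "$VOL" "$BRICK" status

# Only once status shows completed with zero failures and the brick is empty:
gluster volume remove-brick "$VOL" "$BRICK" commit
```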
</blockquote><br></div></blockquote></div></div>
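The diagnostics requested earlier in the thread can be gathered in one pass. A sketch, assuming the volume name vol4 (taken from the quoted log line) and the default rebalance log location mentioned above; df -i is included because the thread also discusses free inodes:

```shell
VOL=vol4   # assumption: volume name taken from the quoted log line

gluster volume info "$VOL"
gluster volume status "$VOL"
df -h      # run on all 3 nodes
df -i      # inode usage on all 3 nodes

# Count the free-space failures in the rebalance log on the removing node
grep -c "Could not find any subvol with space" \
    "/var/log/glusterfs/${VOL}-rebalance.log"
```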
</blockquote><br></div></blockquote>