<div dir="ltr"><div>Would it be possible for you to pass the rebalance log file on the node from which you want to remove the brick? (location : /var/log/glusterfs/<volume-name-rebalance.log>)</div><div><br></div><div>+ the following information:<br></div><div> 1 - gluster volume info </div><div> 2 - gluster volume status</div><div> 2 - df -h output on all 3 nodes</div><div><br></div><div><br></div><div>Susant</div><br><div class="gmail_quote"><div dir="ltr" class="gmail_attr">On Tue, Mar 12, 2019 at 3:08 PM Taste-Of-IT <<a href="mailto:kontakt@taste-of-it.de">kontakt@taste-of-it.de</a>> wrote:<br></div><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex">Hi,<br>i have a 3 Node Distributed Gluster. I have one Volume over all 3 Nodes / Bricks. I want to remove one Brick and run gluster volume remove-brick <vol> <brickname> start. The Job completes and shows 11960 failures and only transfers 5TB out of 15TB Data. I have still files and folders on this volume on the brick to remove. I actually didnt run the final command with "commit". Both other Nodes have each over 6TB of free Space, so it can hold the remaininge Data from Brick3 theoretically.<br>
<br>Need help.<br>thx<br>Taste<br>_______________________________________________<br>Gluster-users mailing list<br>
<a href="mailto:Gluster-users@gluster.org" target="_blank">Gluster-users@gluster.org</a><br>
<a href="https://lists.gluster.org/mailman/listinfo/gluster-users" rel="noreferrer" target="_blank">https://lists.gluster.org/mailman/listinfo/gluster-users</a><br>
</blockquote></div></div>
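<div dir="ltr"><div><br></div><div>For reference, a minimal sketch of the commands to gather the information requested above (&lt;vol&gt; and &lt;brickname&gt; are placeholders for the actual volume and brick names):</div><pre>
# 1 - volume layout and options
gluster volume info &lt;vol&gt;

# 2 - brick and process status
gluster volume status &lt;vol&gt;

# progress and failure count of the pending remove-brick operation
gluster volume remove-brick &lt;vol&gt; &lt;brickname&gt; status

# 3 - free space (run on all 3 nodes)
df -h

# quick scan of the rebalance log for the cause of the 11960 failures
grep -iE 'error|failed' /var/log/glusterfs/&lt;volume-name&gt;-rebalance.log | tail -n 50
</pre><div>Note: "commit" removes the brick even if files are still on it, so it should not be run until the failures are understood; "gluster volume remove-brick &lt;vol&gt; &lt;brickname&gt; stop" aborts the operation without removing the brick.</div></div>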