[Gluster-users] Removing Brick in Distributed GlusterFS

Taste-Of-IT kontakt at taste-of-it.de
Tue Mar 12 11:45:51 UTC 2019


Hi Susant,

thanks for your fast reply and for pointing me to that log. I was able to find the problem: "dht-rebalance.c:1052:__dht_check_free_space] 0-vol4-dht: Could not find any subvol with space accomodating the file"

But the volume detail output and df -h show xTB of free disk space and free inodes.
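For reference, this is roughly how I checked (volume name vol4 taken from the log line above; the brick path is a placeholder for my real mount point):

    # failures reported by the free-space check in the rebalance log
    grep "__dht_check_free_space" /var/log/glusterfs/vol4-rebalance.log | tail

    # free disk space and free inodes on the brick filesystem, run on every node
    df -h /path/to/brick
    df -i /path/to/brick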


Options Reconfigured:
performance.client-io-threads: on
storage.reserve: 0
performance.parallel-readdir: off
performance.readdir-ahead: off
auth.allow: 192.168.0.*
nfs.disable: off
transport.address-family: inet
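
(storage.reserve is already set to 0 above; just in case, this is how I would double-check the value that is actually in effect, again assuming the volume is vol4 as in the log line:)

    gluster volume get vol4 storage.reserve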

OK, since there is enough disk space on the other bricks and I actually didn't complete the remove-brick, can I rerun remove-brick to migrate the remaining files and folders?
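
Just to make explicit what I mean by rerunning (brick name is a placeholder, and this is exactly what I am asking about, not something I have verified works):

    # start the removal again, hoping it migrates the files that were left behind
    gluster volume remove-brick vol4 node3:/path/to/brick start

    # watch progress and the failure count
    gluster volume remove-brick vol4 node3:/path/to/brick status

    # only once status shows completed and no files remain on the brick
    gluster volume remove-brick vol4 node3:/path/to/brick commit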

Thanks
Taste


On 12.03.2019 10:49:13, Susant Palai wrote:

> Would it be possible for you to pass the rebalance log file on the node from which you want to remove the brick? (location: /var/log/glusterfs/<volume-name-rebalance.log>)
> 
> + the following information:
>  1 - gluster volume info
>  2 - gluster volume status
>  3 - df -h output on all 3 nodes
> 

> Susant
> 
> On Tue, Mar 12, 2019 at 3:08 PM Taste-Of-IT <kontakt at taste-of-it.de> wrote:
> > Hi,
> > I have a 3-node distributed Gluster setup with one volume spanning all 3 nodes/bricks. I want to remove one brick, so I ran gluster volume remove-brick <vol> <brickname> start. The job completes and shows 11960 failures, and it only transfers 5TB out of 15TB of data. There are still files and folders from this volume on the brick I want to remove. I did not yet run the final command with "commit". The other two nodes each have over 6TB of free space, so theoretically they can hold the remaining data from brick 3.
> > 
> > Need help.
> > thx
> > Taste
> > _______________________________________________
> > Gluster-users mailing list
> > Gluster-users at gluster.org
> > https://lists.gluster.org/mailman/listinfo/gluster-users



