<div dir="ltr"><div dir="ltr"><div dir="ltr">Hi,<div><br></div><div>I have a pure distributed Gluster volume with nine nodes, and I am trying to remove one node. I ran:</div><div>gluster volume remove-brick atlasglust nodename:/glusteratlas/brick007/gv0 start<br></div><div><br></div><div>It completed, but with around 17,000 failures:</div><div><br></div><div><div>       Node   Rebalanced-files      size     scanned   failures   skipped      status   run time in h:m:s</div><div>   --------   ----------------   -------   ---------   --------   -------   ---------   -----------------</div><div>   nodename            4185858    27.5TB     6746030      17488         0   completed           405:15:34</div></div><div><br></div><div>I can see that there is still 1.5 TB of data on the node I was trying to remove.</div><div><br></div><div>I am not sure what to do now. Should I run the remove-brick command again so that the files which failed can be retried?</div><div><br></div><div>Or should I run commit first and then try to remove the node again?</div><div><br></div><div>Please advise, as I don't want to lose any files.</div><div><br></div><div>Thanks,</div><div><br></div><div>Kashif</div></div></div></div>