[Gluster-users] gluster remove-brick
Nithya Balachandran
nbalacha at redhat.com
Mon Feb 4 05:08:36 UTC 2019
Hi,
The status shows quite a few failures. Please check the rebalance logs to
see why they occurred; we can decide what to do based on the errors.
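The rebalance log on the node that hosted the removed brick should record the
reason for each failed migration. Something along these lines should surface
the errors (the log path below is the usual default and may differ on your
setup):

    # run on the node hosting brick007; assumes the default log location
    grep -i " E " /var/log/glusterfs/atlasglust-rebalance.log | less

If you can share a few of those error lines, it will be easier to tell what
went wrong.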
Once you run a commit, the brick will no longer be part of the volume and
you will not be able to access the files still on that brick via the client.
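For reference, the commit step uses the same brick specification as the start
command you ran, roughly:

    # only once the failures are understood and the data is confirmed migrated
    gluster volume remove-brick atlasglust nodename:/glusteratlas/brick007/gv0 commit

so please hold off on it for now.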
Do you have sufficient space on the remaining bricks for the files on the
removed brick?
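A quick way to check is the per-brick capacity in the volume status output,
or df on each remaining brick mount, for example (the df path assumes your
other bricks follow the same naming as brick007):

    gluster volume status atlasglust detail | grep -E "Brick|Disk Space"
    # or, on each remaining node:
    df -h /glusteratlas/brick*/gv0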
Regards,
Nithya
On Mon, 4 Feb 2019 at 03:50, mohammad kashif <kashif.alig at gmail.com> wrote:
> Hi
>
> I have a pure distributed gluster volume with nine nodes and am trying to
> remove one node. I ran:
>
> gluster volume remove-brick atlasglust nodename:/glusteratlas/brick007/gv0 start
>
> It completed, but with around 17000 failures:
>
> Node       Rebalanced-files    size      scanned    failures   skipped   status      run time in h:m:s
> --------   ----------------   -------    --------   --------   -------   ---------   -----------------
> nodename            4185858    27.5TB     6746030      17488         0   completed           405:15:34
> I can see that there is still 1.5 TB of data on the node which I was
> trying to remove.
>
> I am not sure what to do now. Should I run the remove-brick command again so
> that the files which failed can be retried?
>
> Or should I run commit first and then try to remove the node again?
>
> Please advise, as I don't want to lose any files.
>
> Thanks
>
> Kashif