[Gluster-users] Turn off replication

Karthik Subrahmanya ksubrahm at redhat.com
Fri Apr 6 09:49:46 UTC 2018


Hi Jose,

Note that by switching to a pure distribute volume you will lose
availability if something goes bad on one of the bricks.

I am guessing you have an nX2 (distributed-replicate) volume.
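To confirm the layout, you can check the volume type and brick list first
(<volname> is a placeholder for your actual volume name):

    # gluster volume info <volname>

For an nX2 setup the output shows "Type: Distributed-Replicate" and a brick
count of the form "n x 2".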
If you want to preserve one copy of the data in each of the distribute
subvolumes, you can do that by decreasing the replica count as part of the
remove-brick operation.
If there are any inconsistencies, heal them first by running "gluster volume
heal <volname>" and waiting until the "gluster volume heal <volname> info"
output shows zero pending entries before removing the bricks, so that the
remaining copy has the correct data.
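As a rough sketch, assuming a 2x2 volume named <volname> with the second
copy of each replica pair on node2 (the volume name, host name and brick
paths here are placeholders for your actual setup):

    # gluster volume heal <volname>
    # gluster volume heal <volname> info    (repeat until no entries are pending)
    # gluster volume remove-brick <volname> replica 1 \
          node2:/bricks/brick1 node2:/bricks/brick2 force

This drops the replica count from 2 to 1 and leaves the volume as a plain
distribute volume, with one copy of the data on the remaining bricks.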
If you do not want to preserve the data, you can remove the bricks directly.
Even after the remove-brick operation the data will still be present on the
backend filesystems of the removed bricks; you have to erase it manually
(both the data and the .glusterfs folder).
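For example, on the node that hosted a removed brick (again, the brick path
is only a placeholder; double-check it, since this permanently deletes the
backend copy):

    # rm -rf /bricks/brick2/.glusterfs
    # rm -rf /bricks/brick2/*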
See [1] for more details on remove-brick.

[1].
https://docs.gluster.org/en/latest/Administrator%20Guide/Managing%20Volumes/#shrinking-volumes

HTH,
Karthik


On Thu, Apr 5, 2018 at 8:17 PM, Jose Sanchez <josesanc at carc.unm.edu> wrote:

>
> We have a Gluster setup with 2 nodes (distributed replication) and we
> would like to switch it to distributed mode. I know the data is
> duplicated between those nodes. What is the proper way of switching it to
> distributed? We would like to double, or at least gain, storage space on
> our gluster storage nodes. What happens with the data, and do I need to
> erase one of the nodes?
>
> Jose
>
>
> ---------------------------------
> Jose Sanchez
> Systems/Network Analyst
> Center of Advanced Research Computing
> 1601 Central Ave.
> MSC 01 1190
> Albuquerque, NM 87131-0001
> carc.unm.edu
> 575.636.4232
>
>
> _______________________________________________
> Gluster-users mailing list
> Gluster-users at gluster.org
> http://lists.gluster.org/mailman/listinfo/gluster-users
>