[Gluster-users] Increase redundancy on existing disperse volume

Benjamin Kingston list at nexusnebula.net
Wed Aug 1 16:10:18 UTC 2018


Hello, I accidentally sent this question from an email address that isn't
subscribed to the gluster-users list.
I have resent it from my mailing list address, but I don't see any of your
answers quoted here.
Thanks for your time; I've adjusted the mail recipients to avoid further
issues.

-ben


On Tue, Jul 31, 2018 at 8:02 PM Ashish Pandey <aspandey at redhat.com> wrote:

>
>
> I think I have replied all the questions you have asked.
> Let me know if you need any additional information.
>
> ---
> Ashish
> ------------------------------
> *From: *"Benjamin Kingston" <ben at nexusnebula.net>
> *To: *"gluster-users" <gluster-users at gluster.org>
> *Sent: *Tuesday, July 31, 2018 1:01:29 AM
> *Subject: *[Gluster-users] Increase redundancy on existing disperse volume
>
> I'm working to convert my 3x3 arbiter replicated volume into a disperse
> volume; however, I have to work with the existing disks, maybe adding
> another 1 or 2 new disks if necessary. I'm hoping to destroy the bricks on
> one of the replicated nodes and build them into the new disperse volume.
> I'm opting to host this volume on a set of controllers connected to a
> common backplane. I don't need help with this config, just with the
> constraints of disperse volumes.
>
> I have some questions about the disperse functionality (see the sketch
> after the list for the expansion commands I have in mind):
> 1. If I create a volume with redundancy 1 in the beginning, can I increase
> the redundancy to 2 or 3 after I add more bricks?
> 2. If I create the original volume with 6TB bricks, am I really stuck with
> 6TB bricks even if I add 2 or more 10TB bricks?
> 3. Is it required to extend a volume by the same number of bricks it was
> created with? If the original volume is made with 3 bricks, do I have to
> always add capacity in 3-brick increments?
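>
> For question 3, the expansion I'm thinking of would look roughly like this
> (same placeholder names as above); as far as I understand, add-brick on a
> disperse volume expects bricks in multiples of the original disperse count:
>
>     # add another full disperse set of bricks, then rebalance the layout
>     gluster volume add-brick dispvol \
>         node1:/bricks/b4 node2:/bricks/b5 node3:/bricks/b6
>     gluster volume rebalance dispvol start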
>
> _______________________________________________
> Gluster-users mailing list
> Gluster-users at gluster.org
> https://lists.gluster.org/mailman/listinfo/gluster-users
>
>