[Gluster-users] Increase redundancy on existing disperse volume
list at nexusnebula.net
Tue Jul 31 04:14:16 UTC 2018
I'm working to convert my 3x3 arbiter replicated volume into a disperse
volume; however, I have to work with the existing disks, maybe adding
another 1 or 2 new disks if necessary. I'm hoping to destroy the bricks on
one of the replicated nodes and rebuild them into a disperse volume.
I'm opting to host this volume on a set of controllers connected to a
common backplane. I don't need help with that configuration, just with the
constraints of disperse volumes.
I have some questions about the disperse functionality:
1. If I create a volume with redundancy 1 in the beginning, can I increase
the redundancy to 2 or 3 after I add more bricks?
2. If I create the original volume with 6TB bricks, am I really stuck with
6TB bricks even if I add 2 or more 10TB bricks?
3. Is it required to extend a volume by the same number of bricks it was
created with? If the original volume is made with 3 bricks, do I always
have to add capacity in 3-brick increments?
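For context, here is a rough sketch of the commands I have in mind; the
volume name, hostnames, and brick paths are placeholders, not my actual
layout:

```shell
# Hypothetical initial layout: 3 bricks in a 2+1 disperse set
# (redundancy 1). Hostnames node1..node3 and brick paths are
# placeholders for illustration only.
gluster volume create dispvol disperse 3 redundancy 1 \
    node1:/data/brick1 node2:/data/brick1 node3:/data/brick1

gluster volume start dispvol

# Later expansion -- question 3 above asks whether this always has
# to be another full set of 3 bricks (a new disperse subvolume):
gluster volume add-brick dispvol \
    node1:/data/brick2 node2:/data/brick2 node3:/data/brick2
```

This is only how I understand the CLI so far; corrections welcome.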