[Gluster-users] Some more questions

Jim Kinney jim.kinney at gmail.com
Wed May 9 19:22:20 UTC 2018

On Wed, 2018-05-09 at 18:26 +0000, Gandalf Corvotempesta wrote:
> Ok, some more questions as I'm still planning our SDS (but I'm prone
> to use LizardFS; gluster is too inflexible).
> Let's assume replica 3:
> 1) Currently, it is not possible to add a single server and rebalance
> like any other SDS (Ceph, Lizard, Moose, DRBD, ...), right? In
> replica 3, I have to add 3 new servers.

You can change the replica count. Add a fourth server, then add its
brick to the existing volume with gluster volume add-brick vol0
replica 4 <new-server>:<brick-path>
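As a hedged sketch of the full sequence (the volume name vol0, server name server4, and brick path /data/brick0 are placeholders, not from this thread):

```shell
# On an existing cluster node: bring the new server into the trusted pool
gluster peer probe server4

# Raise the replica count from 3 to 4 by adding the new server's brick
gluster volume add-brick vol0 replica 4 server4:/data/brick0

# Trigger a full self-heal so the new brick receives a copy of the data
gluster volume heal vol0 full
```

Note that raising the replica count adds redundancy, not capacity; every brick in the replica set still holds a full copy.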
> 2) The same should be by add disks on spare slots on existing
> servers.
> Always a multiple of replica count, thus 1 disk per server
> 3) Can I grow the cluster by replacing 3 disks with bigger ones? For
> example, with 12 2TB disks on each server, can I replace 3 of them
> (1 per server) with 4TB disks to get more space? Or do I have to
> replace *all* disks?

I add space with new drives across all servers. So when my replica 3
storage cluster gets a new person, it gets new space: add 3 new
drives, one per server; each drive is a new brick, and the three are
joined to create a replica 3 volume for that person. If instead I need
to expand the /home volume, I add 3 drives, one per machine, then add
each drive to the RAID array backing /home on each machine.
You can also just add additional bricks to an existing volume (subject
to the normal replica count rules).
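To make both expansion paths concrete, here is a hedged sketch (the volume name vol0, server names, and brick paths are assumptions, not from this thread):

```shell
# Path 1: grow a replica 3 volume by adding one new brick per server,
# which creates an additional replicated subvolume (distributed-replicate)
gluster volume add-brick vol0 \
    server1:/data/brick2 server2:/data/brick2 server3:/data/brick2

# Spread existing data onto the new bricks
gluster volume rebalance vol0 start
gluster volume rebalance vol0 status

# Path 2: swap a 2TB brick for a 4TB one on a single server;
# self-heal then copies the data from the remaining replicas
gluster volume replace-brick vol0 \
    server1:/data/old-brick server1:/data/new-brick commit force
```

With Path 2, usable capacity in a replica set is bounded by its smallest brick, so the extra space only becomes usable once every brick in that set has been upgraded.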
James P. Kinney III

Every time you stop a school, you will have to build a jail. What you
gain at one end you lose at the other. It's like feeding a dog on his
own tail. It won't fatten the dog.
- Speech 11/23/1900 Mark Twain

