[Gluster-users] Add single server

Gandalf Corvotempesta gandalf.corvotempesta at gmail.com
Mon Apr 24 11:31:29 UTC 2017


2017-04-24 10:21 GMT+02:00 Pranith Kumar Karampuri <pkarampu at redhat.com>:
> At least in case of EC it is with good reason. If you want to change
> volume's configuration from 6+2->7+2 you have to compute the encoding again
> and place different data on the resulting 9 bricks. Which has to be done for
> all files. It is better to just create a new volume with 7+2 and just copy
> the files on to this volume and remove the original files on volume with
> 6+2.

Ok, for EC this makes sense.
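For the record, the copy-based migration described above would look
roughly like this (a sketch only; hostnames and brick paths are
hypothetical):

    # Create the new 7+2 volume (7 data + 2 redundancy = 9 bricks)
    gluster volume create newvol disperse-data 7 redundancy 2 \
        server{1..9}:/bricks/newvol/brick
    gluster volume start newvol

    # Mount both volumes and copy the data across
    mount -t glusterfs server1:/oldvol /mnt/oldvol
    mount -t glusterfs server1:/newvol /mnt/newvol
    rsync -aHAX /mnt/oldvol/ /mnt/newvol/

    # Once verified, retire the old 6+2 volume
    gluster volume stop oldvol
    gluster volume delete oldvol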

> Didn't understand this math. If you want to add 2TB capacity to a volume
> that is 3-way replicated, you essentially need to add 6TB in whatever
> solution you have. At least 6TB with a single server. Which you can do even
> with Gluster.

Obviously, if you add a single 2TB disk to a replica 3 volume, you
won't get 2TB of usable space but only a third of it, roughly 670GB.
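And you cannot even add that single disk by itself: with replica 3,
add-brick expects bricks in multiples of the replica count. A minimal
sketch, with hypothetical volume and brick names:

    # replica 3 => bricks must be added in multiples of 3
    gluster volume add-brick myvol \
        server1:/bricks/b2 server2:/bricks/b2 server3:/bricks/b2
    gluster volume rebalance myvol start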

> I think we had this discussion last July[1] with you that we can simulate
> the same things other storage solutions with metadata do by doing
> replace-bricks and rebalance. If you have a new server with 8 bricks then we
> can add a single server and make sure things are rebalanced with 6+2. Please
> note it is better to use data-bricks that is power of 2 like 4+2/8+2/16+4
> etc than 6+2.

This is an ugly workaround and very prone to errors.
I prefer not to mess with my data through multiple manual steps for
something any other SDS does natively (see the sketch below).
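Roughly, the dance is something like this (hypothetical names; the
exact sequence depends on the layout you are moving to):

    # Shuffle existing bricks so no server holds two bricks of one EC set
    gluster volume replace-brick myvol \
        server1:/bricks/b1 newserver:/bricks/b1 commit force
    gluster volume replace-brick myvol \
        server2:/bricks/b2 newserver:/bricks/b2 commit force
    # ... repeat for as many bricks as the new layout requires ...

    # Then add a complete new 6+2 set (8 bricks) spread across servers
    gluster volume add-brick myvol \
        server1:/bricks/new server2:/bricks/new server3:/bricks/new \
        server4:/bricks/new server5:/bricks/new server6:/bricks/new \
        server7:/bricks/new newserver:/bricks/new
    gluster volume rebalance myvol start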

Please take a look at LizardFS or MooseFS (or even Ceph). You can add
a single disk and it will be automatically integrated and rebalanced
without losing redundancy at any point.
If you add a 2TB disk to a replica 3 setup, you automatically end up
with roughly 670GB more usable space.
You can also choose which files must be replicated where and how: if
you need one replica on SSD and another replica (of the same file) on
HDD, that's possible; or even one replica on a local SSD, one on an
HDD in the same datacenter, and a third on an HDD on the dark side of
the moon.
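In MooseFS, for instance, setting the per-file replication level is a
single command (this is from memory, so double-check the exact syntax
against your version):

    # Ask for 3 copies of everything under this directory, recursively
    mfssetgoal -r 3 /mnt/mfs/data

    # Inspect a file's goal and where its chunks actually live
    mfsgetgoal /mnt/mfs/data/somefile
    mfsfileinfo /mnt/mfs/data/somefile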

I don't think this would be possible with Gluster's fixed file
placement (the DHT layout, if I have the terms right), because the
lack of a metadata server means the only way to know where a file is
is to guarantee that it sits in a fixed, hash-determined position
across the whole cluster.
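You can see that deterministic placement for yourself on a FUSE mount
(assuming a mount at /mnt/gluster and a hypothetical file name):

    # Ask which brick(s) physically back a given file
    getfattr -n trusted.glusterfs.pathinfo /mnt/gluster/somefile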

In Gluster, to achieve the same result you have to run multiple
commands, respect the proper order of command-line arguments, and so
on (as sketched above). This is very, very risky and prone to errors.

This is not a battle between two SDS and I don't want to be pedantic
(I'm just offering some suggestions), but it's a fact that these SDS
are far more flexible than Gluster (and, in daily usage, far cheaper).
I hope that newer versions of Gluster bring some more flexibility to
brick placement/management.

> Are you suggesting this process to be easier through commands, rather than
> for administrators to figure out how to place the data?

Yes, this for sure. An SDS must always ensure data resiliency, so
whatever operation you perform, the data must remain properly
replicated throughout. If you really need to run a dangerous
operation, an explicit "--force" should be required.
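Gluster already uses this guard-rail pattern in places; for example,
volume create refuses to put a brick on the root partition unless you
append force (hypothetical names):

    # Refused with a warning unless 'force' is appended at the end
    gluster volume create myvol replica 3 \
        server1:/gluster/brick server2:/gluster/brick server3:/gluster/brick \
        force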

