[Gluster-users] Add single server
Pranith Kumar Karampuri
pkarampu at redhat.com
Mon May 1 18:55:49 UTC 2017
On Tue, May 2, 2017 at 12:20 AM, Gandalf Corvotempesta <
gandalf.corvotempesta at gmail.com> wrote:
> 2017-05-01 20:43 GMT+02:00 Shyam <srangana at redhat.com>:
> > I do agree that for the duration a brick is replaced its replication
> > count is down by 1, is that your concern? In which case I do note that
> > without (a) above, availability is at risk during the operation. Which
> > needs other strategies/changes to ensure tolerance to errors/faults.
>
> Oh, yes, I'd forgotten this too.
>
> I don't know Ceph, but Lizard, when moving chunks across the cluster,
> does a copy, not a move. During the whole operation you'll end up with
> some files/chunks replicated above the required count.
>
Replace-brick as a command is implemented with the goal of replacing a disk
that went bad, so availability was already reduced. In 2013-2014 I proposed
that we do it by adding a brick to just that replica set and increasing the
replica count for that set alone; once the heal completes, we could remove
the old brick. At the time I didn't see any benefit to that approach,
because availability was already down by 1. But with all of this discussion
it seems like a good time to revive the idea. I saw that Shyam suggested
the same in the PR he mentioned before.
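
As a rough sketch of that sequence on a plain replica 3 volume (a single
replica set), with hypothetical host and brick paths; note that today
add-brick/remove-brick change the replica count for every replica set of a
distributed-replicate volume, which is why doing it for just one set would
need new support:

  # what replace-brick does today: drop the old brick immediately, then heal onto the new one
  gluster volume replace-brick myvol server3:/bricks/b1 server4:/bricks/b1 commit force

  # proposed order: raise the replica count first, wait for heal, then drop the bad brick
  gluster volume add-brick myvol replica 4 server4:/bricks/b1
  gluster volume heal myvol info        # wait until no entries are pending heal
  gluster volume remove-brick myvol replica 3 server3:/bricks/b1 force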
>
> If you have replica 3, during the move some files get replica 4.
> In Gluster the same operation will bring you down to replica 2.
>
> IMHO, this isn't a viable/reliable solution.
>
> Any chance of changing "replace-brick" to increase the replica count
> during the operation?
>
It can be done. We just need to find time to do this.
--
Pranith