[Gluster-devel] Progress on brick multiplexing
Pranith Kumar Karampuri
pkarampu at redhat.com
Sat Jul 16 02:21:50 UTC 2016
Just went through the commit message. If, similar to attaching, we also have
detaching, then maybe we can simulate killing of bricks in AFR using this
approach? Remove-brick could probably use the same mechanism as well.
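
To make the attach/detach idea concrete, here is a minimal, self-contained
sketch of one process hosting several brick stacks behind a single server
front end, where detaching a brick stands in for killing it. Everything in
it (mux_server_t, attach_brick, detach_brick, route_request) is hypothetical
and only illustrates the shape of the idea, not the actual GlusterFS code:

/*
 * Rough sketch only: one server process hosting several brick "stacks",
 * with attach/detach so a test can take a single brick offline (as if it
 * had been killed) without stopping the whole process.  All names and
 * structures here are made up for illustration, NOT the GlusterFS API.
 */
#include <stdio.h>
#include <string.h>

#define MAX_BRICKS 16

typedef struct {
    char path[256];   /* brick export path, used as the routing key      */
    int  attached;    /* stands in for a full per-brick translator stack */
} brick_stack_t;

typedef struct {
    brick_stack_t bricks[MAX_BRICKS];  /* all stacks live in one process */
    int           count;
} mux_server_t;

/* Attach a new brick stack to the already-running server process. */
static int attach_brick(mux_server_t *srv, const char *path)
{
    if (srv->count >= MAX_BRICKS)
        return -1;
    snprintf(srv->bricks[srv->count].path,
             sizeof(srv->bricks[srv->count].path), "%s", path);
    srv->bricks[srv->count].attached = 1;
    return srv->count++;
}

/* Detach one brick: to clients it looks dead, but its siblings and the
 * server process keep running.  This is the piece a regression test (or
 * remove-brick) could use instead of kill -9 on a brick PID. */
static int detach_brick(mux_server_t *srv, const char *path)
{
    for (int i = 0; i < srv->count; i++) {
        if (strcmp(srv->bricks[i].path, path) == 0) {
            srv->bricks[i].attached = 0;
            return 0;
        }
    }
    return -1;
}

/* Single protocol/server front end: route a request to its brick stack. */
static brick_stack_t *route_request(mux_server_t *srv, const char *path)
{
    for (int i = 0; i < srv->count; i++)
        if (srv->bricks[i].attached &&
            strcmp(srv->bricks[i].path, path) == 0)
            return &srv->bricks[i];
    return NULL;   /* behaves like a brick that has gone down */
}

int main(void)
{
    mux_server_t srv = { .count = 0 };

    attach_brick(&srv, "/bricks/vol0-b0");
    attach_brick(&srv, "/bricks/vol0-b1");

    detach_brick(&srv, "/bricks/vol0-b1");   /* "kill" one brick only */

    printf("b0 reachable: %s\n",
           route_request(&srv, "/bricks/vol0-b0") ? "yes" : "no");
    printf("b1 reachable: %s\n",
           route_request(&srv, "/bricks/vol0-b1") ? "yes" : "no");
    return 0;
}

In a real implementation the detach path would of course also have to tear
down the brick's translator graph and notify clients, but the point is that
only one brick's stack goes away while the multiplexed process stays up.
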
On Sat, Jul 16, 2016 at 12:09 AM, Jeff Darcy <jdarcy at redhat.com> wrote:
> For those who don't know, "brick multiplexing" is a term some of us have
> been using to mean running multiple brick "stacks" inside a single process
> with a single protocol/server instance. Discussion from a month or so ago
> is here:
>
> http://www.gluster.org/pipermail/gluster-devel/2016-June/049801.html
>
> Yes, I know I need to turn that into a real feature page. Multiplexing
> was originally scoped as a 4.0 feature, but has gained higher priority
> because many of the issues it addresses have turned out to be limiting
> factors in how many bricks or volumes we can support, and people running
> container/hyperconverged systems are already chafing under those limits.
> In response, I've been working on this feature recently. I've just pushed
> a patch, which is far enough along to pass our smoke test.
>
> http://review.gluster.org/#/c/14763/
>
> While it does pass smoke, I know it would fail spectacularly in a full
> regression test - especially tests that involve killing bricks. There's
> still a *ton* of work to be done on this. However, having this much of the
> low-level infrastructure working gives me hope that work on the
> higher-level parts can proceed more swiftly. Interested parties are
> invited to check out the patch and suggest improvements. Thanks!
--
Pranith