[Gluster-devel] Just a thought, a better way to rebuild replica when some bricks go down rather than replace-brick
Pranith Kumar Karampuri
pkarampu at redhat.com
Sat May 27 05:55:42 UTC 2017
On Thu, May 25, 2017 at 11:35 AM, Jaden Liang <liangzijie at gmail.com> wrote:
> Hi all,
>
> As far as I know, glusterfs has to replace a brick to rebuild replicas
> when some bricks go down. In most commercial distributed storage systems,
> there is a key spec that indicates how fast data can be rebuilt when a
> component breaks. In glusterfs, the replace-brick operation rebuilds
> replicas only 1:1, so it cannot use the aggregate disk performance of the
> cluster to speed up the rebuild job, which some companies call RAID 2.0.
> Therefore, I have some thoughts to discuss.
>
> Glusterfs uses a single storage graph per volume, e.g. an M x N
> distributed-replicated volume. This storage graph is global for all files
> in the same volume. From what I know of VMware vSAN, vSAN uses different
> graphs for different files, meaning every file has its own storage graph.
> In that case, file replica rebuild or file rebalance can be much more
> flexible than with a single global graph. If some brick goes down, only
> the storage graphs of the files that lost a replica need to be modified,
> and then the rebuild can run without replace-brick operations.
>
This requires an architecture change in which we know the location of each
file rather than of each brick.
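
To make the contrast concrete, here is a minimal sketch of the two
placement models (illustrative Python, not Gluster code; all names are
hypothetical). With a global graph, a file's replica set is fixed by the
volume layout, so a dead brick maps to exactly one replacement target;
with per-file layouts, every file that lost a replica can be re-homed to
any healthy brick, so the rebuild fans out across the cluster:

    import itertools

    # Brick-level placement (today): one global graph per volume.
    # A file's replicas follow from its hash bucket, so rebuilding a
    # dead brick means copying everything onto one replacement (1:1).
    VOLUME_GRAPH = [
        ("brick-a", "brick-b"),   # replica set 0
        ("brick-c", "brick-d"),   # replica set 1
    ]

    def replicas_brick_level(file_hash):
        return VOLUME_GRAPH[file_hash % len(VOLUME_GRAPH)]

    # Per-file placement (vSAN-style): each file records its own
    # replica list, which is why the architecture change mentioned
    # above is needed -- we must know the location of each file.
    FILE_LAYOUTS = {
        "/vol/a.img": ["brick-a", "brick-b"],
        "/vol/b.img": ["brick-c", "brick-b"],
    }

    def reassign_after_failure(dead, spares):
        # Spread the files that lost a replica over all spare bricks;
        # this parallel fan-out is the "RAID 2.0"-style rebuild speedup.
        targets = itertools.cycle(spares)
        for path, bricks in FILE_LAYOUTS.items():
            if dead in bricks:
                bricks[bricks.index(dead)] = next(targets)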
>
> Just a thought, any suggestion would be great!
>
There are efforts under way to make self-heal comparable to rsync; would
that help?
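
For reference, the rsync-like idea is to checksum fixed-size blocks on
both replicas and copy only the blocks that differ, rather than copying
whole files. A minimal sketch of that technique (plain Python, just my
illustration -- not the actual self-heal code):

    import hashlib

    BLOCK = 128 * 1024  # fixed block size; an arbitrary choice here

    def block_sums(path):
        # One MD5 digest per fixed-size block of the file.
        sums = []
        with open(path, "rb") as f:
            while True:
                chunk = f.read(BLOCK)
                if not chunk:
                    break
                sums.append(hashlib.md5(chunk).digest())
        return sums

    def heal(good_copy, stale_copy):
        # Rewrite only the blocks where the stale replica disagrees
        # with the good one, instead of rewriting the whole file.
        good, stale = block_sums(good_copy), block_sums(stale_copy)
        with open(good_copy, "rb") as src, open(stale_copy, "r+b") as dst:
            for i, digest in enumerate(good):
                if i >= len(stale) or stale[i] != digest:
                    src.seek(i * BLOCK)
                    dst.seek(i * BLOCK)
                    dst.write(src.read(BLOCK))
            src.seek(0, 2)            # seek to end to get source size
            dst.truncate(src.tell())  # trim any trailing stale data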
>
> Best regards,
> Jaden Liang
> 5/25/2017
--
Pranith