[Gluster-devel] Phasing out replace-brick for data migration in favor of remove-brick.
Anand Avati
avati at gluster.org
Fri Sep 27 07:35:51 UTC 2013
Hello all,
DHT's remove-brick + rebalance has been enhanced over the last couple of
releases to be quite sophisticated. It can handle graceful decommissioning
of bricks, including migration of files with open file descriptors and hard links.
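For example, a graceful decommission with remove-brick looks roughly like the
following (the volume name "testvol" and the brick "server1:/export/brick1"
are hypothetical):

    # start draining data off the brick onto the remaining bricks
    gluster volume remove-brick testvol server1:/export/brick1 start
    # monitor the migration until it reports completed
    gluster volume remove-brick testvol server1:/export/brick1 status
    # once migration is complete, drop the brick from the volume
    gluster volume remove-brick testvol server1:/export/brick1 commit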
This functionality overlaps with replace-brick's data migration. Replace-brick's
data migration is currently also used for planned decommissioning of a brick.
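For reference, the replace-brick data migration being discussed is roughly the
following sequence (hypothetical names again, with server2:/export/brick1 as
the new brick):

    # migrate data from the old brick to the new brick
    gluster volume replace-brick testvol server1:/export/brick1 server2:/export/brick1 start
    gluster volume replace-brick testvol server1:/export/brick1 server2:/export/brick1 status
    gluster volume replace-brick testvol server1:/export/brick1 server2:/export/brick1 commit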
Reasons to remove replace-brick data migration (or why remove-brick is better):
- There are two methods of moving data, which is confusing for users and
hard for developers to maintain.
- If the server being replaced is a member of a replica set, neither
remove-brick nor replace-brick data migration is necessary, because
self-healing itself will recreate the data (replace-brick actually uses
self-heal internally).
- In a non-replicated config, if a server is being replaced by a new one,
add-brick <new> + remove-brick <old> "start" achieves the same goal as
replace-brick <old> <new> "start" (see the sketch after this list).
- In a non-replicated config, replace-brick is NOT glitch free
(applications accessing data on the brick witness ENOTCONN), whereas
add-brick <new> + remove-brick <old> is completely transparent.
- Replace-brick strictly requires a server with enough free space to hold
the data of the old brick, whereas remove-brick will evenly spread out the
data of the brick being removed amongst the remaining servers.
- Replace-brick code is complex and messy (the real reason :p).
- There is no clear reason why replace-brick's data migration is in any way
better than remove-brick's data migration.
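To make that equivalence concrete, here is a rough sketch of replacing a
server in a plain distribute (non-replicated) volume with add-brick +
remove-brick instead of replace-brick (names are hypothetical):

    # bring the new brick into the volume
    gluster volume add-brick testvol server2:/export/brick1
    # drain the old brick; then use "status" and "commit" as shown earlier
    gluster volume remove-brick testvol server1:/export/brick1 start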
I plan to send out patches to remove all traces of the replace-brick data
migration code by the time the 3.5 branch is created.
NOTE that the replace-brick command itself will still exist, and you can
replace one server with another in case a server dies. It is only the data
migration functionality that is being phased out.
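For instance, replacing a dead server in a replicated volume would still be
possible with something like the following (hypothetical names; self-heal
then repopulates the new brick, with no data migration involved):

    # swap the dead brick for the new one without migrating data
    gluster volume replace-brick testvol deadserver:/export/brick1 newserver:/export/brick1 commit force
    # trigger a full self-heal to rebuild the new brick from its replica
    gluster volume heal testvol full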
Please do ask any questions / raise concerns at this stage :)
Avati