[Gluster-users] Replacing failed node (2node replication)

Kevin Lemonnier lemonnierk at ulrar.net
Sun May 1 18:43:06 UTC 2016

> With the commands I pasted above I had a perfectly fine running volume
> which was accessible the whole time during the re-adding of the new
> server, and also during the healing period (I'm using this for an
> HA setup for a Django application, which writes a lot of custom files
> while working - while the volume was being healed I made sure that all
> the webapp traffic was hitting only the glu-tru node, the one which
> hadn't crashed).

The volume stays accessible, but the files being healed are locked.
That's probably why your app stayed online: web apps usually consist of a huge
number of small-ish files, so locking them during a heal is practically
invisible (healing a 2 KB file is almost instant).
If you had huge files on this volume, without sharding, it would have been different :)
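For large files (VM images and the like), sharding splits each file into fixed-size
chunks so a heal only locks the individual shards that changed, not the whole file.
A minimal sketch of enabling it and watching a heal, assuming a hypothetical volume
name `gv0` (sharding only applies to files created after it is enabled):

```shell
# Enable sharding on the volume (hypothetical volume name: gv0)
gluster volume set gv0 features.shard on

# Optional: choose the shard size; larger files are split into chunks of this size
gluster volume set gv0 features.shard-block-size 64MB

# After a node comes back, list the entries still pending heal
gluster volume heal gv0 info

# Summary counts per brick (handy to watch until everything reads 0)
gluster volume heal gv0 statistics heal-count
```

With sharding on, only the shards touched while a brick was down need healing,
so locks are held per-chunk and for a much shorter time.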

Kevin Lemonnier
PGP Fingerprint : 89A5 2283 04A0 E6E9 0111