<div dir="ltr"><br><div class="gmail_extra"><br><div class="gmail_quote">On 11 October 2017 at 22:21, <span dir="ltr"><<a href="mailto:lemonnierk@ulrar.net" target="_blank">lemonnierk@ulrar.net</a>></span> wrote:<br><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex"><span class="">> corruption happens only in these cases:<br>
><br>
> - volume with shard enabled<br>
> AND<br>
> - rebalance operation<br>
><br>
<br>
</span>I believe so<br>
<span class=""><br>
> So, what if I have to replace a failed brick/disk? Will this trigger<br>
> a rebalance and then corruption?<br>
><br>
> A rebalance is only needed when you have to expand a volume, i.e. by<br>
> adding more bricks?<br>
<br>
</span>That's correct: replacing a brick shouldn't cause corruption. I've done<br>
it a few times without any problems. As long as you don't expand the<br>
cluster, you are fine.<br>
<br>
Basically you can add or remove replicas all you want, but you can't add<br>
new replica sets.<br></blockquote><div><br></div><div>Or remove a replica set. An add-brick will not trigger a rebalance - that needs to be done explicitly. However, a remove-brick will start the rebalance automatically.</div><div><br></div><div>Regards,</div><div>Nithya </div><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex">
<br>______________________________<wbr>_________________<br>
Gluster-users mailing list<br>
<a href="mailto:Gluster-users@gluster.org">Gluster-users@gluster.org</a><br>
<a href="http://lists.gluster.org/mailman/listinfo/gluster-users" rel="noreferrer" target="_blank">http://lists.gluster.org/<wbr>mailman/listinfo/gluster-users</a><br></blockquote></div><br></div></div>
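<div dir="ltr"><div>For reference, the operations discussed above correspond roughly to the following gluster CLI commands. This is only a sketch: the volume name (gv0) and the server/brick paths are hypothetical placeholders, not taken from this thread, and exact syntax may vary by GlusterFS version.</div><div><br></div><pre># Replace a failed brick in place (no expansion, so no rebalance involved;
# reported safe above even with sharding enabled)
gluster volume replace-brick gv0 server2:/bricks/b1 server4:/bricks/b1 commit force

# Expand the volume with a new replica set; add-brick alone moves no data,
# the rebalance must then be started explicitly (the combination the thread
# reports as causing corruption on sharded volumes)
gluster volume add-brick gv0 server5:/bricks/b1 server6:/bricks/b1
gluster volume rebalance gv0 start

# Shrinking is different: remove-brick starts the data migration
# (a rebalance) automatically
gluster volume remove-brick gv0 server5:/bricks/b1 server6:/bricks/b1 start</pre></div>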