<div dir="ltr">Hi Victor,<div><br></div><div class="gmail_extra"><div class="gmail_quote">On Wed, Mar 14, 2018 at 12:30 AM, Victor T <span dir="ltr"><<a href="mailto:hero_of_nothing_1@hotmail.com" target="_blank">hero_of_nothing_1@hotmail.com</a>></span> wrote:<br><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex">
<div dir="ltr">
<div id="gmail-m_4006096338512438311divtagdefaultwrapper" style="font-size:12pt;color:rgb(0,0,0);font-family:Calibri,Helvetica,sans-serif" dir="ltr">
<p style="margin-top:0px;margin-bottom:0px">I have a question about how disperse volumes handle brick failure. I'm running version 3.10.10 on all systems. If I have a disperse volume in a 4+2 configuration with 6 servers each serving 1 brick, and maintenance needs
to be performed on all systems, are there any general steps that need to be taken to ensure data is not lost or service interrupted? For example, can I just reboot each system sequentially after making sure the service is running on all servers before
rebooting the next system? Or is there a need to force/wait for a heal after each brick comes back online? If I have two bricks down for multiple days and then bring them back in, is there a need to issue a heal or something like a rebalance before rebooting
the other servers? There's lots of documentation about other volume types, but it seems information specific to dispersed volumes is a bit hard to find. Thanks a bunch.</p></div></div></blockquote><div><br></div><div>On a 4+2 configuration you can bring down up to 2 bricks simultaneously for maintenance. However, if something happens to one of the remaining 4 bricks while they are down, the volume stops working. For this reason I would recommend not having more than one server down for maintenance at the same time, unless the downtime is very short.</div><div><br></div><div>Once a stopped server comes back up, you need to wait until all files are healed before proceeding with the next server. Failing to do so means that some files could end up with more than 2 non-healthy versions, which will make those files inaccessible until enough healthy versions are available again.</div><div><br></div><div>Self-heal should be triggered automatically once the bricks come back online. However, there is a bug (<a href="https://bugzilla.redhat.com/show_bug.cgi?id=1547662">https://bugzilla.redhat.com/show_bug.cgi?id=1547662</a>) that can cause delays in the self-heal process. It should be fixed in the next version. In the meantime you can force self-heal to progress by issuing "gluster volume heal <volname>" each time it seems to have stopped.</div><div><br></div><div>Once the output of "gluster volume heal <volname> info" reports 0 pending files on all bricks, you can proceed with the maintenance of the next server.</div><div><br></div><div>There is no need to run a rebalance for down bricks. Rebalance is basically needed when the volume is expanded with more bricks. A rough sketch of the whole per-server sequence is included below.</div><div><br></div>
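<div>Just as an illustration (this is not from any official documentation; the volume name "myvol", the 60-second polling interval and the output parsing are my own assumptions), the "wait until healed" step could be scripted roughly like this:</div><div><br></div><pre>
#!/bin/bash
#
# Rough illustration only -- not an official Gluster procedure.
# Assumptions: the disperse volume is called "myvol", the gluster CLI is
# available on the node running this, and the actual reboot/maintenance
# of each server is done by hand outside this script.

VOLNAME="myvol"

# Wait until "gluster volume heal <volname> info" reports 0 pending
# entries on every brick, re-triggering self-heal while we wait as a
# workaround for https://bugzilla.redhat.com/show_bug.cgi?id=1547662.
wait_for_heal() {
    while true; do
        # Kick the self-heal process in case it has stalled.
        gluster volume heal "$VOLNAME" >/dev/null 2>&1

        info=$(gluster volume heal "$VOLNAME" info)

        # If any brick is still not connected, healing cannot finish yet.
        if echo "$info" | grep -qi "not connected"; then
            echo "Some brick is not connected yet; waiting..."
            sleep 60
            continue
        fi

        # Sum the "Number of entries:" counters reported for each brick.
        pending=$(echo "$info" | awk '/Number of entries:/ {sum += $NF} END {print sum + 0}')

        if [ "$pending" -eq 0 ]; then
            echo "All bricks report 0 pending entries; safe to continue."
            return 0
        fi

        echo "$pending entries still pending heal; waiting..."
        sleep 60
    done
}

# Typical use after one server has been rebooted and is back online:
gluster volume status "$VOLNAME"   # check that the brick processes are up again
wait_for_heal                      # block until self-heal has finished
echo "OK to start maintenance on the next server."
</pre><div><br></div>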
<div>Xavi</div>
</div><br></div></div>