Hi Karthik,

- the volume configuration you were using?
I used the oVirt 4.2.6 Gluster Wizard, so I guess we need to involve the oVirt devs here.

- why you wanted to replace your brick?
I had deployed the arbiter at another location because I thought I could deploy the Thin Arbiter (still waiting for the docs to be updated), but once I realized that GlusterD doesn't support Thin Arbiter, I had to build another machine for a local arbiter - thus a replacement was needed.

- which brick(s) you tried replacing?
I was replacing the old arbiter with a new one.

- what problem(s) did you face?
All oVirt VMs got paused due to I/O errors.

In the end, I rebuilt the whole setup and never tried to replace a brick this way again (I have used only reset-brick, which didn't cause any issues - rough sequence below).

As I mentioned, that was on v3.12, which is not the default for oVirt 4.3.x - so my guess is that it is OK now (current is v5.5).

Just sharing my experience.
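The sequence was roughly as follows (a sketch - the volume, host, and path names are placeholders, following the reset-brick syntax introduced in Gluster 3.9.0):

    # take the brick offline
    gluster volume reset-brick VOLNAME host1:/path/to/brick start
    # rebuild or re-provision the storage behind the brick path, then bring it back:
    gluster volume reset-brick VOLNAME host1:/path/to/brick host1:/path/to/brick commit force

Self-heal then takes care of syncing the data back from the other bricks.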
</div><div id="ydp231feb39yahoo_quoted_5398519339" class="ydp231feb39yahoo_quoted">
<div style="font-family:'Helvetica Neue', Helvetica, Arial, sans-serif;font-size:13px;color:#26282a;">
<div>
On Thursday, April 11, 2019, 00:53:52 GMT-4, Karthik Subrahmanya <ksubrahm@redhat.com> wrote:
<div><div id="ydp231feb39yiv0492786371"><div><div dir="ltr">Hi Strahil,<div><br clear="none"></div><div>Can you give us some more insights on</div><div>- the volume configuration you were using?</div><div>- why you wanted to replace your brick?</div><div>- which brick(s) you tried replacing?</div><div>- what problem(s) did you face?</div><div><br clear="none"></div><div>Regards,</div><div>Karthik</div></div><br clear="none"><div class="ydp231feb39yiv0492786371gmail_quote"><div class="ydp231feb39yiv0492786371gmail_attr" dir="ltr">On Thu, Apr 11, 2019 at 10:14 AM Strahil <<a shape="rect" href="mailto:hunter86_bg@yahoo.com" rel="nofollow" target="_blank">hunter86_bg@yahoo.com</a>> wrote:<div class="ydp231feb39yiv0492786371yqt9754414295" id="ydp231feb39yiv0492786371yqtfd74081"><br clear="none"></div></div><div class="ydp231feb39yiv0492786371yqt9754414295" id="ydp231feb39yiv0492786371yqtfd24276"><blockquote class="ydp231feb39yiv0492786371gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex;"><p dir="ltr">Hi Karthnik,<br clear="none">
I used the brick replace function only once, when I wanted to change my arbiter (v3.12.15 in oVirt 4.2.7), and it was a complete disaster.
Most probably I should have stopped the source arbiter before doing that, but the docs didn't mention it.
<p dir="ltr">Thus I always use reset-brick, as it never let me down.</p>
<p dir="ltr">Best Regards,<br clear="none">
Strahil Nikolov</p>
<div class="ydp231feb39yiv0492786371gmail-m_-7140910045408783868quote">On Apr 11, 2019 07:34, Karthik Subrahmanya <<a shape="rect" href="mailto:ksubrahm@redhat.com" rel="nofollow" target="_blank">ksubrahm@redhat.com</a>> wrote:<br clear="none"><blockquote class="ydp231feb39yiv0492786371gmail-m_-7140910045408783868quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex;"><div dir="ltr">Hi Strahil,<div><br clear="none"></div><div>Thank you for sharing your experience with reset-brick option.</div><div>Since he is using the gluster version 3.7.6, we do not have the reset-brick [1] option implemented there. It is introduced in 3.9.0. He has to go with replace-brick with the force option if he wants to use the same path & name for the new brick. </div><div>Yes, it is recommended to have the new brick to be of the same size as that of the other bricks.</div><div><br clear="none"></div><div>[1] <a shape="rect" href="https://docs.gluster.org/en/latest/release-notes/3.9.0/#introducing-reset-brick-command" rel="nofollow" target="_blank">https://docs.gluster.org/en/latest/release-notes/3.9.0/#introducing-reset-brick-command</a></div><div><br clear="none"></div><div>Regards,</div><div>Karthik</div></div><br clear="none"><div class="ydp231feb39yiv0492786371gmail-m_-7140910045408783868elided-text"><div dir="ltr">On Wed, Apr 10, 2019 at 10:31 PM Strahil <<a shape="rect" href="mailto:hunter86_bg@yahoo.com" rel="nofollow" target="_blank">hunter86_bg@yahoo.com</a>> wrote:<br clear="none"></div><blockquote style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex;">I have used reset-brick - but I have just changed the brick layout.<br clear="none">
Yes, it is recommended that the new brick be of the same size as the other bricks.

[1] https://docs.gluster.org/en/latest/release-notes/3.9.0/#introducing-reset-brick-command

Regards,
Karthik

On Wed, Apr 10, 2019 at 10:31 PM Strahil <hunter86_bg@yahoo.com> wrote:

I have used reset-brick - but only to change the brick layout.
You may give it a try, but I guess you need your new brick to have the same amount of space (or more).
<br clear="none">
Maybe someone more experienced should share a more sound solution.<br clear="none">
<br clear="none">
Best Regards,<br clear="none">
Strahil NikolovOn Apr 10, 2019 12:42, Martin Toth <<a shape="rect" href="mailto:snowmailer@gmail.com" rel="nofollow" target="_blank">snowmailer@gmail.com</a>> wrote:<br clear="none">
>
> Hi all,
>
> I am running a replica 3 Gluster volume with 3 bricks. One of my servers failed - all disks are showing errors and the RAID is in a fault state.
>
> Type: Replicate
> Volume ID: 41d5c283-3a74-4af8-a55d-924447bfa59a
> Status: Started
> Number of Bricks: 1 x 3 = 3
> Transport-type: tcp
> Bricks:
> Brick1: node1.san:/tank/gluster/gv0imagestore/brick1
> Brick2: node2.san:/tank/gluster/gv0imagestore/brick1 <— this brick is down
> Brick3: node3.san:/tank/gluster/gv0imagestore/brick1
>
> So one of my bricks has totally failed (node2). It went down and all its data is lost (failed RAID on node2). Now I am running only two bricks, on 2 servers out of 3.
> This is a really critical problem for us - we could lose all our data. I want to add new disks to node2, create a new RAID array on them, and try to replace the failed brick on this node.
>
> What is the procedure for replacing Brick2 on node2 - can someone advise? I can’t find anything relevant in the documentation.
>
> Thanks in advance,
> Martin