<html>
<head>
<meta http-equiv="Content-Type" content="text/html; charset=utf-8">
</head>
<body text="#000000" bgcolor="#FFFFFF">
<p>That makes sense ^_^ <br>
</p>
<p>Unfortunately I haven't kept the interesting data you need. <br>
</p>
<p>Basically I had some write errors on my gluster clients when my
      monitoring tool tested mkdir & file creation.<br>
</p>
<p>The server's load was huge during the healing (CPU at 100%), and
      the disk latency increased a lot. <br>
      That may be the source of my write errors; we'll know for sure
      next time... I'll keep & post all the data you asked for.</p>
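<p>For next time, here is a minimal sketch of the data that could be
      captured while a heal is running (assuming a standard Linux setup with
      sysstat installed; the volume name is a placeholder):<br>
    </p>
<p># CPU load on the healing server<br>
      top -b -n 1 | head -n 20<br>
      # disk latency (await / %util columns)<br>
      iostat -x 5<br>
      # pending heal entries, to correlate with the write errors<br>
      gluster volume heal &lt;vol_name&gt; info<br>
    </p>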
<p>Is there no way on the client side to force the gluster mount onto a single peer?<br>
</p>
<p>Thanks for your help, Karthik!</p>
<p>Quentin<br>
</p>
<br>
<div class="moz-cite-prefix">Le 09/10/2017 à 12:10, Karthik
Subrahmanya a écrit :<br>
</div>
<blockquote type="cite"
cite="mid:CAHRDaUH5qyQSgxSjf+Lfe6f4WYUGJ9EqVsWaPPg3iKTrwvvedg@mail.gmail.com">
<div dir="ltr">
<div>
<div>
<div>
<div>
<div>
<div>Hi,<br>
<br>
</div>
There is no way to isolate the healing peer. Healing
happens from the good brick to the bad brick.<br>
</div>
I guess your replica bricks are on different peers. If
              you try to isolate the healing peer, it will stop the
              healing process itself.<br>
<br>
</div>
What is the error you are getting while writing? It would
            help us debug the issue if you could provide the output of
            the following commands:<br>
</div>
gluster volume info &lt;vol_name&gt;<br>
</div>
gluster volume heal &lt;vol_name&gt; info<br>
</div>
Please also provide the client & heal logs.<br>
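<p>(For reference, a minimal sketch of how that data could be gathered,
        assuming the default log locations used by the Debian packages; the
        volume name and mount point are placeholders:)<br>
      </p>
<p>gluster volume info &lt;vol_name&gt;<br>
        gluster volume heal &lt;vol_name&gt; info<br>
        # client log on the mount host, named after the mount point
        (e.g. a mount at /mnt/gluster logs to mnt-gluster.log)<br>
        ls /var/log/glusterfs/<br>
        # self-heal daemon log on the servers<br>
        tail -n 100 /var/log/glusterfs/glustershd.log<br>
      </p>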
<div>
<div>
<div><br>
</div>
<div>Thanks & Regards,<br>
</div>
<div>Karthik<br>
</div>
</div>
</div>
</div>
<div class="gmail_extra"><br>
<div class="gmail_quote">On Mon, Oct 9, 2017 at 3:02 PM, ML <span
dir="ltr"><<a href="mailto:lists@websiteburo.com"
target="_blank" moz-do-not-send="true">lists@websiteburo.com</a>></span>
wrote:<br>
<blockquote class="gmail_quote" style="margin:0 0 0
.8ex;border-left:1px #ccc solid;padding-left:1ex">Hi
everyone,<br>
<br>
I've been using gluster for a few months now, on a simple
        2-peer replicated infrastructure, 22 TB each.<br>
<br>
One of the peers was offline for 10 hours last week (RAID
        resync after a disk crash), and while my gluster server was
        healing bricks, I saw some write errors on my gluster clients.<br>
<br>
I couldn't find a way to isolate my healing peer in the
        documentation or anywhere else.<br>
<br>
Is there a way to avoid that? Detach the peer while healing?
        Some tuning on the client side maybe?<br>
<br>
I'm using gluster 3.9 on Debian 8.<br>
<br>
Thank you for your help.<br>
<br>
Quentin<br>
<br>
</blockquote>
</div>
<br>
</div>
</blockquote>
<br>
</body>
</html>