<div dir="ltr">Thank you for the acknowledgement.</div><div class="gmail_extra"><br><div class="gmail_quote">On Mon, Sep 4, 2017 at 6:39 PM, lejeczek <span dir="ltr"><<a href="mailto:peljasz@yahoo.co.uk" target="_blank">peljasz@yahoo.co.uk</a>></span> wrote:<br><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex">Yes, I see some things got lost in transit. As I said before:<span class=""><br>
<br>
I re-probed from a different peer than the first time, and now it is not rejected.<br>
Now I'm restarting the fourth (newly added) peer's glusterd<br></span>
and... it seems to work. <- HERE! (even though....<br>
<br>
and then I asked:<span class=""><br>
<br>
Is there anything I should double-check to make sure all is 100% fine before I use that newly added peer for bricks?<br>
<br></span>
Below is my full message. Basically, new peers no longer get rejected.<span class=""><br>
<br>
<br>
On 04/09/17 13:56, Gaurav Yadav wrote:<br>
</span><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex"><span class="">
<br>
Executing "gluster volume set all cluster.op-version <op-version>"on all the existing nodes will solve this problem.<br>
<br>
If the issue still persists, please provide me the following logs (working cluster + newly added peer):<br>
1. glusterd.info file from /var/lib/glusterd on all nodes<span class=""><br>
2. glusterd.log from all nodes<br>
3. info file from all nodes<br>
4. cmd-history from all nodes<br>
<br></span>
Thanks<br>
Gaurav<span class=""><br>
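[Editorial note: gathering the four files requested above from every node can be scripted. The sketch below is only an illustration: the helper name `log_paths`, the node name `node4`, and the volume placeholder `VOLNAME` are hypothetical; the paths are the usual GlusterFS defaults, but verify them on your distribution.]

```shell
# Sketch only: print the per-node source paths for the four files
# requested above, ready to feed to scp. "VOLNAME" stands in for a
# real volume name; paths are the common GlusterFS defaults.
log_paths() {
    node=$1
    printf '%s\n' \
        "$node:/var/lib/glusterd/glusterd.info" \
        "$node:/var/log/glusterfs/glusterd.log" \
        "$node:/var/lib/glusterd/vols/VOLNAME/info" \
        "$node:/var/log/glusterfs/cmd_history.log"
}

# e.g. list what to collect from a hypothetical peer "node4":
log_paths node4
```

Looping `log_paths` over all peers and piping each line to scp collects one directory of evidence per node.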
<br>
On Mon, Sep 4, 2017 at 2:09 PM, lejeczek <<a href="mailto:peljasz@yahoo.co.uk" target="_blank">peljasz@yahoo.co.uk</a>> wrote:<br>
<br>
I do not see it; did you write anything?<br>
<br>
On 03/09/17 11:54, Gaurav Yadav wrote:<br>
<br>
<br>
<br>
On Fri, Sep 1, 2017 at 9:02 PM, lejeczek <<a href="mailto:peljasz@yahoo.co.uk" target="_blank">peljasz@yahoo.co.uk</a>> wrote:<br></span><span class="">
<br></span><div><div class="h5">
You missed my reply before? Here it is:<br>
<br>
Now, a "weird" thing.<br>
<br>
I did that; the fourth peer was still rejected, and the fourth peer's glusterd would still fail to restart (all after upping to 31004). I redid it: wiped and re-probed from a different peer than the first time, and now it is not rejected. Now I'm restarting the fourth (newly added) peer's glusterd and... it seems to work (even though tier-enabled=0 is still there, now on all four peers; it was not there before on the three working peers).<br>
<br>
Is there anything I should double-check to make sure all is 100% fine before I use that newly added peer for bricks?<br>
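[Editorial note: a generic pre-flight list for a freshly probed peer can be sketched as below. It is a dry run that only prints the commands; the commands themselves are standard gluster CLI, but treat the list as a suggestion, not an official procedure, and the helper name `peer_checks` is hypothetical.]

```shell
# Dry-run sketch: print the checks worth running before trusting a
# newly probed peer with bricks. Pipe the output to sh on each node
# to actually execute them.
peer_checks() {
    cat <<'EOF'
gluster peer status
gluster pool list
gluster volume get all cluster.op-version
cat /var/lib/glusterd/glusterd.info
EOF
}
peer_checks
```

Every peer should report "Peer in Cluster (Connected)", and the operating-version line in glusterd.info should match across all nodes.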
<br>
For this, I need logs to see what has gone wrong.<br>
<br>
<br>
Please provide me the following (working cluster + newly added peer):<br>
1. glusterd.info file from /var/lib/glusterd on all nodes<br>
2. glusterd.log from all nodes<br>
3. info file from all nodes<br>
4. cmd-history from all nodes<br>
<br>
<br>
On 01/09/17 11:11, Gaurav Yadav wrote:<br>
<br>
I replicated the problem locally, and with the steps I suggested to you it worked for me...<br>
<br>
Please provide me the following (working cluster + newly added peer):<br>
1. glusterd.info file from /var/lib/glusterd on all nodes<br>
2. glusterd.log from all nodes<br>
3. info file from all nodes<br>
4. cmd-history from all nodes<br>
<br>
<br>
On Fri, Sep 1, 2017 at 3:39 PM, lejeczek <<a href="mailto:peljasz@yahoo.co.uk" target="_blank">peljasz@yahoo.co.uk</a>> wrote:<br></div></div><div><div class="h5">
<br>
Like I said, I upgraded from 3.8 to 3.10 a while ago; it is 3.10.5 at the moment, and only now, with 3.10.5, did I try to add a peer.<br>
<br>
On 01/09/17 10:51, Gaurav Yadav wrote:<br>
<br>
What is gluster --version on all these nodes?<br>
<br>
On Fri, Sep 1, 2017 at 3:18 PM, lejeczek <<a href="mailto:peljasz@yahoo.co.uk" target="_blank">peljasz@yahoo.co.uk</a>> wrote:<br>
<br>
On the first node I got:<br>
<br>
$ gluster volume set all cluster.op-version 31004<br>
volume set: failed: Commit failed on 10.5.6.49. Please check log file for details.<br>
<br>
but I immediately proceeded to the remaining nodes and:<br>
<br>
$ gluster volume get all cluster.op-version<br>
Option Value<br>
------ -----<br>
cluster.op-version 30712<br>
$ gluster volume set all cluster.op-version 31004<br>
volume set: failed: Required op-version (31004) should not be equal or lower than current cluster op-version (31004).<br>
$ gluster volume get all cluster.op-version<br>
Option Value<br>
------ -----<br>
cluster.op-version 31004<br>
<br>
Last, the third node:<br>
<br>
$ gluster volume get all cluster.op-version<br>
Option Value<br>
------ -----<br>
cluster.op-version 30712<br>
$ gluster volume set all cluster.op-version 31004<br>
volume set: failed: Required op-version (31004) should not be equal or lower than current cluster op-version (31004).<br>
$ gluster volume get all cluster.op-version<br>
Option Value<br>
------ -----<br>
cluster.op-version 31004<br>
<br>
So, even though it failed as above, I now see that it's 31004 on all three peers, at least according to the "volume get all cluster.op-version" command.<br>
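[Editorial note on the numbers in the transcript above: GlusterFS op-version integers are derived from release versions as major*10000 + minor*100 + patch, which is why 3.7.12 appears as 30712 and 3.10.4 as 31004. Not every patch release defines a new op-version (the nodes here run 3.10.5 but target 31004), so treat this as the naming scheme, not a lookup table. The helper `opver` below is hypothetical, for illustration only.]

```shell
# Sketch: map a GlusterFS release version to its op-version integer
# using the scheme major*10000 + minor*100 + patch.
opver() {
    IFS=. read -r major minor patch <<EOF
$1
EOF
    echo $((major * 10000 + minor * 100 + patch))
}

opver 3.7.12   # -> 30712, the old cluster op-version above
opver 3.10.4   # -> 31004, the value being set
```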
<br>
<br>
On 01/09/17 10:38, Gaurav Yadav wrote:<br>
<br>
gluster volume set all cluster.op-version 31004<br>
<br>
<br>
</div></div></blockquote>
<br>
</blockquote></div><br></div>