<html>
<head>
<meta content="text/html; charset=utf-8" http-equiv="Content-Type">
</head>
<body bgcolor="#FFFFFF" text="#000000">
<br>
<br>
<div class="moz-cite-prefix">On 01/02/17 19:30, lejeczek wrote:<br>
</div>
<blockquote
cite="mid:50197e6d-5f2e-cee8-591a-42945d61aa83@yahoo.co.uk"
type="cite">
<meta content="text/html; charset=utf-8" http-equiv="Content-Type">
<br>
<br>
<div class="moz-cite-prefix">On 01/02/17 14:44, Atin Mukherjee
wrote:<br>
</div>
<blockquote
cite="mid:CAGNCGH3Kgs3J7E_uRTh_iHcz4Qmu8+_AuidkY2hzLV-dt0cAEA@mail.gmail.com"
type="cite">
<div dir="ltr">
<div>I think you have hit <a moz-do-not-send="true"
href="https://bugzilla.redhat.com/show_bug.cgi?id=1406411">https://bugzilla.redhat.com/show_bug.cgi?id=1406411</a>
which has been fixed in mainline and will be available in
release-3.10 which is slated for next month.<br>
<br>
</div>
<div>To prove you have hit the same problem can you please
confirm the following:<br>
<br>
</div>
<div>1. Which Gluster version are you running?<br>
</div>
<div>2. Was any of the existing bricks down?<br>
</div>
<div>3. Did you mount the volume? If not, you have two options:
(1) bring the brick back up and restart glusterd, then retry
add-brick; or (2) if the existing brick(s) are bad for some
reason, restart glusterd, mount the volume, perform a
lookup, and then the add-brick attempt should succeed.<br>
<br>
</div>
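The option (2) workaround above can be sketched as a shell
sequence; the mount point /mnt/QEMU-VMs is hypothetical, and the
volume name and brick path are taken from this thread, so adjust
for your environment:<br>

```shell
# On the peer where add-brick failed (assumes a systemd-based distro):
systemctl restart glusterd

# Mount the volume and do a lookup so volume state is refreshed:
mkdir -p /mnt/QEMU-VMs                    # hypothetical mount point
mount -t glusterfs localhost:/QEMU-VMs /mnt/QEMU-VMs
ls /mnt/QEMU-VMs > /dev/null              # the lookup

# Then retry the add-brick (run from any working peer):
gluster volume add-brick QEMU-VMs replica 3 \
    10.5.6.49:/__.aLocalStorages/0/0-GLUSTERs/0GLUSTER-QEMU-VMs force

umount /mnt/QEMU-VMs
```
<br>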
<div>
<div>
<div>
<div class="gmail_extra"><br>
</div>
</div>
</div>
</div>
</div>
</blockquote>
I think the chance to investigate it properly has been lost.<br>
It all started with one peer I had missed: it was not migrated
from 3.7 to 3.8, and unfortunately it was a system I could not
tamper with until late evening, which is now.<br>
The problem, though, occurred after I had already upgraded that
gluster to 3.8. I even removed the failing node's bricks,
detached it and re-attached it, and still saw the errors I
described earlier... until now, when I restarted that one last
peer. Now all seems OK; at least I don't see those errors any
more.<br>
<br>
Should I now be looking at anything in particular more closely?<br>
b.w.<br>
L.<br>
<br>
</blockquote>
I realize I might not have been clear there: the box where I
missed the 3.7-&gt;3.8 upgrade was not the one where the problem
existed; it was a different box, if that changes/helps anything.<br>
<br>
<br>
<blockquote
cite="mid:50197e6d-5f2e-cee8-591a-42945d61aa83@yahoo.co.uk"
type="cite"> <br>
<blockquote
cite="mid:CAGNCGH3Kgs3J7E_uRTh_iHcz4Qmu8+_AuidkY2hzLV-dt0cAEA@mail.gmail.com"
type="cite">
<div dir="ltr">
<div>
<div>
<div>
<div class="gmail_extra">
<div class="gmail_quote">On Wed, Feb 1, 2017 at 7:49
PM, lejeczek <span dir="ltr"><<a
moz-do-not-send="true"
href="mailto:peljasz@yahoo.co.uk"
target="_blank">peljasz@yahoo.co.uk</a>></span>
wrote:<br>
<blockquote class="gmail_quote" style="margin:0px
0px 0px 0.8ex;border-left:1px solid
rgb(204,204,204);padding-left:1ex">
<div bgcolor="#FFFFFF"> hi,<br>
<br>
I have a four-peer gluster and one peer is failing,
well, kind of...<br>
If on a working peer I do:<br>
<br>
$ gluster volume add-brick QEMU-VMs replica 3
10.5.6.49:/__.aLocalStorages/0/0-GLUSTERs/0GLUSTER-QEMU-VMs
force<br>
volume add-brick: failed: Commit failed on
whale.priv Please check log file for details.<br>
<br>
but:<br>
<br>
$ gluster vol info QEMU-VMs<br>
Volume Name: QEMU-VMs<br>
Type: Replicate<br>
Volume ID: 8709782a-daa5-4434-a816-c4e0aef8fef2<br>
Status: Started<br>
Snapshot Count: 0<br>
Number of Bricks: 1 x 3 = 3<br>
Transport-type: tcp<br>
Bricks:<br>
Brick1: 10.5.6.100:/__.aLocalStorages/1/0-GLUSTERs/1GLUSTER-QEMU-VMs<br>
Brick2: 10.5.6.17:/__.aLocalStorages/1/0-GLUSTERs/QEMU-VMs<br>
Brick3: 10.5.6.49:/__.aLocalStorages/0/0-GLUSTERs/0GLUSTER-QEMU-VMs
# &lt;= so it is here; also, this command on the
failing peer reports it correctly.<br>
<br>
Interestingly,<br>
<br>
$ gluster volume remove-brick<br>
<br>
reports no errors, but the change is not
propagated to the failing peer: vol info there
still lists its brick as part of the volume.<br>
<br>
And the completely failing part: every command
on the failing peer reports:<br>
<br>
$ gluster volume remove-brick QEMU-VMs replica 2
10.5.6.49:/__.aLocalStorages/0/0-GLUSTERs/0GLUSTER-QEMU-VMs
force<br>
Removing brick(s) can result in data loss. Do
you want to Continue? (y/n) y<br>
volume remove-brick commit force: failed: Commit
failed on 10.5.6.32. Please check log file for
details.<br>
Commit failed on rider.priv Please check log
file for details.<br>
Commit failed on 10.5.6.17. Please check log
file for details.<br>
<br>
I've been watching the logs but, honestly, I don't
know which one(s) I should paste in here.<br>
b.w.<span class="gmail-HOEnZb"><font
color="#888888"><br>
L.<br>
<br>
</font></span></div>
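On the logs question above: glusterd's own log on each peer is
usually the first place to look for commit failures. A sketch,
assuming a default install layout (the exact filename varies by
version):<br>

```shell
# Run on each peer named in the error output (whale.priv, rider.priv,
# 10.5.6.17). glusterd's log is /var/log/glusterfs/glusterd.log on newer
# builds, or /var/log/glusterfs/etc-glusterfs-glusterd.vol.log on
# 3.8-era installs:
tail -n 100 /var/log/glusterfs/glusterd.log

# Look for staging/commit failures around the add-brick timestamp:
grep -iE 'commit|staging' /var/log/glusterfs/glusterd.log | tail
```
<br>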
<br>
_______________________________________________<br>
Gluster-users mailing list<br>
<a moz-do-not-send="true"
href="mailto:Gluster-users@gluster.org">Gluster-users@gluster.org</a><br>
<a moz-do-not-send="true"
href="http://lists.gluster.org/mailman/listinfo/gluster-users"
rel="noreferrer" target="_blank">http://lists.gluster.org/mailman/listinfo/gluster-users</a><br>
</blockquote>
</div>
<br>
<br clear="all">
<br>
-- <br>
<div class="gmail_signature">
<div dir="ltr">
<div>
<div dir="ltr"><br>
</div>
<div>~ Atin (atinm)<br>
</div>
</div>
</div>
</div>
</div>
</div>
</div>
</div>
</div>
</blockquote>
<br>
</blockquote>
<br>
</body>
</html>