<div dir="ltr"><div>I think you have hit <a href="https://bugzilla.redhat.com/show_bug.cgi?id=1406411">https://bugzilla.redhat.com/show_bug.cgi?id=1406411</a> which has been fixed in mainline and will be available in release-3.10 which is slated for next month.<br><br></div><div>To prove you have hit the same problem can you please confirm the following:<br><br></div><div>1. Which Gluster version are you running?<br></div><div>2. Was any of the existing brick down?<br></div><div>2. Did you mounted the volume? If not you have two ways (1) bring up the brick and restart glusterd followed by add-brick or (2) if the existing brick(s) is bad for some reason, restarting glusterd and mounting the volume followed by a look up and then attempting add-brick should succeed.<br><br></div><div><div><div><div class="gmail_extra"><br><div class="gmail_quote">On Wed, Feb 1, 2017 at 7:49 PM, lejeczek <span dir="ltr"><<a href="mailto:peljasz@yahoo.co.uk" target="_blank">peljasz@yahoo.co.uk</a>></span> wrote:<br><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex">

On Wed, Feb 1, 2017 at 7:49 PM, lejeczek <peljasz@yahoo.co.uk> wrote:
hi,

I have a four-peer Gluster setup and one peer is failing, well, kind of..
If on a working peer I do:
$ gluster volume add-brick QEMU-VMs replica 3 10.5.6.49:/__.aLocalStorages/0/0-GLUSTERs/0GLUSTER-QEMU-VMs force
volume add-brick: failed: Commit failed on whale.priv Please check log file for details.

but:
$ gluster vol info QEMU-VMs
Volume Name: QEMU-VMs
Type: Replicate
Volume ID: 8709782a-daa5-4434-a816-c4e0aef8fef2
Status: Started
Snapshot Count: 0
Number of Bricks: 1 x 3 = 3
Transport-type: tcp
Bricks:
Brick1: 10.5.6.100:/__.aLocalStorages/1/0-GLUSTERs/1GLUSTER-QEMU-VMs
Brick2: 10.5.6.17:/__.aLocalStorages/1/0-GLUSTERs/QEMU-VMs
Brick3: 10.5.6.49:/__.aLocalStorages/0/0-GLUSTERs/0GLUSTER-QEMU-VMs   # <= so it is here; also, this command on the failing peer reports correctly.

Interestingly,

$ gluster volume remove-brick

completes with no errors, but this change is not propagated to the failing peer: vol info there still reports its brick as part of the volume.
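
I suppose one way to confirm the divergence is to compare the stored volume config on each peer (assuming the default /var/lib/glusterd working directory):

$ gluster peer status                          # all peers should show "Peer in Cluster (Connected)"
$ cat /var/lib/glusterd/vols/QEMU-VMs/cksum    # run on every peer; the failing peer's checksum should differ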

And the "failing completely" part: every command on the failing peer reports:

$ gluster volume remove-brick QEMU-VMs replica 2 10.5.6.49:/__.aLocalStorages/0/0-GLUSTERs/0GLUSTER-QEMU-VMs force
Removing brick(s) can result in data loss. Do you want to Continue? (y/n) y
volume remove-brick commit force: failed: Commit failed on 10.5.6.32. Please check log file for details.
Commit failed on rider.priv Please check log file for details.
Commit failed on 10.5.6.17. Please check log file for details.

I've been watching the logs but, honestly, I don't know which one(s) I should paste here.
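Perhaps the glusterd log? On my systems that would be something like (default location; the exact filename may differ by version):

$ tail -n 50 /var/log/glusterfs/etc-glusterfs-glusterd.vol.log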
b.w.
L.
<a href="http://lists.gluster.org/mailman/listinfo/gluster-users" rel="noreferrer" target="_blank">http://lists.gluster.org/<wbr>mailman/listinfo/gluster-users</a><br></blockquote></div><br><br clear="all"><br>-- <br><div class="gmail_signature"><div dir="ltr"><div><div dir="ltr"><br></div><div>~ Atin (atinm)<br></div></div></div></div>