<html>
<head>
<meta content="text/html; charset=utf-8" http-equiv="Content-Type">
</head>
<body text="#000000" bgcolor="#FFFFFF">
<div class="moz-cite-prefix">Thanks D for the confirmation. I
understand that is how it would have happened, but we are still
not sure why the directory was created only on s0 and not on the
others. That said, it is possible for it to exist only on s0
until s0 was restarted. So 58 hours after the repo was created,
when glusterd was first restarted, it would have passed the
volume's info to the other nodes during the handshake. After
that, every time you bring down glusterd on s0 and delete the
repo, s0 gets it back from the other nodes in the cluster, which
now also have the repo. That is the only scenario in which the
volume would initially be present only on s0 but is now present
on all nodes.<br>
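The re-sync behaviour described above can be sketched as a small local simulation. This is only an illustration: the temp directories stand in for /var/lib/glusterd on two nodes, and the copy loop mimics (it is not) glusterd's actual handshake code.

```shell
# Mock "nodes": each temp dir stands in for /var/lib/glusterd on one host.
S0=$(mktemp -d); S1=$(mktemp -d)
mkdir -p "$S0/vols/data-teste" "$S1/vols/data-teste"  # repo on both after the first handshake

# Admin stops glusterd on s0 only and deletes the repo there:
rm -rf "$S0/vols/data-teste"

# On restart, the handshake restores any volume a peer still holds:
for vol in "$S1"/vols/*; do
  name=$(basename "$vol")
  [ -d "$S0/vols/$name" ] || cp -r "$vol" "$S0/vols/$name"
done

[ -d "$S0/vols/data-teste" ] && echo "repo restored from peer"
```

This is why deleting the directory on s0 alone can never stick once the other peers have a copy: the deletion has to happen on every node while all the daemons are down.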
<br>
We will keep you updated as and when we find out why it
created the volume on s0 only. Thanks.<br>
<br>
Regards,<br>
Avra<br>
<br>
On 02/21/2017 09:02 PM, Gambit15 wrote:<br>
</div>
<blockquote
cite="mid:CAEfk3RXAOvfih7xZSOsqPKV+0GjxfcnZqZcL96n00A6van=CtQ@mail.gmail.com"
type="cite">
<div dir="ltr">Hi Avra,<br>
<div>
<div class="gmail_extra"><br>
<div class="gmail_quote">On 21 February 2017 at 03:22, Avra
Sengupta <span dir="ltr"><<a moz-do-not-send="true"
href="mailto:asengupt@redhat.com" target="_blank">asengupt@redhat.com</a>></span>
wrote:<br>
<blockquote class="gmail_quote" style="margin:0px 0px 0px
0.8ex;border-left:1px solid
rgb(204,204,204);padding-left:1ex">
<div bgcolor="#FFFFFF">
<div
class="gmail-m_-7177417682601481536m_-3709776080829440996moz-cite-prefix">Hi
D,<br>
<br>
We tried reproducing the issue with a similar setup
but were unable to do so. We are still investigating
it.<br>
<br>
I have another follow-up question. You said that the
repo exists only on s0? If that were the case, then
bringing glusterd down on s0 only, deleting the
repo and starting glusterd once again would have
removed it. The fact that the repo is restored as
soon as glusterd restarts on s0 means that some
other node(s) in the cluster also has that repo and
is passing that information to the glusterd on s0
during the handshake. Could you please confirm
whether any node apart from s0 has that particular
repo (/var/lib/glusterd/vols/data-teste).
Thanks.<br>
</div>
</div>
</blockquote>
<div><br>
</div>
<div>I'll point out that this isn't a recurring issue.
It's the first time this has happened, and it hasn't
happened since. If it weren't for the orphaned volume, I
wouldn't even have requested support.<br>
<br>
Huh, so, I've just rescanned all of the nodes, and the
volume is now appearing on all of them. That's very odd, as
the volume was "created" on Wed the 15th &amp; until the
end of the 17th it was still appearing only on s0 (both in
the volume list &amp; in the vols directory).<br>
</div>
<div>Grepping the etc-glusterfs-glusterd.vol logs, the
first mention of the volume after the failures I posted
previously is the following...<br>
<br>
<br>
[2017-02-17 15:46:17.199193] W
[rpcsvc.c:265:rpcsvc_program_actor] 0-rpc-service: RPC
program not available (req 1298437 330) for <a
moz-do-not-send="true"
href="http://10.123.123.102:49008">10.123.123.102:49008</a><br>
[2017-02-17 15:46:17.199216] E
[rpcsvc.c:560:rpcsvc_check_and_reply_error] 0-rpcsvc:
rpc actor failed to complete successfully<br>
[2017-02-17 22:20:58.525036] I [MSGID: 106004]
[glusterd-handler.c:5219:__glusterd_peer_rpc_notify]
0-management: Peer <s3>
(<978c228a-86f8-48dc-89c1-c63914eaa9a4>), in state
<Peer in Cluster>, has<br>
disconnected from glusterd.<br>
[2017-02-17 22:20:58.525128] W
[glusterd-locks.c:675:glusterd_mgmt_v3_unlock]
(-->/usr/lib64/glusterfs/3.8.8/xlator/mgmt/glusterd.so(+0x1deac)
[0x7f2a85517eac]
-->/usr/lib64/glusterfs/3.8.8/xlator/<br>
mgmt/glusterd.so(+0x27a58) [0x7f2a85521a58]
-->/usr/lib64/glusterfs/3.8.8/xlator/mgmt/glusterd.so(+0xd09da)
[0x7f2a855ca9da] ) 0-management: Lock for vol data not
held<br>
[2017-02-17 22:20:58.525144] W [MSGID: 106118]
[glusterd-handler.c:5241:__glusterd_peer_rpc_notify]
0-management: Lock not released for data<br>
[2017-02-17 22:20:58.525171] W
[glusterd-locks.c:675:glusterd_mgmt_v3_unlock]
(-->/usr/lib64/glusterfs/3.8.8/xlator/mgmt/glusterd.so(+0x1deac)
[0x7f2a85517eac]
-->/usr/lib64/glusterfs/3.8.8/xlator/<br>
mgmt/glusterd.so(+0x27a58) [0x7f2a85521a58]
-->/usr/lib64/glusterfs/3.8.8/xlator/mgmt/glusterd.so(+0xd09da)
[0x7f2a855ca9da] ) 0-management: Lock for vol data-novo
not held<br>
[2017-02-17 22:20:58.525182] W [MSGID: 106118]
[glusterd-handler.c:5241:__glusterd_peer_rpc_notify]
0-management: Lock not released for data-novo<br>
[2017-02-17 22:20:58.525205] W
[glusterd-locks.c:675:glusterd_mgmt_v3_unlock]
(-->/usr/lib64/glusterfs/3.8.8/xlator/mgmt/glusterd.so(+0x1deac)
[0x7f2a85517eac]
-->/usr/lib64/glusterfs/3.8.8/xlator/mgmt/glusterd.so(+0x27a58)
[0x7f2a85521a58]
-->/usr/lib64/glusterfs/3.8.8/xlator/mgmt/glusterd.so(+0xd09da)
[0x7f2a855ca9da] ) 0-management: Lock for vol data-teste
not held<br>
[2017-02-17 22:20:58.525235] W [MSGID: 106118]
[glusterd-handler.c:5241:__glusterd_peer_rpc_notify]
0-management: Lock not released for data-teste<br>
[2017-02-17 22:20:58.525261] W
[glusterd-locks.c:675:glusterd_mgmt_v3_unlock]
(-->/usr/lib64/glusterfs/3.8.8/xlator/mgmt/glusterd.so(+0x1deac)
[0x7f2a85517eac]
-->/usr/lib64/glusterfs/3.8.8/xlator/mgmt/glusterd.so(+0x27a58)
[0x7f2a85521a58]
-->/usr/lib64/glusterfs/3.8.8/xlator/mgmt/glusterd.so(+0xd09da)
[0x7f2a855ca9da] ) 0-management: Lock for vol
data-teste2 not held<br>
[2017-02-17 22:20:58.525272] W [MSGID: 106118]
[glusterd-handler.c:5241:__glusterd_peer_rpc_notify]
0-management: Lock not released for data-teste2<br>
<br>
<br>
</div>
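For anyone following along, the search described above can be reproduced with plain grep; the block below runs it against a three-line mock log, since the real file (assumed here to be /var/log/glusterfs/etc-glusterfs-glusterd.vol.log) only exists on the servers.

```shell
# Build a mock of the glusterd log, then find the first mention of the
# orphaned volume. -n prints the line number, -m1 stops at the first match.
LOG=$(mktemp)
cat > "$LOG" <<'EOF'
[2017-02-17 15:46:17.199193] W 0-rpc-service: RPC program not available
[2017-02-17 22:20:58.525036] I 0-management: Peer <s3> has disconnected
[2017-02-17 22:20:58.525235] W 0-management: Lock not released for data-teste
EOF
grep -m1 -n 'data-teste' "$LOG"
```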
<div>That's 58 hours between the volume's failed creation
& its first sign of life...??<br>
</div>
<div><br>
</div>
<div>At the time when it was only appearing on s0, I tried
stopping glusterd on multiple occasions & deleting
the volume's directory within vols, but it always
returned as soon as I restarted glusterd.<br>
</div>
<div>I did this with the help of Joe on IRC at the time,
and he was also stumped (he suggested that the data was
possibly still being held in memory somewhere), so I'm
quite sure this wasn't simply an oversight on my part.<br>
<br>
</div>
<div>Anyway, many thanks for the help, and I'd be happy to
provide any logs if desired; however, whilst knowing what
happened &amp; why might be useful, everything now seems to
have resolved itself.<br>
</div>
<div><br>
</div>
<div>Cheers,<br>
</div>
<div> Doug<br>
</div>
<div> </div>
<blockquote class="gmail_quote" style="margin:0px 0px 0px
0.8ex;border-left:1px solid
rgb(204,204,204);padding-left:1ex">
<div bgcolor="#FFFFFF">
<div
class="gmail-m_-7177417682601481536m_-3709776080829440996moz-cite-prefix">
<br>
Regards,<br>
Avra
<div>
<div class="gmail-m_-7177417682601481536h5"><br>
<br>
On 02/20/2017 06:51 PM, Gambit15 wrote:<br>
</div>
</div>
</div>
<div>
<div class="gmail-m_-7177417682601481536h5">
<blockquote type="cite">
<div dir="ltr">Hi Avra,<br>
<div>
<div class="gmail_extra"><br>
<div class="gmail_quote">On 20 February
2017 at 02:51, Avra Sengupta <span
dir="ltr"><<a
moz-do-not-send="true"
href="mailto:asengupt@redhat.com"
target="_blank"><a class="moz-txt-link-abbreviated" href="mailto:asengupt@redhat.com">asengupt@redhat.com</a></a>></span>
wrote:<br>
<blockquote class="gmail_quote"
style="margin:0px 0px 0px
0.8ex;border-left:1px solid
rgb(204,204,204);padding-left:1ex">
<div bgcolor="#FFFFFF">
<div
class="gmail-m_-7177417682601481536m_-3709776080829440996m_-6721563728597169759moz-cite-prefix">Hi
D,<br>
<br>
It seems you tried to take a clone
of a snapshot when that snapshot
was not activated.<br>
</div>
</div>
</blockquote>
<div><br>
</div>
<div>Correct. As per my commands, I then
noticed the issue, checked the
snapshot's status & activated it.
I included this in my command history
just to clear up any doubts from the
logs.<br>
<br>
</div>
<blockquote class="gmail_quote"
style="margin:0px 0px 0px
0.8ex;border-left:1px solid
rgb(204,204,204);padding-left:1ex">
<div bgcolor="#FFFFFF">
<div
class="gmail-m_-7177417682601481536m_-3709776080829440996m_-6721563728597169759moz-cite-prefix">
However, in this scenario the
cloned volume should not be in an
inconsistent state. I will try to
reproduce this and see if it's a
bug. Meanwhile, could you please
answer the following queries:<br>
1. How many nodes are in the
cluster?<br>
</div>
</div>
</blockquote>
<div><br>
There are 4 nodes in a (2+1)x2 setup.<br>
</div>
<div>s0 replicates to s1, with an
arbiter on s2, and s2 replicates to
s3, with an arbiter on s0.<br>
</div>
<div><br>
</div>
<blockquote class="gmail_quote"
style="margin:0px 0px 0px
0.8ex;border-left:1px solid
rgb(204,204,204);padding-left:1ex">
<div bgcolor="#FFFFFF">
<div
class="gmail-m_-7177417682601481536m_-3709776080829440996m_-6721563728597169759moz-cite-prefix">
2. How many bricks does the
snapshot
data-bck_GMT-2017.02.09-14.15.<wbr>43
have?<br>
</div>
</div>
</blockquote>
<div> </div>
<div>6 bricks, including the 2 arbiters.<br>
<br>
</div>
<blockquote class="gmail_quote"
style="margin:0px 0px 0px
0.8ex;border-left:1px solid
rgb(204,204,204);padding-left:1ex">
<div bgcolor="#FFFFFF">
<div
class="gmail-m_-7177417682601481536m_-3709776080829440996m_-6721563728597169759moz-cite-prefix">
3. Was the snapshot clone command
issued from a node which did not
have any bricks for the snapshot
data-bck_GMT-2017.02.09-14.15.43?<br>
</div>
</div>
</blockquote>
<div><br>
</div>
<div>All commands were issued from s0.
All volumes have bricks on every node
in the cluster.<br>
</div>
<div> </div>
<blockquote class="gmail_quote"
style="margin:0px 0px 0px
0.8ex;border-left:1px solid
rgb(204,204,204);padding-left:1ex">
<div bgcolor="#FFFFFF">
<div
class="gmail-m_-7177417682601481536m_-3709776080829440996m_-6721563728597169759moz-cite-prefix">
4. I see you tried to delete the
new cloned volume. Did the new
cloned volume land in this state
after the failure to create the
clone, or the failure to delete it?<br>
</div>
</div>
</blockquote>
<div><br>
</div>
<div>I noticed there was something wrong
as soon as I created the clone. The
clone command completed; however, I
was then unable to do anything with
it because the clone didn't exist on
s1-s3.<br>
</div>
<div> </div>
<blockquote class="gmail_quote"
style="margin:0px 0px 0px
0.8ex;border-left:1px solid
rgb(204,204,204);padding-left:1ex">
<div bgcolor="#FFFFFF">
<div
class="gmail-m_-7177417682601481536m_-3709776080829440996m_-6721563728597169759moz-cite-prefix">
<br>
If you want to remove the half-baked
volume from the cluster, please
proceed with the following steps.<br>
1. Bring down glusterd on all
nodes by running the following
command on each node:<br>
$ systemctl stop glusterd<br>
Verify that glusterd is down
on all nodes by running the
following command on each node:<br>
$ systemctl status glusterd<br>
2. Delete the following repo from
all the nodes (on whichever nodes
it exists):<br>
/var/lib/glusterd/vols/data-teste<br>
</div>
</div>
</blockquote>
<div><br>
</div>
<div>The repo only exists on s0, but
stopping glusterd on only s0 &amp;
deleting the directory didn't work;
the directory was restored as soon as
glusterd was restarted. I haven't yet
tried stopping glusterd on *all* nodes
before doing this, although I'll need
to plan for that, as it'll take the
entire cluster off the air.<br>
<br>
</div>
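The all-nodes variant can be rehearsed as a local dry run before taking the cluster off the air. The block below is only a sketch: mock state directories stand in for /var/lib/glusterd on s0-s3, marker files stand in for stopping and starting the daemon, and the final restart step is assumed rather than taken from the steps quoted above.

```shell
BASE=$(mktemp -d)
for n in s0 s1 s2 s3; do
  mkdir -p "$BASE/$n/vols/data-teste"        # repo present on every node
done

# Step 1: glusterd down on ALL nodes first (marker file = stopped daemon)
for n in s0 s1 s2 s3; do touch "$BASE/$n/glusterd.stopped"; done

# Step 2: with every daemon down, no peer can hand the repo back during a
# handshake, so deleting it on each node is final
for n in s0 s1 s2 s3; do
  [ -f "$BASE/$n/glusterd.stopped" ] && rm -rf "$BASE/$n/vols/data-teste"
done

# Step 3 (assumed): bring glusterd back up everywhere
for n in s0 s1 s2 s3; do rm "$BASE/$n/glusterd.stopped"; done

find "$BASE" -type d -name data-teste | wc -l   # nothing left to re-sync
```

On the real cluster each loop body would be a `systemctl stop glusterd`, `rm -rf /var/lib/glusterd/vols/data-teste`, and `systemctl start glusterd` run on the node itself, matching the quoted steps.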
<div>Thanks for the reply,<br>
</div>
<div> Doug<br>
</div>
<div><br>
</div>
<blockquote class="gmail_quote"
style="margin:0px 0px 0px
0.8ex;border-left:1px solid
rgb(204,204,204);padding-left:1ex">
<div bgcolor="#FFFFFF">
<div
class="gmail-m_-7177417682601481536m_-3709776080829440996m_-6721563728597169759moz-cite-prefix">
<br>
Regards,<br>
Avra
<div>
<div
class="gmail-m_-7177417682601481536m_-3709776080829440996h5"><br>
<br>
On 02/16/2017 08:01 PM,
Gambit15 wrote:<br>
</div>
</div>
</div>
<blockquote type="cite">
<div>
<div
class="gmail-m_-7177417682601481536m_-3709776080829440996h5">
<div dir="ltr">
<div>
<div>
<div>Hey guys,<br>
</div>
I tried to create a new
volume from a cloned
snapshot yesterday;
however, something went
wrong during the process
&amp; I'm now stuck with
the new volume having
been created on the
server I ran the
commands on (s0), but
not on the rest of the
peers. I'm unable to
delete this new volume
from that server, as it
doesn't exist on the
peers.<br>
<br>
</div>
What do I do?<br>
</div>
Any insights into what may
have gone wrong?<br>
<div>
<div>
<div>
<div>
<div>
<div><br>
CentOS 7.3.1611</div>
<div>Gluster 3.8.8<br>
<br>
</div>
<div>The command
history &
extract from
etc-glusterfs-glusterd.vol.log
are included
below.<br>
</div>
<div><br>
gluster volume
list<br>
gluster snapshot
list<br>
gluster snapshot
clone data-teste
data-bck_GMT-2017.02.09-14.15.<wbr>43<br>
gluster volume
status
data-teste<br>
gluster volume
delete
data-teste<br>
gluster snapshot
create teste
data<br>
gluster snapshot
clone data-teste
teste_GMT-2017.02.15-12.44.04<br>
gluster snapshot
status<br>
gluster snapshot
activate
teste_GMT-2017.02.15-12.44.04<br>
gluster snapshot
clone data-teste
teste_GMT-2017.02.15-12.44.04<br>
<br>
<br>
[2017-02-15
12:43:21.667403]
I [MSGID:
106499]
[glusterd-handler.c:4349:__glu<wbr>sterd_handle_status_volume]
0-management:
Received status
volume req for
volume
data-teste<br>
[2017-02-15
12:43:21.682530]
E [MSGID:
106301]
[glusterd-syncop.c:1297:gd_sta<wbr>ge_op_phase]
0-management:
Staging of
operation
'Volume Status'
failed on
localhost :
Volume
data-teste is
not started<br>
[2017-02-15
12:43:43.633031]
I [MSGID:
106495]
[glusterd-handler.c:3128:__glu<wbr>sterd_handle_getwd]
0-glusterd:
Received getwd
req<br>
[2017-02-15
12:43:43.640597]
I
[run.c:191:runner_log]
(-->/usr/lib64/glusterfs/3.8.8<wbr>/xlator/mgmt/glusterd.so(+0xcc<wbr>4b2)
[0x7ffb396a14b2]
-->/usr/lib64/glusterfs/3.8.8/<wbr>xlator/mgmt/glusterd.so(+0xcbf<wbr>65)
[0x7ffb396a0f65]
-->/lib64/libglusterfs.so.0(ru<wbr>nner_log+0x115) [0x7ffb44ec31c5] )
0-management:
Ran script:
/var/lib/glusterd/hooks/1/dele<wbr>te/post/S57glusterfind-delete-<wbr>post
--volname=data-teste<br>
[2017-02-15
13:05:20.103423]
E [MSGID:
106122]
[glusterd-snapshot.c:2397:glus<wbr>terd_snapshot_clone_prevalidat<wbr>e]
0-management:
Failed to pre
validate<br>
[2017-02-15
13:05:20.103464]
E [MSGID:
106443]
[glusterd-snapshot.c:2413:glus<wbr>terd_snapshot_clone_prevalidat<wbr>e]
0-management:
One or more
bricks are not
running. Please
run snapshot
status command
to see brick
status.<br>
Please start the
stopped brick
and then issue
snapshot clone
command<br>
[2017-02-15
13:05:20.103481]
W [MSGID:
106443]
[glusterd-snapshot.c:8563:glus<wbr>terd_snapshot_prevalidate]
0-management:
Snapshot clone
pre-validation
failed<br>
[2017-02-15
13:05:20.103492]
W [MSGID:
106122]
[glusterd-mgmt.c:167:gd_mgmt_v<wbr>3_pre_validate_fn]
0-management:
Snapshot
Prevalidate
Failed<br>
[2017-02-15
13:05:20.103503]
E [MSGID:
106122]
[glusterd-mgmt.c:884:glusterd_<wbr>mgmt_v3_pre_validate]
0-management:
Pre Validation
failed for
operation
Snapshot on
local node<br>
[2017-02-15
13:05:20.103514]
E [MSGID:
106122]
[glusterd-mgmt.c:2243:glusterd<wbr>_mgmt_v3_initiate_snap_phases]
0-management:
Pre Validation
Failed<br>
[2017-02-15
13:05:20.103531]
E [MSGID:
106027]
[glusterd-snapshot.c:8118:glus<wbr>terd_snapshot_clone_postvalida<wbr>te]
0-management:
unable to find
clone data-teste
volinfo<br>
[2017-02-15
13:05:20.103542]
W [MSGID:
106444]
[glusterd-snapshot.c:9063:glus<wbr>terd_snapshot_postvalidate]
0-management:
Snapshot create
post-validation
failed<br>
[2017-02-15
13:05:20.103561]
W [MSGID:
106121]
[glusterd-mgmt.c:351:gd_mgmt_v<wbr>3_post_validate_fn]
0-management:
postvalidate
operation failed<br>
[2017-02-15
13:05:20.103572]
E [MSGID:
106121]
[glusterd-mgmt.c:1660:glusterd<wbr>_mgmt_v3_post_validate]
0-management:
Post Validation
failed for
operation
Snapshot on
local node<br>
[2017-02-15
13:05:20.103582]
E [MSGID:
106122]
[glusterd-mgmt.c:2363:glusterd<wbr>_mgmt_v3_initiate_snap_phases]
0-management:
Post Validation
Failed<br>
[2017-02-15
13:11:15.862858]
W [MSGID:
106057]
[glusterd-snapshot-utils.c:410<wbr>:glusterd_snap_volinfo_find]
0-management:
Snap volume
c3ceae3889484e96ab8bed69593cf6<wbr>d3.s0.run-gluster-snaps-c3ceae<wbr>3889484e96ab8bed69593cf6d3-bri<wbr>ck1-data-brick
not found
[Invalid argument]<br>
[2017-02-15
13:11:16.314759]
I [MSGID:
106143]
[glusterd-pmap.c:250:pmap_regi<wbr>stry_bind]
0-pmap: adding
brick
/run/gluster/snaps/c3ceae38894<wbr>84e96ab8bed69593cf6d3/brick1/d<wbr>ata/brick
on port 49452<br>
[2017-02-15
13:11:16.316090]
I
[rpc-clnt.c:1046:rpc_clnt_conn<wbr>ection_init]
0-management:
setting
frame-timeout to
600<br>
[2017-02-15
13:11:16.348867]
W [MSGID:
106057]
[glusterd-snapshot-utils.c:410<wbr>:glusterd_snap_volinfo_find]
0-management:
Snap volume
c3ceae3889484e96ab8bed69593cf6<wbr>d3.s0.run-gluster-snaps-c3ceae<wbr>3889484e96ab8bed69593cf6d3-bri<wbr>ck6-data-arbiter
not found
[Invalid argument]<br>
[2017-02-15
13:11:16.558878]
I [MSGID:
106143]
[glusterd-pmap.c:250:pmap_regi<wbr>stry_bind]
0-pmap: adding
brick
/run/gluster/snaps/c3ceae38894<wbr>84e96ab8bed69593cf6d3/brick6/d<wbr>ata/arbiter
on port 49453<br>
[2017-02-15
13:11:16.559883]
I
[rpc-clnt.c:1046:rpc_clnt_conn<wbr>ection_init]
0-management:
setting
frame-timeout to
600<br>
[2017-02-15
13:11:23.279721]
E [MSGID:
106030]
[glusterd-snapshot.c:4736:glus<wbr>terd_take_lvm_snapshot]
0-management:
taking snapshot
of the brick
(/run/gluster/snaps/c3ceae3889<wbr>484e96ab8bed69593cf6d3/brick1/<wbr>data/brick)
of device
/dev/mapper/v0.dc0.cte--g0-c3c<wbr>eae3889484e96ab8bed69593cf6d3_<wbr>0
failed<br>
[2017-02-15
13:11:23.279790]
E [MSGID:
106030]
[glusterd-snapshot.c:5135:glus<wbr>terd_take_brick_snapshot]
0-management:
Failed to take
snapshot of
brick
s0:/run/gluster/snaps/c3ceae38<wbr>89484e96ab8bed69593cf6d3/brick<wbr>1/data/brick<br>
[2017-02-15
13:11:23.279806]
E [MSGID:
106030]
[glusterd-snapshot.c:6484:glus<wbr>terd_take_brick_snapshot_task]
0-management:
Failed to take
backend snapshot
for brick
s0:/run/gluster/snaps/data-tes<wbr>te/brick1/data/brick
volume(data-teste)<br>
[2017-02-15
13:11:23.286678]
E [MSGID:
106030]
[glusterd-snapshot.c:4736:glus<wbr>terd_take_lvm_snapshot]
0-management:
taking snapshot
of the brick
(/run/gluster/snaps/c3ceae3889<wbr>484e96ab8bed69593cf6d3/brick6/<wbr>data/arbiter)
of device
/dev/mapper/v0.dc0.cte--g0-c3c<wbr>eae3889484e96ab8bed69593cf6d3_<wbr>1
failed<br>
[2017-02-15
13:11:23.286735]
E [MSGID:
106030]
[glusterd-snapshot.c:5135:glus<wbr>terd_take_brick_snapshot]
0-management:
Failed to take
snapshot of
brick
s0:/run/gluster/snaps/c3ceae38<wbr>89484e96ab8bed69593cf6d3/brick<wbr>6/data/arbiter<br>
[2017-02-15
13:11:23.286749]
E [MSGID:
106030]
[glusterd-snapshot.c:6484:glus<wbr>terd_take_brick_snapshot_task]
0-management:
Failed to take
backend snapshot
for brick
s0:/run/gluster/snaps/data-tes<wbr>te/brick6/data/arbiter
volume(data-teste)<br>
[2017-02-15
13:11:23.286793]
E [MSGID:
106030]
[glusterd-snapshot.c:6626:glus<wbr>terd_schedule_brick_snapshot]
0-management:
Failed to create
snapshot<br>
[2017-02-15
13:11:23.286813]
E [MSGID:
106441]
[glusterd-snapshot.c:6796:glus<wbr>terd_snapshot_clone_commit]
0-management:
Failed to take
backend snapshot
data-teste<br>
[2017-02-15
13:11:25.530666]
E [MSGID:
106442]
[glusterd-snapshot.c:8308:glus<wbr>terd_snapshot]
0-management:
Failed to clone
snapshot<br>
[2017-02-15
13:11:25.530721]
W [MSGID:
106123]
[glusterd-mgmt.c:272:gd_mgmt_v<wbr>3_commit_fn]
0-management:
Snapshot Commit
Failed<br>
[2017-02-15
13:11:25.530735]
E [MSGID:
106123]
[glusterd-mgmt.c:1427:glusterd<wbr>_mgmt_v3_commit]
0-management:
Commit failed
for operation
Snapshot on
local node<br>
[2017-02-15
13:11:25.530749]
E [MSGID:
106123]
[glusterd-mgmt.c:2304:glusterd<wbr>_mgmt_v3_initiate_snap_phases]
0-management:
Commit Op Failed<br>
[2017-02-15
13:11:25.532312]
E [MSGID:
106027]
[glusterd-snapshot.c:8118:glus<wbr>terd_snapshot_clone_postvalida<wbr>te]
0-management:
unable to find
clone data-teste
volinfo<br>
[2017-02-15
13:11:25.532339]
W [MSGID:
106444]
[glusterd-snapshot.c:9063:glus<wbr>terd_snapshot_postvalidate]
0-management:
Snapshot create
post-validation
failed<br>
[2017-02-15
13:11:25.532353]
W [MSGID:
106121]
[glusterd-mgmt.c:351:gd_mgmt_v<wbr>3_post_validate_fn]
0-management:
postvalidate
operation failed<br>
[2017-02-15
13:11:25.532367]
E [MSGID:
106121]
[glusterd-mgmt.c:1660:glusterd<wbr>_mgmt_v3_post_validate]
0-management:
Post Validation
failed for
operation
Snapshot on
local node<br>
[2017-02-15
13:11:25.532381]
E [MSGID:
106122]
[glusterd-mgmt.c:2363:glusterd<wbr>_mgmt_v3_initiate_snap_phases]
0-management:
Post Validation
Failed<br>
[2017-02-15
13:29:53.779020]
E [MSGID:
106062]
[glusterd-snapshot-utils.c:239<wbr>1:glusterd_snap_create_use_rsp<wbr>_dict]
0-management:
failed to get
snap UUID<br>
[2017-02-15
13:29:53.779073]
E [MSGID:
106099]
[glusterd-snapshot-utils.c:250<wbr>7:glusterd_snap_use_rsp_dict]
0-glusterd:
Unable to use
rsp dict<br>
[2017-02-15
13:29:53.779096]
E [MSGID:
106108]
[glusterd-mgmt.c:1305:gd_mgmt_<wbr>v3_commit_cbk_fn]
0-management:
Failed to
aggregate
response from
node/brick<br>
[2017-02-15
13:29:53.779136]
E [MSGID:
106116]
[glusterd-mgmt.c:135:gd_mgmt_v<wbr>3_collate_errors]
0-management:
Commit failed on
s3. Please check
log file for
details.<br>
[2017-02-15
13:29:54.136196]
E [MSGID:
106116]
[glusterd-mgmt.c:135:gd_mgmt_v<wbr>3_collate_errors]
0-management:
Commit failed on
s1. Please check
log file for
details.<br>
The message "E
[MSGID: 106108]
[glusterd-mgmt.c:1305:gd_mgmt_<wbr>v3_commit_cbk_fn] 0-management:
Failed to
aggregate
response from
node/brick"
repeated 2 times
between
[2017-02-15
13:29:53.779096]
and [2017-02-15
13:29:54.535080]<br>
[2017-02-15
13:29:54.535098]
E [MSGID:
106116]
[glusterd-mgmt.c:135:gd_mgmt_v<wbr>3_collate_errors]
0-management:
Commit failed on
s2. Please check
log file for
details.<br>
[2017-02-15
13:29:54.535320]
E [MSGID:
106123]
[glusterd-mgmt.c:1490:glusterd<wbr>_mgmt_v3_commit]
0-management:
Commit failed on
peers<br>
[2017-02-15
13:29:54.535370]
E [MSGID:
106123]
[glusterd-mgmt.c:2304:glusterd<wbr>_mgmt_v3_initiate_snap_phases]
0-management:
Commit Op Failed<br>
[2017-02-15
13:29:54.539708]
E [MSGID:
106116]
[glusterd-mgmt.c:135:gd_mgmt_v<wbr>3_collate_errors]
0-management:
Post Validation
failed on s1.
Please check log
file for
details.<br>
[2017-02-15
13:29:54.539797]
E [MSGID:
106116]
[glusterd-mgmt.c:135:gd_mgmt_v<wbr>3_collate_errors]
0-management:
Post Validation
failed on s3.
Please check log
file for
details.<br>
[2017-02-15
13:29:54.539856]
E [MSGID:
106116]
[glusterd-mgmt.c:135:gd_mgmt_v<wbr>3_collate_errors]
0-management:
Post Validation
failed on s2.
Please check log
file for
details.<br>
[2017-02-15
13:29:54.540224]
E [MSGID:
106121]
[glusterd-mgmt.c:1713:glusterd<wbr>_mgmt_v3_post_validate]
0-management:
Post Validation
failed on peers<br>
[2017-02-15
13:29:54.540256]
E [MSGID:
106122]
[glusterd-mgmt.c:2363:glusterd<wbr>_mgmt_v3_initiate_snap_phases]
0-management:
Post Validation
Failed<br>
The message "E
[MSGID: 106062]
[glusterd-snapshot-utils.c:239<wbr>1:glusterd_snap_create_use_rsp<wbr>_dict]
0-management:
failed to get
snap UUID"
repeated 2 times
between
[2017-02-15
13:29:53.779020]
and [2017-02-15
13:29:54.535075]<br>
The message "E
[MSGID: 106099]
[glusterd-snapshot-utils.c:250<wbr>7:glusterd_snap_use_rsp_dict]
0-glusterd:
Unable to use
rsp dict"
repeated 2 times
between
[2017-02-15
13:29:53.779073]
and [2017-02-15
13:29:54.535078]<br>
[2017-02-15
13:31:14.285666]
I [MSGID:
106488]
[glusterd-handler.c:1537:__glu<wbr>sterd_handle_cli_get_volume]
0-management:
Received get vol
req<br>
[2017-02-15
13:32:17.827422]
E [MSGID:
106027]
[glusterd-handler.c:4670:glust<wbr>erd_get_volume_opts]
0-management:
Volume
cluster.locking-scheme
does not exist<br>
[2017-02-15
13:34:02.635762]
E [MSGID:
106116]
[glusterd-mgmt.c:135:gd_mgmt_v<wbr>3_collate_errors]
0-management:
Pre Validation
failed on s1.
Volume
data-teste does
not exist<br>
[2017-02-15
13:34:02.635838]
E [MSGID:
106116]
[glusterd-mgmt.c:135:gd_mgmt_v<wbr>3_collate_errors]
0-management:
Pre Validation
failed on s2.
Volume
data-teste does
not exist<br>
[2017-02-15
13:34:02.635889]
E [MSGID:
106116]
[glusterd-mgmt.c:135:gd_mgmt_v<wbr>3_collate_errors]
0-management:
Pre Validation
failed on s3.
Volume
data-teste does
not exist<br>
[2017-02-15
13:34:02.636092]
E [MSGID:
106122]
[glusterd-mgmt.c:947:glusterd_<wbr>mgmt_v3_pre_validate]
0-management:
Pre Validation
failed on peers<br>
[2017-02-15
13:34:02.636132]
E [MSGID:
106122]
[glusterd-mgmt.c:2009:glusterd<wbr>_mgmt_v3_initiate_all_phases]
0-management:
Pre Validation
Failed<br>
[2017-02-15
13:34:20.313228]
E [MSGID:
106153]
[glusterd-syncop.c:113:gd_coll<wbr>ate_errors]
0-glusterd:
Staging failed
on s2. Error:
Volume
data-teste does
not exist<br>
[2017-02-15
13:34:20.313320]
E [MSGID:
106153]
[glusterd-syncop.c:113:gd_coll<wbr>ate_errors]
0-glusterd:
Staging failed
on s1. Error:
Volume
data-teste does
not exist<br>
[2017-02-15
13:34:20.313377]
E [MSGID:
106153]
[glusterd-syncop.c:113:gd_coll<wbr>ate_errors]
0-glusterd:
Staging failed
on s3. Error:
Volume
data-teste does
not exist<br>
[2017-02-15
13:34:36.796455]
E [MSGID:
106153]
[glusterd-syncop.c:113:gd_coll<wbr>ate_errors]
0-glusterd:
Staging failed
on s1. Error:
Volume
data-teste does
not exist<br>
[2017-02-15
13:34:36.796830]
E [MSGID:
106153]
[glusterd-syncop.c:113:gd_coll<wbr>ate_errors]
0-glusterd:
Staging failed
on s3. Error:
Volume
data-teste does
not exist<br>
[2017-02-15
13:34:36.796896]
E [MSGID:
106153]
[glusterd-syncop.c:113:gd_coll<wbr>ate_errors]
0-glusterd:
Staging failed
on s2. Error:
Volume
data-teste does
not exist<br>
<br>
</div>
<div>Many thanks!<br>
</div>
<div> D<br>
</div>
</div>
</div>
</div>
</div>
</div>
</div>
<br>
<fieldset
class="gmail-m_-7177417682601481536m_-3709776080829440996m_-6721563728597169759mimeAttachmentHeader"></fieldset>
<br>
</div>
</div>
<pre>______________________________<wbr>_________________
Gluster-users mailing list
<a moz-do-not-send="true" class="gmail-m_-7177417682601481536m_-3709776080829440996m_-6721563728597169759moz-txt-link-abbreviated" href="mailto:Gluster-users@gluster.org" target="_blank">Gluster-users@gluster.org</a>
<a moz-do-not-send="true" class="gmail-m_-7177417682601481536m_-3709776080829440996m_-6721563728597169759moz-txt-link-freetext" href="http://lists.gluster.org/mailman/listinfo/gluster-users" target="_blank">http://lists.gluster.org/mailm<wbr>an/listinfo/gluster-users</a></pre>
</blockquote>
</div>
</blockquote></div>
</div></div></div>
</blockquote>
</div></div></div></blockquote></div>
</div></div></div>
</blockquote>
</body></html>