<div dir="ltr">Hi Avra,<br><div><div class="gmail_extra"><br><div class="gmail_quote">On 21 February 2017 at 03:22, Avra Sengupta <span dir="ltr">&lt;<a href="mailto:asengupt@redhat.com" target="_blank">asengupt@redhat.com</a>&gt;</span> wrote:<br><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex">
  
    
  
  <div bgcolor="#FFFFFF">
    <div class="gmail-m_-7177417682601481536m_-3709776080829440996moz-cite-prefix">Hi D,<br>
      <br>
      We tried reproducing the issue with a similar setup but were
      unable to do so. We are still investigating it.<br>
      <br>
      I have another follow-up question. You said that the repo exists
      only in s0? If that was the case, then bringing glusterd down on
      s0 only, deleteing the repo and starting glusterd once again would
      have removed it. The fact that the repo is restored as soon as
      glusterd restarts on s0, means that some other node(s) in the
      cluster also has that repo and is passing that information to the
      glusterd in s0 during handshake. Could you please confirm if any
      other node apart from s0 has the particular
> repo (/var/lib/glusterd/vols/data-teste) or not. Thanks.

I'll point out that this isn't a recurring issue. It's the first time this has happened, and it hasn't happened since. If it weren't for the orphaned volume, I wouldn't even have requested support.

Huh, so, I've just rescanned all of the nodes, and the volume is now appearing on all of them. That's very odd, as the volume was "created" on Weds 15th, & until the end of the 17th it was still only appearing on s0 (both in the volume list & in the vols directory).

Grepping the etc-glusterfs-glusterd.vol logs, the first mention of the volume after the failures I posted previously is the following:

[2017-02-17 15:46:17.199193] W [rpcsvc.c:265:rpcsvc_program_actor] 0-rpc-service: RPC program not available (req 1298437 330) for 10.123.123.102:49008
[2017-02-17 15:46:17.199216] E [rpcsvc.c:560:rpcsvc_check_and_reply_error] 0-rpcsvc: rpc actor failed to complete successfully
[2017-02-17 22:20:58.525036] I [MSGID: 106004] [glusterd-handler.c:5219:__glusterd_peer_rpc_notify] 0-management: Peer <s3> (<978c228a-86f8-48dc-89c1-c63914eaa9a4>), in state <Peer in Cluster>, has disconnected from glusterd.
[2017-02-17 22:20:58.525128] W [glusterd-locks.c:675:glusterd_mgmt_v3_unlock] (-->/usr/lib64/glusterfs/3.8.8/xlator/mgmt/glusterd.so(+0x1deac) [0x7f2a85517eac] -->/usr/lib64/glusterfs/3.8.8/xlator/mgmt/glusterd.so(+0x27a58) [0x7f2a85521a58] -->/usr/lib64/glusterfs/3.8.8/xlator/mgmt/glusterd.so(+0xd09da) [0x7f2a855ca9da] ) 0-management: Lock for vol data not held
[2017-02-17 22:20:58.525144] W [MSGID: 106118] [glusterd-handler.c:5241:__glusterd_peer_rpc_notify] 0-management: Lock not released for data
[2017-02-17 22:20:58.525171] W [glusterd-locks.c:675:glusterd_mgmt_v3_unlock] (-->/usr/lib64/glusterfs/3.8.8/xlator/mgmt/glusterd.so(+0x1deac) [0x7f2a85517eac] -->/usr/lib64/glusterfs/3.8.8/xlator/mgmt/glusterd.so(+0x27a58) [0x7f2a85521a58] -->/usr/lib64/glusterfs/3.8.8/xlator/mgmt/glusterd.so(+0xd09da) [0x7f2a855ca9da] ) 0-management: Lock for vol data-novo not held
[2017-02-17 22:20:58.525182] W [MSGID: 106118] [glusterd-handler.c:5241:__glusterd_peer_rpc_notify] 0-management: Lock not released for data-novo
[2017-02-17 22:20:58.525205] W [glusterd-locks.c:675:glusterd_mgmt_v3_unlock] (-->/usr/lib64/glusterfs/3.8.8/xlator/mgmt/glusterd.so(+0x1deac) [0x7f2a85517eac] -->/usr/lib64/glusterfs/3.8.8/xlator/mgmt/glusterd.so(+0x27a58) [0x7f2a85521a58] -->/usr/lib64/glusterfs/3.8.8/xlator/mgmt/glusterd.so(+0xd09da) [0x7f2a855ca9da] ) 0-management: Lock for vol data-teste not held
[2017-02-17 22:20:58.525235] W [MSGID: 106118] [glusterd-handler.c:5241:__glusterd_peer_rpc_notify] 0-management: Lock not released for data-teste
[2017-02-17 22:20:58.525261] W [glusterd-locks.c:675:glusterd_mgmt_v3_unlock] (-->/usr/lib64/glusterfs/3.8.8/xlator/mgmt/glusterd.so(+0x1deac) [0x7f2a85517eac] -->/usr/lib64/glusterfs/3.8.8/xlator/mgmt/glusterd.so(+0x27a58) [0x7f2a85521a58] -->/usr/lib64/glusterfs/3.8.8/xlator/mgmt/glusterd.so(+0xd09da) [0x7f2a855ca9da] ) 0-management: Lock for vol data-teste2 not held
[2017-02-17 22:20:58.525272] W [MSGID: 106118] [glusterd-handler.c:5241:__glusterd_peer_rpc_notify] 0-management: Lock not released for data-teste2

That's 58 hours between the volume's failed creation & its first sign of life...??

At the time when it was only appearing on s0, I tried stopping glusterd on multiple occasions & deleting the volume's directory within vols, but it always returned as soon as I restarted glusterd.
I did this with the help of Joe on IRC at the time, and he was also stumped (he suggested that the data was possibly still being held in memory somewhere), so I'm quite sure this wasn't simply an oversight on my part.

Anyway, many thanks for the help, and I'd be happy to provide any logs if desired; however, whilst knowing what happened & why might be useful, it all now seems to have resolved itself.

Cheers,
 Doug
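For anyone wanting to confirm which peers actually hold a stray volume definition (the question above), a loop along these lines does it. This is only a sketch: it assumes root ssh access to each node, and the hostnames are the ones used in this thread:

    for n in s0 s1 s2 s3; do
        echo "== $n =="
        # list the volume's definition directory if this peer has it
        ssh root@"$n" 'ls -d /var/lib/glusterd/vols/data-teste 2>/dev/null || echo "not present"'
    done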
> Regards,
> Avra
>
> On 02/20/2017 06:51 PM, Gambit15 wrote:
>
>> Hi Avra,
>>
>> On 20 February 2017 at 02:51, Avra Sengupta <asengupt@redhat.com> wrote:
              <blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex">
                <div bgcolor="#FFFFFF">
                  <div class="gmail-m_-7177417682601481536m_-3709776080829440996m_-6721563728597169759moz-cite-prefix">Hi
                    D,<br>
                    <br>
                    It seems you tried to take a clone of a snapshot,
                    when that snapshot was not activated.<br>
                  </div>
                </div>
              </blockquote>
>>
>> Correct. As per my commands, I then noticed the issue, checked the snapshot's status &
>> activated it. I included this in my command history just to clear up any doubts from the logs.
>>
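In other words, the intended order is to activate the snapshot first and only then clone it, and the cloned volume still has to be started separately before it can be used. As a sketch, using the names from the command history further down:

    gluster snapshot activate teste_GMT-2017.02.15-12.44.04
    gluster snapshot clone data-teste teste_GMT-2017.02.15-12.44.04
    gluster volume start data-teste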
              <blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex">
                <div bgcolor="#FFFFFF">
                  <div class="gmail-m_-7177417682601481536m_-3709776080829440996m_-6721563728597169759moz-cite-prefix">
                    However in this scenario, the cloned volume should
                    not be in an inconsistent state. I will try to
                    reproduce this and see if it&#39;s a bug. Meanwhile
                    could you please answer the following queries:<br>
                    1. How many nodes were in the cluster.<br>
                  </div>
                </div>
              </blockquote>
>>
>> There are 4 nodes in a (2+1)x2 setup.
>> s0 replicates to s1, with an arbiter on s2, and s2 replicates to s3, with an arbiter on s0.
>>
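For reference, a layout like that is roughly what a create command along these lines produces; the brick paths here are made up for illustration, only the host placement mirrors the setup described above:

    gluster volume create data replica 3 arbiter 1 \
        s0:/bricks/data/brick s1:/bricks/data/brick s2:/bricks/data/arbiter \
        s2:/bricks/data/brick s3:/bricks/data/brick s0:/bricks/data/arbiter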
              <blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex">
                <div bgcolor="#FFFFFF">
                  <div class="gmail-m_-7177417682601481536m_-3709776080829440996m_-6721563728597169759moz-cite-prefix"> 2.
                    How many bricks does the snapshot
                    data-bck_GMT-2017.02.09-14.15.<wbr>43 have?<br>
                  </div>
                </div>
              </blockquote>
>>
>> 6 bricks, including the 2 arbiters.
>>
              <blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex">
                <div bgcolor="#FFFFFF">
                  <div class="gmail-m_-7177417682601481536m_-3709776080829440996m_-6721563728597169759moz-cite-prefix"> 3.
                    Was the snapshot clone command issued from a node
                    which did not have any bricks for the snapshot
                    data-bck_GMT-2017.02.09-14.15.<wbr>43<br>
                  </div>
                </div>
              </blockquote>
>>
>> All commands were issued from s0. All volumes have bricks on every node in the cluster.
>>
              <blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex">
                <div bgcolor="#FFFFFF">
                  <div class="gmail-m_-7177417682601481536m_-3709776080829440996m_-6721563728597169759moz-cite-prefix"> 4.
                    I see you tried to delete the new cloned volume. Did
                    the new cloned volume land in this state after
                    failure to create the clone or failure to delete the
                    clone<br>
                  </div>
                </div>
              </blockquote>
>>
>> I noticed there was something wrong as soon as I created the clone. The clone command
>> completed; however, I was then unable to do anything with it because the clone didn't
>> exist on s1-s3.
>>
              <blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex">
                <div bgcolor="#FFFFFF">
                  <div class="gmail-m_-7177417682601481536m_-3709776080829440996m_-6721563728597169759moz-cite-prefix"> <br>
                    If you want to remove the half baked volume from the
                    cluster please proceed with the following steps.<br>
                    1. bring down glusterd on all nodes by running the
                    following command on all nodes<br>
                    $ systemctl stop glusterd.<br>
                    Verify that the glusterd is down on all nodes by
                    running the following command on all nodes<br>
                    $ systemctl status glusterd.<br>
                    2. delete the following repo from all the nodes
                    (whichever nodes it exists)<br>
                    /var/lib/glusterd/vols/data-te<wbr>ste<br>
                  </div>
                </div>
              </blockquote>
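Scripted out, those steps amount to roughly the following. This is only a sketch, not what was actually run: it assumes root ssh access to all four nodes, and it restarts glusterd afterwards, as mentioned earlier in the thread:

    nodes="s0 s1 s2 s3"
    # stop glusterd everywhere first, so no running peer can hand the volume back during the handshake
    for n in $nodes; do ssh root@"$n" 'systemctl stop glusterd'; done
    # verify it is really down everywhere (expect "inactive" from each node)
    for n in $nodes; do echo -n "$n: "; ssh root@"$n" 'systemctl is-active glusterd'; done
    # remove the stray volume definition wherever it exists, then bring glusterd back up
    for n in $nodes; do ssh root@"$n" 'rm -rf /var/lib/glusterd/vols/data-teste'; done
    for n in $nodes; do ssh root@"$n" 'systemctl start glusterd'; done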
>>
>> The repo only exists on s0, but stopping glusterd on only s0 & deleting the directory
>> didn't work; the directory was restored as soon as glusterd was restarted. I haven't yet
>> tried stopping glusterd on *all* nodes before doing this, although I'll need to plan for
>> that, as it'll take the entire cluster off the air.
>>
>> Thanks for the reply,
>>  Doug
>>
              <blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex">
                <div bgcolor="#FFFFFF">
                  <div class="gmail-m_-7177417682601481536m_-3709776080829440996m_-6721563728597169759moz-cite-prefix"> <br>
                    Regards,<br>
                    Avra
                    <div>
                      <div class="gmail-m_-7177417682601481536m_-3709776080829440996h5"><br>
                        <br>
                        On 02/16/2017 08:01 PM, Gambit15 wrote:<br>
                      </div>
                    </div>
                  </div>
                  <blockquote type="cite">
                    <div>
                      <div class="gmail-m_-7177417682601481536m_-3709776080829440996h5">
                        <div dir="ltr">
                          <div>
                            <div>
                              <div>Hey guys,<br>
                              </div>
                               I tried to create a new volume from a
                              cloned snapshot yesterday, however
                              something went wrong during the process
                              &amp; I&#39;m now stuck with the new volume
                              being created on the server I ran the
                              commands on (s0), but not on the rest of
                              the peers. I&#39;m unable to delete this new
                              volume from the server, as it doesn&#39;t
                              exist on the peers.<br>
                              <br>
                            </div>
                            What do I do?<br>
                          </div>
                          Any insights into what may have gone wrong?<br>
                          <div>
                            <div>
                              <div>
                                <div>
                                  <div>
                                    <div><br>
                                      CentOS 7.3.1611</div>
                                    <div>Gluster 3.8.8<br>
                                      <br>
                                    </div>
                                    <div>The command history &amp;
                                      extract from
                                      etc-glusterfs-glusterd.vol.log are
                                      included below.<br>
                                    </div>
>>>> gluster volume list
>>>> gluster snapshot list
>>>> gluster snapshot clone data-teste data-bck_GMT-2017.02.09-14.15.43
>>>> gluster volume status data-teste
>>>> gluster volume delete data-teste
>>>> gluster snapshot create teste data
>>>> gluster snapshot clone data-teste teste_GMT-2017.02.15-12.44.04
>>>> gluster snapshot status
>>>> gluster snapshot activate teste_GMT-2017.02.15-12.44.04
>>>> gluster snapshot clone data-teste teste_GMT-2017.02.15-12.44.04
>>>>
>>>> [2017-02-15 12:43:21.667403] I [MSGID: 106499] [glusterd-handler.c:4349:__glusterd_handle_status_volume] 0-management: Received status volume req for volume data-teste
>>>> [2017-02-15 12:43:21.682530] E [MSGID: 106301] [glusterd-syncop.c:1297:gd_stage_op_phase] 0-management: Staging of operation 'Volume Status' failed on localhost : Volume data-teste is not started
>>>> [2017-02-15 12:43:43.633031] I [MSGID: 106495] [glusterd-handler.c:3128:__glusterd_handle_getwd] 0-glusterd: Received getwd req
>>>> [2017-02-15 12:43:43.640597] I [run.c:191:runner_log] (-->/usr/lib64/glusterfs/3.8.8/xlator/mgmt/glusterd.so(+0xcc4b2) [0x7ffb396a14b2] -->/usr/lib64/glusterfs/3.8.8/xlator/mgmt/glusterd.so(+0xcbf65) [0x7ffb396a0f65] -->/lib64/libglusterfs.so.0(runner_log+0x115) [0x7ffb44ec31c5] ) 0-management: Ran script: /var/lib/glusterd/hooks/1/delete/post/S57glusterfind-delete-post --volname=data-teste
>>>> [2017-02-15 13:05:20.103423] E [MSGID: 106122] [glusterd-snapshot.c:2397:glusterd_snapshot_clone_prevalidate] 0-management: Failed to pre validate
>>>> [2017-02-15 13:05:20.103464] E [MSGID: 106443] [glusterd-snapshot.c:2413:glusterd_snapshot_clone_prevalidate] 0-management: One or more bricks are not running. Please run snapshot status command to see brick status.
>>>> Please start the stopped brick and then issue snapshot clone command
>>>> [2017-02-15 13:05:20.103481] W [MSGID: 106443] [glusterd-snapshot.c:8563:glusterd_snapshot_prevalidate] 0-management: Snapshot clone pre-validation failed
>>>> [2017-02-15 13:05:20.103492] W [MSGID: 106122] [glusterd-mgmt.c:167:gd_mgmt_v3_pre_validate_fn] 0-management: Snapshot Prevalidate Failed
>>>> [2017-02-15 13:05:20.103503] E [MSGID: 106122] [glusterd-mgmt.c:884:glusterd_mgmt_v3_pre_validate] 0-management: Pre Validation failed for operation Snapshot on local node
>>>> [2017-02-15 13:05:20.103514] E [MSGID: 106122] [glusterd-mgmt.c:2243:glusterd_mgmt_v3_initiate_snap_phases] 0-management: Pre Validation Failed
>>>> [2017-02-15 13:05:20.103531] E [MSGID: 106027] [glusterd-snapshot.c:8118:glusterd_snapshot_clone_postvalidate] 0-management: unable to find clone data-teste volinfo
>>>> [2017-02-15 13:05:20.103542] W [MSGID: 106444] [glusterd-snapshot.c:9063:glusterd_snapshot_postvalidate] 0-management: Snapshot create post-validation failed
>>>> [2017-02-15 13:05:20.103561] W [MSGID: 106121] [glusterd-mgmt.c:351:gd_mgmt_v3_post_validate_fn] 0-management: postvalidate operation failed
>>>> [2017-02-15 13:05:20.103572] E [MSGID: 106121] [glusterd-mgmt.c:1660:glusterd_mgmt_v3_post_validate] 0-management: Post Validation failed for operation Snapshot on local node
>>>> [2017-02-15 13:05:20.103582] E [MSGID: 106122] [glusterd-mgmt.c:2363:glusterd_mgmt_v3_initiate_snap_phases] 0-management: Post Validation Failed
>>>> [2017-02-15 13:11:15.862858] W [MSGID: 106057] [glusterd-snapshot-utils.c:410:glusterd_snap_volinfo_find] 0-management: Snap volume c3ceae3889484e96ab8bed69593cf6d3.s0.run-gluster-snaps-c3ceae3889484e96ab8bed69593cf6d3-brick1-data-brick not found [Argumento inválido]
>>>> [2017-02-15 13:11:16.314759] I [MSGID: 106143] [glusterd-pmap.c:250:pmap_registry_bind] 0-pmap: adding brick /run/gluster/snaps/c3ceae3889484e96ab8bed69593cf6d3/brick1/data/brick on port 49452
>>>> [2017-02-15 13:11:16.316090] I [rpc-clnt.c:1046:rpc_clnt_connection_init] 0-management: setting frame-timeout to 600
>>>> [2017-02-15 13:11:16.348867] W [MSGID: 106057] [glusterd-snapshot-utils.c:410:glusterd_snap_volinfo_find] 0-management: Snap volume c3ceae3889484e96ab8bed69593cf6d3.s0.run-gluster-snaps-c3ceae3889484e96ab8bed69593cf6d3-brick6-data-arbiter not found [Argumento inválido]
>>>> [2017-02-15 13:11:16.558878] I [MSGID: 106143] [glusterd-pmap.c:250:pmap_registry_bind] 0-pmap: adding brick /run/gluster/snaps/c3ceae3889484e96ab8bed69593cf6d3/brick6/data/arbiter on port 49453
>>>> [2017-02-15 13:11:16.559883] I [rpc-clnt.c:1046:rpc_clnt_connection_init] 0-management: setting frame-timeout to 600
>>>> [2017-02-15 13:11:23.279721] E [MSGID: 106030] [glusterd-snapshot.c:4736:glusterd_take_lvm_snapshot] 0-management: taking snapshot of the brick (/run/gluster/snaps/c3ceae3889484e96ab8bed69593cf6d3/brick1/data/brick) of device /dev/mapper/v0.dc0.cte--g0-c3ceae3889484e96ab8bed69593cf6d3_0 failed
>>>> [2017-02-15 13:11:23.279790] E [MSGID: 106030] [glusterd-snapshot.c:5135:glusterd_take_brick_snapshot] 0-management: Failed to take snapshot of brick s0:/run/gluster/snaps/c3ceae3889484e96ab8bed69593cf6d3/brick1/data/brick
>>>> [2017-02-15 13:11:23.279806] E [MSGID: 106030] [glusterd-snapshot.c:6484:glusterd_take_brick_snapshot_task] 0-management: Failed to take backend snapshot for brick s0:/run/gluster/snaps/data-teste/brick1/data/brick volume(data-teste)
>>>> [2017-02-15 13:11:23.286678] E [MSGID: 106030] [glusterd-snapshot.c:4736:glusterd_take_lvm_snapshot] 0-management: taking snapshot of the brick (/run/gluster/snaps/c3ceae3889484e96ab8bed69593cf6d3/brick6/data/arbiter) of device /dev/mapper/v0.dc0.cte--g0-c3ceae3889484e96ab8bed69593cf6d3_1 failed
>>>> [2017-02-15 13:11:23.286735] E [MSGID: 106030] [glusterd-snapshot.c:5135:glusterd_take_brick_snapshot] 0-management: Failed to take snapshot of brick s0:/run/gluster/snaps/c3ceae3889484e96ab8bed69593cf6d3/brick6/data/arbiter
>>>> [2017-02-15 13:11:23.286749] E [MSGID: 106030] [glusterd-snapshot.c:6484:glusterd_take_brick_snapshot_task] 0-management: Failed to take backend snapshot for brick s0:/run/gluster/snaps/data-teste/brick6/data/arbiter volume(data-teste)
>>>> [2017-02-15 13:11:23.286793] E [MSGID: 106030] [glusterd-snapshot.c:6626:glusterd_schedule_brick_snapshot] 0-management: Failed to create snapshot
>>>> [2017-02-15 13:11:23.286813] E [MSGID: 106441] [glusterd-snapshot.c:6796:glusterd_snapshot_clone_commit] 0-management: Failed to take backend snapshot data-teste
>>>> [2017-02-15 13:11:25.530666] E [MSGID: 106442] [glusterd-snapshot.c:8308:glusterd_snapshot] 0-management: Failed to clone snapshot
>>>> [2017-02-15 13:11:25.530721] W [MSGID: 106123] [glusterd-mgmt.c:272:gd_mgmt_v3_commit_fn] 0-management: Snapshot Commit Failed
>>>> [2017-02-15 13:11:25.530735] E [MSGID: 106123] [glusterd-mgmt.c:1427:glusterd_mgmt_v3_commit] 0-management: Commit failed for operation Snapshot on local node
>>>> [2017-02-15 13:11:25.530749] E [MSGID: 106123] [glusterd-mgmt.c:2304:glusterd_mgmt_v3_initiate_snap_phases] 0-management: Commit Op Failed
>>>> [2017-02-15 13:11:25.532312] E [MSGID: 106027] [glusterd-snapshot.c:8118:glusterd_snapshot_clone_postvalidate] 0-management: unable to find clone data-teste volinfo
>>>> [2017-02-15 13:11:25.532339] W [MSGID: 106444] [glusterd-snapshot.c:9063:glusterd_snapshot_postvalidate] 0-management: Snapshot create post-validation failed
>>>> [2017-02-15 13:11:25.532353] W [MSGID: 106121] [glusterd-mgmt.c:351:gd_mgmt_v3_post_validate_fn] 0-management: postvalidate operation failed
>>>> [2017-02-15 13:11:25.532367] E [MSGID: 106121] [glusterd-mgmt.c:1660:glusterd_mgmt_v3_post_validate] 0-management: Post Validation failed for operation Snapshot on local node
>>>> [2017-02-15 13:11:25.532381] E [MSGID: 106122] [glusterd-mgmt.c:2363:glusterd_mgmt_v3_initiate_snap_phases] 0-management: Post Validation Failed
>>>> [2017-02-15 13:29:53.779020] E [MSGID: 106062] [glusterd-snapshot-utils.c:2391:glusterd_snap_create_use_rsp_dict] 0-management: failed to get snap UUID
>>>> [2017-02-15 13:29:53.779073] E [MSGID: 106099] [glusterd-snapshot-utils.c:2507:glusterd_snap_use_rsp_dict] 0-glusterd: Unable to use rsp dict
>>>> [2017-02-15 13:29:53.779096] E [MSGID: 106108] [glusterd-mgmt.c:1305:gd_mgmt_v3_commit_cbk_fn] 0-management: Failed to aggregate response from node/brick
>>>> [2017-02-15 13:29:53.779136] E [MSGID: 106116] [glusterd-mgmt.c:135:gd_mgmt_v3_collate_errors] 0-management: Commit failed on s3. Please check log file for details.
>>>> [2017-02-15 13:29:54.136196] E [MSGID: 106116] [glusterd-mgmt.c:135:gd_mgmt_v3_collate_errors] 0-management: Commit failed on s1. Please check log file for details.
>>>> The message "E [MSGID: 106108] [glusterd-mgmt.c:1305:gd_mgmt_v3_commit_cbk_fn] 0-management: Failed to aggregate response from node/brick" repeated 2 times between [2017-02-15 13:29:53.779096] and [2017-02-15 13:29:54.535080]
>>>> [2017-02-15 13:29:54.535098] E [MSGID: 106116] [glusterd-mgmt.c:135:gd_mgmt_v3_collate_errors] 0-management: Commit failed on s2. Please check log file for details.
>>>> [2017-02-15 13:29:54.535320] E [MSGID: 106123] [glusterd-mgmt.c:1490:glusterd_mgmt_v3_commit] 0-management: Commit failed on peers
>>>> [2017-02-15 13:29:54.535370] E [MSGID: 106123] [glusterd-mgmt.c:2304:glusterd_mgmt_v3_initiate_snap_phases] 0-management: Commit Op Failed
>>>> [2017-02-15 13:29:54.539708] E [MSGID: 106116] [glusterd-mgmt.c:135:gd_mgmt_v3_collate_errors] 0-management: Post Validation failed on s1. Please check log file for details.
>>>> [2017-02-15 13:29:54.539797] E [MSGID: 106116] [glusterd-mgmt.c:135:gd_mgmt_v3_collate_errors] 0-management: Post Validation failed on s3. Please check log file for details.
>>>> [2017-02-15 13:29:54.539856] E [MSGID: 106116] [glusterd-mgmt.c:135:gd_mgmt_v3_collate_errors] 0-management: Post Validation failed on s2. Please check log file for details.
>>>> [2017-02-15 13:29:54.540224] E [MSGID: 106121] [glusterd-mgmt.c:1713:glusterd_mgmt_v3_post_validate] 0-management: Post Validation failed on peers
>>>> [2017-02-15 13:29:54.540256] E [MSGID: 106122] [glusterd-mgmt.c:2363:glusterd_mgmt_v3_initiate_snap_phases] 0-management: Post Validation Failed
>>>> The message "E [MSGID: 106062] [glusterd-snapshot-utils.c:2391:glusterd_snap_create_use_rsp_dict] 0-management: failed to get snap UUID" repeated 2 times between [2017-02-15 13:29:53.779020] and [2017-02-15 13:29:54.535075]
>>>> The message "E [MSGID: 106099] [glusterd-snapshot-utils.c:2507:glusterd_snap_use_rsp_dict] 0-glusterd: Unable to use rsp dict" repeated 2 times between [2017-02-15 13:29:53.779073] and [2017-02-15 13:29:54.535078]
>>>> [2017-02-15 13:31:14.285666] I [MSGID: 106488] [glusterd-handler.c:1537:__glusterd_handle_cli_get_volume] 0-management: Received get vol req
>>>> [2017-02-15 13:32:17.827422] E [MSGID: 106027] [glusterd-handler.c:4670:glusterd_get_volume_opts] 0-management: Volume cluster.locking-scheme does not exist
>>>> [2017-02-15 13:34:02.635762] E [MSGID: 106116] [glusterd-mgmt.c:135:gd_mgmt_v3_collate_errors] 0-management: Pre Validation failed on s1. Volume data-teste does not exist
>>>> [2017-02-15 13:34:02.635838] E [MSGID: 106116] [glusterd-mgmt.c:135:gd_mgmt_v3_collate_errors] 0-management: Pre Validation failed on s2. Volume data-teste does not exist
>>>> [2017-02-15 13:34:02.635889] E [MSGID: 106116] [glusterd-mgmt.c:135:gd_mgmt_v3_collate_errors] 0-management: Pre Validation failed on s3. Volume data-teste does not exist
>>>> [2017-02-15 13:34:02.636092] E [MSGID: 106122] [glusterd-mgmt.c:947:glusterd_mgmt_v3_pre_validate] 0-management: Pre Validation failed on peers
>>>> [2017-02-15 13:34:02.636132] E [MSGID: 106122] [glusterd-mgmt.c:2009:glusterd_mgmt_v3_initiate_all_phases] 0-management: Pre Validation Failed
>>>> [2017-02-15 13:34:20.313228] E [MSGID: 106153] [glusterd-syncop.c:113:gd_collate_errors] 0-glusterd: Staging failed on s2. Error: Volume data-teste does not exist
>>>> [2017-02-15 13:34:20.313320] E [MSGID: 106153] [glusterd-syncop.c:113:gd_collate_errors] 0-glusterd: Staging failed on s1. Error: Volume data-teste does not exist
>>>> [2017-02-15 13:34:20.313377] E [MSGID: 106153] [glusterd-syncop.c:113:gd_collate_errors] 0-glusterd: Staging failed on s3. Error: Volume data-teste does not exist
>>>> [2017-02-15 13:34:36.796455] E [MSGID: 106153] [glusterd-syncop.c:113:gd_collate_errors] 0-glusterd: Staging failed on s1. Error: Volume data-teste does not exist
>>>> [2017-02-15 13:34:36.796830] E [MSGID: 106153] [glusterd-syncop.c:113:gd_collate_errors] 0-glusterd: Staging failed on s3. Error: Volume data-teste does not exist
>>>> [2017-02-15 13:34:36.796896] E [MSGID: 106153] [glusterd-syncop.c:113:gd_collate_errors] 0-glusterd: Staging failed on s2. Error: Volume data-teste does not exist
>>>>
>>>> Many thanks!
>>>>  D
>>>>
>>>> _______________________________________________
>>>> Gluster-users mailing list
>>>> Gluster-users@gluster.org
>>>> http://lists.gluster.org/mailman/listinfo/gluster-users