<html>
  <head>
    <meta http-equiv="Content-Type" content="text/html; charset=UTF-8">
  </head>
  <body>
    <p>Hello Karthik,<br>
      <br>
      Thank you very much, that was exactly the problem.<br>
      Running the command (cat
      &lt;mount-path&gt;/.meta/graphs/active/&lt;vol-name&gt;-client-*/private
      | egrep -i 'connected') on the clients revealed that a few were
      not connected to all bricks.<br>
      After restarting them, everything went back to normal.<br>
      <br>
      Regards,<br>
      Ulrich<br>
    </p>
    <div class="moz-cite-prefix">On 06.02.20 at 12:51, Karthik
      Subrahmanya wrote:<br>
    </div>
    <blockquote type="cite"
cite="mid:CAHRDaUF8ezd-SWyA52=9JGdP9fn5p+oq2bW62PS3ZYvXqvH_sQ@mail.gmail.com">
      <meta http-equiv="content-type" content="text/html; charset=UTF-8">
      <div dir="ltr">Hi Ulrich,
        <div><br>
        </div>
        <div>From the problem statement, it seems like the client(s)
          have lost connection with one or more bricks. Can you provide
          the following information?</div>
        <div>- How many clients are there for this volume, and which
          version are they running?</div>
        <div>- The outputs of gluster volume info &lt;vol-name&gt; and
          gluster volume status &lt;vol-name&gt;</div>
        <div>- Check whether all the clients are connected to all the
          bricks.</div>
        <div>If you are using fuse clients, give the output of the
          following command from all the clients:</div>
        cat
        &lt;mount-path&gt;/.meta/graphs/active/&lt;vol-name&gt;-client-*/private
        | egrep -i 'connected'
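As a rough sketch of how to read that output, the following script counts connected and disconnected bricks. The sample data here is invented for illustration; on a real fuse client you would pipe in the contents of the .meta private files referenced above instead.

```shell
# Hypothetical sample of what the egrep above returns; on a real
# client this comes from the .meta/graphs/active/*-client-*/private
# files, one "connected = ..." line per brick connection.
sample='connected = 1
connected = 0
connected = 1'

# Count all brick connections, then the disconnected ones.
total=$(printf '%s\n' "$sample" | grep -ci 'connected')
down=$(printf '%s\n' "$sample" | grep -c 'connected = 0')
echo "bricks: $total, disconnected: $down"
```

A non-zero "disconnected" count on any client would match the situation described in this thread, where some clients had lost a brick connection.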
        <div>- If you are using non-fuse clients, generate the
          statedumps (<a
            href="https://docs.gluster.org/en/latest/Troubleshooting/statedump/"
            moz-do-not-send="true">https://docs.gluster.org/en/latest/Troubleshooting/statedump/</a>)
          of each client and give the output of:</div>
        grep -A 2 "xlator.protocol.client"
        /var/run/gluster/&lt;dump-file&gt;
        <div>(If you have changed the statedump-path, replace the path
          in the above command accordingly.)<br>
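For illustration, here is that grep applied to a made-up statedump fragment. The section names mimic the real xlator.protocol.client sections, but the file path, volume name, and field values below are invented placeholders:

```shell
# Hypothetical statedump fragment; real dumps are written to
# /var/run/gluster/ (or the configured statedump-path).
cat > /tmp/sample-statedump.txt <<'EOF'
[xlator.protocol.client.myvol-client-0.priv]
fd_count = 2
connected = 1
[xlator.protocol.client.myvol-client-1.priv]
fd_count = 0
connected = 0
EOF

# Same grep as above: each client section plus two lines of context,
# which is enough to see the "connected" field for every brick.
grep -A 2 "xlator.protocol.client" /tmp/sample-statedump.txt
```

Any section showing "connected = 0" identifies a brick that client has lost its connection to.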
          <div>
            <div><br>
            </div>
            <div>Regards,</div>
            <div>Karthik</div>
          </div>
        </div>
      </div>
      <br>
      <div class="gmail_quote">
        <div dir="ltr" class="gmail_attr">On Thu, Feb 6, 2020 at 5:06 PM
          Ulrich Pötter &lt;<a
            href="mailto:ulrich.poetter@menzel-it.net"
            moz-do-not-send="true">ulrich.poetter@menzel-it.net</a>&gt;
          wrote:<br>
        </div>
        <blockquote class="gmail_quote" style="margin:0px 0px 0px
          0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex">Dear
          Gluster Users,<br>
          <br>
          we are running the following Gluster setup:<br>
          Replica 3 on 3 servers. Two are CentOS 7.6 with Gluster 6.5
          and one was <br>
          upgraded to CentOS 7.7 with Gluster 6.7.<br>
          <br>
          Since the upgrade to gluster 6.7 on one of the servers, we
          encountered <br>
          the following issue:<br>
          New healing entries appear and get healed, but soon afterwards
          new <br>
          entries appear again.<br>
          This problem started after we upgraded the
          server.<br>
          The healing issues do not only appear on the upgraded server,
          but on all <br>
          three.<br>
          <br>
          This does not seem to be a split-brain issue, as the output of
          the <br>
          command "gluster volume heal &lt;vol&gt; info split-brain" is
          "Number of <br>
          entries in split-brain: 0".<br>
          <br>
          Has anyone else observed such behavior with different Gluster
          versions <br>
          in one replica setup?<br>
          <br>
          We hesitate to update the other nodes, as we do not know
          whether this <br>
          is standard Gluster behaviour or if there is more to this
          problem.<br>
          <br>
          Can you help us?<br>
          <br>
          Thanks in advance,<br>
          Ulrich<br>
          <br>
          ________<br>
          <br>
          Community Meeting Calendar:<br>
          <br>
          APAC Schedule -<br>
          Every 2nd and 4th Tuesday at 11:30 AM IST<br>
          Bridge: <a href="https://bluejeans.com/441850968"
            rel="noreferrer" target="_blank" moz-do-not-send="true">https://bluejeans.com/441850968</a><br>
          <br>
          NA/EMEA Schedule -<br>
          Every 1st and 3rd Tuesday at 01:00 PM EDT<br>
          Bridge: <a href="https://bluejeans.com/441850968"
            rel="noreferrer" target="_blank" moz-do-not-send="true">https://bluejeans.com/441850968</a><br>
          <br>
          Gluster-users mailing list<br>
          <a href="mailto:Gluster-users@gluster.org" target="_blank"
            moz-do-not-send="true">Gluster-users@gluster.org</a><br>
          <a
            href="https://lists.gluster.org/mailman/listinfo/gluster-users"
            rel="noreferrer" target="_blank" moz-do-not-send="true">https://lists.gluster.org/mailman/listinfo/gluster-users</a><br>
          <br>
        </blockquote>
      </div>
    </blockquote>
  </body>
</html>