<html>
  <head>
    <meta http-equiv="Content-Type" content="text/html; charset=utf-8">
  </head>
  <body text="#000000" bgcolor="#FFFFFF">
    <p>Hi all,</p>
    <p>For the upgrade I followed this procedure:</p>
    <ul>
      <li>put the node in maintenance mode (ensure no clients are active)</li>
      <li>yum versionlock delete glusterfs*<br>
      </li>
      <li>service glusterd stop</li>
      <li>yum update</li>
      <li>systemctl daemon-reload <br>
      </li>
      <li>service glusterd start</li>
      <li>yum versionlock add glusterfs*</li>
      <li>gluster volume heal vm-images-repo full</li>
      <li>gluster volume heal vm-images-repo info</li>
    </ul>
    <p>On each server I ran 'gluster --version' every time to confirm
      the upgrade; at the end I ran 'gluster volume set all
      cluster.op-version 30800'.</p>
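    <p>For reference, the per-node sequence written out as a script (just a
      sketch of what I did; the volume name and the op-version are specific
      to my setup, and the shell quoting around the package glob is my
      addition):</p>
    <pre>
# run on one node at a time, with no active clients on that node
yum versionlock delete 'glusterfs*'
service glusterd stop
yum update
systemctl daemon-reload
service glusterd start
yum versionlock add 'glusterfs*'

# trigger a full heal and check its status
gluster volume heal vm-images-repo full
gluster volume heal vm-images-repo info

# confirm the installed version on this node
gluster --version

# only once, after all nodes were upgraded
gluster volume set all cluster.op-version 30800
</pre>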
    <p>Today I tried to manually kill a brick process on a non-critical
      volume; afterwards, in the log I see:</p>
    <p>[2017-06-29 07:03:50.074388] I [MSGID: 100030]
      [glusterfsd.c:2454:main] 0-/usr/sbin/glusterfsd: Started running
      /usr/sbin/glusterfsd version 3.8.12 (args: /usr/sbin/glusterfsd -s
      virtnode-0-1-gluster --volfile-id
iso-images-repo.virtnode-0-1-gluster.data-glusterfs-brick1b-iso-images-repo
      -p
/var/lib/glusterd/vols/iso-images-repo/run/virtnode-0-1-gluster-data-glusterfs-brick1b-iso-images-repo.pid
      -S /var/run/gluster/c779852c21e2a91eaabbdda3b9127262.socket
      --brick-name /data/glusterfs/brick1b/iso-images-repo -l
      /var/log/glusterfs/bricks/data-glusterfs-brick1b-iso-images-repo.log
      --xlator-option
      *-posix.glusterd-uuid=e93ebee7-5d95-4100-a9df-4a3e60134b73
      --brick-port 49163 --xlator-option
      iso-images-repo-server.listen-port=49163)</p>
    <p>I've checked after the restart and indeed the 'entry-changes'
      directory is now created, but why did stopping the glusterd service
      not also stop the brick processes?</p>
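    <p>For completeness, this is roughly how I checked for the directory (a
      sketch; the brick paths are the ones from my servers):</p>
    <pre>
# the restarted brick has recreated the index directory
ls /data/glusterfs/brick1b/iso-images-repo/.glusterfs/indices/
# expected: dirty  entry-changes  xattrop

# list any brick on this node that is still missing it
for d in /data/glusterfs/*/*/.glusterfs/indices; do
    [ -d "$d/entry-changes" ] || echo "missing: $d/entry-changes"
done
</pre>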
    <p>Now, how can I recover from this issue? Is restarting all the brick
      processes enough?</p>
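    <p>In case it helps to be concrete, this is what I would plan to run, one
      node (and one volume) at a time, if restarting the bricks is indeed the
      right fix (a sketch only, following the same pattern as the rolling
      upgrade; volume names are mine and &lt;brick-pid&gt; is a placeholder):</p>
    <pre>
# check brick status and PIDs for the volume
gluster volume status vm-images-repo

# kill the brick process on this node, then bring it back
kill &lt;brick-pid&gt;
gluster volume start vm-images-repo force

# wait until the heal is finished before moving to the next node
gluster volume heal vm-images-repo info
</pre>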
    <p><br>
    </p>
    <p>Greetings,</p>
    <p>    Paolo Margara<br>
    </p>
    <br>
    <div class="moz-cite-prefix">Il 28/06/2017 18:41, Pranith Kumar
      Karampuri ha scritto:<br>
    </div>
    <blockquote type="cite"
cite="mid:CAOgeEnafATLq2tgS65=vqbyNL_JV9YdMs3pcaATwH=VCikrsAg@mail.gmail.com">
      <div dir="ltr"><br>
        <div class="gmail_extra"><br>
          <div class="gmail_quote">On Wed, Jun 28, 2017 at 9:45 PM,
            Ravishankar N <span dir="ltr">&lt;<a
                href="mailto:ravishankar@redhat.com" target="_blank"
                moz-do-not-send="true">ravishankar@redhat.com</a>&gt;</span>
            wrote:<br>
            <blockquote class="gmail_quote" style="margin:0 0 0
              .8ex;border-left:1px #ccc solid;padding-left:1ex">
              <div class="HOEnZb">
                <div class="h5">On 06/28/2017 06:52 PM, Paolo Margara
                  wrote:<br>
                  <blockquote class="gmail_quote" style="margin:0 0 0
                    .8ex;border-left:1px #ccc solid;padding-left:1ex">
                    Hi list,<br>
                    <br>
                    yesterday I noted the following lines into the
                    glustershd.log log file:<br>
                    <br>
                    [2017-06-28 11:53:05.000890] W [MSGID: 108034]<br>
                    [afr-self-heald.c:479:afr_shd_<wbr>index_sweep]<br>
                    0-iso-images-repo-replicate-0: unable to get
                    index-dir on<br>
                    iso-images-repo-client-0<br>
                    [2017-06-28 11:53:05.001146] W [MSGID: 108034]<br>
                    [afr-self-heald.c:479:afr_shd_<wbr>index_sweep]
                    0-vm-images-repo-replicate-0:<br>
                    unable to get index-dir on vm-images-repo-client-0<br>
                    [2017-06-28 11:53:06.001141] W [MSGID: 108034]<br>
                    [afr-self-heald.c:479:afr_shd_<wbr>index_sweep]
                    0-hosted-engine-replicate-0:<br>
                    unable to get index-dir on hosted-engine-client-0<br>
                    [2017-06-28 11:53:08.001094] W [MSGID: 108034]<br>
                    [afr-self-heald.c:479:afr_shd_<wbr>index_sweep]
                    0-vm-images-repo-replicate-2:<br>
                    unable to get index-dir on vm-images-repo-client-6<br>
                    [2017-06-28 11:53:08.001170] W [MSGID: 108034]<br>
                    [afr-self-heald.c:479:afr_shd_<wbr>index_sweep]
                    0-vm-images-repo-replicate-1:<br>
                    unable to get index-dir on vm-images-repo-client-3<br>
                    <br>
                    Digging into the mailing list archive I've found
                    another user with a<br>
                    similar issue (the thread was '[Gluster-users]
                    glustershd: unable to get<br>
                    index-dir on myvolume-client-0'), the solution
                    suggested was to verify<br>
                    if the  /&lt;path-to-backend-brick&gt;/.glus<wbr>terfs/indices
                    directory contains<br>
                    all these subdirectories: 'dirty', 'entry-changes'
                    and 'xattrop', and if<br>
                    any of them do not exist, simply create them with<br>
                    mkdir.<br>
                    <br>
                    In my case the 'entry-changes' directory is not
                    present on all the<br>
                    bricks and on all the servers:<br>
                    <br>
                    /data/glusterfs/brick1a/hosted<wbr>-engine/.glusterfs/indices/:<br>
                    total 0<br>
                    drw------- 2 root root 55 Jun 28 15:02 dirty<br>
                    drw------- 2 root root 57 Jun 28 15:02 xattrop<br>
                    <br>
                    /data/glusterfs/brick1b/iso-im<wbr>ages-repo/.glusterfs/indices/:<br>
                    total 0<br>
                    drw------- 2 root root 55 May 29 14:04 dirty<br>
                    drw------- 2 root root 57 May 29 14:04 xattrop<br>
                    <br>
                    /data/glusterfs/brick2/vm-imag<wbr>es-repo/.glusterfs/indices/:<br>
                    total 0<br>
                    drw------- 2 root root 112 Jun 28 15:02 dirty<br>
                    drw------- 2 root root  66 Jun 28 15:02 xattrop<br>
                    <br>
                    /data/glusterfs/brick3/vm-imag<wbr>es-repo/.glusterfs/indices/:<br>
                    total 0<br>
                    drw------- 2 root root 64 Jun 28 15:02 dirty<br>
                    drw------- 2 root root 66 Jun 28 15:02 xattrop<br>
                    <br>
                    /data/glusterfs/brick4/vm-imag<wbr>es-repo/.glusterfs/indices/:<br>
                    total 0<br>
                    drw------- 2 root root 112 Jun 28 15:02 dirty<br>
                    drw------- 2 root root  66 Jun 28 15:02 xattrop<br>
                    <br>
                    I've recently upgraded gluster from 3.7.16 to 3.8.12
                    with the rolling<br>
                    upgrade procedure and I haven't noted this issue
                    prior of the update, on<br>
                    another system upgraded with the same procedure I
                    haven't encountered<br>
                    this problem.<br>
                    <br>
                    Currently all VM images appear to be OK but prior to
                    creating the<br>
                    'entry-changes' I would like to ask if this is still
                    the correct<br>
                    procedure to fix this issue<br>
                  </blockquote>
                  <br>
                </div>
              </div>
              Did you restart the bricks after the upgrade? That should
              have created the entry-changes directory. Can you kill the
              brick and restart it and see if the dir is created? Double
              check from the brick logs that you're indeed running
              3.12:  "Started running /usr/local/sbin/glusterfsd version
              3.8.12" should appear when the brick starts.<br>
            </blockquote>
            <div><br>
            </div>
            <div>Please note that if you are going the route of killing
              and restarting, you need to do it in the same way you did
              the rolling upgrade: wait for heal to complete
              before you kill the bricks on the other nodes. But before you
              do this, it is better to look at the logs or confirm the
              steps you used for doing the upgrade.<br>
            </div>
            <div> </div>
            <blockquote class="gmail_quote" style="margin:0 0 0
              .8ex;border-left:1px #ccc solid;padding-left:1ex">
              <br>
              -Ravi
              <div class="HOEnZb">
                <div class="h5"><br>
                  <br>
                  <blockquote class="gmail_quote" style="margin:0 0 0
                    .8ex;border-left:1px #ccc solid;padding-left:1ex">
                      and if this problem could have affected the<br>
                    heal operations that occurred in the meantime.<br>
                    <br>
                    Thanks.<br>
                    <br>
                    <br>
                    Greetings,<br>
                    <br>
                         Paolo Margara<br>
                    <br>
                  </blockquote>
                  <br>
                  <br>
                </div>
              </div>
            </blockquote>
          </div>
          <br>
          <br clear="all">
          <br>
          -- <br>
          <div class="gmail_signature" data-smartmail="gmail_signature">
            <div dir="ltr">Pranith<br>
            </div>
          </div>
        </div>
      </div>
    </blockquote>
    <br>
    <pre class="moz-signature" cols="72">-- 
LABINF - HPC@POLITO
DAUIN - Politecnico di Torino
Corso Castelfidardo, 34D - 10129 Torino (TO)
phone: +39 011 090 7051
site: <a class="moz-txt-link-freetext" href="http://www.labinf.polito.it/">http://www.labinf.polito.it/</a>
site: <a class="moz-txt-link-freetext" href="http://hpc.polito.it/">http://hpc.polito.it/</a></pre>
  </body>
</html>