<html>
  <head>
    <meta http-equiv="Content-Type" content="text/html; charset=utf-8">
  </head>
  <body text="#000000" bgcolor="#FFFFFF">
    <p>Hi Pranith,</p>
    <p>I'm using this guide
<a class="moz-txt-link-freetext" href="https://github.com/nixpanic/glusterdocs/blob/f6d48dc17f2cb6ee4680e372520ec3358641b2bc/Upgrade-Guide/upgrade_to_3.8.md">https://github.com/nixpanic/glusterdocs/blob/f6d48dc17f2cb6ee4680e372520ec3358641b2bc/Upgrade-Guide/upgrade_to_3.8.md</a></p>
    <p>Definitely my fault, but I think it would be better to state
      somewhere that restarting the service is not enough, simply because
      with many other services that is sufficient.</p>
    <p>Now I'm restarting every brick process (and waiting for the heal
      to complete); this is fixing my problem.</p>
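    <p>Concretely, what I'm doing on each node is roughly the following
      (just a sketch, not a verified procedure; the volume name is from my
      setup, and &lt;brick-pid&gt; is a placeholder for the PID shown by
      'gluster volume status'):</p>

```shell
# Show the brick processes and their PIDs on this node.
gluster volume status vm-images-repo

# Kill one brick process (PID taken from the status output above).
kill <brick-pid>

# 'start ... force' respawns brick processes that are not running,
# without touching the bricks that are still up.
gluster volume start vm-images-repo force

# Check that self-heal has finished before moving to the next brick:
# every brick should report "Number of entries: 0".
gluster volume heal vm-images-repo info
```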
    <p>Many thanks for the help.<br>
    </p>
    <p><br>
    </p>
    <p>Greetings,</p>
    <p>    Paolo<br>
    </p>
    <br>
    <div class="moz-cite-prefix">On 29/06/2017 13:03, Pranith Kumar
      Karampuri wrote:<br>
    </div>
    <blockquote type="cite"
cite="mid:CAOgeEnZgp4ELNRMa1fmLWL3kCJQBU-uFnR6oxnFz+2xjDRB10Q@mail.gmail.com">
      <div dir="ltr">
        <div>Paolo,<br>
        </div>
              Which document did you follow for the upgrade? We can fix
        the documentation if there are any issues.<br>
      </div>
      <div class="gmail_extra"><br>
        <div class="gmail_quote">On Thu, Jun 29, 2017 at 2:07 PM,
          Ravishankar N <span dir="ltr">&lt;<a
              href="mailto:ravishankar@redhat.com" target="_blank"
              moz-do-not-send="true">ravishankar@redhat.com</a>&gt;</span>
          wrote:<br>
          <blockquote class="gmail_quote" style="margin:0 0 0
            .8ex;border-left:1px #ccc solid;padding-left:1ex">
            <div bgcolor="#FFFFFF" text="#000000"><span class="">
                <div class="m_3264995662940995335moz-cite-prefix">On
                  06/29/2017 01:08 PM, Paolo Margara wrote:<br>
                </div>
                <blockquote type="cite">
                  <p>Hi all,</p>
                  <p>for the upgrade I followed this procedure:</p>
                  <ul>
                    <li>put the node in maintenance mode (ensure no clients
                      are active)</li>
                    <li>yum versionlock delete glusterfs*<br>
                    </li>
                    <li>service glusterd stop</li>
                    <li>yum update</li>
                    <li>systemctl daemon-reload <br>
                    </li>
                    <li>service glusterd start</li>
                    <li>yum versionlock add glusterfs*</li>
                    <li>gluster volume heal vm-images-repo full</li>
                    <li>gluster volume heal vm-images-repo info</li>
                  </ul>
                  <p>On each server I ran 'gluster --version' every time
                    to confirm the upgrade; at the end I ran 'gluster
                    volume set all cluster.op-version 30800'.</p>
                  <p>Today I've tried to manually kill a brick process
                    on a non-critical volume; after that, in the log I
                    see:</p>
                  <p>[2017-06-29 07:03:50.074388] I [MSGID: 100030]
                    [glusterfsd.c:2454:main] 0-/usr/sbin/glusterfsd:
                    Started running /usr/sbin/glusterfsd version 3.8.12
                    (args: /usr/sbin/glusterfsd -s virtnode-0-1-gluster
                    --volfile-id
                    iso-images-repo.virtnode-0-1-gluster.data-glusterfs-brick1b-iso-images-repo
                    -p
                    /var/lib/glusterd/vols/iso-images-repo/run/virtnode-0-1-gluster-data-glusterfs-brick1b-iso-images-repo.pid
                    -S /var/run/gluster/c779852c21e2a91eaabbdda3b9127262.socket
                    --brick-name /data/glusterfs/brick1b/iso-images-repo
                    -l /var/log/glusterfs/bricks/data-glusterfs-brick1b-iso-images-repo.log
                    --xlator-option *-posix.glusterd-uuid=e93ebee7-5d95-4100-a9df-4a3e60134b73
                    --brick-port 49163 --xlator-option
                    iso-images-repo-server.listen-port=49163)</p>
                  <p>I've checked after the restart and indeed the
                    directory 'entry-changes' is now created, but why did
                    stopping the glusterd service not also stop the
                    brick processes?</p>
                </blockquote>
                <br>
              </span> Just stopping, upgrading and restarting glusterd
              does not restart the brick processes; you would need to
              kill all gluster processes on the node before upgrading.
              After upgrading, when you restart glusterd, it will
              automatically spawn the rest of the gluster processes on
              that node.<span class=""><br>
                 <br>
                <blockquote type="cite">
                  <p>Now how can I recover from this issue? Is restarting
                    all brick processes enough?</p>
                </blockquote>
              </span> Yes, but ensure there are no pending heals like
              Pranith mentioned. <a
                class="m_3264995662940995335moz-txt-link-freetext"
href="https://gluster.readthedocs.io/en/latest/Upgrade-Guide/upgrade_to_3.7/"
                target="_blank" moz-do-not-send="true">https://gluster.readthedocs.io/en/latest/Upgrade-Guide/upgrade_to_3.7/</a>
              lists the steps for upgrading to 3.7, but the steps are
              similar for any rolling upgrade.<br>
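              As a sketch, the per-node sequence described above could
              look like this (the package pattern and volume name are
              assumptions taken from this thread; adjust them to your
              setup):<br>

```shell
# Stop the management daemon, then kill the remaining gluster
# processes; stopping glusterd alone leaves the bricks and the
# self-heal daemon running with the old binaries.
systemctl stop glusterd
killall glusterfsd glusterfs

# Upgrade the packages and restart glusterd, which respawns the
# brick and self-heal processes with the new version.
yum update 'glusterfs*'
systemctl daemon-reload
systemctl start glusterd

# Wait for pending heals to drain before upgrading the next node.
gluster volume heal vm-images-repo info
```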
              <br>
              -Ravi
              <div>
                <div class="h5"><br>
                  <blockquote type="cite">
                    <p><br>
                    </p>
                    <p>Greetings,</p>
                    <p>    Paolo Margara<br>
                    </p>
                    <br>
                    <div class="m_3264995662940995335moz-cite-prefix">On
                      28/06/2017 18:41, Pranith Kumar Karampuri
                      wrote:<br>
                    </div>
                    <blockquote type="cite">
                      <div dir="ltr"><br>
                        <div class="gmail_extra"><br>
                          <div class="gmail_quote">On Wed, Jun 28, 2017
                            at 9:45 PM, Ravishankar N <span dir="ltr">&lt;<a
                                href="mailto:ravishankar@redhat.com"
                                target="_blank" moz-do-not-send="true">ravishankar@redhat.com</a>&gt;</span>
                            wrote:<br>
                            <blockquote class="gmail_quote"
                              style="margin:0 0 0 .8ex;border-left:1px
                              #ccc solid;padding-left:1ex">
                              <div class="m_3264995662940995335HOEnZb">
                                <div class="m_3264995662940995335h5">On
                                  06/28/2017 06:52 PM, Paolo Margara
                                  wrote:<br>
                                  <blockquote class="gmail_quote"
                                    style="margin:0 0 0
                                    .8ex;border-left:1px #ccc
                                    solid;padding-left:1ex"> Hi list,<br>
                                    <br>
                                    yesterday I noticed the following
                                    lines in the glustershd.log log
                                    file:<br>
                                    <br>
                                    [2017-06-28 11:53:05.000890] W
                                    [MSGID: 108034]<br>
                                    [afr-self-heald.c:479:afr_shd_index_sweep]<br>
                                    0-iso-images-repo-replicate-0:
                                    unable to get index-dir on<br>
                                    iso-images-repo-client-0<br>
                                    [2017-06-28 11:53:05.001146] W
                                    [MSGID: 108034]<br>
                                    [afr-self-heald.c:479:afr_shd_index_sweep]
                                    0-vm-images-repo-replicate-0:<br>
                                    unable to get index-dir on
                                    vm-images-repo-client-0<br>
                                    [2017-06-28 11:53:06.001141] W
                                    [MSGID: 108034]<br>
                                    [afr-self-heald.c:479:afr_shd_index_sweep]
                                    0-hosted-engine-replicate-0:<br>
                                    unable to get index-dir on
                                    hosted-engine-client-0<br>
                                    [2017-06-28 11:53:08.001094] W
                                    [MSGID: 108034]<br>
                                    [afr-self-heald.c:479:afr_shd_index_sweep]
                                    0-vm-images-repo-replicate-2:<br>
                                    unable to get index-dir on
                                    vm-images-repo-client-6<br>
                                    [2017-06-28 11:53:08.001170] W
                                    [MSGID: 108034]<br>
                                    [afr-self-heald.c:479:afr_shd_index_sweep]
                                    0-vm-images-repo-replicate-1:<br>
                                    unable to get index-dir on
                                    vm-images-repo-client-3<br>
                                    <br>
                                    Digging into the mailing list
                                    archive I found another user with
                                    a<br>
                                    similar issue (the thread was
                                    '[Gluster-users] glustershd: unable
                                    to get<br>
                                    index-dir on myvolume-client-0');
                                    the suggested solution was to verify<br>
                                    that the
                                    /&lt;path-to-backend-brick&gt;/.glusterfs/indices
                                    directory contains<br>
                                    all these sub-directories: 'dirty',
                                    'entry-changes' and 'xattrop', and if<br>
                                    any of them does not exist, simply
                                    create it with mkdir.<br>
                                    <br>
                                    In my case the 'entry-changes'
                                    directory is missing on all the<br>
                                    bricks on all the servers:<br>
                                    <br>
                                    /data/glusterfs/brick1a/hosted-engine/.glusterfs/indices/:<br>
                                    total 0<br>
                                    drw------- 2 root root 55 Jun 28
                                    15:02 dirty<br>
                                    drw------- 2 root root 57 Jun 28
                                    15:02 xattrop<br>
                                    <br>
                                    /data/glusterfs/brick1b/iso-images-repo/.glusterfs/indices/:<br>
                                    total 0<br>
                                    drw------- 2 root root 55 May 29
                                    14:04 dirty<br>
                                    drw------- 2 root root 57 May 29
                                    14:04 xattrop<br>
                                    <br>
                                    /data/glusterfs/brick2/vm-images-repo/.glusterfs/indices/:<br>
                                    total 0<br>
                                    drw------- 2 root root 112 Jun 28
                                    15:02 dirty<br>
                                    drw------- 2 root root  66 Jun 28
                                    15:02 xattrop<br>
                                    <br>
                                    /data/glusterfs/brick3/vm-images-repo/.glusterfs/indices/:<br>
                                    total 0<br>
                                    drw------- 2 root root 64 Jun 28
                                    15:02 dirty<br>
                                    drw------- 2 root root 66 Jun 28
                                    15:02 xattrop<br>
                                    <br>
                                    /data/glusterfs/brick4/vm-images-repo/.glusterfs/indices/:<br>
                                    total 0<br>
                                    drw------- 2 root root 112 Jun 28
                                    15:02 dirty<br>
                                    drw------- 2 root root  66 Jun 28
                                    15:02 xattrop<br>
                                    <br>
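                                    As a sketch, the check-and-create
                                    step suggested above could look like
                                    this (BRICK defaults to a scratch
                                    directory here for safety; in
                                    practice point it at a real backend
                                    brick path):<br>

```shell
# Ensure the 'dirty', 'entry-changes' and 'xattrop' sub-directories
# exist under <brick>/.glusterfs/indices, creating any that are missing.
BRICK=${BRICK:-$(mktemp -d)/brick}   # placeholder path; use your real brick
mkdir -p "$BRICK/.glusterfs/indices"
for d in dirty entry-changes xattrop; do
    if [ ! -d "$BRICK/.glusterfs/indices/$d" ]; then
        mkdir "$BRICK/.glusterfs/indices/$d"
        # match the drw------- mode shown in the listings above
        chmod 600 "$BRICK/.glusterfs/indices/$d"
    fi
done
ls "$BRICK/.glusterfs/indices"
```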
                                    I've recently upgraded gluster from
                                    3.7.16 to 3.8.12 with the rolling<br>
                                    upgrade procedure and I hadn't
                                    noticed this issue prior to the
                                    update; on<br>
                                    another system upgraded with the
                                    same procedure I haven't encountered<br>
                                    this problem.<br>
                                    <br>
                                    Currently all VM images appear to be
                                    OK, but prior to creating the<br>
                                    'entry-changes' directories I would
                                    like to ask if this is still the correct<br>
                                    procedure to fix this issue<br>
                                  </blockquote>
                                  <br>
                                </div>
                              </div>
                              Did you restart the bricks after the
                              upgrade? That should have created the
                              entry-changes directory. Can you kill the
                              brick and restart it and see if the dir is
                              created? Double-check from the brick logs
                              that you're indeed running 3.8.12: "Started
                              running /usr/sbin/glusterfsd version
                              3.8.12" should appear when the brick
                              starts.<br>
                            </blockquote>
                            <div><br>
                            </div>
                            <div>Please note that if you are going the
                              route of killing and restarting, you need
                              to do it in the same way you did the
                              rolling upgrade: you need to wait for the
                              heal to complete before you kill the
                              bricks on the other nodes. But before you
                              do this, it is better to look at the logs
                              or confirm the steps you used for the
                              upgrade.<br>
                            </div>
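                            <div>A minimal sketch of that wait, assuming
                              the volume name from this thread ('heal
                              info' prints a "Number of entries:" line
                              per brick):</div>

```shell
# Block until every brick reports "Number of entries: 0"; only then
# is it safe to move on to the bricks on the next node.
until gluster volume heal vm-images-repo info |
      awk '/Number of entries:/ { if ($NF + 0 > 0) pending = 1 } END { exit pending }'
do
    sleep 60
done
```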
                            <div> </div>
                            <blockquote class="gmail_quote"
                              style="margin:0 0 0 .8ex;border-left:1px
                              #ccc solid;padding-left:1ex"> <br>
                              -Ravi
                              <div class="m_3264995662940995335HOEnZb">
                                <div class="m_3264995662940995335h5"><br>
                                  <br>
                                  <blockquote class="gmail_quote"
                                    style="margin:0 0 0
                                    .8ex;border-left:1px #ccc
                                    solid;padding-left:1ex">   and if
                                    this problem could have affected the<br>
                                    heal operations that occurred in the meantime.<br>
                                    <br>
                                    Thanks.<br>
                                    <br>
                                    <br>
                                    Greetings,<br>
                                    <br>
                                         Paolo Margara<br>
                                    <br>
                                    _______________________________________________<br>
                                    Gluster-users mailing list<br>
                                    <a
                                      href="mailto:Gluster-users@gluster.org"
                                      target="_blank"
                                      moz-do-not-send="true">Gluster-users@gluster.org</a><br>
                                    <a
                                      href="http://lists.gluster.org/mailman/listinfo/gluster-users"
                                      rel="noreferrer" target="_blank"
                                      moz-do-not-send="true">http://lists.gluster.org/mailman/listinfo/gluster-users</a><br>
                                  </blockquote>
                                  <br>
                                  <br>
                                </div>
                              </div>
                            </blockquote>
                          </div>
                          <br>
                          <br clear="all">
                          <br>
                          -- <br>
                          <div
                            class="m_3264995662940995335gmail_signature"
                            data-smartmail="gmail_signature">
                            <div dir="ltr">Pranith<br>
                            </div>
                          </div>
                        </div>
                      </div>
                    </blockquote>
                  </blockquote>
                </div>
              </div>
            </div>
          </blockquote>
        </div>
        -- <br>
        <div class="gmail_signature" data-smartmail="gmail_signature">
          <div dir="ltr">Pranith<br>
          </div>
        </div>
      </div>
    </blockquote>
  </body>
</html>