<div dir="ltr"><div dir="ltr"><br></div><br><div class="gmail_quote"><div dir="ltr" class="gmail_attr">On Fri, 26 Jul 2019 at 01:56, Matthew Benstead &lt;<a href="mailto:matthewb@uvic.ca">matthewb@uvic.ca</a>&gt; wrote:<br></div><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex">
  
    
  
  <div bgcolor="#FFFFFF">
    Hi Nithya, <br>
    <br>
    Hmm... I don&#39;t remember if I did, but based on what I&#39;m seeing it
    sounds like I probably didn&#39;t run rebalance or fix-layout. <br>
    <br>
    It looks like folders that haven&#39;t had any new files created have a
    dht of 0, while other folders have non-zero values. <br>
    <br>
    <tt>[root@gluster07 ~]# getfattr --absolute-names -m . -d -e hex
      /mnt/raid6-storage/storage/ | grep dht</tt><tt><br>
    </tt><tt>[root@gluster07 ~]# getfattr --absolute-names -m . -d -e
      hex /mnt/raid6-storage/storage/home | grep dht</tt><tt><br>
    </tt><tt>trusted.glusterfs.dht=0x00000000000000000000000000000000</tt><tt><br>
    </tt><tt>[root@gluster07 ~]# getfattr --absolute-names -m . -d -e
      hex /mnt/raid6-storage/storage/home/matthewb | grep dht</tt><tt><br>
    </tt><tt>trusted.glusterfs.dht=0x00000001000000004924921a6db6dbc7</tt><br>
    <br>
    If I just run the fix-layout command will it re-create all of the
    dht values or just the missing ones? </div></blockquote><div><br></div><div>A fix-layout will recalculate the layouts entirely so files all the values will change. No files will be moved.</div><div>A rebalance will recalculate the layouts like the fix-layout but will also move files to their new locations based on the new layout ranges. This could take a lot of time depending on the number of files/directories on the volume. If you do this, I would recommend that you turn off lookup-optimize until the rebalance is over.</div><div> </div><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex"><div bgcolor="#FFFFFF">Since the brick is already
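
In case it is useful, a rough sketch of that command sequence, using
the volume name from the gluster volume info output quoted further
down (do verify the options against the docs for your version first):

# turn off lookup-optimize while the rebalance runs
gluster volume set storage cluster.lookup-optimize off

# option 1: fix-layout only - recalculates directory layouts, moves no files
gluster volume rebalance storage fix-layout start

# option 2: full rebalance - recalculates layouts and migrates files
gluster volume rebalance storage start
gluster volume rebalance storage status

# re-enable once the rebalance has finished
gluster volume set storage cluster.lookup-optimize on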

> Since the brick is already fairly size balanced, could I get away
> with running fix-layout but not rebalance? Or would the new dht
> layout mean slower accesses, since the files may be expected on
> different bricks?

The first access for a file will be slower. The next one will be
faster, as the location will be cached in the client's in-memory
structures.

You may not need to run either a fix-layout or a rebalance if new
file creations will be in directories created after the add-brick.
Gluster will automatically include all 7 bricks for those directories.

Regards,
Nithya

> Thanks,
>  -Matthew
>
> --
> Matthew Benstead
> System Administrator
> Pacific Climate Impacts Consortium (https://pacificclimate.org/)
> University of Victoria, UH1
> PO Box 1800, STN CSC
> Victoria, BC, V8W 2Y2
> Phone: +1-250-721-8432
> Email: matthewb@uvic.ca
>
> On 7/24/19 9:30 PM, Nithya Balachandran wrote:
>> On Wed, 24 Jul 2019 at 22:12, Matthew Benstead <matthewb@uvic.ca> wrote:
          <blockquote class="gmail_quote">
            <div> So looking more closely at the trusted.glusterfs.dht
              attributes from the bricks it looks like they cover the
              entire range... and there is no range left for gluster07.
              <br>
              <br>
              The first 6 bricks range from 0x00000000 to 0xffffffff -
              so... is there a way to re-calculate what the dht values
              should be? Each of the bricks should have a gap <br>
              <br>
              <tt>Gluster05 00000000 -&gt; 2aaaaaa9</tt><tt><br>
              </tt><tt>Gluster06 2aaaaaaa -&gt; 55555553</tt><tt><br>
              </tt><tt>Gluster01 55555554 -&gt; 7ffffffd</tt><tt><br>
              </tt><tt>Gluster02 7ffffffe -&gt; aaaaaaa7</tt><tt><br>
              </tt><tt>Gluster03 aaaaaaa8 -&gt; d5555551</tt><tt><br>
              </tt><tt>Gluster04 d5555552 -&gt; ffffffff<br>
                Gluster07 None<br>
              </tt><br>
              If we split the range into 7 servers that would be a gap
              of about 0x24924924 for each server. <br>
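>>>
>>> A quick sanity check of that arithmetic in bash:
>>>
>>> $ printf '%x\n' $(( 0x100000000 / 7 ))
>>> 24924924
>>> $ printf '%x\n' $(( 0x24924924 * 7 ))
>>> fffffffc
>>>
>>> Seven slices of 0x24924924 account for 0xfffffffc of the
>>> 0x100000000 hash values, so the last slice would absorb the
>>> remaining 4.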
>>>
>>> Now, in terms of the gluster07 brick: about 2 years ago the RAID
>>> array the brick was stored on became corrupted. I ran the
>>> remove-brick force command, then provisioned a new server, ran the
>>> add-brick command, and then restored the missing files from backup
>>> by copying them back to the main gluster mount (not the brick).
>>
>> Did you run a rebalance after performing the add-brick? Without a
>> rebalance/fix-layout, the layout for existing directories on the
>> volume will not be updated to use the new brick.
>>
>> That the layout on the root dir does not include the new brick is
>> not in itself a problem. Do you create a lot of files directly in
>> the root of the volume? If yes, you might want to run a rebalance.
>> Otherwise, if you mostly create files in newly added directories,
>> you can probably ignore this. You can check the layout for
>> directories on the volume and see if they incorporate brick7.
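>>
>> For instance, reusing the getfattr invocation from earlier in this
>> thread against a directory rather than the brick root (a sketch -
>> substitute whichever directory you care about):
>>
>> ansible -i hosts gluster-servers[0:6] -m shell -a "getfattr --absolute-names -n trusted.glusterfs.dht -e hex /mnt/raid6-storage/storage/home"
>>
>> A directory whose layout includes all 7 bricks should show seven
>> non-overlapping ranges across the outputs.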
>>
>> I would expect a lookup on the root to have set an xattr on the
>> brick with an empty layout range. The fact that the xattr does not
>> exist at all on the brick is what I am looking into.
          <blockquote class="gmail_quote">
            <div> It looks like prior to that event this was the layout
              - which would make sense given the equal size of the 7
              bricks: <br>
              <br>
              <tt><a href="http://gluster02.pcic.uvic.ca" target="_blank">gluster02.pcic.uvic.ca</a>
                | SUCCESS | rc=0 &gt;&gt;</tt><tt><br>
              </tt><tt># file: /mnt/raid6-storage/storage</tt><tt><br>
              </tt><tt>trusted.glusterfs.dht=0x000000010000000048bfff206d1ffe5f</tt><tt><br>
              </tt><tt><br>
              </tt><tt><a href="http://gluster05.pcic.uvic.ca" target="_blank">gluster05.pcic.uvic.ca</a>
                | SUCCESS | rc=0 &gt;&gt;</tt><tt><br>
              </tt><tt># file: /mnt/raid6-storage/storage</tt><tt><br>
              </tt><tt>trusted.glusterfs.dht=0x0000000100000000b5dffce0da3ffc1f</tt><tt><br>
              </tt><tt><br>
              </tt><tt><a href="http://gluster04.pcic.uvic.ca" target="_blank">gluster04.pcic.uvic.ca</a>
                | SUCCESS | rc=0 &gt;&gt;</tt><tt><br>
              </tt><tt># file: /mnt/raid6-storage/storage</tt><tt><br>
              </tt><tt>trusted.glusterfs.dht=0x0000000100000000917ffda0b5dffcdf</tt><tt><br>
              </tt><tt><br>
              </tt><tt><a href="http://gluster03.pcic.uvic.ca" target="_blank">gluster03.pcic.uvic.ca</a>
                | SUCCESS | rc=0 &gt;&gt;</tt><tt><br>
              </tt><tt># file: /mnt/raid6-storage/storage</tt><tt><br>
              </tt><tt>trusted.glusterfs.dht=0x00000001000000006d1ffe60917ffd9f</tt><tt><br>
              </tt><tt><br>
              </tt><tt><a href="http://gluster01.pcic.uvic.ca" target="_blank">gluster01.pcic.uvic.ca</a>
                | SUCCESS | rc=0 &gt;&gt;</tt><tt><br>
              </tt><tt># file: /mnt/raid6-storage/storage</tt><tt><br>
              </tt><tt>trusted.glusterfs.dht=0x0000000100000000245fffe048bfff1f</tt><tt><br>
              </tt><tt><br>
              </tt><tt><a href="http://gluster07.pcic.uvic.ca" target="_blank">gluster07.pcic.uvic.ca</a>
                | SUCCESS | rc=0 &gt;&gt;</tt><tt><br>
              </tt><tt># file: /mnt/raid6-storage/storage</tt><tt><br>
              </tt><tt>trusted.glusterfs.dht=0x000000010000000000000000245fffdf</tt><tt><br>
              </tt><tt><br>
              </tt><tt><a href="http://gluster06.pcic.uvic.ca" target="_blank">gluster06.pcic.uvic.ca</a>
                | SUCCESS | rc=0 &gt;&gt;</tt><tt><br>
              </tt><tt># file: /mnt/raid6-storage/storage</tt><tt><br>
              </tt><tt>trusted.glusterfs.dht=0x0000000100000000da3ffc20ffffffff</tt><br>
              <br>
              Which yields the following: <br>
              <tt><br>
              </tt><tt>00000000 -&gt; 245fffdf    Gluster07</tt><tt><br>
              </tt><tt>245fffe0 -&gt; 48bfff1f    Gluster01</tt><tt><br>
              </tt><tt>48bfff20 -&gt; 6d1ffe5f    Gluster02</tt><tt><br>
              </tt><tt>6d1ffe60 -&gt; 917ffd9f    Gluster03</tt><tt><br>
              </tt><tt>917ffda0 -&gt; b5dffcdf    Gluster04</tt><tt><br>
              </tt><tt>b5dffce0 -&gt; da3ffc1f    Gluster05</tt><tt><br>
              </tt><tt>da3ffc20 -&gt; ffffffff    Gluster06</tt><br>
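>>>
>>> Those ranges are read straight out of the hex values above: after
>>> the leading 0x00000001 00000000 words, the last two 32-bit words
>>> line up with the start and end of each range. A throwaway bash
>>> sketch of that decoding:
>>>
>>> $ x=0x0000000100000000245fffe048bfff1f    # gluster01's xattr
>>> $ echo "start=${x:18:8} end=${x:26:8}"
>>> start=245fffe0 end=48bfff1f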
>>>
>>> Is there some way to get back to this?
>>>
>>> Thanks,
>>>  -Matthew
>>>
>>> --
>>> Matthew Benstead
>>> System Administrator
>>> Pacific Climate Impacts Consortium (https://pacificclimate.org/)
>>> University of Victoria, UH1
>>> PO Box 1800, STN CSC
>>> Victoria, BC, V8W 2Y2
>>> Phone: +1-250-721-8432
>>> Email: matthewb@uvic.ca
>>>
>>> On 7/18/19 7:20 AM, Matthew Benstead wrote:
              <blockquote type="cite"> Hi Nithya, <br>
                <br>
                No - it was added about a year and a half ago. I have
                tried re-mounting the volume on the server, but it
                didn&#39;t add the attr: <br>
                <br>
                <tt>[root@gluster07 ~]# umount /storage/<br>
                  [root@gluster07 ~]# cat /etc/fstab | grep &quot;/storage&quot;</tt><tt><br>
                </tt><tt>10.0.231.56:/storage /storage glusterfs
                  defaults,log-level=WARNING,backupvolfile-server=10.0.231.51
                  0 0</tt><tt><br>
                </tt><tt>[root@gluster07 ~]# mount /storage/</tt><tt><br>
                </tt><tt>[root@gluster07 ~]# df -h /storage/</tt><tt><br>
                </tt><tt>Filesystem            Size  Used Avail Use%
                  Mounted on</tt><tt><br>
                </tt><tt>10.0.231.56:/storage  255T  194T   62T  77%
                  /storage</tt><tt><br>
                </tt><tt>[root@gluster07 ~]# getfattr --absolute-names
                  -m . -d -e hex /mnt/raid6-storage/storage/ </tt><tt><br>
                </tt><tt># file: /mnt/raid6-storage/storage/</tt><tt><br>
                </tt><tt>security.selinux=0x756e636f6e66696e65645f753a6f626a6563745f723a756e6c6162656c65645f743a733000</tt><tt><br>
                </tt><tt>trusted.gfid=0x00000000000000000000000000000001</tt><tt><br>
                </tt><tt>trusted.glusterfs.6f95525a-94d7-4174-bac4-e1a18fe010a2.xtime=0x5d307baa00023ec0</tt><tt><br>
                </tt><tt>trusted.glusterfs.quota.dirty=0x3000</tt><tt><br>
                </tt><tt>trusted.glusterfs.quota.size.2=0x00001b71d5279e000000000000763e32000000000005cd53</tt><tt><br>
                </tt><tt>trusted.glusterfs.volume-id=0x6f95525a94d74174bac4e1a18fe010a2</tt><br>
                <br>
                Thanks,<br>
                 -Matthew<br>
                <div class="gmail-m_-8830747943370208428gmail-m_1525309864095730869moz-cite-prefix"><br>
                  On 7/17/19 10:04 PM, Nithya Balachandran wrote:<br>
                </div>
                <blockquote type="cite">
                  <div dir="ltr">Hi Matthew,
                    <div><br>
                    </div>
                    <div>Was this node/brick added to the volume
                      recently? If yes, try mounting the volume on a
                      fresh mount point - that should create the xattr
                      on this as well.</div>
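>>>>>
>>>>> Something along these lines (a sketch - the mount point name is
>>>>> arbitrary, and the server address is taken from your fstab):
>>>>>
>>>>> mkdir -p /mnt/storage-test
>>>>> mount -t glusterfs 10.0.231.56:/storage /mnt/storage-test
>>>>> ls /mnt/storage-test > /dev/null    # trigger a fresh lookup on the root
>>>>> # then re-check the brick on gluster07:
>>>>> getfattr --absolute-names -n trusted.glusterfs.dht -e hex /mnt/raid6-storage/storage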
>>>>>
>>>>> Regards,
>>>>> Nithya
>>>>>
>>>>> On Wed, 17 Jul 2019 at 21:01, Matthew Benstead <matthewb@uvic.ca> wrote:
                    <blockquote class="gmail_quote">Hello,<br>
                      <br>
                      I&#39;ve just noticed one brick in my 7 node
                      distribute volume is missing<br>
                      the trusted.glusterfs.dht xattr...? How can I fix
                      this?<br>
                      <br>
                      I&#39;m running glusterfs-5.3-2.el7.x86_64 on CentOS
                      7.<br>
                      <br>
                      All of the other nodes are fine, but gluster07
                      from the list below does<br>
                      not have the attribute.<br>
                      <br>
                      $ ansible -i hosts gluster-servers[0:6] ... -m
                      shell -a &quot;getfattr -m .<br>
                      --absolute-names -n trusted.glusterfs.dht -e hex<br>
                      /mnt/raid6-storage/storage&quot;<br>
                      ...<br>
                      gluster05 | SUCCESS | rc=0 &gt;&gt;<br>
                      # file: /mnt/raid6-storage/storage<br>
trusted.glusterfs.dht=0x0000000100000000000000002aaaaaa9<br>
                      <br>
                      gluster03 | SUCCESS | rc=0 &gt;&gt;<br>
                      # file: /mnt/raid6-storage/storage<br>
trusted.glusterfs.dht=0x0000000100000000aaaaaaa8d5555551<br>
                      <br>
                      gluster04 | SUCCESS | rc=0 &gt;&gt;<br>
                      # file: /mnt/raid6-storage/storage<br>
trusted.glusterfs.dht=0x0000000100000000d5555552ffffffff<br>
                      <br>
                      gluster06 | SUCCESS | rc=0 &gt;&gt;<br>
                      # file: /mnt/raid6-storage/storage<br>
trusted.glusterfs.dht=0x00000001000000002aaaaaaa55555553<br>
                      <br>
                      gluster02 | SUCCESS | rc=0 &gt;&gt;<br>
                      # file: /mnt/raid6-storage/storage<br>
trusted.glusterfs.dht=0x00000001000000007ffffffeaaaaaaa7<br>
                      <br>
                      gluster07 | FAILED | rc=1 &gt;&gt;<br>
                      /mnt/raid6-storage/storage: trusted.glusterfs.dht:
                      No such<br>
                      attributenon-zero return code<br>
                      <br>
                      gluster01 | SUCCESS | rc=0 &gt;&gt;<br>
                      # file: /mnt/raid6-storage/storage<br>
trusted.glusterfs.dht=0x0000000100000000555555547ffffffd<br>
                      <br>
                      Here are all of the attr&#39;s from the brick:<br>
                      <br>
                      [root@gluster07 ~]# getfattr --absolute-names -m .
                      -d -e hex<br>
                      /mnt/raid6-storage/storage/<br>
                      # file: /mnt/raid6-storage/storage/<br>
security.selinux=0x756e636f6e66696e65645f753a6f626a6563745f723a756e6c6162656c65645f743a733000<br>
                      trusted.gfid=0x00000000000000000000000000000001<br>
trusted.glusterfs.6f95525a-94d7-4174-bac4-e1a18fe010a2.xtime=0x5d2dee800001fdf9<br>
                      trusted.glusterfs.quota.dirty=0x3000<br>
trusted.glusterfs.quota.size.2=0x00001b69498a1400000000000076332e000000000005cd03<br>
trusted.glusterfs.volume-id=0x6f95525a94d74174bac4e1a18fe010a2<br>
                      <br>
                      <br>
                      And here is the volume information:<br>
                      <br>
                      [root@gluster07 ~]# gluster volume info storage<br>
                      <br>
                      Volume Name: storage<br>
                      Type: Distribute<br>
                      Volume ID: 6f95525a-94d7-4174-bac4-e1a18fe010a2<br>
                      Status: Started<br>
                      Snapshot Count: 0<br>
                      Number of Bricks: 7<br>
                      Transport-type: tcp<br>
                      Bricks:<br>
                      Brick1: 10.0.231.50:/mnt/raid6-storage/storage<br>
                      Brick2: 10.0.231.51:/mnt/raid6-storage/storage<br>
                      Brick3: 10.0.231.52:/mnt/raid6-storage/storage<br>
                      Brick4: 10.0.231.53:/mnt/raid6-storage/storage<br>
                      Brick5: 10.0.231.54:/mnt/raid6-storage/storage<br>
                      Brick6: 10.0.231.55:/mnt/raid6-storage/storage<br>
                      Brick7: 10.0.231.56:/mnt/raid6-storage/storage<br>
                      Options Reconfigured:<br>
                      changelog.changelog: on<br>
                      features.quota-deem-statfs: on<br>
                      features.read-only: off<br>
                      features.inode-quota: on<br>
                      features.quota: on<br>
                      performance.readdir-ahead: on<br>
                      nfs.disable: on<br>
                      geo-replication.indexing: on<br>
                      geo-replication.ignore-pid-check: on<br>
                      transport.address-family: inet<br>
                      <br>
                      Thanks,<br>
                       -Matthew<br>
                      _______________________________________________<br>
                      Gluster-users mailing list<br>
                      <a href="mailto:Gluster-users@gluster.org" target="_blank">Gluster-users@gluster.org</a><br>
                      <a href="https://lists.gluster.org/mailman/listinfo/gluster-users" rel="noreferrer" target="_blank">https://lists.gluster.org/mailman/listinfo/gluster-users</a><br>
                    </blockquote>
                  </div>
                </blockquote>
                <br>
              </blockquote>
              <br>
            </div>
          </blockquote>
        </div>
      </div>
    </blockquote>
    <br>
  </div>

</blockquote></div></div>