<html>
  <head>
    <meta http-equiv="Content-Type" content="text/html; charset=UTF-8">
  </head>
  <body text="#000000" bgcolor="#FFFFFF">
    So looking more closely at the trusted.glusterfs.dht attributes from
    the bricks, it looks like they already cover the entire hash range...
    and there is no range left for gluster07. <br>
    <br>
    The first 6 bricks span 0x00000000 to 0xffffffff - so... is
    there a way to re-calculate what the dht values should be? Each of
    the bricks should have its own range: <br>
    <br>
    <tt>Gluster05 00000000 -&gt; 2aaaaaa9</tt><tt><br>
    </tt><tt>Gluster06 2aaaaaaa -&gt; 55555553</tt><tt><br>
    </tt><tt>Gluster01 55555554 -&gt; 7ffffffd</tt><tt><br>
    </tt><tt>Gluster02 7ffffffe -&gt; aaaaaaa7</tt><tt><br>
    </tt><tt>Gluster03 aaaaaaa8 -&gt; d5555551</tt><tt><br>
    </tt><tt>Gluster04 d5555552 -&gt; ffffffff<br>
      Gluster07 None<br>
    </tt><br>
    If we split the full range evenly across 7 servers, each server
    would get a range of about 0x24924924. <br>
    <br>
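    Just to sanity-check that math - a throwaway bash loop over the
    32-bit hash space (nothing gluster-specific, just arithmetic): <br>
    <br>
    <tt>size=$(( 0x100000000 / 7 ))   # 613566756 = 0x24924924<br>
    for i in 0 1 2 3 4 5 6; do<br>
      start=$(( i * size ))<br>
      # the last brick absorbs the rounding remainder up to 0xffffffff<br>
      stop=$(( i == 6 ? 0xffffffff : start + size - 1 ))<br>
      printf 'brick %d: %08x -&gt; %08x\n' $(( i + 1 )) "$start" "$stop"<br>
    done</tt><br>
    <br>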
    Now, about the gluster07 brick: roughly 2 years ago the RAID
    array the brick was stored on became corrupted. I ran the
    remove-brick force command, then provisioned a new server, ran the
    add-brick command, and then restored the missing files from backup by
    copying them back to the main gluster mount (not the brick). <br>
    <br>
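    (For the record, these were - as far as I can remember - the
    commands, with gluster07 being 10.0.231.56:) <br>
    <br>
    <tt>gluster volume remove-brick storage
    10.0.231.56:/mnt/raid6-storage/storage force<br>
    gluster volume add-brick storage
    10.0.231.56:/mnt/raid6-storage/storage</tt><br>
    <br>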
    It looks like this was the layout prior to that event - which would
    make sense given that all 7 bricks are the same size: <br>
    <br>
    <tt>gluster02.pcic.uvic.ca | SUCCESS | rc=0 &gt;&gt;</tt><tt><br>
    </tt><tt># file: /mnt/raid6-storage/storage</tt><tt><br>
    </tt><tt>trusted.glusterfs.dht=0x000000010000000048bfff206d1ffe5f</tt><tt><br>
    </tt><tt><br>
    </tt><tt>gluster05.pcic.uvic.ca | SUCCESS | rc=0 &gt;&gt;</tt><tt><br>
    </tt><tt># file: /mnt/raid6-storage/storage</tt><tt><br>
    </tt><tt>trusted.glusterfs.dht=0x0000000100000000b5dffce0da3ffc1f</tt><tt><br>
    </tt><tt><br>
    </tt><tt>gluster04.pcic.uvic.ca | SUCCESS | rc=0 &gt;&gt;</tt><tt><br>
    </tt><tt># file: /mnt/raid6-storage/storage</tt><tt><br>
    </tt><tt>trusted.glusterfs.dht=0x0000000100000000917ffda0b5dffcdf</tt><tt><br>
    </tt><tt><br>
    </tt><tt>gluster03.pcic.uvic.ca | SUCCESS | rc=0 &gt;&gt;</tt><tt><br>
    </tt><tt># file: /mnt/raid6-storage/storage</tt><tt><br>
    </tt><tt>trusted.glusterfs.dht=0x00000001000000006d1ffe60917ffd9f</tt><tt><br>
    </tt><tt><br>
    </tt><tt>gluster01.pcic.uvic.ca | SUCCESS | rc=0 &gt;&gt;</tt><tt><br>
    </tt><tt># file: /mnt/raid6-storage/storage</tt><tt><br>
    </tt><tt>trusted.glusterfs.dht=0x0000000100000000245fffe048bfff1f</tt><tt><br>
    </tt><tt><br>
    </tt><tt>gluster07.pcic.uvic.ca | SUCCESS | rc=0 &gt;&gt;</tt><tt><br>
    </tt><tt># file: /mnt/raid6-storage/storage</tt><tt><br>
    </tt><tt>trusted.glusterfs.dht=0x000000010000000000000000245fffdf</tt><tt><br>
    </tt><tt><br>
    </tt><tt>gluster06.pcic.uvic.ca | SUCCESS | rc=0 &gt;&gt;</tt><tt><br>
    </tt><tt># file: /mnt/raid6-storage/storage</tt><tt><br>
    </tt><tt>trusted.glusterfs.dht=0x0000000100000000da3ffc20ffffffff</tt><br>
    <br>
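    If I'm decoding these right, the first two 32-bit words are a
    header and the last two words are the start and end of the brick's
    hash range - e.g. in bash (using gluster01's old value): <br>
    <br>
    <tt>$ v=0x0000000100000000245fffe048bfff1f<br>
    $ echo "start=${v:18:8} end=${v:26:8}"<br>
    start=245fffe0 end=48bfff1f</tt><br>
    <br>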
    Which yields the following: <br>
    <tt><br>
    </tt><tt>00000000 -&gt; 245fffdf    Gluster07</tt><tt><br>
    </tt><tt>245fffe0 -&gt; 48bfff1f    Gluster01</tt><tt><br>
    </tt><tt>48bfff20 -&gt; 6d1ffe5f    Gluster02</tt><tt><br>
    </tt><tt>6d1ffe60 -&gt; 917ffd9f    Gluster03</tt><tt><br>
    </tt><tt>917ffda0 -&gt; b5dffcdf    Gluster04</tt><tt><br>
    </tt><tt>b5dffce0 -&gt; da3ffc1f    Gluster05</tt><tt><br>
    </tt><tt>da3ffc20 -&gt; ffffffff    Gluster06</tt><br>
    <br>
    Is there some way to get back to this? <br>
    <br>
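    For example - pure speculation on my part - would writing the old
    values straight back onto each brick root work, with the volume
    stopped? Something like this on gluster07: <br>
    <br>
    <tt># untested - would this even be safe?<br>
    setfattr -n trusted.glusterfs.dht \<br>
      -v 0x000000010000000000000000245fffdf /mnt/raid6-storage/storage</tt><br>
    <br>
    Or is a fix-layout rebalance (gluster volume rebalance storage
    fix-layout start) the right way to have gluster recompute the
    ranges itself? <br>
    <br>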
    Thanks,<br>
     -Matthew<br>
    <div class="moz-signature"><font size="-1">
        <p>--<br>
          Matthew Benstead<br>
          <font size="-2">System Administrator<br>
            <a href="https://pacificclimate.org/">Pacific Climate
              Impacts Consortium</a><br>
            University of Victoria, UH1<br>
            PO Box 1800, STN CSC<br>
            Victoria, BC, V8W 2Y2<br>
            Phone: +1-250-721-8432<br>
            Email: <a class="moz-txt-link-abbreviated" href="mailto:matthewb@uvic.ca">matthewb@uvic.ca</a></font></p>
      </font>
    </div>
    <div class="moz-cite-prefix">On 7/18/19 7:20 AM, Matthew Benstead
      wrote:<br>
    </div>
    <blockquote type="cite"
      cite="mid:d36b3c61-53fc-638f-212b-fb6079b7d25e@uvic.ca"> Hi
      Nithya, <br>
      <br>
      No - it was added about a year and a half ago. I have tried
      re-mounting the volume on the server, but that didn't create the xattr:
      <br>
      <br>
      <tt>[root@gluster07 ~]# umount /storage/<br>
        [root@gluster07 ~]# cat /etc/fstab | grep "/storage"</tt><tt><br>
      </tt><tt>10.0.231.56:/storage /storage glusterfs
        defaults,log-level=WARNING,backupvolfile-server=10.0.231.51 0 0</tt><tt><br>
      </tt><tt>[root@gluster07 ~]# mount /storage/</tt><tt><br>
      </tt><tt>[root@gluster07 ~]# df -h /storage/</tt><tt><br>
      </tt><tt>Filesystem            Size  Used Avail Use% Mounted on</tt><tt><br>
      </tt><tt>10.0.231.56:/storage  255T  194T   62T  77% /storage</tt><tt><br>
      </tt><tt>[root@gluster07 ~]# getfattr --absolute-names -m . -d -e
        hex /mnt/raid6-storage/storage/ </tt><tt><br>
      </tt><tt># file: /mnt/raid6-storage/storage/</tt><tt><br>
      </tt><tt>security.selinux=0x756e636f6e66696e65645f753a6f626a6563745f723a756e6c6162656c65645f743a733000</tt><tt><br>
      </tt><tt>trusted.gfid=0x00000000000000000000000000000001</tt><tt><br>
      </tt><tt>trusted.glusterfs.6f95525a-94d7-4174-bac4-e1a18fe010a2.xtime=0x5d307baa00023ec0</tt><tt><br>
      </tt><tt>trusted.glusterfs.quota.dirty=0x3000</tt><tt><br>
      </tt><tt>trusted.glusterfs.quota.size.2=0x00001b71d5279e000000000000763e32000000000005cd53</tt><tt><br>
      </tt><tt>trusted.glusterfs.volume-id=0x6f95525a94d74174bac4e1a18fe010a2</tt><br>
      <br>
      Thanks,<br>
       -Matthew<br>
      <div class="moz-cite-prefix"><br>
        On 7/17/19 10:04 PM, Nithya Balachandran wrote:<br>
      </div>
      <blockquote type="cite"
cite="mid:CAOUCJ=hjQQ1q8-nOU2qqOWfmR81mdi+zLSYJKVX1j598hDg97A@mail.gmail.com">
        <div dir="ltr">Hi Matthew,
          <div><br>
          </div>
          <div>Was this node/brick added to the volume recently? If yes,
            try mounting the volume on a fresh mount point - that should
            create the xattr on this as well.</div>
          <div><br>
          </div>
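          <div>For example (using any new, empty directory and one of
            the servers from your volume info):</div>
          <div><br>
          </div>
          <div><tt>mkdir /mnt/storage-check<br>
            mount -t glusterfs 10.0.231.50:/storage /mnt/storage-check</tt></div>
          <div><br>
          </div>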
          <div>Regards,</div>
          <div>Nithya</div>
        </div>
        <br>
        <div class="gmail_quote">
          <div dir="ltr" class="gmail_attr">On Wed, 17 Jul 2019 at
            21:01, Matthew Benstead &lt;<a
              href="mailto:matthewb@uvic.ca" moz-do-not-send="true">matthewb@uvic.ca</a>&gt;
            wrote:<br>
          </div>
          <blockquote class="gmail_quote">Hello,<br>
            <br>
            I've just noticed one brick in my 7-node distribute volume
            is missing<br>
            the trusted.glusterfs.dht xattr...? How can I fix this?<br>
            <br>
            I'm running glusterfs-5.3-2.el7.x86_64 on CentOS 7.<br>
            <br>
            All of the other nodes are fine, but gluster07 from the list
            below does<br>
            not have the attribute.<br>
            <br>
            $ ansible -i hosts gluster-servers[0:6] ... -m shell -a
            "getfattr -m .<br>
            --absolute-names -n trusted.glusterfs.dht -e hex<br>
            /mnt/raid6-storage/storage"<br>
            ...<br>
            gluster05 | SUCCESS | rc=0 &gt;&gt;<br>
            # file: /mnt/raid6-storage/storage<br>
            trusted.glusterfs.dht=0x0000000100000000000000002aaaaaa9<br>
            <br>
            gluster03 | SUCCESS | rc=0 &gt;&gt;<br>
            # file: /mnt/raid6-storage/storage<br>
            trusted.glusterfs.dht=0x0000000100000000aaaaaaa8d5555551<br>
            <br>
            gluster04 | SUCCESS | rc=0 &gt;&gt;<br>
            # file: /mnt/raid6-storage/storage<br>
            trusted.glusterfs.dht=0x0000000100000000d5555552ffffffff<br>
            <br>
            gluster06 | SUCCESS | rc=0 &gt;&gt;<br>
            # file: /mnt/raid6-storage/storage<br>
            trusted.glusterfs.dht=0x00000001000000002aaaaaaa55555553<br>
            <br>
            gluster02 | SUCCESS | rc=0 &gt;&gt;<br>
            # file: /mnt/raid6-storage/storage<br>
            trusted.glusterfs.dht=0x00000001000000007ffffffeaaaaaaa7<br>
            <br>
            gluster07 | FAILED | rc=1 &gt;&gt;<br>
            /mnt/raid6-storage/storage: trusted.glusterfs.dht: No such
            attribute<br>
            non-zero return code<br>
            <br>
            gluster01 | SUCCESS | rc=0 &gt;&gt;<br>
            # file: /mnt/raid6-storage/storage<br>
            trusted.glusterfs.dht=0x0000000100000000555555547ffffffd<br>
            <br>
            Here are all of the attr's from the brick:<br>
            <br>
            [root@gluster07 ~]# getfattr --absolute-names -m . -d -e hex<br>
            /mnt/raid6-storage/storage/<br>
            # file: /mnt/raid6-storage/storage/<br>
security.selinux=0x756e636f6e66696e65645f753a6f626a6563745f723a756e6c6162656c65645f743a733000<br>
            trusted.gfid=0x00000000000000000000000000000001<br>
trusted.glusterfs.6f95525a-94d7-4174-bac4-e1a18fe010a2.xtime=0x5d2dee800001fdf9<br>
            trusted.glusterfs.quota.dirty=0x3000<br>
trusted.glusterfs.quota.size.2=0x00001b69498a1400000000000076332e000000000005cd03<br>
trusted.glusterfs.volume-id=0x6f95525a94d74174bac4e1a18fe010a2<br>
            <br>
            <br>
            And here is the volume information:<br>
            <br>
            [root@gluster07 ~]# gluster volume info storage<br>
            <br>
            Volume Name: storage<br>
            Type: Distribute<br>
            Volume ID: 6f95525a-94d7-4174-bac4-e1a18fe010a2<br>
            Status: Started<br>
            Snapshot Count: 0<br>
            Number of Bricks: 7<br>
            Transport-type: tcp<br>
            Bricks:<br>
            Brick1: 10.0.231.50:/mnt/raid6-storage/storage<br>
            Brick2: 10.0.231.51:/mnt/raid6-storage/storage<br>
            Brick3: 10.0.231.52:/mnt/raid6-storage/storage<br>
            Brick4: 10.0.231.53:/mnt/raid6-storage/storage<br>
            Brick5: 10.0.231.54:/mnt/raid6-storage/storage<br>
            Brick6: 10.0.231.55:/mnt/raid6-storage/storage<br>
            Brick7: 10.0.231.56:/mnt/raid6-storage/storage<br>
            Options Reconfigured:<br>
            changelog.changelog: on<br>
            features.quota-deem-statfs: on<br>
            features.read-only: off<br>
            features.inode-quota: on<br>
            features.quota: on<br>
            performance.readdir-ahead: on<br>
            nfs.disable: on<br>
            geo-replication.indexing: on<br>
            geo-replication.ignore-pid-check: on<br>
            transport.address-family: inet<br>
            <br>
            Thanks,<br>
             -Matthew<br>
            _______________________________________________<br>
            Gluster-users mailing list<br>
            <a href="mailto:Gluster-users@gluster.org" target="_blank"
              moz-do-not-send="true">Gluster-users@gluster.org</a><br>
            <a
              href="https://lists.gluster.org/mailman/listinfo/gluster-users"
              rel="noreferrer" target="_blank" moz-do-not-send="true">https://lists.gluster.org/mailman/listinfo/gluster-users</a><br>
          </blockquote>
        </div>
      </blockquote>
      <br>
    </blockquote>
    <br>
  </body>
</html>