<div dir="ltr"><br><div class="gmail_extra"><br><div class="gmail_quote">On 24 May 2017 at 20:02, Mohammed Rafi K C <span dir="ltr">&lt;<a href="mailto:rkavunga@redhat.com" target="_blank">rkavunga@redhat.com</a>&gt;</span> wrote:<br><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex">
  
    
  
  <div bgcolor="#FFFFFF" text="#000000"><span class="">
    <p><br>
    </p>
    <br>
    <div class="m_-5833328665865246447moz-cite-prefix">On 05/23/2017 08:53 PM, Mahdi Adnan
      wrote:<br>
    </div>
    <blockquote type="cite">
      
      
      <div id="m_-5833328665865246447divtagdefaultwrapper" style="font-size:12pt;color:#000000;font-family:Calibri,Arial,Helvetica,sans-serif" dir="ltr">
        <p>Hi,</p>
        <p><br>
        </p>
        <p>I have a distributed volume with 6 bricks of 5TB each, hosting
          large qcow2 VM disks (I know this setup is not reliable, but the
          data is not important).</p>
        <p>I started with 5 bricks, then added another one and started the
          rebalance process. Everything went well, but now, looking at the
          bricks' free space, I found that one brick is around 82% full
          while the others range from 20% to 60%.</p>
        <p>The brick with the highest utilization is hosting more qcow2
          disks than the other bricks, and whenever I start a rebalance it
          just completes in 0 seconds without moving any data.</p>
      </div>
    </blockquote>
    <br></span>
    What is the average file size in the cluster, and roughly how many
    files do you have?<span class=""><br>
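    If it helps, here is a rough way to get both numbers on each brick (a
    sketch; /vols/ctvvols is the brick path from your volume info below,
    and the internal .glusterfs tree is excluded because it only holds
    Gluster's gfid hardlinks):<br>
    <pre>
# File count per brick, skipping the internal .glusterfs tree:
find /vols/ctvvols -path /vols/ctvvols/.glusterfs -prune -o -type f -print | wc -l

# Bytes used per brick; average file size = bytes / file count:
du -sb --exclude='.glusterfs' /vols/ctvvols
</pre>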
    <br>
    <br>
    <blockquote type="cite">
      <div id="m_-5833328665865246447divtagdefaultwrapper" style="font-size:12pt;color:#000000;font-family:Calibri,Arial,Helvetica,sans-serif" dir="ltr">
        <p>What will happen when the brick becomes full?</p>
      </div>
    </blockquote></span>
    Once a brick's utilization goes beyond 90% (that is, once free space
    drops below the cluster.min-free-disk threshold, which defaults to
    10%), new files won't be created on that brick. Existing files can
    still grow, though.<span class=""><br>
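    If you want to check or tune that threshold, a quick sketch (the 15%
    value is only an example):<br>
    <pre>
# Current utilization of the brick (run on each node):
df -h /vols/ctvvols

# Raise the reserve so bricks stop taking new files earlier
# than the 10% default:
gluster volume set ctvvols cluster.min-free-disk 15%
</pre>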
    <br>
    <br>
    <blockquote type="cite">
      <div id="m_-5833328665865246447divtagdefaultwrapper" style="font-size:12pt;color:#000000;font-family:Calibri,Arial,Helvetica,sans-serif" dir="ltr">
        <p>Can I move data manually from one brick to another?</p>
      </div>
    </blockquote>
    <br></span>
    No, it is not recommended. Even though Gluster will try to find the
    file, things may break: DHT locates a file by hashing its name into a
    brick's layout range, so a manually moved file is no longer where the
    hash says it should be.<span class=""><br>
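    For what it's worth, when Gluster itself migrates a file it leaves a
    pointer file behind on the old brick. You can spot one with something
    like this (a sketch; the file path is illustrative):<br>
    <pre>
# DHT link files are empty, carry mode ---------T, and have this xattr:
getfattr -n trusted.glusterfs.dht.linkto -e text /vols/ctvvols/images/&lt;some-file&gt;
</pre>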
    <br>
    <br>
    <blockquote type="cite">
      <div id="m_-5833328665865246447divtagdefaultwrapper" style="font-size:12pt;color:#000000;font-family:Calibri,Arial,Helvetica,sans-serif" dir="ltr">
        <p>Why is rebalance not distributing data evenly across all bricks?<br>
        </p>
      </div>
    </blockquote>
    <br></span>
    Rebalance works based on the directory layout, so we need to see how
    the layout ranges are distributed across the bricks. If one of your
    bricks has a higher capacity, it will be assigned a larger share of
    the hash range.<br>
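    To see how the ranges are spread, you can dump the DHT layout xattr
    for the same directory on every brick and compare the hex ranges (a
    sketch; the directory path is illustrative):<br>
    <pre>
# Run against the brick path on each node, not the client mount:
getfattr -n trusted.glusterfs.dht -e hex /vols/ctvvols/images/&lt;some-directory&gt;
</pre>
    Note also that a plain rebalance skips a file when the target brick
    has less free space than the source; as far as I recall,
    "gluster volume rebalance ctvvols start force" migrates it anyway.<br>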
    <br>
    <blockquote type="cite"><div><div class="h5">
      <div id="m_-5833328665865246447divtagdefaultwrapper" style="font-size:12pt;color:#000000;font-family:Calibri,Arial,Helvetica,sans-serif" dir="ltr">
        <p><span style="font-family:arial,sans-serif;font-size:small;color:rgb(34,34,34)"></span></p></div></div></div></blockquote></div></blockquote><div><br></div><div> </div><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex"><div bgcolor="#FFFFFF" text="#000000"><blockquote type="cite"><div><div class="h5"><div id="m_-5833328665865246447divtagdefaultwrapper" style="font-size:12pt;color:#000000;font-family:Calibri,Arial,Helvetica,sans-serif" dir="ltr"><p><span style="font-family:arial,sans-serif;font-size:small;color:rgb(34,34,34)">That is correct. As Rafi said, the layout matters here. Can you please send across all the rebalance logs from all the 6 nodes?</span><br></p></div></div></div></blockquote></div></blockquote><div><br></div><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex"><div bgcolor="#FFFFFF" text="#000000"><blockquote type="cite"><div><div class="h5"><div id="m_-5833328665865246447divtagdefaultwrapper" style="font-size:12pt;color:#000000;font-family:Calibri,Arial,Helvetica,sans-serif" dir="ltr"><p>
        </p>
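<div>In case it helps, a sketch for collecting them, assuming a default install (the log file name follows the volume name):</div><pre>
# On each of the 6 nodes:
tar czf rebalance-log-$(hostname).tar.gz /var/log/glusterfs/ctvvols-rebalance.log
</pre><div><br></div>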
<blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex"><div bgcolor="#FFFFFF" text="#000000"><blockquote type="cite"><div><div class="h5"><div id="m_-5833328665865246447divtagdefaultwrapper" style="font-size:12pt;color:#000000;font-family:Calibri,Arial,Helvetica,sans-serif" dir="ltr">
        <p>Nodes running CentOS 7.3</p>
        <p>Gluster 3.8.11</p>
        <p><br>
        </p>
        <p>Volume info:</p>
        <div>Volume Name: ctvvols</div>
        <div>Type: Distribute</div>
        <div>Volume ID: 1ecea912-510f-4079-b437-7398e9caa0eb</div>
        <div>Status: Started</div>
        <div>Snapshot Count: 0</div>
        <div>Number of Bricks: 6</div>
        <div>Transport-type: tcp</div>
        <div>Bricks:</div>
        <div>Brick1: ctv01:/vols/ctvvols</div>
        <div>Brick2: ctv02:/vols/ctvvols</div>
        <div>Brick3: ctv03:/vols/ctvvols</div>
        <div>Brick4: ctv04:/vols/ctvvols</div>
        <div>Brick5: ctv05:/vols/ctvvols</div>
        <div>Brick6: ctv06:/vols/ctvvols</div>
        <div>Options Reconfigured:</div>
        <div>nfs.disable: on</div>
        <div>performance.readdir-ahead: on</div>
        <div>transport.address-family: inet</div>
        <div>performance.quick-read: off</div>
        <div>performance.read-ahead: off</div>
        <div>performance.io-cache: off</div>
        <div>performance.stat-prefetch: off</div>
        <div>performance.low-prio-threads: 32</div>
        <div>network.remote-dio: enable</div>
        <div>cluster.eager-lock: enable</div>
        <div>cluster.quorum-type: none</div>
        <div>cluster.server-quorum-type: server</div>
        <div>cluster.data-self-heal-algorithm: full</div>
        <div>cluster.locking-scheme: granular</div>
        <div>cluster.shd-max-threads: 8</div>
        <div>cluster.shd-wait-qlength: 10000</div>
        <div>features.shard: off</div>
        <div>user.cifs: off</div>
        <div>network.ping-timeout: 10</div>
        <div>storage.owner-uid: 36</div>
        <div>storage.owner-gid: 36</div>
        <div><br>
        </div>
        <div><br>
        </div>
        <p>Rebalance log:</p>
        <p><br>
        </p>
        <div>[2017-05-23 14:45:12.637671] I [dht-rebalance.c:2866:gf_defrag_process_dir] 0-ctvvols-dht: Migration operation on dir /31e0b341-4eeb-4b71-b280-840eba7d6940/images/690c728d-a83e-4c79-ac7d-1f3f17edf7f0 took 0.00 secs</div>
        <div>[2017-05-23 14:45:12.640043] I [MSGID: 109081] [dht-common.c:4202:dht_setxattr] 0-ctvvols-dht: fixing the layout of /31e0b341-4eeb-4b71-b280-840eba7d6940/images/091402ba-dc90-4206-848a-d73e85a1cc35</div>
        <div>[2017-05-23 14:45:12.641516] I [dht-rebalance.c:2652:gf_defrag_process_dir] 0-ctvvols-dht: migrate data called on /31e0b341-4eeb-4b71-b280-840eba7d6940/images/091402ba-dc90-4206-848a-d73e85a1cc35</div>
        <div>[2017-05-23 14:45:12.642421] I [dht-rebalance.c:2866:gf_defrag_process_dir] 0-ctvvols-dht: Migration operation on dir /31e0b341-4eeb-4b71-b280-840eba7d6940/images/091402ba-dc90-4206-848a-d73e85a1cc35 took 0.00 secs</div>
        <div>[2017-05-23 14:45:12.645610] I [MSGID: 109081] [dht-common.c:4202:dht_setxattr] 0-ctvvols-dht: fixing the layout of /31e0b341-4eeb-4b71-b280-840eba7d6940/images/be1e2276-d38f-4d90-abf5-de757dd04078</div>
        <div>[2017-05-23 14:45:12.647034] I [dht-rebalance.c:2652:gf_defrag_process_dir] 0-ctvvols-dht: migrate data called on /31e0b341-4eeb-4b71-b280-840eba7d6940/images/be1e2276-d38f-4d90-abf5-de757dd04078</div>
        <div>[2017-05-23 14:45:12.647589] I [dht-rebalance.c:2866:gf_defrag_process_dir] 0-ctvvols-dht: Migration operation on dir /31e0b341-4eeb-4b71-b280-840eba7d6940/images/be1e2276-d38f-4d90-abf5-de757dd04078 took 0.00 secs</div>
        <div>[2017-05-23 14:45:12.653291] I [dht-rebalance.c:3838:gf_defrag_start_crawl] 0-DHT: crawling file-system completed</div>
        <div>[2017-05-23 14:45:12.653323] I [dht-rebalance.c:2246:gf_defrag_task] 0-DHT: Thread wokeup. defrag-&gt;current_thread_count: 23</div>
        <div>[2017-05-23 14:45:12.653508] I [dht-rebalance.c:2246:gf_defrag_task] 0-DHT: Thread wokeup. defrag-&gt;current_thread_count: 24</div>
        <div>[2017-05-23 14:45:12.653536] I [dht-rebalance.c:2246:gf_defrag_task] 0-DHT: Thread wokeup. defrag-&gt;current_thread_count: 25</div>
        <div>[2017-05-23 14:45:12.653556] I [dht-rebalance.c:2246:gf_defrag_task] 0-DHT: Thread wokeup. defrag-&gt;current_thread_count: 26</div>
        <div>[2017-05-23 14:45:12.653580] I [dht-rebalance.c:2246:gf_defrag_task] 0-DHT: Thread wokeup. defrag-&gt;current_thread_count: 27</div>
        <div>[2017-05-23 14:45:12.653603] I [dht-rebalance.c:2246:gf_defrag_task] 0-DHT: Thread wokeup. defrag-&gt;current_thread_count: 28</div>
        <div>[2017-05-23 14:45:12.653623] I [dht-rebalance.c:2246:gf_defrag_task] 0-DHT: Thread wokeup. defrag-&gt;current_thread_count: 29</div>
        <div>[2017-05-23 14:45:12.653638] I [dht-rebalance.c:2246:gf_defrag_task] 0-DHT: Thread wokeup. defrag-&gt;current_thread_count: 30</div>
        <div>[2017-05-23 14:45:12.653659] I [dht-rebalance.c:2246:gf_defrag_task] 0-DHT: Thread wokeup. defrag-&gt;current_thread_count: 31</div>
        <div>[2017-05-23 14:45:12.653677] I [dht-rebalance.c:2246:gf_defrag_task] 0-DHT: Thread wokeup. defrag-&gt;current_thread_count: 32</div>
        <div>[2017-05-23 14:45:12.653692] I [dht-rebalance.c:2246:gf_defrag_task] 0-DHT: Thread wokeup. defrag-&gt;current_thread_count: 33</div>
        <div>[2017-05-23 14:45:12.653711] I [dht-rebalance.c:2246:gf_defrag_task] 0-DHT: Thread wokeup. defrag-&gt;current_thread_count: 34</div>
        <div>[2017-05-23 14:45:12.653723] I [dht-rebalance.c:2246:gf_defrag_task] 0-DHT: Thread wokeup. defrag-&gt;current_thread_count: 35</div>
        <div>[2017-05-23 14:45:12.653739] I [dht-rebalance.c:2246:gf_defrag_task] 0-DHT: Thread wokeup. defrag-&gt;current_thread_count: 36</div>
        <div>[2017-05-23 14:45:12.653759] I [dht-rebalance.c:2246:gf_defrag_task] 0-DHT: Thread wokeup. defrag-&gt;current_thread_count: 37</div>
        <div>[2017-05-23 14:45:12.653772] I [dht-rebalance.c:2246:gf_defrag_task] 0-DHT: Thread wokeup. defrag-&gt;current_thread_count: 38</div>
        <div>[2017-05-23 14:45:12.653789] I [dht-rebalance.c:2246:gf_defrag_task] 0-DHT: Thread wokeup. defrag-&gt;current_thread_count: 39</div>
        <div>[2017-05-23 14:45:12.653800] I [dht-rebalance.c:2246:gf_defrag_task] 0-DHT: Thread wokeup. defrag-&gt;current_thread_count: 40</div>
        <div>[2017-05-23 14:45:12.653811] I [dht-rebalance.c:2246:gf_defrag_task] 0-DHT: Thread wokeup. defrag-&gt;current_thread_count: 41</div>
        <div>[2017-05-23 14:45:12.653822] I [dht-rebalance.c:2246:gf_defrag_task] 0-DHT: Thread wokeup. defrag-&gt;current_thread_count: 42</div>
        <div>[2017-05-23 14:45:12.653836] I [dht-rebalance.c:2246:gf_defrag_task] 0-DHT: Thread wokeup. defrag-&gt;current_thread_count: 43</div>
        <div>[2017-05-23 14:45:12.653870] I [dht-rebalance.c:2246:gf_defrag_task] 0-DHT: Thread wokeup. defrag-&gt;current_thread_count: 44</div>
        <div>[2017-05-23 14:45:12.654413] I [MSGID: 109028] [dht-rebalance.c:4079:gf_defrag_status_get] 0-ctvvols-dht: Rebalance is completed. Time taken is 0.00 secs</div>
        <div>[2017-05-23 14:45:12.654428] I [MSGID: 109028] [dht-rebalance.c:4083:gf_defrag_status_get] 0-ctvvols-dht: Files migrated: 0, size: 0, lookups: 15, failures: 0, skipped: 0</div>
        <div>[2017-05-23 14:45:12.654552] W [glusterfsd.c:1327:cleanup_and_exit] (--&gt;/lib64/libpthread.so.0(+0x7dc5) [0x7ff40ff88dc5] --&gt;/usr/sbin/glusterfs(glusterfs_sigwaiter+0xe5) [0x7ff41161acd5] --&gt;/usr/sbin/glusterfs(cleanup_and_exit+0x6b) [0x7ff41161ab4b] ) 0-: received signum (15), shutting down</div>
        <div><br>
        </div>
        <br>
        <p><br>
        </p>
        <p>Appreciate your help</p>
        <p><br>
        </p>
        <div id="m_-5833328665865246447Signature"><br>
          <div class="m_-5833328665865246447ecxmoz-signature">-- <br>
            <br>
            <font color="#3366ff"><font color="#000000">Respectfully<b><br>
                </b><b>Mahdi A. Mahdi</b></font></font><font color="#3366ff"><br>
              <br>
            </font></div>
        </div>
      </div>
      <br>
      <br>
      </div></div>
    </blockquote>
    <br>
  </div>

<br>_______________________________________________<br>
Gluster-users mailing list<br>
<a href="mailto:Gluster-users@gluster.org">Gluster-users@gluster.org</a><br>
<a href="http://lists.gluster.org/mailman/listinfo/gluster-users" rel="noreferrer" target="_blank">http://lists.gluster.org/mailman/listinfo/gluster-users</a><br></blockquote></div><br></div></div>