<html>
<head>
<meta content="text/html; charset=windows-1252"
http-equiv="Content-Type">
</head>
<body bgcolor="#FFFFFF" text="#000000">
<p><br>
</p>
<br>
<div class="moz-cite-prefix">On 05/23/2017 08:53 PM, Mahdi Adnan
wrote:<br>
</div>
<blockquote
cite="mid:DM5PR01MB25064CB3AAB20DAA761F30E8FFF90@DM5PR01MB2506.prod.exchangelabs.com"
type="cite">
<meta http-equiv="Content-Type" content="text/html;
charset=windows-1252">
<style type="text/css" style="display:none;"><!-- P {margin-top:0;margin-bottom:0;} --></style>
<div id="divtagdefaultwrapper"
style="font-size:12pt;color:#000000;font-family:Calibri,Arial,Helvetica,sans-serif;"
dir="ltr">
<p>Hi,</p>
<p><br>
</p>
        <p>I have a distributed volume with 6 bricks of 5TB each,
          hosting large qcow2 VM disks (I know it's not reliable, but
          it's not important data).</p>
        <p>I started with 5 bricks and then added another one, ran the
          rebalance process, and everything went well. But now, looking
          at the bricks' free space, I found one brick is around 82%
          full while the others range from 20% to 60%.</p>
        <p>The brick with the highest utilization is hosting more qcow2
          disks than the other bricks, and whenever I start a rebalance
          it completes in 0 seconds without moving any data.</p>
</div>
</blockquote>
<br>
    What is your average file size in the cluster? And roughly how many
    files do you have?<br>
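    If you want a rough estimate, something like this on one brick
    (brick path taken from your volume info below) should do; just a
    quick sketch, and note that du on sparse qcow2 files reports
    allocated rather than apparent size:<br>
    <pre># count files on the brick, skipping gluster's internal .glusterfs tree
find /vols/ctvvols -path /vols/ctvvols/.glusterfs -prune -o -type f -print | wc -l
# total usage on the brick, to divide by the count above
du -sh /vols/ctvvols</pre>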
<br>
<br>
<blockquote
cite="mid:DM5PR01MB25064CB3AAB20DAA761F30E8FFF90@DM5PR01MB2506.prod.exchangelabs.com"
type="cite">
<div id="divtagdefaultwrapper"
style="font-size:12pt;color:#000000;font-family:Calibri,Arial,Helvetica,sans-serif;"
dir="ltr">
        <p>What will happen when the brick becomes full?</p>
</div>
</blockquote>
    Once a brick's usage goes beyond 90%, new files won't be created on
    that brick, but existing files can still grow.<br>
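    That threshold comes from the cluster.min-free-disk option, which
    defaults to 10% free space (hence the 90% mark). If you want a
    bigger safety margin for the growing qcow2 files, it can be tuned
    per volume, for example:<br>
    <pre># example: keep at least 15% free on each brick instead of the default 10%
gluster volume set ctvvols cluster.min-free-disk 15%</pre>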
<br>
<br>
<blockquote
cite="mid:DM5PR01MB25064CB3AAB20DAA761F30E8FFF90@DM5PR01MB2506.prod.exchangelabs.com"
type="cite">
<div id="divtagdefaultwrapper"
style="font-size:12pt;color:#000000;font-family:Calibri,Arial,Helvetica,sans-serif;"
dir="ltr">
        <p>Can I move data manually from one brick to another?</p>
</div>
</blockquote>
<br>
    No. It is not recommended; even though Gluster will try to find the
    file, it may break.<br>
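    If a plain rebalance keeps finishing without moving anything, the
    supported alternative to moving files by hand is a forced
    rebalance, which migrates files according to the layout even in
    cases DHT would otherwise skip (e.g. when the destination has less
    free space than the source):<br>
    <pre>gluster volume rebalance ctvvols start force
gluster volume rebalance ctvvols status</pre>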
<br>
<br>
<blockquote
cite="mid:DM5PR01MB25064CB3AAB20DAA761F30E8FFF90@DM5PR01MB2506.prod.exchangelabs.com"
type="cite">
<div id="divtagdefaultwrapper"
style="font-size:12pt;color:#000000;font-family:Calibri,Arial,Helvetica,sans-serif;"
dir="ltr">
        <p>Why is rebalance not distributing data evenly across all
          bricks?<br>
</p>
</div>
</blockquote>
<br>
    Rebalance works based on the layout, so we need to see how the
    layouts are distributed. If one of your bricks has a higher
    capacity, it will be assigned a larger layout range.<br>
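    You can inspect the layout ranges directly on the bricks: each
    directory on a brick carries a trusted.glusterfs.dht xattr holding
    the hash range assigned to that brick. For example, for one of the
    directories from your rebalance log:<br>
    <pre># run on each server against the same directory path and compare the ranges
getfattr -n trusted.glusterfs.dht -e hex \
  /vols/ctvvols/31e0b341-4eeb-4b71-b280-840eba7d6940/images</pre>
    If one brick covers a much larger slice of the hash range, or the
    large qcow2 files happen to hash into its range, it will fill up
    faster than the others.<br>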
<br>
<blockquote
cite="mid:DM5PR01MB25064CB3AAB20DAA761F30E8FFF90@DM5PR01MB2506.prod.exchangelabs.com"
type="cite">
<div id="divtagdefaultwrapper"
style="font-size:12pt;color:#000000;font-family:Calibri,Arial,Helvetica,sans-serif;"
dir="ltr">
<p><br>
</p>
      <p>Nodes running CentOS 7.3</p>
<p>Gluster 3.8.11</p>
<p><br>
</p>
      <p>Volume info:</p>
<div>Volume Name: ctvvols</div>
<div>Type: Distribute</div>
<div>Volume ID: 1ecea912-510f-4079-b437-7398e9caa0eb</div>
<div>Status: Started</div>
<div>Snapshot Count: 0</div>
<div>Number of Bricks: 6</div>
<div>Transport-type: tcp</div>
<div>Bricks:</div>
<div>Brick1: ctv01:/vols/ctvvols</div>
<div>Brick2: ctv02:/vols/ctvvols</div>
<div>Brick3: ctv03:/vols/ctvvols</div>
<div>Brick4: ctv04:/vols/ctvvols</div>
<div>Brick5: ctv05:/vols/ctvvols</div>
<div>Brick6: ctv06:/vols/ctvvols</div>
<div>Options Reconfigured:</div>
<div>nfs.disable: on</div>
<div>performance.readdir-ahead: on</div>
<div>transport.address-family: inet</div>
<div>performance.quick-read: off</div>
<div>performance.read-ahead: off</div>
<div>performance.io-cache: off</div>
<div>performance.stat-prefetch: off</div>
<div>performance.low-prio-threads: 32</div>
<div>network.remote-dio: enable</div>
<div>cluster.eager-lock: enable</div>
<div>cluster.quorum-type: none</div>
<div>cluster.server-quorum-type: server</div>
<div>cluster.data-self-heal-algorithm: full</div>
<div>cluster.locking-scheme: granular</div>
<div>cluster.shd-max-threads: 8</div>
<div>cluster.shd-wait-qlength: 10000</div>
<div>features.shard: off</div>
<div>user.cifs: off</div>
<div>network.ping-timeout: 10</div>
<div>storage.owner-uid: 36</div>
<div>storage.owner-gid: 36</div>
<div><br>
</div>
<div><br>
</div>
      <p>Rebalance log:</p>
<p><br>
</p>
<div>[2017-05-23 14:45:12.637671] I
[dht-rebalance.c:2866:gf_defrag_process_dir] 0-ctvvols-dht:
Migration operation on dir
/31e0b341-4eeb-4b71-b280-840eba7d6940/images/690c728d-a83e-4c79-ac7d-1f3f17edf7f0
took 0.00 secs</div>
<div>[2017-05-23 14:45:12.640043] I [MSGID: 109081]
[dht-common.c:4202:dht_setxattr] 0-ctvvols-dht: fixing the
layout of
/31e0b341-4eeb-4b71-b280-840eba7d6940/images/091402ba-dc90-4206-848a-d73e85a1cc35</div>
<div>[2017-05-23 14:45:12.641516] I
[dht-rebalance.c:2652:gf_defrag_process_dir] 0-ctvvols-dht:
migrate data called on
/31e0b341-4eeb-4b71-b280-840eba7d6940/images/091402ba-dc90-4206-848a-d73e85a1cc35</div>
<div>[2017-05-23 14:45:12.642421] I
[dht-rebalance.c:2866:gf_defrag_process_dir] 0-ctvvols-dht:
Migration operation on dir
/31e0b341-4eeb-4b71-b280-840eba7d6940/images/091402ba-dc90-4206-848a-d73e85a1cc35
took 0.00 secs</div>
<div>[2017-05-23 14:45:12.645610] I [MSGID: 109081]
[dht-common.c:4202:dht_setxattr] 0-ctvvols-dht: fixing the
layout of
/31e0b341-4eeb-4b71-b280-840eba7d6940/images/be1e2276-d38f-4d90-abf5-de757dd04078</div>
<div>[2017-05-23 14:45:12.647034] I
[dht-rebalance.c:2652:gf_defrag_process_dir] 0-ctvvols-dht:
migrate data called on
/31e0b341-4eeb-4b71-b280-840eba7d6940/images/be1e2276-d38f-4d90-abf5-de757dd04078</div>
<div>[2017-05-23 14:45:12.647589] I
[dht-rebalance.c:2866:gf_defrag_process_dir] 0-ctvvols-dht:
Migration operation on dir
/31e0b341-4eeb-4b71-b280-840eba7d6940/images/be1e2276-d38f-4d90-abf5-de757dd04078
took 0.00 secs</div>
<div>[2017-05-23 14:45:12.653291] I
[dht-rebalance.c:3838:gf_defrag_start_crawl] 0-DHT: crawling
file-system completed</div>
<div>[2017-05-23 14:45:12.653323] I
[dht-rebalance.c:2246:gf_defrag_task] 0-DHT: Thread wokeup.
defrag->current_thread_count: 23</div>
<div>[2017-05-23 14:45:12.653508] I
[dht-rebalance.c:2246:gf_defrag_task] 0-DHT: Thread wokeup.
defrag->current_thread_count: 24</div>
<div>[2017-05-23 14:45:12.653536] I
[dht-rebalance.c:2246:gf_defrag_task] 0-DHT: Thread wokeup.
defrag->current_thread_count: 25</div>
<div>[2017-05-23 14:45:12.653556] I
[dht-rebalance.c:2246:gf_defrag_task] 0-DHT: Thread wokeup.
defrag->current_thread_count: 26</div>
<div>[2017-05-23 14:45:12.653580] I
[dht-rebalance.c:2246:gf_defrag_task] 0-DHT: Thread wokeup.
defrag->current_thread_count: 27</div>
<div>[2017-05-23 14:45:12.653603] I
[dht-rebalance.c:2246:gf_defrag_task] 0-DHT: Thread wokeup.
defrag->current_thread_count: 28</div>
<div>[2017-05-23 14:45:12.653623] I
[dht-rebalance.c:2246:gf_defrag_task] 0-DHT: Thread wokeup.
defrag->current_thread_count: 29</div>
<div>[2017-05-23 14:45:12.653638] I
[dht-rebalance.c:2246:gf_defrag_task] 0-DHT: Thread wokeup.
defrag->current_thread_count: 30</div>
<div>[2017-05-23 14:45:12.653659] I
[dht-rebalance.c:2246:gf_defrag_task] 0-DHT: Thread wokeup.
defrag->current_thread_count: 31</div>
<div>[2017-05-23 14:45:12.653677] I
[dht-rebalance.c:2246:gf_defrag_task] 0-DHT: Thread wokeup.
defrag->current_thread_count: 32</div>
<div>[2017-05-23 14:45:12.653692] I
[dht-rebalance.c:2246:gf_defrag_task] 0-DHT: Thread wokeup.
defrag->current_thread_count: 33</div>
<div>[2017-05-23 14:45:12.653711] I
[dht-rebalance.c:2246:gf_defrag_task] 0-DHT: Thread wokeup.
defrag->current_thread_count: 34</div>
<div>[2017-05-23 14:45:12.653723] I
[dht-rebalance.c:2246:gf_defrag_task] 0-DHT: Thread wokeup.
defrag->current_thread_count: 35</div>
<div>[2017-05-23 14:45:12.653739] I
[dht-rebalance.c:2246:gf_defrag_task] 0-DHT: Thread wokeup.
defrag->current_thread_count: 36</div>
<div>[2017-05-23 14:45:12.653759] I
[dht-rebalance.c:2246:gf_defrag_task] 0-DHT: Thread wokeup.
defrag->current_thread_count: 37</div>
<div>[2017-05-23 14:45:12.653772] I
[dht-rebalance.c:2246:gf_defrag_task] 0-DHT: Thread wokeup.
defrag->current_thread_count: 38</div>
<div>[2017-05-23 14:45:12.653789] I
[dht-rebalance.c:2246:gf_defrag_task] 0-DHT: Thread wokeup.
defrag->current_thread_count: 39</div>
<div>[2017-05-23 14:45:12.653800] I
[dht-rebalance.c:2246:gf_defrag_task] 0-DHT: Thread wokeup.
defrag->current_thread_count: 40</div>
<div>[2017-05-23 14:45:12.653811] I
[dht-rebalance.c:2246:gf_defrag_task] 0-DHT: Thread wokeup.
defrag->current_thread_count: 41</div>
<div>[2017-05-23 14:45:12.653822] I
[dht-rebalance.c:2246:gf_defrag_task] 0-DHT: Thread wokeup.
defrag->current_thread_count: 42</div>
<div>[2017-05-23 14:45:12.653836] I
[dht-rebalance.c:2246:gf_defrag_task] 0-DHT: Thread wokeup.
defrag->current_thread_count: 43</div>
<div>[2017-05-23 14:45:12.653870] I
[dht-rebalance.c:2246:gf_defrag_task] 0-DHT: Thread wokeup.
defrag->current_thread_count: 44</div>
<div>[2017-05-23 14:45:12.654413] I [MSGID: 109028]
[dht-rebalance.c:4079:gf_defrag_status_get] 0-ctvvols-dht:
Rebalance is completed. Time taken is 0.00 secs</div>
<div>[2017-05-23 14:45:12.654428] I [MSGID: 109028]
[dht-rebalance.c:4083:gf_defrag_status_get] 0-ctvvols-dht:
Files migrated: 0, size: 0, lookups: 15, failures: 0, skipped:
0</div>
<div>[2017-05-23 14:45:12.654552] W
[glusterfsd.c:1327:cleanup_and_exit]
(-->/lib64/libpthread.so.0(+0x7dc5) [0x7ff40ff88dc5]
-->/usr/sbin/glusterfs(glusterfs_sigwaiter+0xe5)
[0x7ff41161acd5]
-->/usr/sbin/glusterfs(cleanup_and_exit+0x6b)
[0x7ff41161ab4b] ) 0-: received signum (15), shutting down</div>
<div><br>
</div>
<br>
<p><br>
</p>
<p>Appreciate your help</p>
<p><br>
</p>
<div id="Signature"><br>
<div class="ecxmoz-signature">-- <br>
<br>
<font color="#3366ff"><font color="#000000">Respectfully<b><br>
</b><b>Mahdi A. Mahdi</b></font></font><font
color="#3366ff"><br>
<br>
</font></div>
</div>
</div>
<br>
<fieldset class="mimeAttachmentHeader"></fieldset>
<br>
<pre wrap="">_______________________________________________
Gluster-users mailing list
<a class="moz-txt-link-abbreviated" href="mailto:Gluster-users@gluster.org">Gluster-users@gluster.org</a>
<a class="moz-txt-link-freetext" href="http://lists.gluster.org/mailman/listinfo/gluster-users">http://lists.gluster.org/mailman/listinfo/gluster-users</a></pre>
</blockquote>
<br>
</body>
</html>