<html>
<head>
<meta http-equiv="Content-Type" content="text/html; charset=us-ascii">
</head>
<body>
<style type="text/css" style="display:none;"><!-- P {margin-top:0;margin-bottom:0;} --></style>
<div id="divtagdefaultwrapper" style="font-size:12pt;color:#000000;font-family:Calibri,Arial,Helvetica,sans-serif;" dir="ltr">
<p>Well, yes and no. When I start the rebalance and check its status, it reports that the rebalance completed, but it did not actually move any data, and the volume is not evenly distributed.</p>
<p>Right now brick6 is full, and brick5 will be full in a few hours or so.</p>
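<p>A minimal way to double-check this (a sketch; the volume name is taken from the info quoted below, and note that a plain rebalance may skip a file move whose destination brick is fuller than the source, while "start force" retries those moves):</p>
<pre>gluster volume rebalance ctvvols status        # per-node files scanned / migrated / skipped
gluster volume rebalance ctvvols start force   # retries moves a plain rebalance skips
df -h /vols/ctvvols                            # run on each node to compare brick usage</pre>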
<p><br>
</p>
<div id="Signature"><br>
<div class="ecxmoz-signature">-- <br>
<br>
<font color="#3366ff"><font color="#000000">Respectfully<b><br>
</b><b>Mahdi A. Mahdi</b></font></font><font color="#3366ff"><br>
<br>
</font><font color="#3366ff"></font></div>
</div>
</div>
<hr style="display:inline-block;width:98%" tabindex="-1">
<div id="divRplyFwdMsg" dir="ltr"><font face="Calibri, sans-serif" style="font-size:11pt" color="#000000"><b>From:</b> Nithya Balachandran <nbalacha@redhat.com><br>
<b>Sent:</b> Wednesday, May 24, 2017 8:16:53 PM<br>
<b>To:</b> Mahdi Adnan<br>
<b>Cc:</b> Mohammed Rafi K C; gluster-users@gluster.org<br>
<b>Subject:</b> Re: [Gluster-users] Distributed re-balance issue</font>
<div> </div>
</div>
<div>
<div dir="ltr"><br>
<div class="gmail_extra"><br>
<div class="gmail_quote">On 24 May 2017 at 22:45, Nithya Balachandran <span dir="ltr">
<<a href="mailto:nbalacha@redhat.com" target="_blank">nbalacha@redhat.com</a>></span> wrote:<br>
<blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex">
<div dir="ltr"><br>
<div class="gmail_extra"><br>
<div class="gmail_quote"><span>On 24 May 2017 at 21:55, Mahdi Adnan <span dir="ltr">
<<a href="mailto:mahdi.adnan@outlook.com" target="_blank">mahdi.adnan@outlook.com</a>></span> wrote:<br>
<blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex">
<div>
<div id="m_-6727024028091360839m_1205392110087603124m_-2055877162185570895divtagdefaultwrapper" style="font-size:12pt;color:#000000;font-family:Calibri,Arial,Helvetica,sans-serif" dir="ltr">
<p>Hi,</p>
<p><br>
</p>
<p>Thank you for your response.</p>
<p>I have around 15 files; each is a 2 TB qcow2 image.</p>
<p>One brick reached 96%, so I started draining it with "remove-brick" and waited until its usage dropped to around 40%, then stopped the removal with "remove-brick stop".</p>
<p>The issue is that brick1 drained its data to brick6 only, and when brick6 reached around 90% I did the same thing again, and it drained its data back to brick1 only.</p>
<p><span style="font-size:12pt">Now brick6 has reached 99%; only a few gigabytes remain, and they will fill up in the next half hour or so.</span></p>
<p><span style="font-size:12pt">Attached are the logs for all 6 bricks. </span><br>
</p>
<span>
<div id="m_-6727024028091360839m_1205392110087603124m_-2055877162185570895Signature">
<br>
</div>
</span></div>
</div>
</blockquote>
</span>
<div>Hi,</div>
<div><br>
</div>
<div>Just to clarify, did you run a rebalance (gluster volume rebalance <vol> start), or did you only run remove-brick?</div>
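<div>(For clarity, the two operations are separate commands; a sketch with placeholders:)</div>
<pre>gluster volume rebalance &lt;vol&gt; start                            # redistribute data across all bricks
gluster volume remove-brick &lt;vol&gt; &lt;server&gt;:&lt;brick-path&gt; start   # drain one brick
gluster volume remove-brick &lt;vol&gt; &lt;server&gt;:&lt;brick-path&gt; status
gluster volume remove-brick &lt;vol&gt; &lt;server&gt;:&lt;brick-path&gt; stop</pre>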
<div>
<div class="m_-6727024028091360839h5">
<div><br>
</div>
</div>
</div>
</div>
</div>
</div>
</blockquote>
<div>On re-reading your original email, I see you did run a rebalance. Did it complete? Also which bricks are full at the moment?</div>
<div> </div>
<blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex">
<div dir="ltr">
<div class="gmail_extra">
<div class="gmail_quote">
<div>
<div class="m_-6727024028091360839h5">
<div></div>
<div><br>
</div>
<blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex">
<div>
<div id="m_-6727024028091360839m_1205392110087603124m_-2055877162185570895divtagdefaultwrapper" style="font-size:12pt;color:#000000;font-family:Calibri,Arial,Helvetica,sans-serif" dir="ltr">
<span>
<div id="m_-6727024028091360839m_1205392110087603124m_-2055877162185570895Signature">
<div class="m_-6727024028091360839m_1205392110087603124m_-2055877162185570895ecxmoz-signature">
-- <br>
<br>
<font color="#3366ff"><font color="#000000">Respectfully<b><br>
</b><b>Mahdi A. Mahdi</b></font></font><font color="#3366ff"><br>
<br>
</font><font color="#3366ff"></font></div>
</div>
</span></div>
<hr style="display:inline-block;width:98%">
<div id="m_-6727024028091360839m_1205392110087603124m_-2055877162185570895divRplyFwdMsg" dir="ltr">
<font face="Calibri, sans-serif" style="font-size:11pt" color="#000000"><b>From:</b> Nithya Balachandran <<a href="mailto:nbalacha@redhat.com" target="_blank">nbalacha@redhat.com</a>><br>
<b>Sent:</b> Wednesday, May 24, 2017 6:45:10 PM<br>
<b>To:</b> Mohammed Rafi K C<br>
<b>Cc:</b> Mahdi Adnan; <a href="mailto:gluster-users@gluster.org" target="_blank">
gluster-users@gluster.org</a><br>
<b>Subject:</b> Re: [Gluster-users] Distributed re-balance issue</font>
<div> </div>
</div>
<div>
<div class="m_-6727024028091360839m_1205392110087603124h5">
<div>
<div dir="ltr"><br>
<div class="gmail_extra"><br>
<div class="gmail_quote">On 24 May 2017 at 20:02, Mohammed Rafi K C <span dir="ltr">
<<a href="mailto:rkavunga@redhat.com" target="_blank">rkavunga@redhat.com</a>></span> wrote:<br>
<blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex">
<div bgcolor="#FFFFFF" text="#000000"><span>
<p><br>
</p>
<br>
<div class="m_-6727024028091360839m_1205392110087603124m_-2055877162185570895m_-5833328665865246447moz-cite-prefix">
On 05/23/2017 08:53 PM, Mahdi Adnan wrote:<br>
</div>
<blockquote type="cite">
<div id="m_-6727024028091360839m_1205392110087603124m_-2055877162185570895m_-5833328665865246447divtagdefaultwrapper" style="font-size:12pt;color:#000000;font-family:Calibri,Arial,Helvetica,sans-serif" dir="ltr">
<p>Hi,</p>
<p><br>
</p>
<p>I have a distributed volume with 6 bricks, each with 5 TB, and it's hosting large qcow2 VM disks. (I know this setup is not reliable, but it's not important data.)</p>
<p>I started with 5 bricks and then added another one and started the rebalance process. Everything went well, but now, looking at the bricks' free space, I found one brick is at around 82% while the others range from 20% to 60%.</p>
<p>The brick with the highest utilization is hosting more qcow2 disks than the other bricks, and whenever I start a rebalance it just completes in 0 seconds without moving any data.</p>
</div>
</blockquote>
<br>
</span>What is the average file size in the cluster, and roughly how many files are there?<span><br>
<br>
<br>
<blockquote type="cite">
<div id="m_-6727024028091360839m_1205392110087603124m_-2055877162185570895m_-5833328665865246447divtagdefaultwrapper" style="font-size:12pt;color:#000000;font-family:Calibri,Arial,Helvetica,sans-serif" dir="ltr">
<p>What will happen when the brick becomes full?</p>
</div>
</blockquote>
</span>Once a brick's usage goes beyond 90%, new files won't be created on that brick, but existing files can still grow.<span><br>
<br>
<br>
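The 90% threshold corresponds to the cluster.min-free-disk volume option (which defaults to keeping 10% reserved); a sketch of checking and tuning it:<br>
<pre>gluster volume get ctvvols cluster.min-free-disk
gluster volume set ctvvols cluster.min-free-disk 15%</pre>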
<blockquote type="cite">
<div id="m_-6727024028091360839m_1205392110087603124m_-2055877162185570895m_-5833328665865246447divtagdefaultwrapper" style="font-size:12pt;color:#000000;font-family:Calibri,Arial,Helvetica,sans-serif" dir="ltr">
<p>Can I move data manually from one brick to another?</p>
</div>
</blockquote>
<br>
</span>No, it is not recommended. Even though Gluster will try to find the file, it may break things.<span><br>
<br>
<br>
<blockquote type="cite">
<div id="m_-6727024028091360839m_1205392110087603124m_-2055877162185570895m_-5833328665865246447divtagdefaultwrapper" style="font-size:12pt;color:#000000;font-family:Calibri,Arial,Helvetica,sans-serif" dir="ltr">
<p>Why is rebalance not distributing data evenly across all bricks?<br>
</p>
</div>
</blockquote>
<br>
</span>Rebalance works based on the layout, so we need to see how the layout ranges are distributed. If one of your bricks has a higher capacity, it will be assigned a larger layout range.<br>
<br>
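The layout range assigned to each brick can be inspected from the brick side via the trusted.glusterfs.dht extended attribute; a sketch (run against a directory under each brick root; the path is a placeholder):<br>
<pre>getfattr -n trusted.glusterfs.dht -e hex /vols/ctvvols/&lt;some-dir&gt;   # hash range assigned to this brick</pre>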
<blockquote type="cite">
<div>
<div class="m_-6727024028091360839m_1205392110087603124m_-2055877162185570895h5">
<div id="m_-6727024028091360839m_1205392110087603124m_-2055877162185570895m_-5833328665865246447divtagdefaultwrapper" style="font-size:12pt;color:#000000;font-family:Calibri,Arial,Helvetica,sans-serif" dir="ltr">
<p><span style="font-family:arial,sans-serif;font-size:small;color:rgb(34,34,34)"></span></p>
</div>
</div>
</div>
</blockquote>
</div>
</blockquote>
<div><br>
</div>
<div> </div>
<blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex">
<div bgcolor="#FFFFFF" text="#000000">
<blockquote type="cite">
<div>
<div class="m_-6727024028091360839m_1205392110087603124m_-2055877162185570895h5">
<div id="m_-6727024028091360839m_1205392110087603124m_-2055877162185570895m_-5833328665865246447divtagdefaultwrapper" style="font-size:12pt;color:#000000;font-family:Calibri,Arial,Helvetica,sans-serif" dir="ltr">
<p><span style="font-family:arial,sans-serif;font-size:small;color:rgb(34,34,34)">That is correct. As Rafi said, the layout matters here. Can you please send across all the rebalance logs from all the 6 nodes?</span><br>
</p>
</div>
</div>
</div>
</blockquote>
</div>
</blockquote>
<div><br>
</div>
<blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex">
<div bgcolor="#FFFFFF" text="#000000">
<blockquote type="cite">
<div>
<div class="m_-6727024028091360839m_1205392110087603124m_-2055877162185570895h5">
<div id="m_-6727024028091360839m_1205392110087603124m_-2055877162185570895m_-5833328665865246447divtagdefaultwrapper" style="font-size:12pt;color:#000000;font-family:Calibri,Arial,Helvetica,sans-serif" dir="ltr">
<p></p>
<p>Nodes running CentOS 7.3</p>
<p>Gluster 3.8.11</p>
<p><br>
</p>
<p>Volume info;</p>
<div>Volume Name: ctvvols</div>
<div>Type: Distribute</div>
<div>Volume ID: 1ecea912-510f-4079-b437-7398e9caa0eb</div>
<div>Status: Started</div>
<div>Snapshot Count: 0</div>
<div>Number of Bricks: 6</div>
<div>Transport-type: tcp</div>
<div>Bricks:</div>
<div>Brick1: ctv01:/vols/ctvvols</div>
<div>Brick2: ctv02:/vols/ctvvols</div>
<div>Brick3: ctv03:/vols/ctvvols</div>
<div>Brick4: ctv04:/vols/ctvvols</div>
<div>Brick5: ctv05:/vols/ctvvols</div>
<div>Brick6: ctv06:/vols/ctvvols</div>
<div>Options Reconfigured:</div>
<div>nfs.disable: on</div>
<div>performance.readdir-ahead: on</div>
<div>transport.address-family: inet</div>
<div>performance.quick-read: off</div>
<div>performance.read-ahead: off</div>
<div>performance.io-cache: off</div>
<div>performance.stat-prefetch: off</div>
<div>performance.low-prio-threads: 32</div>
<div>network.remote-dio: enable</div>
<div>cluster.eager-lock: enable</div>
<div>cluster.quorum-type: none</div>
<div>cluster.server-quorum-type: server</div>
<div>cluster.data-self-heal-algorithm: full</div>
<div>cluster.locking-scheme: granular</div>
<div>cluster.shd-max-threads: 8</div>
<div>cluster.shd-wait-qlength: 10000</div>
<div>features.shard: off</div>
<div>user.cifs: off</div>
<div>network.ping-timeout: 10</div>
<div>storage.owner-uid: 36</div>
<div>storage.owner-gid: 36</div>
<div><br>
</div>
<div><br>
</div>
<p>Rebalance log:</p>
<p><br>
</p>
<div>[2017-05-23 14:45:12.637671] I [dht-rebalance.c:2866:gf_defrag_process_dir] 0-ctvvols-dht: Migration operation on dir /31e0b341-4eeb-4b71-b280-840eba7d6940/images/690c728d-a83e-4c79-ac7d-1f3f17edf7f0 took 0.00 secs</div>
<div>[2017-05-23 14:45:12.640043] I [MSGID: 109081] [dht-common.c:4202:dht_setxattr] 0-ctvvols-dht: fixing the layout of /31e0b341-4eeb-4b71-b280-840eba7d6940/images/091402ba-dc90-4206-848a-d73e85a1cc35</div>
<div>[2017-05-23 14:45:12.641516] I [dht-rebalance.c:2652:gf_defrag_process_dir] 0-ctvvols-dht: migrate data called on /31e0b341-4eeb-4b71-b280-840eba7d6940/images/091402ba-dc90-4206-848a-d73e85a1cc35</div>
<div>[2017-05-23 14:45:12.642421] I [dht-rebalance.c:2866:gf_defrag_process_dir] 0-ctvvols-dht: Migration operation on dir /31e0b341-4eeb-4b71-b280-840eba7d6940/images/091402ba-dc90-4206-848a-d73e85a1cc35 took 0.00 secs</div>
<div>[2017-05-23 14:45:12.645610] I [MSGID: 109081] [dht-common.c:4202:dht_setxattr] 0-ctvvols-dht: fixing the layout of /31e0b341-4eeb-4b71-b280-840eba7d6940/images/be1e2276-d38f-4d90-abf5-de757dd04078</div>
<div>[2017-05-23 14:45:12.647034] I [dht-rebalance.c:2652:gf_defrag_process_dir] 0-ctvvols-dht: migrate data called on /31e0b341-4eeb-4b71-b280-840eba7d6940/images/be1e2276-d38f-4d90-abf5-de757dd04078</div>
<div>[2017-05-23 14:45:12.647589] I [dht-rebalance.c:2866:gf_defrag_process_dir] 0-ctvvols-dht: Migration operation on dir /31e0b341-4eeb-4b71-b280-840eba7d6940/images/be1e2276-d38f-4d90-abf5-de757dd04078 took 0.00 secs</div>
<div>[2017-05-23 14:45:12.653291] I [dht-rebalance.c:3838:gf_defrag_start_crawl] 0-DHT: crawling file-system completed</div>
<div>[2017-05-23 14:45:12.653323] I [dht-rebalance.c:2246:gf_defrag_task] 0-DHT: Thread wokeup. defrag->current_thread_count: 23</div>
<div>[2017-05-23 14:45:12.653508] I [dht-rebalance.c:2246:gf_defrag_task] 0-DHT: Thread wokeup. defrag->current_thread_count: 24</div>
<div>[2017-05-23 14:45:12.653536] I [dht-rebalance.c:2246:gf_defrag_task] 0-DHT: Thread wokeup. defrag->current_thread_count: 25</div>
<div>[2017-05-23 14:45:12.653556] I [dht-rebalance.c:2246:gf_defrag_task] 0-DHT: Thread wokeup. defrag->current_thread_count: 26</div>
<div>[2017-05-23 14:45:12.653580] I [dht-rebalance.c:2246:gf_defrag_task] 0-DHT: Thread wokeup. defrag->current_thread_count: 27</div>
<div>[2017-05-23 14:45:12.653603] I [dht-rebalance.c:2246:gf_defrag_task] 0-DHT: Thread wokeup. defrag->current_thread_count: 28</div>
<div>[2017-05-23 14:45:12.653623] I [dht-rebalance.c:2246:gf_defrag_task] 0-DHT: Thread wokeup. defrag->current_thread_count: 29</div>
<div>[2017-05-23 14:45:12.653638] I [dht-rebalance.c:2246:gf_defrag_task] 0-DHT: Thread wokeup. defrag->current_thread_count: 30</div>
<div>[2017-05-23 14:45:12.653659] I [dht-rebalance.c:2246:gf_defrag_task] 0-DHT: Thread wokeup. defrag->current_thread_count: 31</div>
<div>[2017-05-23 14:45:12.653677] I [dht-rebalance.c:2246:gf_defrag_task] 0-DHT: Thread wokeup. defrag->current_thread_count: 32</div>
<div>[2017-05-23 14:45:12.653692] I [dht-rebalance.c:2246:gf_defrag_task] 0-DHT: Thread wokeup. defrag->current_thread_count: 33</div>
<div>[2017-05-23 14:45:12.653711] I [dht-rebalance.c:2246:gf_defrag_task] 0-DHT: Thread wokeup. defrag->current_thread_count: 34</div>
<div>[2017-05-23 14:45:12.653723] I [dht-rebalance.c:2246:gf_defrag_task] 0-DHT: Thread wokeup. defrag->current_thread_count: 35</div>
<div>[2017-05-23 14:45:12.653739] I [dht-rebalance.c:2246:gf_defrag_task] 0-DHT: Thread wokeup. defrag->current_thread_count: 36</div>
<div>[2017-05-23 14:45:12.653759] I [dht-rebalance.c:2246:gf_defrag_task] 0-DHT: Thread wokeup. defrag->current_thread_count: 37</div>
<div>[2017-05-23 14:45:12.653772] I [dht-rebalance.c:2246:gf_defrag_task] 0-DHT: Thread wokeup. defrag->current_thread_count: 38</div>
<div>[2017-05-23 14:45:12.653789] I [dht-rebalance.c:2246:gf_defrag_task] 0-DHT: Thread wokeup. defrag->current_thread_count: 39</div>
<div>[2017-05-23 14:45:12.653800] I [dht-rebalance.c:2246:gf_defrag_task] 0-DHT: Thread wokeup. defrag->current_thread_count: 40</div>
<div>[2017-05-23 14:45:12.653811] I [dht-rebalance.c:2246:gf_defrag_task] 0-DHT: Thread wokeup. defrag->current_thread_count: 41</div>
<div>[2017-05-23 14:45:12.653822] I [dht-rebalance.c:2246:gf_defrag_task] 0-DHT: Thread wokeup. defrag->current_thread_count: 42</div>
<div>[2017-05-23 14:45:12.653836] I [dht-rebalance.c:2246:gf_defrag_task] 0-DHT: Thread wokeup. defrag->current_thread_count: 43</div>
<div>[2017-05-23 14:45:12.653870] I [dht-rebalance.c:2246:gf_defrag_task] 0-DHT: Thread wokeup. defrag->current_thread_count: 44</div>
<div>[2017-05-23 14:45:12.654413] I [MSGID: 109028] [dht-rebalance.c:4079:gf_defrag_status_get] 0-ctvvols-dht: Rebalance is completed. Time taken is 0.00 secs</div>
<div>[2017-05-23 14:45:12.654428] I [MSGID: 109028] [dht-rebalance.c:4083:gf_defrag_status_get] 0-ctvvols-dht: Files migrated: 0, size: 0, lookups: 15, failures: 0, skipped: 0</div>
<div>[2017-05-23 14:45:12.654552] W [glusterfsd.c:1327:cleanup_and_exit] (-->/lib64/libpthread.so.0(+0x7dc5) [0x7ff40ff88dc5] -->/usr/sbin/glusterfs(glusterfs_sigwaiter+0xe5) [0x7ff41161acd5] -->/usr/sbin/glusterfs(cleanup_and_exit+0x6b) [0x7ff41161ab4b] ) 0-: received signum (15), shutting down</div>
<div><br>
</div>
<br>
<p><br>
</p>
<p>I appreciate your help.</p>
<p><br>
</p>
<div id="m_-6727024028091360839m_1205392110087603124m_-2055877162185570895m_-5833328665865246447Signature">
<br>
<div class="m_-6727024028091360839m_1205392110087603124m_-2055877162185570895m_-5833328665865246447ecxmoz-signature">
-- <br>
<br>
<font color="#3366ff"><font color="#000000">Respectfully<b><br>
</b><b>Mahdi A. Mahdi</b></font></font><font color="#3366ff"><br>
<br>
</font></div>
</div>
</div>
<br>
<fieldset class="m_-6727024028091360839m_1205392110087603124m_-2055877162185570895m_-5833328665865246447mimeAttachmentHeader">
</fieldset> <br>
</div>
</div>
<pre>_______________________________________________
Gluster-users mailing list
<a class="m_-6727024028091360839m_1205392110087603124m_-2055877162185570895m_-5833328665865246447moz-txt-link-abbreviated" href="mailto:Gluster-users@gluster.org" target="_blank">Gluster-users@gluster.org</a>
<a class="m_-6727024028091360839m_1205392110087603124m_-2055877162185570895m_-5833328665865246447moz-txt-link-freetext" href="http://lists.gluster.org/mailman/listinfo/gluster-users" target="_blank">http://lists.gluster.org/mailman/listinfo/gluster-users</a></pre>
</blockquote>
<br>
</div>
<br>
</blockquote>
</div>
<br>
</div>
</div>
</div>
</div>
</div>
</div>
</blockquote>
</div>
</div>
</div>
<br>
</div>
</div>
</blockquote>
</div>
<br>
</div>
</div>
</div>
</body>
</html>