<div dir="ltr" data-setdir="false">When you run '<span><span style="font-family: Helvetica Neue, Helvetica, Arial, sans-serif;">gluster vol rebalance tank status' do you still see "in progress" ?</span></span></div><div dir="ltr" data-setdir="false"><span><span style="font-family: Helvetica Neue, Helvetica, Arial, sans-serif;"><br></span></span></div><div dir="ltr" data-setdir="false"><span><span style="font-family: Helvetica Neue, Helvetica, Arial, sans-serif;">As far as I know , you should run this command only once and there is no need to run it on both nodes.</span></span></div><div dir="ltr" data-setdir="false"><span><span style="font-family: Helvetica Neue, Helvetica, Arial, sans-serif;"><br></span></span></div><div dir="ltr" data-setdir="false"><span><span style="font-family: Helvetica Neue, Helvetica, Arial, sans-serif;">Best Regards,</span></span></div><div dir="ltr" data-setdir="false"><span><span style="font-family: Helvetica Neue, Helvetica, Arial, sans-serif;">Strahil Nikolov</span></span></div><div><br></div>
</div><div id="ydp10285554yahoo_quoted_7589484398" class="ydp10285554yahoo_quoted">
<div style="font-family:'Helvetica Neue', Helvetica, Arial, sans-serif;font-size:13px;color:#26282a;">
<div>
В събота, 31 август 2019 г., 20:29:06 ч. Гринуич+3, Herb Burnswell <herbert.burnswell@gmail.com> написа:
</div>
<div><br></div>
<div><br></div>
<div><div id="ydp10285554yiv8253325369"><div><div dir="ltr">Thank you for the reply.<div><br clear="none"></div><div>I started a rebalance with force on serverA as suggested. Now I see 'activity' on that node:</div><div><br clear="none"></div><div># gluster vol rebalance tank status<br clear="none"> Node Rebalanced-files size scanned failures skipped status run time in h:m:s<br clear="none"> --------- ----------- ----------- ----------- ----------- ----------- ------------ --------------<br clear="none"> localhost 6143 6.1GB 9542 0 0 in progress 0:4:5<br clear="none"> serverB 0 0Bytes 7 0 0 in progress 0:4:5<br clear="none">volume rebalance: tank: success<br clear="none"></div><div><br clear="none"></div><div>But I am not seeing any activity on serverB. Is this expected? Does the rebalance need to run on each node even though it says both nodes are 'in progress'?</div><div><br clear="none"></div><div>Thanks,</div><div><br clear="none"></div><div>HB</div></div><br clear="none"><div class="ydp10285554yiv8253325369yqt9027495030" id="ydp10285554yiv8253325369yqt30898"><div class="ydp10285554yiv8253325369gmail_quote"><div class="ydp10285554yiv8253325369gmail_attr" dir="ltr">On Sat, Aug 31, 2019 at 4:18 AM Strahil <<a shape="rect" href="mailto:hunter86_bg@yahoo.com" rel="nofollow" target="_blank">hunter86_bg@yahoo.com</a>> wrote:<br clear="none"></div><blockquote class="ydp10285554yiv8253325369gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex;"><p dir="ltr">The rebalance status show 0 Bytes.</p>
<p dir="ltr">Maybe you should try with the 'gluster volume rebalance <VOLNAME> start force' ?</p>
<p dir="ltr">Best Regards,<br clear="none">
Strahil Nikolov</p>
<p dir="ltr">Source: <a shape="rect" href="https://docs.gluster.org/en/latest/Administrator%20Guide/Managing%20Volumes/#rebalancing-volumes" rel="nofollow" target="_blank">https://docs.gluster.org/en/latest/Administrator%20Guide/Managing%20Volumes/#rebalancing-volumes</a></p>
<div class="ydp10285554yiv8253325369gmail-m_6576207768150789355quote">On Aug 30, 2019 20:04, Herb Burnswell <<a shape="rect" href="mailto:herbert.burnswell@gmail.com" rel="nofollow" target="_blank">herbert.burnswell@gmail.com</a>> wrote:<br clear="none"><blockquote class="ydp10285554yiv8253325369gmail-m_6576207768150789355quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex;"><div dir="ltr">All,<div><br clear="none"></div><div>RHEL 7.5</div><div>Gluster 3.8.15</div><div>2 Nodes: serverA & serverB</div><div><br clear="none"></div><div>I am not deeply knowledgeable about Gluster and it's administration but we have a 2 node cluster that's been running for about a year and a half. All has worked fine to date. Our main volume has consisted of two 60TB bricks on each of the cluster nodes. As we reached capacity on the volume we needed to expand. So, we've added four new 60TB bricks to each of the cluster nodes. The bricks are now seen, and the total size of the volume is as expected:</div><div><br clear="none"></div><div># gluster vol status tank<br clear="none">Status of volume: tank<br clear="none">Gluster process TCP Port RDMA Port Online Pid<br clear="none">------------------------------------------------------------------------------<br clear="none">Brick serverA:/gluster_bricks/data1 49162 0 Y 20318<br clear="none">Brick serverB:/gluster_bricks/data1 49166 0 Y 3432 <br clear="none">Brick serverA:/gluster_bricks/data2 49163 0 Y 20323<br clear="none">Brick serverB:/gluster_bricks/data2 49167 0 Y 3435 <br clear="none">Brick serverA:/gluster_bricks/data3 49164 0 Y 4625 <br clear="none">Brick serverA:/gluster_bricks/data4 49165 0 Y 4644 <br clear="none">Brick serverA:/gluster_bricks/data5 49166 0 Y 5088 <br clear="none">Brick serverA:/gluster_bricks/data6 49167 0 Y 5128 <br clear="none">Brick serverB:/gluster_bricks/data3 49168 0 Y 22314<br clear="none">Brick serverB:/gluster_bricks/data4 49169 0 Y 22345<br clear="none">Brick serverB:/gluster_bricks/data5 49170 0 Y 22889<br clear="none">Brick serverB:/gluster_bricks/data6 49171 0 Y 22932<br clear="none">Self-heal Daemon on localhost N/A N/A Y 22981<br clear="none">Self-heal Daemon on <a shape="rect" href="http://serverA.example.com" rel="nofollow" target="_blank">serverA.example.com</a> N/A N/A Y 6202 <br clear="none"></div><div><br clear="none"></div><div>After adding the bricks we ran a rebalance from serverA as:</div><div><br clear="none"></div><div># gluster volume rebalance tank start</div><div><br clear="none"></div><div>The rebalance completed:</div><div><br clear="none"></div><div># gluster volume rebalance tank status<br clear="none"> Node Rebalanced-files size scanned failures skipped status run time in h:m:s<br clear="none"> --------- ----------- ----------- ----------- ----------- ----------- ------------ --------------<br clear="none"> localhost 0 0Bytes 0 0 0 completed 3:7:10<br clear="none"> <a shape="rect" href="http://serverA.example.com" rel="nofollow" target="_blank">serverA.example.com</a> 0 0Bytes 0 0 0 completed 0:0:0<br clear="none">volume rebalance: tank: success<br clear="none"></div><div><br clear="none"></div><div>However, when I run a df, the two original bricks still show all of the consumed space (this is the same on both nodes):</div><div><br clear="none"></div><div># df -hP<br clear="none">Filesystem Size Used Avail Use% Mounted on<br clear="none">/dev/mapper/vg0-root 5.0G 625M 4.4G 13% /<br clear="none">devtmpfs 32G 0 32G 0% /dev<br clear="none">tmpfs 32G 0 32G 0% 
/dev/shm<br clear="none">tmpfs 32G 67M 32G 1% /run<br clear="none">tmpfs 32G 0 32G 0% /sys/fs/cgroup<br clear="none">/dev/mapper/vg0-usr 20G 3.6G 17G 18% /usr<br clear="none">/dev/md126 1014M 228M 787M 23% /boot<br clear="none">/dev/mapper/vg0-home 5.0G 37M 5.0G 1% /home<br clear="none">/dev/mapper/vg0-opt 5.0G 37M 5.0G 1% /opt<br clear="none">/dev/mapper/vg0-tmp 5.0G 33M 5.0G 1% /tmp<br clear="none">/dev/mapper/vg0-var 20G 2.6G 18G 13% /var<br clear="none">/dev/mapper/gluster_vg-gluster_lv1_data 60T 59T 1.1T 99% /gluster_bricks/data1<br clear="none">/dev/mapper/gluster_vg-gluster_lv2_data 60T 58T 1.3T 98% /gluster_bricks/data2<br clear="none">/dev/mapper/gluster_vg-gluster_lv3_data 60T 451M 60T 1% /gluster_bricks/data3<br clear="none">/dev/mapper/gluster_vg-gluster_lv4_data 60T 451M 60T 1% /gluster_bricks/data4<br clear="none">/dev/mapper/gluster_vg-gluster_lv5_data 60T 451M 60T 1% /gluster_bricks/data5<br clear="none">/dev/mapper/gluster_vg-gluster_lv6_data 60T 451M 60T 1% /gluster_bricks/data6<br clear="none">localhost:/tank 355T 116T 239T 33% /mnt/tank<br clear="none"></div><div><br clear="none"></div><div>We were thinking that the used space would be distributed across the now 6 bricks after rebalance. Is that not what a rebalance does? Is this expected behavior?</div><div><br clear="none"></div><div>Can anyone provide some guidance as to what the behavior here and if there is anything that we need to do at this point?</div><div><br clear="none"></div><div>Thanks in advance,</div><div><br clear="none"></div><div>HB</div></div>
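P.S. Once the rebalance reports "completed" on every node, a quick sanity check is to compare per-brick usage again. A minimal sketch, assuming the bricks are mounted under /gluster_bricks/data1 .. data6 as in your df output:

# df -h /gluster_bricks/data*   # paths assumed from the earlier df output

If the migration worked, the 99%/98% usage on data1/data2 should gradually drop as files spread over data3-data6.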
_______________________________________________
Gluster-users mailing list
Gluster-users@gluster.org
https://lists.gluster.org/mailman/listinfo/gluster-users