<div dir="ltr"><div dir="ltr"><img class="gmail-ajT" src="https://ssl.gstatic.com/ui/v1/icons/mail/images/cleardot.gif"><br></div><br><div class="gmail_quote"><div dir="ltr" class="gmail_attr">On Wed, 11 Sep 2019 at 09:47, Strahil &lt;<a href="mailto:hunter86_bg@yahoo.com">hunter86_bg@yahoo.com</a>&gt; wrote:<br></div><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex"><p dir="ltr">Hi Nithya,<br></p>
<p dir="ltr">I just reminded about your previous  e-mail  which left me with the impression that old volumes need that.<br>
This is the one 1 mean:</p>
<div align="left"><p dir="ltr">&gt;It looks like this is a replicate volume. If &gt;that is the case then yes, you are &gt;running an old version of Gluster for &gt;which this was the default </p></div></blockquote><div><br></div><div>Hi Strahil,</div><div><br></div><div>I&#39;m providing a little more detail here which I hope will explain things.</div><div>Rebalance was always a volume wide operation - a <font size="1" face="times new roman, serif"><b style="">rebalance start</b></font> operation will start rebalance processes on all nodes of the volume. However, different processes would behave differently. In earlier releases, all nodes would crawl the bricks and update the directory layouts. However, only one node in each replica/disperse set would actually migrate files,so the rebalance status would only show one node doing any &quot;work&quot; (scanning, rebalancing etc). However, this one node will process all the files in its replica sets. Rerunning rebalance on other nodes would make no difference as it will always be the same node that ends up migrating files.</div><div><div>So for instance, for a replicate volume with server1:/brick1, server2:/brick2 and server3:/brick3 in that order, only the rebalance process on server1 would migrate files. In newer releases, all 3 nodes would migrate files.</div><div><br></div></div><div>The rebalance status does not capture the directory operations of fixing layouts which is why it looks like the other nodes are not doing anything.<br></div><div><br></div><div>Hope this helps.<br></div><div><br></div><div>Regards,</div><div>Nithya</div><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex"><div align="left"><p dir="ltr">behaviour. <br>
</p>
</div><p dir="ltr">&gt;<br>
&gt;<br>
</p>
<div align="left"><p dir="ltr">&gt;Regards,<br>
</p>
</div><p dir="ltr">&gt;<br>
</p>
<div align="left"><p dir="ltr">&gt;Nithya<br>
</p>
</div><p dir="ltr"><br>
Best Regards,<br>
Strahil Nikolov</p>
<div class="gmail-m_4678280208324842274quote">On Sep 9, 2019 06:36, Nithya Balachandran &lt;<a href="mailto:nbalacha@redhat.com" target="_blank">nbalacha@redhat.com</a>&gt; wrote:<br type="attribution"><blockquote class="gmail-m_4678280208324842274quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex"><div dir="ltr"><div dir="ltr"><br></div><br><div class="gmail-m_4678280208324842274elided-text"><div dir="ltr">On Sat, 7 Sep 2019 at 00:03, Strahil Nikolov &lt;<a href="mailto:hunter86_bg@yahoo.com" target="_blank">hunter86_bg@yahoo.com</a>&gt; wrote:<br></div><blockquote style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex"><div><div style="font-family:&quot;courier new&quot;,courier,monaco,monospace,sans-serif;font-size:16px"><div></div>
        <div dir="ltr">As it was mentioned, you might have to run rebalance on the other node - but it is better to wait this node is over.</div><div dir="ltr"><br></div></div></div></blockquote><div><br></div><div>Hi Strahil,</div><div><br></div><div>Rebalance does not need to be run on the other node - the operation is a volume wide one . Only a single node per replica set would migrate files in the version used in this case .</div><div><br></div><div>Regards,</div><div>Nithya</div><div><br></div><blockquote style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex"><div><div style="font-family:&quot;courier new&quot;,courier,monaco,monospace,sans-serif;font-size:16px"><div dir="ltr"></div><div dir="ltr">Best Regards,</div><div dir="ltr">Strahil Nikolov<br></div><div><br></div>
        
        </div><div>
            <div style="font-family:&quot;helvetica neue&quot;,helvetica,arial,sans-serif;font-size:13px;color:rgb(38,40,42)">
                
                <div>
                    On Friday, 6 September 2019 at 15:29:20 GMT+3, Herb Burnswell &lt;<a href="mailto:herbert.burnswell@gmail.com" target="_blank">herbert.burnswell@gmail.com</a>&gt; wrote:
                </div>
                <div><br></div>
                <div><br></div>
                <div><div><div><div dir="ltr"><div dir="ltr"><br clear="none"></div><br clear="none"><div><div dir="ltr">On Thu, Sep 5, 2019 at 9:56 PM Nithya Balachandran &lt;<a shape="rect" href="mailto:nbalacha@redhat.com" target="_blank">nbalacha@redhat.com</a>&gt; wrote:<br clear="none"></div><blockquote style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex"><div dir="ltr"><div dir="ltr"><br clear="none"></div><br clear="none"><div><div dir="ltr">On Thu, 5 Sep 2019 at 02:41, Herb Burnswell &lt;<a shape="rect" href="mailto:herbert.burnswell@gmail.com" target="_blank">herbert.burnswell@gmail.com</a>&gt; wrote:<br clear="none"></div><blockquote style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex"><div dir="ltr">Thanks for the replies.  The rebalance is running and the brick percentages are not adjusting as expected:<div><br clear="none"></div><div># df -hP |grep data<br clear="none">/dev/mapper/gluster_vg-gluster_lv1_data   60T   49T   11T  83% /gluster_bricks/data1<br clear="none">/dev/mapper/gluster_vg-gluster_lv2_data   60T   49T   11T  83% /gluster_bricks/data2<br clear="none">/dev/mapper/gluster_vg-gluster_lv3_data   60T  4.6T   55T   8% /gluster_bricks/data3<br clear="none">/dev/mapper/gluster_vg-gluster_lv4_data   60T  4.6T   55T   8% /gluster_bricks/data4<br clear="none">/dev/mapper/gluster_vg-gluster_lv5_data   60T  4.6T   55T   8% /gluster_bricks/data5<br clear="none">/dev/mapper/gluster_vg-gluster_lv6_data   60T  4.6T   55T   8% /gluster_bricks/data6<br clear="none"></div><div><br clear="none"></div><div>At the current pace it looks like this will continue to run for another 5-6 days.</div><div><br clear="none"></div><div>I appreciate the guidance..</div><div><br clear="none"></div></div></blockquote><div><br clear="none"></div><div>What is the output of the rebalance status command?</div><div>Can you check if there are any errors in the rebalance logs on the node  on which you see rebalance activity?</div><div>If there are a lot of small files on the volume, the rebalance is expected to take time.</div><div><br clear="none"></div><div>Regards,</div><div>Nithya</div></div></div></blockquote><div><br clear="none"></div><div>My apologies, that was a typo.  I meant to say:</div><div><br clear="none"></div><div>&quot;The rebalance is running and the brick percentages are NOW adjusting as expected&quot;</div><div><br clear="none"></div><div>I did expect the rebalance to take several days.  The rebalance log is not showing any errors.  
Status output:</div><div><br clear="none"></div><div># gluster vol rebalance tank status<br clear="none">                                    Node Rebalanced-files          size       scanned      failures       skipped               status  run time in h:m:s<br clear="none">                               ---------      -----------   -----------   -----------   -----------   -----------         ------------     --------------<br clear="none">                               localhost          1251320        35.5TB       2079527             0             0          in progress      139:9:46<br clear="none">                               serverB                         0        0Bytes             7             0             0            completed       63:47:55<br clear="none">volume rebalance: tank: success<br clear="none"></div><div><br clear="none"></div><div>Thanks again for the guidance.</div><div><div><br clear="none"></div><div>HB</div><div><br clear="none"></div><div> </div><blockquote style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex"><div dir="ltr"><div><blockquote style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex"><div dir="ltr"><div></div><div><br clear="none"></div></div><br clear="none"><div><div dir="ltr">On Mon, Sep 2, 2019 at 9:08 PM Nithya Balachandran &lt;<a shape="rect" href="mailto:nbalacha@redhat.com" target="_blank">nbalacha@redhat.com</a>&gt; wrote:<br clear="none"></div><blockquote style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex"><div dir="ltr"><div dir="ltr"><br clear="none"></div><br clear="none"><div><div dir="ltr">On Sat, 31 Aug 2019 at 22:59, Herb Burnswell &lt;<a shape="rect" href="mailto:herbert.burnswell@gmail.com" target="_blank">herbert.burnswell@gmail.com</a>&gt; wrote:<br clear="none"></div><blockquote style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex"><div dir="ltr">Thank you for the reply.<div><br clear="none"></div><div>I started a rebalance with force on serverA as suggested.  Now I see &#39;activity&#39; on that node:</div><div><br clear="none"></div><div># gluster vol rebalance tank status<br clear="none">                                    Node Rebalanced-files          size       scanned      failures       skipped               status  run time in h:m:s<br clear="none">                               ---------      -----------   -----------   -----------   -----------   -----------         ------------     --------------<br clear="none">                               localhost             6143         6.1GB          9542             0             0          in progress        0:4:5<br clear="none">                               serverB                  0        0Bytes             7             0             0          in progress        0:4:5<br clear="none">volume rebalance: tank: success<br clear="none"></div><div><br clear="none"></div><div>But I am not seeing any activity on serverB.  Is this expected?  Does the rebalance need to run on each node even though it says both nodes are &#39;in progress&#39;?</div><div><br clear="none"></div></div></blockquote><div><br clear="none"></div><div>It looks like this is a replicate volume. If that is the case then yes, you are running an old version of Gluster for which this was the default behaviour. 
</div><div><br clear="none"></div><div>Regards,</div><div>Nithya</div><div><br clear="none"></div><blockquote style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex"><div dir="ltr"><div></div><div>Thanks,</div><div><br clear="none"></div><div>HB</div></div><br clear="none"><div><div dir="ltr">On Sat, Aug 31, 2019 at 4:18 AM Strahil &lt;<a shape="rect" href="mailto:hunter86_bg@yahoo.com" target="_blank">hunter86_bg@yahoo.com</a>&gt; wrote:<br clear="none"></div><blockquote style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex"><p dir="ltr">The rebalance status show 0 Bytes.</p>
<p dir="ltr">Maybe you should try with the &#39;gluster volume rebalance &lt;VOLNAME&gt; start force&#39; ?</p>
<p dir="ltr">Best Regards,<br clear="none">
Strahil Nikolov</p>
<p dir="ltr">Source: <a shape="rect" href="https://docs.gluster.org/en/latest/Administrator%20Guide/Managing%20Volumes/#rebalancing-volumes" target="_blank"></a><a href="https://docs.gluster.org/en/latest/Administrator%20Guide/Managing%20Volumes/#rebalancing-volumes" target="_blank">https://docs.gluster.org/en/latest/Administrator%20Guide/Managing%20Volumes/#rebalancing-volumes</a></p>
<div>On Aug 30, 2019 20:04, Herb Burnswell &lt;<a shape="rect" href="mailto:herbert.burnswell@gmail.com" target="_blank">herbert.burnswell@gmail.com</a>&gt; wrote:<br clear="none"><blockquote style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex"><div dir="ltr">All,<div><br clear="none"></div><div>RHEL 7.5</div><div>Gluster 3.8.15</div><div>2 Nodes: serverA &amp; serverB</div><div><br clear="none"></div><div>I am not deeply knowledgeable about Gluster and it&#39;s administration but we have a 2 node cluster that&#39;s been running for about a year and a half.  All has worked fine to date.  Our main volume has consisted of two 60TB bricks on each of the cluster nodes.  As we reached capacity on the volume we needed to expand.  So, we&#39;ve added four new 60TB bricks to each of the cluster nodes.  The bricks are now seen, and the total size of the volume is as expected:</div><div><br clear="none"></div><div># gluster vol status tank<br clear="none">Status of volume: tank<br clear="none">Gluster process                             TCP Port  RDMA Port  Online  Pid<br clear="none">------------------------------------------------------------------------------<br clear="none">Brick serverA:/gluster_bricks/data1       49162     0          Y       20318<br clear="none">Brick serverB:/gluster_bricks/data1       49166     0          Y       3432 <br clear="none">Brick serverA:/gluster_bricks/data2       49163     0          Y       20323<br clear="none">Brick serverB:/gluster_bricks/data2       49167     0          Y       3435 <br clear="none">Brick serverA:/gluster_bricks/data3       49164     0          Y       4625 <br clear="none">Brick serverA:/gluster_bricks/data4       49165     0          Y       4644 <br clear="none">Brick serverA:/gluster_bricks/data5       49166     0          Y       5088 <br clear="none">Brick serverA:/gluster_bricks/data6       49167     0          Y       5128 <br clear="none">Brick serverB:/gluster_bricks/data3       49168     0          Y       22314<br clear="none">Brick serverB:/gluster_bricks/data4       49169     0          Y       22345<br clear="none">Brick serverB:/gluster_bricks/data5       49170     0          Y       22889<br clear="none">Brick serverB:/gluster_bricks/data6       49171     0          Y       22932<br clear="none">Self-heal Daemon on localhost             N/A       N/A        Y       22981<br clear="none">Self-heal Daemon on <a shape="rect" href="http://serverA.example.com" target="_blank"></a><a href="http://serverA.example.com" target="_blank">serverA.example.com</a>   N/A       N/A        Y       6202 <br clear="none"></div><div><br clear="none"></div><div>After adding the bricks we ran a rebalance from serverA as:</div><div><br clear="none"></div><div># gluster volume rebalance tank start</div><div><br clear="none"></div><div>The rebalance completed:</div><div><br clear="none"></div><div># gluster volume rebalance tank status<br clear="none">                                    Node Rebalanced-files          size       scanned      failures       skipped               status  run time in h:m:s<br clear="none">                               ---------      -----------   -----------   -----------   -----------   -----------         ------------     --------------<br clear="none">                               localhost                0        0Bytes             0             0             0            completed        3:7:10<br clear="none">                             <a shape="rect" 
href="http://serverA.example.com" target="_blank"></a><a href="http://serverA.example.com" target="_blank">serverA.example.com</a>        0        0Bytes             0             0             0            completed        0:0:0<br clear="none">volume rebalance: tank: success<br clear="none"></div><div><br clear="none"></div><div>However, when I run a df, the two original bricks still show all of the consumed space (this is the same on both nodes):</div><div><br clear="none"></div><div># df -hP<br clear="none">Filesystem                               Size  Used Avail Use% Mounted on<br clear="none">/dev/mapper/vg0-root                     5.0G  625M  4.4G  13% /<br clear="none">devtmpfs                                  32G     0   32G   0% /dev<br clear="none">tmpfs                                     32G     0   32G   0% /dev/shm<br clear="none">tmpfs                                     32G   67M   32G   1% /run<br clear="none">tmpfs                                     32G     0   32G   0% /sys/fs/cgroup<br clear="none">/dev/mapper/vg0-usr                       20G  3.6G   17G  18% /usr<br clear="none">/dev/md126                              1014M  228M  787M  23% /boot<br clear="none">/dev/mapper/vg0-home                     5.0G   37M  5.0G   1% /home<br clear="none">/dev/mapper/vg0-opt                      5.0G   37M  5.0G   1% /opt<br clear="none">/dev/mapper/vg0-tmp                      5.0G   33M  5.0G   1% /tmp<br clear="none">/dev/mapper/vg0-var                       20G  2.6G   18G  13% /var<br clear="none">/dev/mapper/gluster_vg-gluster_lv1_data   60T   59T  1.1T  99% /gluster_bricks/data1<br clear="none">/dev/mapper/gluster_vg-gluster_lv2_data   60T   58T  1.3T  98% /gluster_bricks/data2<br clear="none">/dev/mapper/gluster_vg-gluster_lv3_data   60T  451M   60T   1% /gluster_bricks/data3<br clear="none">/dev/mapper/gluster_vg-gluster_lv4_data   60T  451M   60T   1% /gluster_bricks/data4<br clear="none">/dev/mapper/gluster_vg-gluster_lv5_data   60T  451M   60T   1% /gluster_bricks/data5<br clear="none">/dev/mapper/gluster_vg-gluster_lv6_data   60T  451M   60T   1% /gluster_bricks/data6<br clear="none">localhost:/tank                          355T  116T  239T  33% /mnt/tank<br clear="none"></div><div><br clear="none"></div><div>We were thinking that the used space would be distributed across the now 6 bricks after rebalance.  Is that not what a rebalance does?  Is this expected behavior?</div><div><br clear="none"></div><div>Can anyone provide some guidance as to what the behavior here and if there is anything that we need to do at this point?</div><div><br clear="none"></div><div>Thanks in advance,</div><div><br clear="none"></div><div>HB</div></div>
</blockquote></div></blockquote></div>
</blockquote></div></div>
</blockquote></div>
</blockquote></div></div>
</blockquote></div></div></div></div></div><div>_______________________________________________<br clear="none">Gluster-users mailing list<br clear="none"><a shape="rect" href="mailto:Gluster-users@gluster.org" target="_blank">Gluster-users@gluster.org</a><br clear="none"><a href="https://lists.gluster.org/mailman/listinfo/gluster-users" target="_blank">https://lists.gluster.org/mailman/listinfo/gluster-users</a></div></div>
            </div>
        </div></div></blockquote></div></div>
</blockquote></div></blockquote></div></div>
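<div dir="ltr"><br><div>To make the explanation above a little more concrete, here is a minimal sketch of how the replica sets can be read off the brick order. The volume name and brick paths are purely illustrative (they are not from the setup discussed in this thread) and the output is trimmed; with a replica count of 2, each consecutive pair of bricks in the listing forms one replica set:</div><div><br></div><div># gluster volume info demo<br>Volume Name: demo<br>Type: Distributed-Replicate<br>Status: Started<br>Number of Bricks: 2 x 2 = 4<br>Bricks:<br>Brick1: server1:/bricks/b1<br>Brick2: server2:/bricks/b1<br>Brick3: server1:/bricks/b2<br>Brick4: server2:/bricks/b2<br></div><div><br></div><div>Here Brick1/Brick2 form one replica set and Brick3/Brick4 the other. In the older releases discussed in this thread, only the node hosting the first brick of each replica set (server1 in this sketch) would actually migrate files, so only that node shows scanned/rebalanced counts in &#39;gluster volume rebalance &lt;VOLNAME&gt; status&#39;; the remaining nodes still fix directory layouts, which the status output does not report. The status command itself can be run from any node in the pool, as it reports the volume-wide state.</div></div>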