<div dir="ltr">All,<div><br></div><div>RHEL 7.5</div><div>Gluster 3.8.15</div><div>2 Nodes: serverA &amp; serverB</div><div><br></div><div>I am not deeply knowledgeable about Gluster and it&#39;s administration but we have a 2 node cluster that&#39;s been running for about a year and a half.  All has worked fine to date.  Our main volume has consisted of two 60TB bricks on each of the cluster nodes.  As we reached capacity on the volume we needed to expand.  So, we&#39;ve added four new 60TB bricks to each of the cluster nodes.  The bricks are now seen, and the total size of the volume is as expected:</div><div><br></div><div># gluster vol status tank<br>Status of volume: tank<br>Gluster process                             TCP Port  RDMA Port  Online  Pid<br>------------------------------------------------------------------------------<br>Brick serverA:/gluster_bricks/data1       49162     0          Y       20318<br>Brick serverB:/gluster_bricks/data1       49166     0          Y       3432 <br>Brick serverA:/gluster_bricks/data2       49163     0          Y       20323<br>Brick serverB:/gluster_bricks/data2       49167     0          Y       3435 <br>Brick serverA:/gluster_bricks/data3       49164     0          Y       4625 <br>Brick serverA:/gluster_bricks/data4       49165     0          Y       4644 <br>Brick serverA:/gluster_bricks/data5       49166     0          Y       5088 <br>Brick serverA:/gluster_bricks/data6       49167     0          Y       5128 <br>Brick serverB:/gluster_bricks/data3       49168     0          Y       22314<br>Brick serverB:/gluster_bricks/data4       49169     0          Y       22345<br>Brick serverB:/gluster_bricks/data5       49170     0          Y       22889<br>Brick serverB:/gluster_bricks/data6       49171     0          Y       22932<br>Self-heal Daemon on localhost             N/A       N/A        Y       22981<br>Self-heal Daemon on <a href="http://serverA.example.com">serverA.example.com</a>   N/A       N/A        Y       6202 <br></div><div><br></div><div>After adding the bricks we ran a rebalance from serverA as:</div><div><br></div><div># gluster volume rebalance tank start</div><div><br></div><div>The rebalance completed:</div><div><br></div><div># gluster volume rebalance tank status<br>                                    Node Rebalanced-files          size       scanned      failures       skipped               status  run time in h:m:s<br>                               ---------      -----------   -----------   -----------   -----------   -----------         ------------     --------------<br>                               localhost                0        0Bytes             0             0             0            completed        3:7:10<br>                             <a href="http://serverA.example.com">serverA.example.com</a>        0        0Bytes             0             0             0            completed        0:0:0<br>volume rebalance: tank: success<br></div><div><br></div><div>However, when I run a df, the two original bricks still show all of the consumed space (this is the same on both nodes):</div><div><br></div><div># df -hP<br>Filesystem                               Size  Used Avail Use% Mounted on<br>/dev/mapper/vg0-root                     5.0G  625M  4.4G  13% /<br>devtmpfs                                  32G     0   32G   0% /dev<br>tmpfs                                     32G     0   32G   0% /dev/shm<br>tmpfs                                     32G   67M   32G   1% /run<br>tmpfs                                 
    32G     0   32G   0% /sys/fs/cgroup<br>/dev/mapper/vg0-usr                       20G  3.6G   17G  18% /usr<br>/dev/md126                              1014M  228M  787M  23% /boot<br>/dev/mapper/vg0-home                     5.0G   37M  5.0G   1% /home<br>/dev/mapper/vg0-opt                      5.0G   37M  5.0G   1% /opt<br>/dev/mapper/vg0-tmp                      5.0G   33M  5.0G   1% /tmp<br>/dev/mapper/vg0-var                       20G  2.6G   18G  13% /var<br>/dev/mapper/gluster_vg-gluster_lv1_data   60T   59T  1.1T  99% /gluster_bricks/data1<br>/dev/mapper/gluster_vg-gluster_lv2_data   60T   58T  1.3T  98% /gluster_bricks/data2<br>/dev/mapper/gluster_vg-gluster_lv3_data   60T  451M   60T   1% /gluster_bricks/data3<br>/dev/mapper/gluster_vg-gluster_lv4_data   60T  451M   60T   1% /gluster_bricks/data4<br>/dev/mapper/gluster_vg-gluster_lv5_data   60T  451M   60T   1% /gluster_bricks/data5<br>/dev/mapper/gluster_vg-gluster_lv6_data   60T  451M   60T   1% /gluster_bricks/data6<br>localhost:/tank                          355T  116T  239T  33% /mnt/tank<br></div><div><br></div><div>We were thinking that the used space would be distributed across the now 6 bricks after rebalance.  Is that not what a rebalance does?  Is this expected behavior?</div><div><br></div><div>Can anyone provide some guidance as to what the behavior here and if there is anything that we need to do at this point?</div><div><br></div><div>Thanks in advance,</div><div><br></div><div>HB</div></div>
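P.S. In case it helps, the add-brick step was roughly the command below.  I am reconstructing it from memory, so treat the exact brick ordering as a best guess; the intent was to pair each new brick on serverA with its counterpart on serverB, mirroring the layout of the original data1/data2 bricks:

# gluster volume add-brick tank \
    serverA:/gluster_bricks/data3 serverB:/gluster_bricks/data3 \
    serverA:/gluster_bricks/data4 serverB:/gluster_bricks/data4 \
    serverA:/gluster_bricks/data5 serverB:/gluster_bricks/data5 \
    serverA:/gluster_bricks/data6 serverB:/gluster_bricks/data6   # pairing/order reconstructed from memory

I can also post the output of "gluster volume info tank" if that would be useful.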