<div dir="ltr"><div>Thank you for the reply Strahil.</div><div><br></div><div>Unfortunately we did do the rebalance already so the data should be written across all brinks currently.  I&#39;m fine with pulling these newly added bricks out of the volume.  However, is it as simple as pulling them out and the data will rebalance to the disks that are left?</div><div><br></div><div>Thanks,</div><div><br></div><div>HB </div><br><div class="gmail_quote"><div dir="ltr" class="gmail_attr">On Sat, Oct 19, 2019 at 4:13 PM Strahil Nikolov &lt;<a href="mailto:hunter86_bg@yahoo.com">hunter86_bg@yahoo.com</a>&gt; wrote:<br></div><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex"><div><div style="font-size:16px"><div style="font-family:&quot;courier new&quot;,courier,monaco,monospace,sans-serif"></div>
        <div dir="ltr">Most probably this means that data on <div style="font-family:&quot;courier new&quot;,courier,monaco,monospace,sans-serif"><span style="color:rgb(38,40,42);font-family:&quot;Helvetica Neue&quot;,Helvetica,Arial,sans-serif">Brick server1:/gluster_bricks/data3       49164     0          Y       4625 </span><br style="color:rgb(38,40,42);font-family:&quot;Helvetica Neue&quot;,Helvetica,Arial,sans-serif"><span style="color:rgb(38,40,42);font-family:&quot;Helvetica Neue&quot;,Helvetica,Arial,sans-serif">Brick server1:/gluster_bricks/data4       49165     0          Y       4644</span></div><div style="font-family:&quot;courier new&quot;,courier,monaco,monospace,sans-serif"><span style="color:rgb(38,40,42);font-family:&quot;Helvetica Neue&quot;,Helvetica,Arial,sans-serif"><br></span></div><div dir="ltr" style="font-family:&quot;courier new&quot;,courier,monaco,monospace,sans-serif"><span style="color:rgb(38,40,42);font-family:&quot;Helvetica Neue&quot;,Helvetica,Arial,sans-serif">is the same and when server1 goes down , you will have no access to the data on this set.</span></div><div dir="ltr" style="font-family:&quot;courier new&quot;,courier,monaco,monospace,sans-serif"><span style="color:rgb(38,40,42);font-family:&quot;Helvetica Neue&quot;,Helvetica,Arial,sans-serif">Same should be valid for :</span></div><div dir="ltr"><div style="font-family:&quot;courier new&quot;,courier,monaco,monospace,sans-serif"><span style="color:rgb(38,40,42);font-family:&quot;Helvetica Neue&quot;,Helvetica,Arial,sans-serif">Brick server1:/gluster_bricks/data5       49166     0          Y       5088 </span><br style="color:rgb(38,40,42);font-family:&quot;Helvetica Neue&quot;,Helvetica,Arial,sans-serif"><span style="color:rgb(38,40,42);font-family:&quot;Helvetica Neue&quot;,Helvetica,Arial,sans-serif">Brick server1:/gluster_bricks/data6       49167     0          Y       5128 </span><br style="color:rgb(38,40,42);font-family:&quot;Helvetica Neue&quot;,Helvetica,Arial,sans-serif"><span style="color:rgb(38,40,42);font-family:&quot;Helvetica Neue&quot;,Helvetica,Arial,sans-serif">Brick server2:/gluster_bricks/data3       49168     0          Y       22314</span><br style="color:rgb(38,40,42);font-family:&quot;Helvetica Neue&quot;,Helvetica,Arial,sans-serif"><span style="color:rgb(38,40,42);font-family:&quot;Helvetica Neue&quot;,Helvetica,Arial,sans-serif">Brick server2:/gluster_bricks/data4       49169     0          Y       22345</span><br style="color:rgb(38,40,42);font-family:&quot;Helvetica Neue&quot;,Helvetica,Arial,sans-serif"><span style="color:rgb(38,40,42);font-family:&quot;Helvetica Neue&quot;,Helvetica,Arial,sans-serif">Brick server2:/gluster_bricks/data5       49170     0          Y       22889</span><br style="color:rgb(38,40,42);font-family:&quot;Helvetica Neue&quot;,Helvetica,Arial,sans-serif"><span style="color:rgb(38,40,42);font-family:&quot;Helvetica Neue&quot;,Helvetica,Arial,sans-serif">Brick server2:/gluster_bricks/data6       49171     0          Y       22932</span></div><div style="font-family:&quot;courier new&quot;,courier,monaco,monospace,sans-serif"><span style="color:rgb(38,40,42);font-family:&quot;Helvetica Neue&quot;,Helvetica,Arial,sans-serif"><br></span></div><div dir="ltr"><font color="#26282a" face="Helvetica Neue, Helvetica, Arial, sans-serif">I would remove those bricks and add them again this type always specifying one brick from server1 and one from server2 , so each server has a copy of your data.Even if you didn&#39;t rebalance yet, there could be some 
data on those bricks and can take a while till the cluster evacuates the data.</font></div><div dir="ltr"><font color="#26282a" face="Helvetica Neue, Helvetica, Arial, sans-serif"><br></font></div><div dir="ltr"><font color="#26282a" face="Helvetica Neue, Helvetica, Arial, sans-serif">I&#39;m a gluster newbie, so don&#39;t take anything I say for granted :</font></div><div dir="ltr"><font color="#26282a" face="Helvetica Neue, Helvetica, Arial, sans-serif">Best Regards,</font></div><div dir="ltr"><font color="#26282a" face="Helvetica Neue, Helvetica, Arial, sans-serif">Strahil Nikolov</font></div><div dir="ltr"><font color="#26282a" face="Helvetica Neue, Helvetica, Arial, sans-serif"><br></font></div><div dir="ltr"><font color="#26282a" face="Helvetica Neue, Helvetica, Arial, sans-serif"><br></font></div></div></div><div style="font-family:&quot;courier new&quot;,courier,monaco,monospace,sans-serif"><br></div>
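A rough sketch of the re-add Strahil describes, with the bricks interleaved so that each consecutive pair (replica 2) spans both servers. This assumes the bricks were first removed and the old brick directories wiped, since Gluster normally refuses to reuse a path that still carries its previous volume metadata:

# Each add-brick call adds one replica set whose two copies live on different servers:
gluster volume add-brick tank server1:/gluster_bricks/data3 server2:/gluster_bricks/data3
gluster volume add-brick tank server1:/gluster_bricks/data4 server2:/gluster_bricks/data4
gluster volume add-brick tank server1:/gluster_bricks/data5 server2:/gluster_bricks/data5
gluster volume add-brick tank server1:/gluster_bricks/data6 server2:/gluster_bricks/data6

# Then spread existing data onto the new replica sets:
gluster volume rebalance tank start
gluster volume rebalance tank status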
        
On Saturday, October 19, 2019, 01:40:58 AM GMT+3, Herb Burnswell <herbert.burnswell@gmail.com> wrote:
All,

We recently added 8 new bricks (4 on each of our 2 servers) to an established distributed replicated volume. The original volume was created via gdeploy as:

[volume1]
action=create
volname=tank
replica_count=2
force=yes
key=performance.parallel-readdir,network.inode-lru-limit,performance.md-cache-timeout,performance.cache-invalidation,performance.stat-prefetch,features.cache-invalidation-timeout,features.cache-invalidation,performance.cache-samba-metadata
value=on,500000,600,on,on,600,on,on
brick_dirs=/gluster_bricks/data1,/gluster_bricks/data2
ignore_errors=no

This created the volume as:

# gluster vol status tank
Status of volume: tank
Gluster process                             TCP Port  RDMA Port  Online  Pid
------------------------------------------------------------------------------
Brick server1:/gluster_bricks/data1       49162     0          Y       20318
Brick server2:/gluster_bricks/data1       49166     0          Y       3432
Brick server1:/gluster_bricks/data2       49163     0          Y       20323
Brick server2:/gluster_bricks/data2       49167     0          Y       3435
Self-heal Daemon on localhost               N/A       N/A        Y       25874
Self-heal Daemon on server2               N/A       N/A        Y       12536

Task Status of Volume tank
------------------------------------------------------------------------------
There are no active volume tasks
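For reference, I believe that gdeploy section is roughly equivalent to the following CLI create, with the bricks interleaved across the two servers. This is my reconstruction, not the literal command gdeploy ran:

gluster volume create tank replica 2 \
    server1:/gluster_bricks/data1 server2:/gluster_bricks/data1 \
    server1:/gluster_bricks/data2 server2:/gluster_bricks/data2 force

With replica 2, consecutive bricks pair up, so each dataN directory is mirrored across server1 and server2.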
I have read (https://docs.gluster.org/en/latest/Administrator%20Guide/Setting%20Up%20Volumes) that the way one creates distributed replicated volumes is sensitive to replica sets:

    Note: The number of bricks should be a multiple of the replica count for a distributed replicated volume. Also, the order in which bricks are specified has a great effect on data protection. Each replica_count consecutive bricks in the list you give will form a replica set, with all replica sets combined into a volume-wide distribute set. To make sure that replica-set members are not placed on the same node, list the first brick on every server, then the second brick on every server in the same order, and so on.

I just noticed that when we added the new bricks, we did not specify them in replica order (one brick from each server per pair):

gluster volume add-brick tank server1:/gluster_bricks/data3 server1:/gluster_bricks/data4 force
gluster volume add-brick tank server1:/gluster_bricks/data5 server1:/gluster_bricks/data6 force
gluster volume add-brick tank server2:/gluster_bricks/data3 server2:/gluster_bricks/data4 force
gluster volume add-brick tank server2:/gluster_bricks/data5 server2:/gluster_bricks/data6 force

Which modified the volume as:

# gluster vol status tank
Status of volume: tank
Gluster process                             TCP Port  RDMA Port  Online  Pid
------------------------------------------------------------------------------
Brick server1:/gluster_bricks/data1       49162     0          Y       20318
Brick server2:/gluster_bricks/data1       49166     0          Y       3432
Brick server1:/gluster_bricks/data2       49163     0          Y       20323
Brick server2:/gluster_bricks/data2       49167     0          Y       3435
Brick server1:/gluster_bricks/data3       49164     0          Y       4625
Brick server1:/gluster_bricks/data4       49165     0          Y       4644
Brick server1:/gluster_bricks/data5       49166     0          Y       5088
Brick server1:/gluster_bricks/data6       49167     0          Y       5128
Brick server2:/gluster_bricks/data3       49168     0          Y       22314
Brick server2:/gluster_bricks/data4       49169     0          Y       22345
Brick server2:/gluster_bricks/data5       49170     0          Y       22889
Brick server2:/gluster_bricks/data6       49171     0          Y       22932
Self-heal Daemon on localhost               N/A       N/A        Y       12366
Self-heal Daemon on server2               N/A       N/A        Y       21446

Task Status of Volume tank
------------------------------------------------------------------------------
Task                 : Rebalance
ID                   : ec958aee-edbd-4106-b896-97c688fde0e3
Status               : completed

As you can see, the added data3-data6 bricks appear differently (all of server1's bricks first, then all of server2's):

Brick server1:/gluster_bricks/data3       49164     0          Y       4625
Brick server1:/gluster_bricks/data4       49165     0          Y       4644
Brick server1:/gluster_bricks/data5       49166     0          Y       5088
Brick server1:/gluster_bricks/data6       49167     0          Y       5128
Brick server2:/gluster_bricks/data3       49168     0          Y       22314
Brick server2:/gluster_bricks/data4       49169     0          Y       22345
Brick server2:/gluster_bricks/data5       49170     0          Y       22889
Brick server2:/gluster_bricks/data6       49171     0          Y       22932
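If I'm reading the note above correctly, replica sets are formed from consecutive bricks in this listing, which would pair each new brick with another brick on the same server. This is my reading of the ordering rule, not confirmed output:

# Presumed replica sets for the new bricks (replica 2 => consecutive pairs):
#   server1:/gluster_bricks/data3 <-> server1:/gluster_bricks/data4   (both on server1)
#   server1:/gluster_bricks/data5 <-> server1:/gluster_bricks/data6   (both on server1)
#   server2:/gluster_bricks/data3 <-> server2:/gluster_bricks/data4   (both on server2)
#   server2:/gluster_bricks/data5 <-> server2:/gluster_bricks/data6   (both on server2)
# 'gluster volume info tank' should list the bricks in this pairing order.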
My question is: what does this mean for the volume? Everything appears to be running as expected, but:

- Is there a serious problem with the way the volume is now configured?
- Have we messed up the high availability of the 2 nodes?
- Is there a way to reconfigure the volume to get it to a more optimal state?

Any help is greatly appreciated.

Thanks in advance,

HB