On 1 March 2018 at 15:25, Jose V. Carrión <jocarbur@gmail.com> wrote:

> I'm sorry for my last incomplete message.
>
> Below the output of both volumes:
>
> [root@stor1t ~]# gluster volume rebalance volumedisk1 status
>                   Node Rebalanced-files       size     scanned   failures   skipped      status   run time in h:m:s
>              ---------       -----------  ---------  ----------  ---------  --------  ----------  -----------------
>              localhost            703964  16384.0PB     1475983          0         0   completed           64:37:55
>              stor2data            704610  16384.0PB     1475199          0         0   completed           64:31:30
>              stor3data            703964  16384.0PB     1475983          0         0   completed           64:37:55
> volume rebalance: volumedisk1: success
>
> [root@stor1 ~]# gluster volume rebalance volumedisk0 status
>                   Node Rebalanced-files       size     scanned   failures   skipped      status   run time in h:m:s
>              ---------       -----------  ---------  ----------  ---------  --------  ----------  -----------------
>              localhost            411919      1.1GB      718044          0         0   completed            2:28:52
>              stor2data            435340  16384.0PB      741287          0         0   completed            2:26:01
>              stor3data            411919      1.1GB      718044          0         0   completed            2:28:52
> volume rebalance: volumedisk0: success
>
> And the volumedisk1 rebalance log finished saying:
>
> [2018-02-13 03:47:48.703311] I [MSGID: 109028] [dht-rebalance.c:5053:gf_defrag_status_get] 0-volumedisk1-dht: Rebalance is completed. Time taken is 232675.00 secs
> [2018-02-13 03:47:48.703351] I [MSGID: 109028] [dht-rebalance.c:5057:gf_defrag_status_get] 0-volumedisk1-dht: Files migrated: 703964, size: 14046969178073, lookups: 1475983, failures: 0, skipped: 0
>
> Checking my logs, the new stor3data node was added and the rebalance task was executed on 2018-02-10. From that date until now I have been storing new files.
>
> The exact sequence of commands to add the new node was:
>
> gluster peer probe stor3data
> gluster volume add-brick volumedisk0 stor3data:/mnt/disk_b1/glusterfs/vol0
> gluster volume add-brick volumedisk0 stor3data:/mnt/disk_b2/glusterfs/vol0
> gluster volume add-brick volumedisk1 stor3data:/mnt/disk_c/glusterfs/vol1
> gluster volume add-brick volumedisk1 stor3data:/mnt/disk_d/glusterfs/vol1
> gluster volume rebalance volumedisk0 start force
> gluster volume rebalance volumedisk1 start force
>
> Could the DHT hash range assigned to the stor3data bricks be unbalanced for some reason? Could it be smaller than the ranges of stor1data and stor2data?
>
> Is there any way to verify it?
>
> Is there any way to modify/rebalance the DHT ranges between bricks in order to even out the range per brick?
>
> Thanks a lot,
>
> Greetings.
>
> Jose V.
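
For reference, one way to check the layout ranges yourself (just a sketch; it assumes the brick root paths quoted elsewhere in this thread and that the attr tools are installed) is to read the trusted.glusterfs.dht extended attribute that DHT stores on each directory. As far as I recall, the last two 32-bit words of the value are the start and end of that brick's hash range for that directory:

    # run on the node hosting each brick, against the same directory on every brick
    getfattr -e hex -n trusted.glusterfs.dht /mnt/glusterfs/vol1/brick1          # stor1data / stor2data
    getfattr -e hex -n trusted.glusterfs.dht /mnt/disk_c/glusterfs/vol1/brick1   # stor3data
    getfattr -e hex -n trusted.glusterfs.dht /mnt/disk_d/glusterfs/vol1/brick1   # stor3data

Note that the layout is per directory, so the brick root only describes files created directly under the volume root; each subdirectory carries its own ranges. Recalculating the ranges is what "gluster volume rebalance <volname> fix-layout start" (or the full rebalance you already ran) does.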

> 2018-03-01 10:39 GMT+01:00 Jose V. Carrión <jocarbur@gmail.com>:
>
>> Hi Nithya,
>>
>> Below the output of both volumes:
>>
>> [root@stor1t ~]# gluster volume rebalance volumedisk1 status
>>                   Node Rebalanced-files       size     scanned   failures   skipped      status   run time in h:m:s
>>              ---------       -----------  ---------  ----------  ---------  --------  ----------  -----------------
>>              localhost            703964  16384.0PB     1475983          0         0   completed           64:37:55
>>              stor2data            704610  16384.0PB     1475199          0         0   completed           64:31:30
>>              stor3data            703964  16384.0PB     1475983          0         0   completed           64:37:55
>> volume rebalance: volumedisk1: success
>>
>> [root@stor1 ~]# gluster volume rebalance volumedisk0 status
>>                   Node Rebalanced-files       size     scanned   failures   skipped      status   run time in h:m:s
>>              ---------       -----------  ---------  ----------  ---------  --------  ----------  -----------------
>>              localhost            411919      1.1GB      718044          0         0   completed            2:28:52
>>              stor2data            435340  16384.0PB      741287          0         0   completed            2:26:01
>>              stor3data            411919      1.1GB      718044          0         0   completed            2:28:52
>> volume rebalance: volumedisk0: success
>>
>> And the volumedisk1 rebalance log finished saying:
>>
>> [2018-02-13 03:47:48.703311] I [MSGID: 109028] [dht-rebalance.c:5053:gf_defrag_status_get] 0-volumedisk1-dht: Rebalance is completed. Time taken is 232675.00 secs
>> [2018-02-13 03:47:48.703351] I [MSGID: 109028] [dht-rebalance.c:5057:gf_defrag_status_get] 0-volumedisk1-dht: Files migrated: 703964, size: 14046969178073, lookups: 1475983, failures: 0, skipped: 0
>>
>> Checking my logs, the new stor3data node was added and the rebalance task was executed on 2018-02-10. From that date until now I have been storing new files.
>>
>> The sequence of commands to add the node was:
>>
>> gluster peer probe stor3data
>> gluster volume add-brick volumedisk0 stor3data:/mnt/disk_b1/glusterfs/vol0

While it is odd that both bricks on the third node show similar usage, I do not see a problem in the steps or the status. Can you keep an eye on this and let us know if this continues to be the case?
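
If it helps with keeping an eye on it, a small loop like the following (only a sketch; the log file name is arbitrary) records per-brick usage over time so you can see whether the gap between the stor3data bricks and the other two keeps growing:

    # append an hourly snapshot of per-brick usage to a log file
    while true; do
        date >> /var/tmp/brick-usage.log
        gluster volume status volumedisk1 detail | egrep 'Brick |Disk Space Free' >> /var/tmp/brick-usage.log
        sleep 3600
    done

The same numbers show up in df on each node, but the detail output gathers all four bricks from a single node.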

> 2018-03-01 6:32 GMT+01:00 Nithya Balachandran <nbalacha@redhat.com>:
>
>> Hi Jose,
>>
>> On 28 February 2018 at 22:31, Jose V. Carrión <jocarbur@gmail.com> wrote:
>>
>>> Hi Nithya,
>>>
>>> My initial setup was composed of 2 similar nodes: stor1data and stor2data. A month ago I expanded both volumes with a new node: stor3data (2 bricks per volume).
>>> Of course, after adding the new peer with its bricks I ran the 'rebalance start force' operation. This task finished successfully (you can see the info below) and the number of files on the 3 nodes was very similar.
>>>
>>> For volumedisk1 I only have files of 500MB and they are continuously written in sequential mode. The filename pattern of the written files is:
>>>
>>> run.node1.0000.rd
>>> run.node2.0000.rd
>>> run.node1.0001.rd
>>> run.node2.0001.rd
>>> run.node1.0002.rd
>>> run.node2.0002.rd
>>> ...........
>>> run.node1.X.rd
>>> run.node2.X.rd
>>>
>>> ( X ranging from 0000 to infinity )
>>>
>>> Curiously, stor1data and stor2data maintain similar ratios in bytes:
>>>
>>> Filesystem      1K-blocks         Used     Available  Use% Mounted on
>>> /dev/sdc1     52737613824  17079174264   35658439560   33% /mnt/glusterfs/vol1  -> stor1data
>>> /dev/sdc1     52737613824  17118810848   35618802976   33% /mnt/glusterfs/vol1  -> stor2data
>>>
>>> However, the ratio on stor3data differs by too much (about 1TB):
>>>
>>> Filesystem      1K-blocks         Used     Available  Use% Mounted on
>>> /dev/sdc1     52737613824  15479191748   37258422076   30% /mnt/disk_c/glusterfs/vol1 -> stor3data
>>> /dev/sdd1     52737613824  15566398604   37171215220   30% /mnt/disk_d/glusterfs/vol1 -> stor3data
>>>
>>> Thinking in inodes:
>>>
>>> Filesystem        Inodes   IUsed        IFree IUse% Mounted on
>>> /dev/sdc1     5273970048  851053   5273118995    1% /mnt/glusterfs/vol1 -> stor1data
>>> /dev/sdc1     5273970048  849388   5273120660    1% /mnt/glusterfs/vol1 -> stor2data
>>> /dev/sdc1     5273970048  846877   5273123171    1% /mnt/disk_c/glusterfs/vol1 -> stor3data
>>> /dev/sdd1     5273970048  845250   5273124798    1% /mnt/disk_d/glusterfs/vol1 -> stor3data
>>>
>>> 851053 (stor1) - 845250 (stor3) = 5803 files of difference!
>>
>> The inode numbers are a little misleading here - gluster uses some to create its own internal files and directory structures. Based on the average file size, I think this would actually work out to a difference of around 2000 files.
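
As a side note, if you want a per-brick count of user-visible files that ignores gluster's internal entries, something along these lines should work (a rough sketch; run it on the node hosting each brick, and note it still counts any DHT linkto files, so treat it as approximate):

    # count regular files on a brick, skipping the internal .glusterfs tree
    find /mnt/disk_c/glusterfs/vol1/brick1 -path '*/.glusterfs' -prune -o -type f -print | wc -l

Comparing that number across the four bricks is a more direct measure than the inode counts from df.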
>>
>>> In addition, correct me if I'm wrong, but stor3data should have a 50% probability of storing any new file (even taking into account the DHT algorithm and the filename patterns).
>>
>> Theoretically yes, but again, it depends on the filenames and their hash distribution.
>>
>> Please send us the output of:
>> gluster volume rebalance <volname> status
>> for the volume.
>>
>> Regards,
>> Nithya
>>
>>> Thanks,
>>> Greetings.
>>>
>>> Jose V.
>>>
>>> Status of volume: volumedisk0
>>> Gluster process                                   TCP Port  RDMA Port  Online  Pid
>>> ------------------------------------------------------------------------------
>>> Brick stor1data:/mnt/glusterfs/vol0/brick1        49152     0          Y       13533
>>> Brick stor2data:/mnt/glusterfs/vol0/brick1        49152     0          Y       13302
>>> Brick stor3data:/mnt/disk_b1/glusterfs/vol0/brick1  49152   0          Y       17371
>>> Brick stor3data:/mnt/disk_b2/glusterfs/vol0/brick1  49153   0          Y       17391
>>> NFS Server on localhost                           N/A       N/A        N       N/A
>>> NFS Server on stor3data                           N/A       N/A        N       N/A
>>> NFS Server on stor2data                           N/A       N/A        N       N/A
>>>
>>> Task Status of Volume volumedisk0
>>> ------------------------------------------------------------------------------
>>> Task                 : Rebalance
>>> ID                   : 7f5328cb-ed25-4627-9196-fb3e29e0e4ca
>>> Status               : completed
>>>
>>> Status of volume: volumedisk1
>>> Gluster process                                   TCP Port  RDMA Port  Online  Pid
>>> ------------------------------------------------------------------------------
>>> Brick stor1data:/mnt/glusterfs/vol1/brick1        49153     0          Y       13579
>>> Brick stor2data:/mnt/glusterfs/vol1/brick1        49153     0          Y       13344
>>> Brick stor3data:/mnt/disk_c/glusterfs/vol1/brick1  49154    0          Y       17439
>>> Brick stor3data:/mnt/disk_d/glusterfs/vol1/brick1  49155    0          Y       17459
>>> NFS Server on localhost                           N/A       N/A        N       N/A
>>> NFS Server on stor3data                           N/A       N/A        N       N/A
>>> NFS Server on stor2data                           N/A       N/A        N       N/A
>>>
>>> Task Status of Volume volumedisk1
>>> ------------------------------------------------------------------------------
>>> Task                 : Rebalance
>>> ID                   : d0048704-beeb-4a6a-ae94-7e7916423fd3
>>> Status               : completed
>>>
>>> 2018-02-28 15:40 GMT+01:00 Nithya Balachandran <nbalacha@redhat.com>:
>>>
>>>> Hi Jose,
>>>>
>>>> On 28 February 2018 at 18:28, Jose V. Carrión <jocarbur@gmail.com> wrote:
>>>>
>>>>> Hi Nithya,
>>>>>
>>>>> I applied the workaround for this bug and now df shows the right size:
>>>>
>>>> That is good to hear.
>>>>
>>>>> [root@stor1 ~]# df -h
>>>>> Filesystem            Size  Used  Avail Use% Mounted on
>>>>> /dev/sdb1              26T  1,1T    25T   4% /mnt/glusterfs/vol0
>>>>> /dev/sdc1              50T   16T    34T  33% /mnt/glusterfs/vol1
>>>>> stor1data:/volumedisk0
>>>>>                       101T  3,3T    97T   4% /volumedisk0
>>>>> stor1data:/volumedisk1
>>>>>                       197T   61T   136T  31% /volumedisk1
>>>>>
>>>>> [root@stor2 ~]# df -h
>>>>> Filesystem            Size  Used  Avail Use% Mounted on
>>>>> /dev/sdb1              26T  1,1T    25T   4% /mnt/glusterfs/vol0
>>>>> /dev/sdc1              50T   16T    34T  33% /mnt/glusterfs/vol1
>>>>> stor2data:/volumedisk0
>>>>>                       101T  3,3T    97T   4% /volumedisk0
>>>>> stor2data:/volumedisk1
>>>>>                       197T   61T   136T  31% /volumedisk1
>>>>>
>>>>> [root@stor3 ~]# df -h
>>>>> Filesystem            Size  Used  Avail Use% Mounted on
>>>>> /dev/sdb1              25T  638G    24T   3% /mnt/disk_b1/glusterfs/vol0
>>>>> /dev/sdb2              25T  654G    24T   3% /mnt/disk_b2/glusterfs/vol0
>>>>> /dev/sdc1              50T   15T    35T  30% /mnt/disk_c/glusterfs/vol1
>>>>> /dev/sdd1              50T   15T    35T  30% /mnt/disk_d/glusterfs/vol1
>>>>> stor3data:/volumedisk0
>>>>>                       101T  3,3T    97T   4% /volumedisk0
>>>>> stor3data:/volumedisk1
>>>>>                       197T   61T   136T  31% /volumedisk1
>>>>>
>>>>> However I'm concerned because, as you can see, volumedisk0 on stor3data is composed of 2 bricks on the same disk but on different partitions (/dev/sdb1 and /dev/sdb2).
>>>>> After applying the workaround, the shared-brick-count parameter was set to 1 for all the bricks on all the servers (see below). Could this be an issue?
>>>>
>>>> No, this is correct. The shared-brick-count will be > 1 only if multiple bricks share the same partition.
>>>>
>>>>> Also, I can see that stor3data is now unbalanced with respect to stor1data and stor2data. The three nodes have the same brick sizes, but the stor3data bricks have used about 1TB less than stor1data and stor2data:
>>>>
>>>> This does not necessarily indicate a problem. The distribution need not be exactly equal and depends on the filenames. Can you provide more information on the kind of dataset (how many files, sizes etc) on this volume? Did you create the volume with all 4 bricks or add some later?
>>>>
>>>> Regards,
>>>> Nithya
>>>>
>>>>> stor1data bricks:
>>>>> /dev/sdb1              26T  1,1T    25T   4% /mnt/glusterfs/vol0
>>>>> /dev/sdc1              50T   16T    34T  33% /mnt/glusterfs/vol1
>>>>>
>>>>> stor2data bricks:
>>>>> /dev/sdb1              26T  1,1T    25T   4% /mnt/glusterfs/vol0
>>>>> /dev/sdc1              50T   16T    34T  33% /mnt/glusterfs/vol1
>>>>>
>>>>> stor3data bricks:
>>>>> /dev/sdb1              25T  638G    24T   3% /mnt/disk_b1/glusterfs/vol0
>>>>> /dev/sdb2              25T  654G    24T   3% /mnt/disk_b2/glusterfs/vol0
>>>>> /dev/sdc1              50T   15T    35T  30% /mnt/disk_c/glusterfs/vol1
>>>>> /dev/sdd1              50T   15T    35T  30% /mnt/disk_d/glusterfs/vol1
>>>>>
>>>>> [root@stor1 ~]# grep -n "share" /var/lib/glusterd/vols/volumedisk1/*
>>>>> /var/lib/glusterd/vols/volumedisk1/volumedisk1.stor1data.mnt-glusterfs-vol1-brick1.vol:3:    option shared-brick-count 1
>>>>> /var/lib/glusterd/vols/volumedisk1/volumedisk1.stor1data.mnt-glusterfs-vol1-brick1.vol.rpmsave:3:    option shared-brick-count 1
>>>>> /var/lib/glusterd/vols/volumedisk1/volumedisk1.stor2data.mnt-glusterfs-vol1-brick1.vol:3:    option shared-brick-count 1
>>>>> /var/lib/glusterd/vols/volumedisk1/volumedisk1.stor2data.mnt-glusterfs-vol1-brick1.vol.rpmsave:3:    option shared-brick-count 0
>>>>> /var/lib/glusterd/vols/volumedisk1/volumedisk1.stor3data.mnt-disk_c-glusterfs-vol1-brick1.vol:3:    option shared-brick-count 1
>>>>> /var/lib/glusterd/vols/volumedisk1/volumedisk1.stor3data.mnt-disk_c-glusterfs-vol1-brick1.vol.rpmsave:3:    option shared-brick-count 0
>>>>> /var/lib/glusterd/vols/volumedisk1/volumedisk1.stor3data.mnt-disk_d-glusterfs-vol1-brick1.vol:3:    option shared-brick-count 1
>>>>> /var/lib/glusterd/vols/volumedisk1/volumedisk1.stor3data.mnt-disk_d-glusterfs-vol1-brick1.vol.rpmsave:3:    option shared-brick-count 0
>>>>>
>>>>> [root@stor2 ~]# grep -n "share" /var/lib/glusterd/vols/volumedisk1/*
>>>>> /var/lib/glusterd/vols/volumedisk1/volumedisk1.stor1data.mnt-glusterfs-vol1-brick1.vol:3:    option shared-brick-count 1
>>>>> /var/lib/glusterd/vols/volumedisk1/volumedisk1.stor1data.mnt-glusterfs-vol1-brick1.vol.rpmsave:3:    option shared-brick-count 0
>>>>> /var/lib/glusterd/vols/volumedisk1/volumedisk1.stor2data.mnt-glusterfs-vol1-brick1.vol:3:    option shared-brick-count 1
>>>>> /var/lib/glusterd/vols/volumedisk1/volumedisk1.stor2data.mnt-glusterfs-vol1-brick1.vol.rpmsave:3:    option shared-brick-count 1
>>>>> /var/lib/glusterd/vols/volumedisk1/volumedisk1.stor3data.mnt-disk_c-glusterfs-vol1-brick1.vol:3:    option shared-brick-count 1
>>>>> /var/lib/glusterd/vols/volumedisk1/volumedisk1.stor3data.mnt-disk_c-glusterfs-vol1-brick1.vol.rpmsave:3:    option shared-brick-count 0
>>>>> /var/lib/glusterd/vols/volumedisk1/volumedisk1.stor3data.mnt-disk_d-glusterfs-vol1-brick1.vol:3:    option shared-brick-count 1
>>>>> /var/lib/glusterd/vols/volumedisk1/volumedisk1.stor3data.mnt-disk_d-glusterfs-vol1-brick1.vol.rpmsave:3:    option shared-brick-count 0
>>>>>
>>>>> [root@stor3t ~]# grep -n "share" /var/lib/glusterd/vols/volumedisk1/*
>>>>> /var/lib/glusterd/vols/volumedisk1/volumedisk1.stor1data.mnt-glusterfs-vol1-brick1.vol:3:    option shared-brick-count 1
>>>>> /var/lib/glusterd/vols/volumedisk1/volumedisk1.stor1data.mnt-glusterfs-vol1-brick1.vol.rpmsave:3:    option shared-brick-count 1
>>>>> /var/lib/glusterd/vols/volumedisk1/volumedisk1.stor2data.mnt-glusterfs-vol1-brick1.vol:3:    option shared-brick-count 1
>>>>> /var/lib/glusterd/vols/volumedisk1/volumedisk1.stor2data.mnt-glusterfs-vol1-brick1.vol.rpmsave:3:    option shared-brick-count 0
>>>>> /var/lib/glusterd/vols/volumedisk1/volumedisk1.stor3data.mnt-disk_c-glusterfs-vol1-brick1.vol:3:    option shared-brick-count 1
>>>>> /var/lib/glusterd/vols/volumedisk1/volumedisk1.stor3data.mnt-disk_c-glusterfs-vol1-brick1.vol.rpmsave:3:    option shared-brick-count 0
>>>>> /var/lib/glusterd/vols/volumedisk1/volumedisk1.stor3data.mnt-disk_d-glusterfs-vol1-brick1.vol:3:    option shared-brick-count 1
>>>>> /var/lib/glusterd/vols/volumedisk1/volumedisk1.stor3data.mnt-disk_d-glusterfs-vol1-brick1.vol.rpmsave:3:    option shared-brick-count 0
>>>>>
>>>>> Thanks for your help,
>>>>> Greetings.
>>>>>
>>>>> Jose V.
>>>>>
>>>>> 2018-02-28 5:07 GMT+01:00 Nithya Balachandran <nbalacha@redhat.com>:
>>>>>
>>>>>> Hi Jose,
>>>>>>
>>>>>> There is a known issue with gluster 3.12.x builds (see [1]) so you may be running into this.
>>>>>>
>>>>>> The "shared-brick-count" values seem fine on stor1. Please send us the 'grep -n "share" /var/lib/glusterd/vols/volumedisk1/*' results for the other nodes so we can check if they are the cause.
>>>>>>
>>>>>> Regards,
>>>>>> Nithya
>>>>>>
>>>>>> [1] https://bugzilla.redhat.com/show_bug.cgi?id=1517260
>>>>>>
>>>>>> On 28 February 2018 at 03:03, Jose V. Carrión <jocarbur@gmail.com> wrote:
>>>>>>
>>>>>>> Hi,
>>>>>>>
>>>>>>> Some days ago all my glusterfs configuration was working fine. Today I realized that the total size reported by the df command has changed and is smaller than the aggregated capacity of all the bricks in the volume.
>>>>>>>
>>>>>>> I checked that the status of all the volumes is fine, all the glusterd daemons are running and there are no errors in the logs; however, df shows a wrong total size.
>>>>>>>
>>>>>>> My configuration for one volume: volumedisk1
>>>>>>> [root@stor1 ~]# gluster volume status volumedisk1 detail
>>>>>>>
>>>>>>> Status of volume: volumedisk1
>>>>>>> ------------------------------------------------------------------------------
>>>>>>> Brick                : Brick stor1data:/mnt/glusterfs/vol1/brick1
>>>>>>> TCP Port             : 49153
>>>>>>> RDMA Port            : 0
>>>>>>> Online               : Y
>>>>>>> Pid                  : 13579
>>>>>>> File System          : xfs
>>>>>>> Device               : /dev/sdc1
>>>>>>> Mount Options        : rw,noatime
>>>>>>> Inode Size           : 512
>>>>>>> Disk Space Free      : 35.0TB
>>>>>>> Total Disk Space     : 49.1TB
>>>>>>> Inode Count          : 5273970048
>>>>>>> Free Inodes          : 5273123069
>>>>>>> ------------------------------------------------------------------------------
>>>>>>> Brick                : Brick stor2data:/mnt/glusterfs/vol1/brick1
>>>>>>> TCP Port             : 49153
>>>>>>> RDMA Port            : 0
>>>>>>> Online               : Y
>>>>>>> Pid                  : 13344
>>>>>>> File System          : xfs
>>>>>>> Device               : /dev/sdc1
>>>>>>> Mount Options        : rw,noatime
>>>>>>> Inode Size           : 512
>>>>>>> Disk Space Free      : 35.0TB
>>>>>>> Total Disk Space     : 49.1TB
>>>>>>> Inode Count          : 5273970048
>>>>>>> Free Inodes          : 5273124718
>>>>>>> ------------------------------------------------------------------------------
>>>>>>> Brick                : Brick stor3data:/mnt/disk_c/glusterfs/vol1/brick1
>>>>>>> TCP Port             : 49154
>>>>>>> RDMA Port            : 0
>>>>>>> Online               : Y
>>>>>>> Pid                  : 17439
>>>>>>> File System          : xfs
>>>>>>> Device               : /dev/sdc1
>>>>>>> Mount Options        : rw,noatime
>>>>>>> Inode Size           : 512
>>>>>>> Disk Space Free      : 35.7TB
>>>>>>> Total Disk Space     : 49.1TB
>>>>>>> Inode Count          : 5273970048
>>>>>>> Free Inodes          : 5273125437
>>>>>>> ------------------------------------------------------------------------------
>>>>>>> Brick                : Brick stor3data:/mnt/disk_d/glusterfs/vol1/brick1
>>>>>>> TCP Port             : 49155
>>>>>>> RDMA Port            : 0
>>>>>>> Online               : Y
>>>>>>> Pid                  : 17459
>>>>>>> File System          : xfs
>>>>>>> Device               : /dev/sdd1
>>>>>>> Mount Options        : rw,noatime
>>>>>>> Inode Size           : 512
>>>>>>> Disk Space Free      : 35.6TB
>>>>>>> Total Disk Space     : 49.1TB
>>>>>>> Inode Count          : 5273970048
>>>>>>> Free Inodes          : 5273127036
>>>>>>> ------------------------------------------------------------------------------
>>>>>>>
>>>>>>> Then the full size for volumedisk1 should be: 49.1TB + 49.1TB + 49.1TB + 49.1TB = 196,4 TB, but df shows:
>>>>>>>
>>>>>>> [root@stor1 ~]# df -h
>>>>>>> Filesystem            Size  Used  Avail Use% Mounted on
>>>>>>> /dev/sda2              48G   21G    25G  46% /
>>>>>>> tmpfs                  32G   80K    32G   1% /dev/shm
>>>>>>> /dev/sda1             190M   62M   119M  35% /boot
>>>>>>> /dev/sda4             395G  251G   124G  68% /data
>>>>>>> /dev/sdb1              26T  601G    25T   3% /mnt/glusterfs/vol0
>>>>>>> /dev/sdc1              50T   15T    36T  29% /mnt/glusterfs/vol1
>>>>>>> stor1data:/volumedisk0
>>>>>>>                        76T  1,6T    74T   3% /volumedisk0
>>>>>>> stor1data:/volumedisk1
>>>>>>>                       148T   42T   106T  29% /volumedisk1
>>>>>>>
>>>>>>> That is exactly one brick's worth less: 196,4 TB - 49,1 TB ≈ 148 TB.
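
As an aside for anyone reading this later: a quick way to cross-check this kind of discrepancy (just a sketch, assuming every brick reports its total in TB as in the detail output above) is to sum the per-brick totals and compare them with what df reports for the mount:

    # expected aggregate from the bricks vs. what the client mount actually reports
    gluster volume status volumedisk1 detail | \
        awk -F: '/Total Disk Space/ {gsub(/[ TB]/, "", $2); sum += $2} END {print sum " TB expected from bricks"}'
    df -h /volumedisk1

A gap of roughly one brick, as seen here, is what the shared-brick-count values discussed earlier in this thread turned out to explain.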
>>>>>>>
>>>>>>> It's a production system so I hope you can help me.
>>>>>>>
>>>>>>> Thanks in advance.
>>>>>>>
>>>>>>> Jose V.
>>>>>>>
>>>>>>> Below some other data of my configuration:
>>>>>>> [root@stor1 ~]# gluster volume info
>>>>>>>
>>>>>>> Volume Name: volumedisk0
>>>>>>> Type: Distribute
>>>>>>> Volume ID: 0ee52d94-1131-4061-bcef-bd8cf898da10
>>>>>>> Status: Started
>>>>>>> Snapshot Count: 0
>>>>>>> Number of Bricks: 4
>>>>>>> Transport-type: tcp
>>>>>>> Bricks:
>>>>>>> Brick1: stor1data:/mnt/glusterfs/vol0/brick1
>>>>>>> Brick2: stor2data:/mnt/glusterfs/vol0/brick1
>>>>>>> Brick3: stor3data:/mnt/disk_b1/glusterfs/vol0/brick1
>>>>>>> Brick4: stor3data:/mnt/disk_b2/glusterfs/vol0/brick1
>>>>>>> Options Reconfigured:
>>>>>>> performance.cache-size: 4GB
>>>>>>> cluster.min-free-disk: 1%
>>>>>>> performance.io-thread-count: 16
>>>>>>> performance.readdir-ahead: on
>>>>>>>
>>>>>>> Volume Name: volumedisk1
>>>>>>> Type: Distribute
>>>>>>> Volume ID: 591b7098-800e-4954-82a9-6b6d81c9e0a2
>>>>>>> Status: Started
>>>>>>> Snapshot Count: 0
>>>>>>> Number of Bricks: 4
>>>>>>> Transport-type: tcp
>>>>>>> Bricks:
>>>>>>> Brick1: stor1data:/mnt/glusterfs/vol1/brick1
>>>>>>> Brick2: stor2data:/mnt/glusterfs/vol1/brick1
>>>>>>> Brick3: stor3data:/mnt/disk_c/glusterfs/vol1/brick1
>>>>>>> Brick4: stor3data:/mnt/disk_d/glusterfs/vol1/brick1
>>>>>>> Options Reconfigured:
>>>>>>> cluster.min-free-inodes: 6%
>>>>>>> performance.cache-size: 4GB
>>>>>>> cluster.min-free-disk: 1%
>>>>>>> performance.io-thread-count: 16
>>>>>>> performance.readdir-ahead: on
>>>>>>>
>>>>>>> [root@stor1 ~]# grep -n "share" /var/lib/glusterd/vols/volumedisk1/*
>>>>>>> /var/lib/glusterd/vols/volumedisk1/volumedisk1.stor1data.mnt-glusterfs-vol1-brick1.vol:3:    option shared-brick-count 1
>>>>>>> /var/lib/glusterd/vols/volumedisk1/volumedisk1.stor1data.mnt-glusterfs-vol1-brick1.vol.rpmsave:3:    option shared-brick-count 1
>>>>>>> /var/lib/glusterd/vols/volumedisk1/volumedisk1.stor2data.mnt-glusterfs-vol1-brick1.vol:3:    option shared-brick-count 0
>>>>>>> /var/lib/glusterd/vols/volumedisk1/volumedisk1.stor2data.mnt-glusterfs-vol1-brick1.vol.rpmsave:3:    option shared-brick-count 0
>>>>>>> /var/lib/glusterd/vols/volumedisk1/volumedisk1.stor3data.mnt-disk_c-glusterfs-vol1-brick1.vol:3:    option shared-brick-count 0
>>>>>>> /var/lib/glusterd/vols/volumedisk1/volumedisk1.stor3data.mnt-disk_c-glusterfs-vol1-brick1.vol.rpmsave:3:    option shared-brick-count 0
>>>>>>> /var/lib/glusterd/vols/volumedisk1/volumedisk1.stor3data.mnt-disk_d-glusterfs-vol1-brick1.vol:3:    option shared-brick-count 0
>>>>>>> /var/lib/glusterd/vols/volumedisk1/volumedisk1.stor3data.mnt-disk_d-glusterfs-vol1-brick1.vol.rpmsave:3:    option shared-brick-count 0
>>>>>>>
>>>>>>> _______________________________________________
>>>>>>> Gluster-users mailing list
>>>>>>> Gluster-users@gluster.org
>>>>>>> http://lists.gluster.org/mailman/listinfo/gluster-users