<html xmlns:v="urn:schemas-microsoft-com:vml" xmlns:o="urn:schemas-microsoft-com:office:office" xmlns:w="urn:schemas-microsoft-com:office:word" xmlns:m="http://schemas.microsoft.com/office/2004/12/omml" xmlns="http://www.w3.org/TR/REC-html40"><head><meta http-equiv=Content-Type content="text/html; charset=utf-8"><meta name=Generator content="Microsoft Word 15 (filtered medium)"><style><!--
/* Font Definitions */
@font-face
{font-family:"Cambria Math";
panose-1:2 4 5 3 5 4 6 3 2 4;}
@font-face
{font-family:"Yu Gothic";
panose-1:2 11 4 0 0 0 0 0 0 0;}
@font-face
{font-family:Calibri;
panose-1:2 15 5 2 2 2 4 3 2 4;}
@font-face
{font-family:"\@Yu Gothic";
panose-1:2 11 4 0 0 0 0 0 0 0;}
/* Style Definitions */
p.MsoNormal, li.MsoNormal, div.MsoNormal
{margin:0cm;
font-size:11.0pt;
font-family:"Calibri",sans-serif;}
a:link, span.MsoHyperlink
{mso-style-priority:99;
color:blue;
text-decoration:underline;}
span.E-MailFormatvorlage18
{mso-style-type:personal-reply;
font-family:"Calibri",sans-serif;
color:windowtext;}
.MsoChpDefault
{mso-style-type:export-only;
font-family:"Calibri",sans-serif;}
@page WordSection1
{size:612.0pt 792.0pt;
margin:70.85pt 70.85pt 2.0cm 70.85pt;}
div.WordSection1
{page:WordSection1;}
--></style><!--[if gte mso 9]><xml>
<o:shapedefaults v:ext="edit" spidmax="1026" />
</xml><![endif]--><!--[if gte mso 9]><xml>
<o:shapelayout v:ext="edit">
<o:idmap v:ext="edit" data="1" />
</o:shapelayout></xml><![endif]--></head><body lang=DE link=blue vlink=purple style='word-wrap:break-word'><div class=WordSection1><p class=MsoNormal style='mso-margin-top-alt:auto;mso-margin-bottom-alt:auto'>Hello Shreyansh Shah,<o:p></o:p></p><p class=MsoNormal style='mso-margin-top-alt:auto;mso-margin-bottom-alt:auto'><span lang=EN-US>I’m assuming you configured this as a test system, since there’s no redundancy in this setup? From my own experience I’d say that Gluster tries to fill bricks evenly, i.e. the 10TB disk should get 2.5 times as much data as the 4TB disk in a perfect world with lots of smallish files. I say “should”, because this really depends on the hashing algorithm that Gluster uses to decide where to store a file. If you have lots of little files, you’ll get a good distribution across all disks. If you have large files, however, you might end up with several of them put together on the smallest drive. That can happen when you rebalance, too.<br><br>There is an option – features.shard – to split large files into smaller parts (“shards”) that are then in turn distributed across the bricks in your gluster. It might help with overfilling of your smaller drives. However, at least up to Gluster 7.9 it was severely broken, in that delete operations didn’t actually delete all of the shards that had been allocated for large files.<br><br>As for rebalance breaking down – yeah, been there, done that. We were in the unenviable position of having to add two more nodes to a 4x2 distribute-replicate gluster of about 60TB with ~150M small files. Rebalancing took 5 weeks, mainly because we had to restart it twice.<o:p></o:p></span></p><p class=MsoNormal>Best regards,<o:p></o:p></p><p class=MsoNormal>i.A. 
Thomas Bätzler<o:p></o:p></p><p class=MsoNormal>-- <o:p></o:p></p><p class=MsoNormal>BRINGE Informationstechnik GmbH<o:p></o:p></p><p class=MsoNormal>Zur Seeplatte 12<o:p></o:p></p><p class=MsoNormal>D-76228 Karlsruhe<o:p></o:p></p><p class=MsoNormal>Germany<o:p></o:p></p><p class=MsoNormal><o:p> </o:p></p><p class=MsoNormal>Fon: +49 721 94246-0<o:p></o:p></p><p class=MsoNormal>Fon: +49 171 5438457<o:p></o:p></p><p class=MsoNormal>Fax: +49 721 94246-66<o:p></o:p></p><p class=MsoNormal>Web: <a href="http://www.bringe.de/"><span style='color:#0563C1'>http://www.bringe.de/</span></a><o:p></o:p></p><p class=MsoNormal><o:p> </o:p></p><p class=MsoNormal>Geschäftsführer: Dipl.-Ing. (FH) Martin Bringe<o:p></o:p></p><p class=MsoNormal>Ust.Id: DE812936645, HRB 108943 Mannheim<o:p></o:p></p><p class=MsoNormal><o:p> </o:p></p><div style='border:none;border-top:solid #E1E1E1 1.0pt;padding:3.0pt 0cm 0cm 0cm'><p class=MsoNormal><b>From:</b> Shreyansh Shah <shreyansh.shah@alpha-grep.com> <br><b>Sent:</b> Friday, 12 November 2021 08:42<br><b>To:</b> Thomas Bätzler <t.baetzler@bringe.com><br><b>Cc:</b> gluster-users <gluster-users@gluster.org><br><b>Subject:</b> Re: [Gluster-users] Rebalance Issues<o:p></o:p></p></div><p class=MsoNormal><o:p> </o:p></p><div><p class=MsoNormal style='margin-bottom:12.0pt'>Hi Thomas,<br>Thank you for your response. 
Adding the required info below:<o:p></o:p></p><blockquote style='margin-left:30.0pt;margin-right:0cm'><p class=MsoNormal>Volume Name: data<br>Type: Distribute<br>Volume ID: 75410231-bb25-4f14-bcde-caf18fce1d31<br>Status: Started<br>Snapshot Count: 0<br>Number of Bricks: 35<br>Transport-type: tcp<br>Bricks:<br>Brick1: 10.132.1.12:/data/data<br>Brick2: 10.132.1.12:/data1/data<br>Brick3: 10.132.1.12:/data2/data<br>Brick4: 10.132.1.12:/data3/data<br>Brick5: 10.132.1.13:/data/data<br>Brick6: 10.132.1.13:/data1/data<br>Brick7: 10.132.1.13:/data2/data<br>Brick8: 10.132.1.13:/data3/data<br>Brick9: 10.132.1.14:/data3/data<br>Brick10: 10.132.1.14:/data2/data<br>Brick11: 10.132.1.14:/data1/data<br>Brick12: 10.132.1.14:/data/data<br>Brick13: 10.132.1.15:/data/data<br>Brick14: 10.132.1.15:/data1/data<br>Brick15: 10.132.1.15:/data2/data<br>Brick16: 10.132.1.15:/data3/data<br>Brick17: 10.132.1.16:/data/data<br>Brick18: 10.132.1.16:/data1/data<br>Brick19: 10.132.1.16:/data2/data<br>Brick20: 10.132.1.16:/data3/data<br>Brick21: 10.132.1.17:/data3/data<br>Brick22: 10.132.1.17:/data2/data<br>Brick23: 10.132.1.17:/data1/data<br>Brick24: 10.132.1.17:/data/data<br>Brick25: 10.132.1.18:/data/data<br>Brick26: 10.132.1.18:/data1/data<br>Brick27: 10.132.1.18:/data2/data<br>Brick28: 10.132.1.18:/data3/data<br>Brick29: 10.132.1.19:/data3/data<br>Brick30: 10.132.1.19:/data2/data<br>Brick31: 10.132.1.19:/data1/data<br>Brick32: 10.132.1.19:/data/data<br>Brick33: 10.132.0.19:/data1/data<br>Brick34: 10.132.0.19:/data2/data<br>Brick35: 10.132.0.19:/data/data<br>Options Reconfigured:<br>performance.cache-refresh-timeout: 60<br>performance.cache-size: 8GB<br>transport.address-family: inet<br>nfs.disable: on<br>performance.client-io-threads: on<br>storage.health-check-interval: 60<br>server.keepalive-time: 60<br>client.keepalive-time: 60<br>network.ping-timeout: 90<o:p></o:p></p></blockquote><p class=MsoNormal>server.event-threads: 2<o:p></o:p></p></div><p class=MsoNormal><o:p> </o:p></p><div><div><p 
class=MsoNormal>On Fri, Nov 12, 2021 at 1:08 PM Thomas Bätzler <<a href="mailto:t.baetzler@bringe.com">t.baetzler@bringe.com</a>> wrote:<o:p></o:p></p></div><blockquote style='border:none;border-left:solid #CCCCCC 1.0pt;padding:0cm 0cm 0cm 6.0pt;margin-left:4.8pt;margin-right:0cm'><div><div><p class=MsoNormal style='mso-margin-top-alt:auto;mso-margin-bottom-alt:auto'><span lang=EN-US>Hello Shreyansh Shah,</span><o:p></o:p></p><p class=MsoNormal style='mso-margin-top-alt:auto;mso-margin-bottom-alt:auto'><span lang=EN-US> </span><o:p></o:p></p><p class=MsoNormal style='mso-margin-top-alt:auto;margin-bottom:12.0pt'><span lang=EN-US>How is your gluster set up? I think it would be very helpful for our understanding of your setup to see the output of “gluster v info all” annotated with brick sizes.<br><br>Otherwise, how could anybody answer your questions?</span><o:p></o:p></p><p class=MsoNormal style='mso-margin-top-alt:auto;mso-margin-bottom-alt:auto'>Best regards,<o:p></o:p></p><p class=MsoNormal style='mso-margin-top-alt:auto;mso-margin-bottom-alt:auto'>i.A. 
Thomas Bätzler<o:p></o:p></p><p class=MsoNormal style='mso-margin-top-alt:auto;mso-margin-bottom-alt:auto'>-- <o:p></o:p></p><p class=MsoNormal style='mso-margin-top-alt:auto;mso-margin-bottom-alt:auto'>BRINGE Informationstechnik GmbH<o:p></o:p></p><p class=MsoNormal style='mso-margin-top-alt:auto;mso-margin-bottom-alt:auto'>Zur Seeplatte 12<o:p></o:p></p><p class=MsoNormal style='mso-margin-top-alt:auto;mso-margin-bottom-alt:auto'>D-76228 Karlsruhe<o:p></o:p></p><p class=MsoNormal style='mso-margin-top-alt:auto;mso-margin-bottom-alt:auto'>Germany<o:p></o:p></p><p class=MsoNormal style='mso-margin-top-alt:auto;mso-margin-bottom-alt:auto'> <o:p></o:p></p><p class=MsoNormal style='mso-margin-top-alt:auto;mso-margin-bottom-alt:auto'>Fon: +49 721 94246-0<o:p></o:p></p><p class=MsoNormal style='mso-margin-top-alt:auto;mso-margin-bottom-alt:auto'>Fon: +49 171 5438457<o:p></o:p></p><p class=MsoNormal style='mso-margin-top-alt:auto;mso-margin-bottom-alt:auto'>Fax: +49 721 94246-66<o:p></o:p></p><p class=MsoNormal style='mso-margin-top-alt:auto;mso-margin-bottom-alt:auto'>Web: <a href="http://www.bringe.de/" target="_blank"><span style='color:#0563C1'>http://www.bringe.de/</span></a><o:p></o:p></p><p class=MsoNormal style='mso-margin-top-alt:auto;mso-margin-bottom-alt:auto'> <o:p></o:p></p><p class=MsoNormal style='mso-margin-top-alt:auto;mso-margin-bottom-alt:auto'>Geschäftsführer: Dipl.-Ing. 
(FH) Martin Bringe<o:p></o:p></p><p class=MsoNormal style='mso-margin-top-alt:auto;mso-margin-bottom-alt:auto'>Ust.Id: DE812936645, HRB 108943 Mannheim<o:p></o:p></p><p class=MsoNormal style='mso-margin-top-alt:auto;mso-margin-bottom-alt:auto'> <o:p></o:p></p><div style='border:none;border-top:solid #E1E1E1 1.0pt;padding:3.0pt 0cm 0cm 0cm'><p class=MsoNormal style='mso-margin-top-alt:auto;mso-margin-bottom-alt:auto'><b>From:</b> Gluster-users <<a href="mailto:gluster-users-bounces@gluster.org" target="_blank">gluster-users-bounces@gluster.org</a>> <b>On Behalf Of </b>Shreyansh Shah<br><b>Sent:</b> Friday, 12 November 2021 07:31<br><b>To:</b> gluster-users <<a href="mailto:gluster-users@gluster.org" target="_blank">gluster-users@gluster.org</a>><br><b>Subject:</b> [Gluster-users] Rebalance Issues<o:p></o:p></p></div><p class=MsoNormal style='mso-margin-top-alt:auto;mso-margin-bottom-alt:auto'> <o:p></o:p></p><div><p class=MsoNormal style='mso-margin-top-alt:auto;margin-bottom:12.0pt'>Hi All,<br><br>I have a distributed glusterfs 5.10 setup with 8 nodes, each with one 10 TB disk and three 4 TB disks (so 22 TB total per node).<br>Recently I added a new node with 3 additional disks (1 x 10TB + 2 x 8TB). After this I ran a rebalance, and it does not seem to complete successfully (adding the output of gluster volume rebalance data status below). 
On a few nodes the status shows failed, and on the node where it shows completed, the data is not evenly balanced.<o:p></o:p></p><blockquote style='margin-left:30.0pt;margin-top:5.0pt;margin-right:0cm;margin-bottom:5.0pt'><p class=MsoNormal style='mso-margin-top-alt:auto;mso-margin-bottom-alt:auto'>root@gluster6-new:~# gluster v rebalance data status<br> Node Rebalanced-files size scanned failures skipped status run time in h:m:s<br> --------- ----------- ----------- ----------- ----------- ----------- ------------ --------------<br> localhost 22836 2.4TB 136149 1 27664 in progress 14:48:56<br> 10.132.1.15 80 5.0MB 1134 3 121 failed 1:08:33<br> 10.132.1.14 18573 2.5TB 137827 20 31278 in progress 14:48:56<br> 10.132.1.12 607 61.3MB 1667 5 60 failed 1:08:33<br> gluster4.c.storage-186813.internal 26479 2.8TB 148402 14 38271 in progress 14:48:56<br> 10.132.1.18 86 6.4MB 1094 5 70 failed 1:08:33<br> 10.132.1.17 21953 2.6TB 131573 4 26818 in progress 14:48:56<br> 10.132.1.16 56 45.0MB 1203 5 111 failed 1:08:33<br> 10.132.0.19 3108 1.9TB 224707 2 160148 completed 13:56:31<br>Estimated time left for rebalance to complete : 22:04:28<o:p></o:p></p></blockquote><p class=MsoNormal style='mso-margin-top-alt:auto;mso-margin-bottom-alt:auto'><br>Below is the 'df -h' output for the node marked as completed in the status above; the data does not seem to be evenly balanced.<o:p></o:p></p><blockquote style='margin-left:30.0pt;margin-top:5.0pt;margin-right:0cm;margin-bottom:5.0pt'><p class=MsoNormal style='mso-margin-top-alt:auto;mso-margin-bottom-alt:auto'>root@gluster-9:~$ df -h /data*<br>Filesystem Size Used Avail Use% Mounted on<br>/dev/bcache0 10T 8.9T 1.1T 90% /data<br>/dev/bcache1 8.0T 5.0T 3.0T 63% /data1<br>/dev/bcache2 8.0T 5.0T 3.0T 63% /data2<o:p></o:p></p></blockquote><p class=MsoNormal style='mso-margin-top-alt:auto;mso-margin-bottom-alt:auto'><br clear=all><o:p></o:p></p><div><p class=MsoNormal style='mso-margin-top-alt:auto;margin-bottom:12.0pt'><br>I would 
appreciate any help in identifying the issues here:<o:p></o:p></p></div><div><p class=MsoNormal style='mso-margin-top-alt:auto;mso-margin-bottom-alt:auto'>1. Failures during rebalance.<br>2. Imbalance in data size after the gluster rebalance command.<o:p></o:p></p></div><div><p class=MsoNormal style='mso-margin-top-alt:auto;margin-bottom:12.0pt'>3. Another thing I would like to mention is that we had to rebalance twice, since in the initial run one of the new disks on the new node (10 TB) got 100% full. Any thoughts as to why this could happen during rebalance? The disks on the new node were completely blank before the rebalance.<br>4. Does glusterfs rebalance data based on percentage used or absolute free disk space available?<br><br>I can share more details/logs if required. Thanks.<o:p></o:p></p></div><p class=MsoNormal style='mso-margin-top-alt:auto;mso-margin-bottom-alt:auto'>-- <o:p></o:p></p><div><div><div><div><div><div><div><p class=MsoNormal style='mso-margin-top-alt:auto;mso-margin-bottom-alt:auto'><span style='font-family:"Arial",sans-serif'>Regards,<br>Shreyansh Shah</span><o:p></o:p></p><div><p class=MsoNormal style='mso-margin-top-alt:auto;mso-margin-bottom-alt:auto'><b><span style='color:#0B5394'>Alpha</span><span style='color:#666666'>Grep</span><span style='color:black'> Securities Pvt. Ltd.</span></b><o:p></o:p></p></div></div></div></div></div></div></div></div></div></div></div></blockquote></div><p class=MsoNormal><br clear=all><o:p></o:p></p><div><p class=MsoNormal><o:p> </o:p></p></div><p class=MsoNormal>-- <o:p></o:p></p><div><div><div><div><div><div><div><p class=MsoNormal><span style='font-family:"Arial",sans-serif'>Regards,<br>Shreyansh Shah</span><o:p></o:p></p><div><p class=MsoNormal><b><span style='color:#0B5394'>Alpha</span><span style='color:#666666'>Grep</span><span style='color:black'> Securities Pvt. Ltd.</span></b><o:p></o:p></p></div></div></div></div></div></div></div></div></div></body></html>