[Gluster-users] Evenly distribute the data among many directories!
Gilberto Ferreira
gilberto.nunes32 at gmail.com
Mon Jun 9 16:52:21 UTC 2025
>> When you rebalance VM disks I honestly don't know what is going to happen,
>> because I doubt they will be writable during transit.
You mean that the VM disks could become read-only? And crash the systems
inside the VMs, I guess!
---
Gilberto Nunes Ferreira
+55 (47) 99676-7530 - Whatsapp / Telegram
On Mon, Jun 9, 2025 at 13:23, Andreas Schwibbe <a.schwibbe at gmx.net> wrote:
> When adding bricks, the new layout is populated, but only for new files. No
> automatic rebalance (of old files) is done.
> When you rebalance VM disks I honestly don't know what is going to happen,
> because I doubt they will be writable during transit.
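>
> A minimal sketch of the commands this maps to (VOL is the placeholder volume
> name used elsewhere in this thread; run on any node of the trusted pool):
>
>   # only recompute the directory layout so new bricks receive new files
>   gluster volume rebalance VOL fix-layout start
>
>   # additionally migrate existing files onto the new bricks
>   gluster volume rebalance VOL start
>
>   # watch progress and check for failures
>   gluster volume rebalance VOL status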
>
> 09.06.2025 16:57:05 Gilberto Ferreira <gilberto.nunes32 at gmail.com>:
>
> Hi Andreas
> Thanks for your reply.
> Initially I had 3 directories on both servers, like:
> server1|server2
> /data1
> /data2
> /data3
>
> Some time after that, about 6 months after creating the GlusterFS volume, I
> added 3 more disks to both servers.
> Now it looks like this:
> server1|server2
> /data1
> /data2
> /data3
> /data4
> /data5
> /data6
>
> I never performed any rebalance or fix-layout, because it was never needed.
> Now that 2 directories are nearly full, this situation has come up!
>
> Is there any impact if I took too long to perform the rebalance?
> Honestly, I never know whether a rebalance is needed when adding new bricks.
> This should be in the gluster add-brick command output, something like this:
> "You'll need to rebalance or fix the layout of your volume after add-brick."
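>
> As a hypothetical sketch of that workflow (the volume name and brick paths
> are placeholders, assuming a volume replicated across server1 and server2):
>
>   # add one new brick per server to the existing volume
>   gluster volume add-brick VOL server1:/data4/brick server2:/data4/brick
>
>   # nothing is migrated automatically; trigger it explicitly afterwards
>   gluster volume rebalance VOL fix-layout start   # layout only
>   gluster volume rebalance VOL start              # layout plus data migration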
> Anyway, thanks so much for your help.
>
> Best regards.
>
> On Mon, Jun 9, 2025 at 05:07, Andreas Schwibbe <a.schwibbe at gmx.net> wrote:
>
>> Hi Gilberto,
>>
>> maybe you should set:
>> cluster.rebal-throttle: lazy
>>
>> so the rebalance does not consume too many resources.
>> This has always worked for me, but it does have an impact on responsiveness.
>> You can, however, stop a running rebalance if the performance impact on the
>> production environment is too high.
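>>
>> For reference, a minimal sketch of how that looks on the CLI (VOL is the
>> placeholder volume name; the throttle can be set back to normal later):
>>
>>   # throttle the rebalance so it competes less with production I/O
>>   gluster volume set VOL cluster.rebal-throttle lazy
>>
>>   # start the rebalance, and stop it again if the impact is still too high
>>   gluster volume rebalance VOL start
>>   gluster volume rebalance VOL stop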
>>
>> Fix-layout only helps if you added bricks to the cluster later; it won't
>> help if you store large files in a single directory.
>>
>> Best
>> A.
>>
>> On Sunday, 08.06.2025 at 22:34 -0300, Gilberto Ferreira wrote:
>>
>> Hello
>> Can anybody help me?
>> I need to know whether the rebalance carries any risk.
>> I have 7 Kingston DataCenter DC600M SSDs and need to know whether running
>> the fix-layout and rebalance will take a long time, or whether there is any
>> danger in doing that 'online'.
>> Thanks
>>
>> On Tue, Jun 3, 2025 at 10:25, Gilberto Ferreira <gilberto.nunes32 at gmail.com> wrote:
>>
>> Or
>> gluster vol rebalance VOL start ??
>>
>> On Tue, Jun 3, 2025 at 10:24, Gilberto Ferreira <gilberto.nunes32 at gmail.com> wrote:
>>
>> Hi there..
>>
>> I have two servers with the following disk layout:
>>
>> /dev/sdc1 1,8T 1,6T 178G 91% /data1
>> /dev/sdd1 1,8T 1,6T 175G 91% /data2
>> /dev/sdb1 1,8T 225G 1,6T 13% /data3
>> /dev/fuse 128M 44K 128M 1% /etc/pve
>> gluster1:stg-vms 8,8T 3,0T 5,8T 34% /stg/vms
>> /dev/sdf1 1,8T 108G 1,7T 7% /data4
>> /dev/sdg1 1,8T 108G 1,7T 7% /data5
>> /dev/sdh1 1,8T 105G 1,7T 6% /data6
>>
>> As you can see, /data1 and /data2 are almost full.
>> Question is:
>> Can I run gluster vol rebalance VOL fix-layout to fix this?
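>>
>> (For context, a minimal sketch of how to check the imbalance from gluster
>> itself rather than from df; VOL is again a placeholder:)
>>
>>   # volume type and brick ordering
>>   gluster volume info VOL
>>
>>   # per-brick total and free disk space, plus inode counts
>>   gluster volume status VOL detail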
>>
>> Best regards.
>>