[Gluster-users] cluster.min-free-disk is not working in distributed disperse volume

Mohamed Pakkeer mdfakkeer at gmail.com
Tue Aug 25 04:10:01 UTC 2015


Hi Susant,
 We created the disperse volume across nodes. We stopped all upload
operations and started the rebalance last night. After the overnight
rebalance, some hard disks are 100% full while other disks still have 13%
free space.
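
For reference, the rebalance was started and is being monitored with the
standard CLI; a minimal sketch, using our volume name glustertest:

  # kick off the rebalance and watch per-node progress
  gluster volume rebalance glustertest start
  gluster volume rebalance glustertest status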

disk1 belongs to disperse-set-0  ..... disk36 belongs to disperse-set-35
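
The brick-to-disperse-set mapping can be confirmed from the volume info
output; a sketch, assuming each consecutive group of 10 bricks in the 8+2
layout forms one disperse set:

  gluster volume info glustertest | grep -E '^Brick[0-9]+:'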

df -h output from one data node:

/dev/sdb1       3.7T  3.7T  545M 100% /media/disk1
/dev/sdc1       3.7T  3.2T  496G  87% /media/disk2
/dev/sdd1       3.7T  3.7T   30G 100% /media/disk3
/dev/sde1       3.7T  3.5T  173G  96% /media/disk4
/dev/sdf1       3.7T  3.2T  458G  88% /media/disk5
/dev/sdg1       3.7T  3.5T  143G  97% /media/disk6
/dev/sdh1       3.7T  3.5T  220G  95% /media/disk7
/dev/sdi1       3.7T  3.3T  415G  89% /media/disk8
/dev/sdj1       3.7T  3.6T   72G  99% /media/disk9
/dev/sdk1       3.7T  3.5T  186G  96% /media/disk10
/dev/sdl1       3.7T  3.6T   65G  99% /media/disk11
/dev/sdm1       3.7T  3.5T  195G  95% /media/disk12
/dev/sdn1       3.7T  3.5T  199G  95% /media/disk13
/dev/sdo1       3.7T  3.6T   78G  98% /media/disk14
/dev/sdp1       3.7T  3.5T  200G  95% /media/disk15
/dev/sdq1       3.7T  3.6T  119G  97% /media/disk16
/dev/sdr1       3.7T  3.5T  206G  95% /media/disk17
/dev/sds1       3.7T  3.5T  193G  95% /media/disk18
/dev/sdt1       3.7T  3.6T  131G  97% /media/disk19
/dev/sdu1       3.7T  3.5T  141G  97% /media/disk20
/dev/sdv1       3.7T  3.5T  243G  94% /media/disk21
/dev/sdw1       3.7T  3.4T  299G  92% /media/disk22
/dev/sdx1       3.7T  3.5T  163G  96% /media/disk23
/dev/sdy1       3.7T  3.5T  168G  96% /media/disk24
/dev/sdz1       3.7T  3.5T  219G  95% /media/disk25
/dev/sdaa1      3.7T  3.7T   37G 100% /media/disk26
/dev/sdab1      3.7T  3.5T  172G  96% /media/disk27
/dev/sdac1      3.7T  3.4T  276G  93% /media/disk28
/dev/sdad1      3.7T  3.6T  108G  98% /media/disk29
/dev/sdae1      3.7T  3.3T  399G  90% /media/disk30
/dev/sdaf1      3.7T  3.5T  240G  94% /media/disk31
/dev/sdag1      3.7T  3.6T  122G  97% /media/disk32
/dev/sdah1      3.7T  3.5T  147G  97% /media/disk33
/dev/sdai1      3.7T  3.4T  342G  91% /media/disk34
/dev/sdaj1      3.7T  3.4T  288G  93% /media/disk35
/dev/sdak1      3.7T  3.4T  342G  91% /media/disk36
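
A quick way to list the bricks that have fallen below the 3% reserve,
straight from plain df output (a sketch; the /media/disk* mount pattern is
from our layout):

  df -h /media/disk* | awk 'NR > 1 && $5+0 >= 97 {print $6, $5}'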

disk1 belongs to disperse-set-0. The rebalance logs show that the
rebalancer is still trying to move files to disperse-set-0 even after it
has reached 100%:

[2015-08-24 19:52:53.036622] E [MSGID: 109023]
[dht-rebalance.c:672:__dht_check_free_space] 0-glustertest-dht: data
movement attempted from node (glustertest-disperse-7) to node
(glustertest-disperse-0) which does not have required free space for
(/Packages/Features/MPEG/A/AMEO-N-CHALLANGE_FTR_S_BEN-XX_IN-UA_51_HD_RIC_OV/AMEO-N-CHALLANGE_FTR_S_BEN-XX_IN-UA_51_HD_20110521_RIC_OV/AMI-NEBO-C_R3_AUDIO_190511.mxf)

[2015-08-24 19:52:53.042026] I [dht-rebalance.c:1002:dht_migrate_file]
0-glustertest-dht:
/Packages/Features/MPEG/A/AMEO-N-CHALLANGE_FTR_S_BEN-XX_IN-UA_51_HD_RIC_OV/AMEO-N-CHALLANGE_FTR_S_BEN-XX_IN-UA_51_HD_20110521_RIC_OV/AMINEBO-CHALLANGE_BEN_R1-2-3-4-5-6_MPEG_200511-reel-5-mpeg2.mxf:
attempting to move from glustertest-disperse-13 to glustertest-disperse-0
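
The rejected targets can also be counted from the rebalance log, to see how
often full disperse sets are still being picked (a sketch; assuming the
default log location /var/log/glusterfs/glustertest-rebalance.log):

  grep 'does not have required free space' \
      /var/log/glusterfs/glustertest-rebalance.log \
    | sed -n 's/.*to node (\(glustertest-disperse-[0-9]*\)).*/\1/p' \
    | sort | uniq -c | sort -rn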

I think cluster.weighted-rebalance and cluster.min-free-disk have bugs in
how they balance data based on brick weight and free disk space.
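
As a stop-gap, the reserve can at least be raised so that DHT refuses to
place new files on these bricks sooner; a sketch (cluster.min-free-disk
accepts a percentage or an absolute size, and I believe
cluster.weighted-rebalance is already on by default):

  gluster volume set glustertest cluster.min-free-disk 10%
  gluster volume set glustertest cluster.weighted-rebalance on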

Thanks
Backer


On Mon, Aug 24, 2015 at 4:28 PM, Mohamed Pakkeer <mdfakkeer at gmail.com>
wrote:

> Hi Susant,
>
>    Thanks for your quick reply. We are not updating any files; we are
> archiving video files on this cluster. I think there is a bug in
> cluster.min-free-disk.
>
> I would also like to know about rebalancing the cluster. Currently we
> have 20 nodes, and the hard disks on 10 of them are almost full, so we
> need to rebalance the data. If I run the rebalancer, it starts on the
> first node (node1) and begins the migration process. CPU usage on node1
> is always high during a rebalance compared with the rest of the cluster
> nodes. To reduce the CPU load on node1, I peered a new node (without
> disks) and started the rebalancer again, but it still ran on node1. How
> can we run the rebalancer on a dedicated node?
>
> We are also seeing memory leaks during fix-layout and full-heal
> operations.
>
> Regards
> Backer
>
> On Mon, Aug 24, 2015 at 2:57 PM, Susant Palai <spalai at redhat.com> wrote:
>
>> Hi,
>>   cluster.min-free-disk controls new file creation on the bricks. If
>> you are writing to existing files on a brick and that is what is
>> filling it up, then you should most likely run a rebalance.
>>
>> Regards,
>> Susant
>>
>> ----- Original Message -----
>> From: "Mathieu Chateau" <mathieu.chateau at lotp.fr>
>> To: "Mohamed Pakkeer" <mdfakkeer at gmail.com>
>> Cc: "gluster-users" <gluster-users at gluster.org>, "Gluster Devel" <
>> gluster-devel at gluster.org>
>> Sent: Monday, 24 August, 2015 2:47:00 PM
>> Subject: Re: [Gluster-users] cluster.min-free-disk is not working in
>> distributed disperse volume
>>
>> 720 bricks! Respect!
>> On 24 August 2015 at 09:48, "Mohamed Pakkeer" <mdfakkeer at gmail.com>
>> wrote:
>>
>> Hi,
>>
>> I have a cluster of 720 bricks; each brick is 4TB in size. I have
>> changed cluster.min-free-disk from its default of 10% to 3%, so every
>> disk should keep at least 3% of its space free. But some cluster disks
>> are getting full now. Is there any additional configuration needed to
>> keep a percentage of disk space free?
>>
>> Volume Name: glustertest
>> Type: Distributed-Disperse
>> Volume ID: 2b575b5c-df2e-449c-abb9-c56cec27e609
>> Status: Started
>> Number of Bricks: 72 x (8 + 2) = 720
>> Transport-type: tcp
>>
>> Options Reconfigured:
>> features.default-soft-limit: 95%
>> cluster.min-free-disk: 3%
>> performance.readdir-ahead: on
>>
>> df -h output from one node:
>>
>> /dev/sdb1       3.7T  3.6T  132G  97% /media/disk1
>> /dev/sdc1       3.7T  3.2T  479G  88% /media/disk2
>> /dev/sdd1       3.7T  3.6T  109G  98% /media/disk3
>>
>> Any help will be greatly appreciated.
>>
>> Regards
>> Backer
>>
>> _______________________________________________
>> Gluster-users mailing list
>> Gluster-users at gluster.org
>> http://www.gluster.org/mailman/listinfo/gluster-users
>>

