[Gluster-users] Bricks filling up
Joe Julian
joe at julianfamily.org
Tue Apr 16 22:00:26 UTC 2013
You found a bug. Patched it. And never reported it, nor contributed the
patch??? Grrr... :P
On 04/16/2013 02:58 PM, Ling Ho wrote:
> No, this is our in-house patch. A similar fix is still not in 3.3.2qa,
> but it is in 3.4.0alpha. I couldn't find any bug report for it, if one
> was ever filed.
> ...
> ling
>
> On 04/16/2013 02:15 PM, Thomas Wakefield wrote:
>> Do you have the bug # for this patch?
>>
>> On Apr 16, 2013, at 3:48 PM, Ling Ho <ling at slac.stanford.edu> wrote:
>>
>>> Maybe I was wrong. I just did a diff, and it looks like the fix is
>>> not in 3.3.1. This is the patch I applied to my 3.3.0 build. I
>>> didn't fix the check for inodes, though. If you look at the code,
>>> max is defined as 0.
>>>
>>> --- glusterfs-3.3.0.orig/xlators/cluster/dht/src/dht-diskusage.c  2012-05-30 10:53:24.000000000 -0700
>>> +++ glusterfs-3.3.0-slac/xlators/cluster/dht/src/dht-diskusage.c  2013-03-20 02:25:53.761415662 -0700
>>> @@ -263,14 +263,14 @@
>>>  {
>>>          for (i = 0; i < conf->subvolume_cnt; i++) {
>>>                  if (conf->disk_unit == 'p') {
>>> -                        if ((conf->du_stats[i].avail_percent > max)
>>> +                        if ((conf->du_stats[i].avail_percent > conf->min_free_disk)
>>>                              && (conf->du_stats[i].avail_inodes > max_inodes)) {
>>>                                  max = conf->du_stats[i].avail_percent;
>>>                                  max_inodes = conf->du_stats[i].avail_inodes;
>>>                                  avail_subvol = conf->subvolumes[i];
>>>                          }
>>>                  } else {
>>> -                        if ((conf->du_stats[i].avail_space > max)
>>> +                        if ((conf->du_stats[i].avail_space > conf->min_free_disk)
>>>                              && (conf->du_stats[i].avail_inodes > max_inodes)) {
>>>                                  max = conf->du_stats[i].avail_space;
>>>                                  max_inodes = conf->du_stats[i].avail_inodes;
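>>>
>>> For context, the two running maxima are declared at the top of that
>>> function, roughly like this (paraphrased from 3.3.0's
>>> dht-diskusage.c, not copied verbatim):
>>>
>>>         double max        = 0;  /* best avail space/percent so far */
>>>         double max_inodes = 0;  /* best avail inode count so far   */
>>>
>>> Because both start at zero, the unpatched test accepts the first
>>> brick that has any free space and any free inodes at all, and a
>>> later brick has to beat it on both counts at once to displace it.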
>>>
>>>
>>> ...
>>> ling
>>>
>>>
>>> On 04/16/2013 12:38 PM, Thomas Wakefield wrote:
>>>> Running 3.3.1 on everything, clients and servers :(
>>>>
>>>> Thomas Wakefield
>>>> Sr Sys Admin @ COLA
>>>> 301-902-1268
>>>>
>>>>
>>>>
>>>> On Apr 16, 2013, at 3:23 PM, Ling Ho <ling at slac.stanford.edu> wrote:
>>>>
>>>>> On 04/15/2013 06:35 PM, Thomas Wakefield wrote:
>>>>>> Help-
>>>>>>
>>>>>> I have multiple gluster filesystems, all with the setting
>>>>>> cluster.min-free-disk: 500GB. My understanding is that this
>>>>>> setting should stop new writes to any brick with less than 500GB
>>>>>> of free space, though existing files on it might still grow,
>>>>>> which is why I went with a number as high as 500GB. But I am
>>>>>> still getting full bricks; frequently it's the first brick in
>>>>>> the cluster that suddenly fills up.
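>>>>>>
>>>>>> The option was set and checked the usual way (volume name
>>>>>> substituted):
>>>>>>
>>>>>>   gluster volume set <volname> cluster.min-free-disk 500GB
>>>>>>   gluster volume info <volname> | grep min-free-disk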
>>>>>>
>>>>>> Can someone tell me how gluster chooses where to write a file,
>>>>>> and why min-free-disk is being ignored?
>>>>>>
>>>>>> Running 3.3.1 currently on all servers.
>>>>>>
>>>>>> Thanks,
>>>>>> -Tom
>>>>> Make sure you are running 3.3.1 on all the clients as well. The
>>>>> placement is determined by the clients. I noticed there is a fix
>>>>> in 3.3.1 which is not in 3.3.0. In 3.3.0, when the hashed brick
>>>>> is too full, the client tries writing to the next brick, which
>>>>> ends up being the 1st brick, but it only checks that the brick is
>>>>> not 100% (completely) full. If the brick has even 1 byte left,
>>>>> the client will start writing to it, and that's why the 1st brick
>>>>> gets filled up.
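>>>>>
>>>>> Roughly, the client-side selection works like this (a sketch only
>>>>> -- the helper names below are invented for illustration, not the
>>>>> real GlusterFS functions):
>>>>>
>>>>>         /* Sketch: how a 3.3.0 client can end up on brick 0.    */
>>>>>         int best = hashed_subvol;      /* normal DHT placement  */
>>>>>         uint64_t max = 0, max_inodes = 0;
>>>>>         for (int i = 0; i < subvol_count; i++) {
>>>>>                 /* Buggy test: with max == 0, the first brick
>>>>>                  * with ANY free space and ANY free inodes is
>>>>>                  * accepted; a later brick must beat it on both
>>>>>                  * counts at once. The fix compares free space
>>>>>                  * against min-free-disk instead of max.        */
>>>>>                 if (avail_bytes(i) > max &&
>>>>>                     avail_inodes(i) > max_inodes) {
>>>>>                         max        = avail_bytes(i);
>>>>>                         max_inodes = avail_inodes(i);
>>>>>                         best       = i;
>>>>>                 }
>>>>>         }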
>>>>>
>>>>> ...
>>>>> ling
>
> _______________________________________________
> Gluster-users mailing list
> Gluster-users at gluster.org
> http://supercolony.gluster.org/mailman/listinfo/gluster-users