[Gluster-users] What will happen if one file's size exceeds the available node's hard drive capacity?

Dan Bretherton d.a.bretherton at reading.ac.uk
Wed May 4 14:57:14 UTC 2011


Hello Anand-

> If you set a limit of minimum free disk space, then GlusterFS will stop
> scheduling new files to any bricks exceeding this limit.

Can you please explain how to do this in version 3.1.x?  Does the 
min-free-disk server vol file option still exist (from 3.0.x), and if so 
is there a CLI command to set it?
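[Editor's note: a sketch of how this might look, assuming the 3.1.x CLI exposes the option as `cluster.min-free-disk`; the volume name `myvol` and the 10% threshold are hypothetical, and the commands require a live cluster.]

```shell
# Hypothetical: tell the scheduler to skip bricks with less than 10% free space
gluster volume set myvol cluster.min-free-disk 10%
gluster volume info myvol   # confirm the option took effect
```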

-Dan.

> Message: 3
> Date: Tue, 3 May 2011 14:50:18 +0530
> From: Anand Babu Periasamy<ab at gluster.com>
> Subject: Re: [Gluster-users] What will happen if one file's size exceeds
> 	the available node's hard drive capacity?
> To: Yueyu Lin<yueyu.lin at me.com>
> Cc: gluster-users at gluster.org
> Message-ID:<BANLkTin6t+XnmkRsqQ9LQE9UA96+2eRpXg at mail.gmail.com>
> Content-Type: text/plain; charset="iso-8859-1"
>
> Hi Yueyu,
> Thanks for posting answers yourself. Let me give a little bit of background
> about this.
>
> The application will get an error message, as if the disk were full. If you
> simply copy the file to a different name and rename it back, the copy will
> be rescheduled to a different node.
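[Editor's note: the copy-then-rename workaround can be sketched as below. This simulates it in a temporary directory so it is self-contained; on a real deployment the commands would run inside the GlusterFS FUSE mount, where creating a new file lets the distribute translator pick a brick with enough free space. All paths are hypothetical.]

```shell
# Simulate the copy-then-rename trick in a scratch directory
dir=$(mktemp -d)
printf 'payload\n' > "$dir/bigfile"
cp "$dir/bigfile" "$dir/bigfile.tmp"   # new file -> new brick placement on GlusterFS
mv "$dir/bigfile.tmp" "$dir/bigfile"   # rename keeps the new placement
cat "$dir/bigfile"                     # contents are unchanged
rm -r "$dir"
```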
>
> Most disks are in the terabyte range; it doesn't make sense to optimize at
> that level. Block-layer striping is often not scalable and requires a
> complicated backend disk structure.
>
> If you set a minimum-free-disk limit, then GlusterFS will stop scheduling
> new files to any bricks that have crossed this limit. You can use the
> remaining free space to grow existing files. You can also use volume
> rebalance to physically move files across bricks and balance capacity
> utilization.
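[Editor's note: the expand-then-rebalance sequence might look like the following; the volume name and brick paths are hypothetical (taken from the example below), and the commands require a live cluster.]

```shell
# Hypothetical: add a new replica pair, then spread existing data onto it
gluster volume add-brick myvol 192.168.1.152:/home/export/dfsStore \
                               192.168.1.153:/home/export/dfsStore
gluster volume rebalance myvol start
gluster volume rebalance myvol status   # watch migration progress
```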
>
> Think of choosing a 128k block size and wasting disk space on 4k files. Not
> even disk filesystems optimize capacity utilization to fill every remaining
> sector. GlusterFS has to cope with the same problem at a much larger scale.
> That's where the trade-off is.
>
> BTW, it would be great if you could post this question on
> http://community.gluster.org as well. It will become part of the Gluster
> knowledge base.
>
> -AB
>
>
> On Tue, May 3, 2011 at 2:39 PM, Yueyu Lin<yueyu.lin at me.com>  wrote:
>
>> I just ran the experiment. The answer is no: Distributed-Replicate mode
>> won't split files for the application. The application has to split the
>> huge file manually.
>> On May 3, 2011, at 4:48 PM, Yueyu Lin wrote:
>>
>>> Hi, all
>>>     I have a question about a capacity problem in a GlusterFS cluster
>> system.
>>>     Supposedly, we have a cluster configuration like this:
>>>
>>>     Type: Distributed-Replicate
>>>     Number of Bricks: 2 x 1 = 2
>>>     Brick1: 192.168.1.150:/home/export
>>>     Brick2: 192.168.1.151:/home/export
>>>
>>>     If there are only 15 GB available on these two servers, and I
>> need to copy a 20 GB file to the mounted directory, obviously the
>> space is not enough.
>>>     Then I add two bricks of 15 GB each to the cluster. The structure becomes:
>>>
>>>     Type: Distributed-Replicate
>>>     Number of Bricks: 2 x 2 = 4
>>>     Bricks:
>>>     Brick1: 192.168.1.152:/home/export/dfsStore
>>>     Brick2: 192.168.1.153:/home/export/dfsStore
>>>     Brick3: 192.168.1.150:/home/export/dfsStore
>>>     Brick4: 192.168.1.151:/home/export/dfsStore
>>>
>>>     Now I copy the file again to the mounted directory. The client
>> shows more than 20 GB of space available, but what will happen when I
>> copy the huge file, since no single brick has enough space to hold
>> it?
>>>     Thanks a lot.
>>> _______________________________________________
>>> Gluster-users mailing list
>>> Gluster-users at gluster.org
>>> http://gluster.org/cgi-bin/mailman/listinfo/gluster-users
>>
>
>


