[Gluster-devel] unify: No space left on device

Albert Shih Albert.Shih at obspm.fr
Fri Nov 30 10:46:47 UTC 2007


 On 29/11/2007 at 15:35:55 -0800, Kevan Benson wrote:
> Albert Shih wrote:
>>  On 29/11/2007 at 12:01:47 -0800, Kevan Benson wrote:
>>> Unify doesn't split files.  It just abstracts which server the file is
>>> stored on and retrieved from, and the aggregation of the files from all
>>> the servers when a listing is done.  What you should be seeing in this
>>> case is that the *whole file* is written to a node that does have space
>>> (if there is a node that has more than 5% space available).
>> OK.
>> I'm not sure I understand this. When I start the copy, the filesystem
>> is 99% full; do you mean gluster must not write on this node?
>>
>>> If you used the striping translator above the unify, you would be storing
>>> chunks of files (if the files are large enough), and the chunks would be
>>> scattered across the nodes.  That would allow for more efficient space
>>> usage.
>> OK. For me it's not a space problem, but I need to know how to configure
>> glusterfs for my HPC cluster.
> 
> I think I missed the gist of your problem before.  Let me know if this sums 
> it up:
> 
> 1) You are using NUFA to prioritize the local node access
> 2) The local node is close to full, but the other nodes are not

Yes for 1 and 2.
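
For reference, my unify section looks roughly like this (just a sketch
with placeholder volume names; the nufa option names are what I
understood from the 1.3 docs, so correct me if I have them wrong):

	volume unify0
	  type cluster/unify
	  option namespace brick-ns                  # namespace volume required by unify
	  option scheduler nufa                      # prefer the local node for new files
	  option nufa.local-volume-name brick-node2  # the brick on this machine
	  option nufa.limits.min-free-disk 5%        # skip a brick below 5% free (if I read the docs right)
	  subvolumes brick-node1 brick-node2 brick-node3
	end-volume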

> 3) When writing from that node, a no-free-disk-space error is returned,
> even though there is plenty of space on other nodes

Well... what I'm trying to describe is this:

	One node is close to full (for example, node2 has only 5 GB free,
	while the other nodes have 120 GB free).

	The other nodes are almost completely free.

	From node2 I write a single 7 GB file. Because gluster starts
	writing on node2 (where there are only 5 GB free) and doesn't
	split the file in two (or more) pieces, it fills up node2 and
	returns an error.
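
If I understand Kevan's suggestion, putting the striping translator
above, with the per-node volumes as its subvolumes, would look roughly
like this (again only a sketch: the volume names are placeholders and I
am guessing the pattern:size block-size syntax from the docs):

	volume stripe0
	  type cluster/stripe
	  option block-size *:1MB      # pattern:size -- cut matching files into 1 MB chunks
	  subvolumes client-node1 client-node2 client-node3
	end-volume

That way the 7 GB file written from node2 would be spread in chunks
across all the nodes instead of landing entirely on node2.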

Regards.



--
Albert SHIH
Observatoire de Paris Meudon
SIO batiment 15
Heure local/Local time:
Fri 30 Nov 2007 11:43:11 CET




