[Gluster-users] No space left on device (when there is actually lots of free space)
Kali Hernandez
kali at thenetcircle.com
Tue Apr 6 08:46:10 UTC 2010
On 04/06/2010 04:32 PM, Krzysztof Strasburger wrote:
> On Tue, Apr 06, 2010 at 02:51:49PM +0800, Kali Hernandez wrote:
>
>> So basically this means there is no really good solution as of glusterfs 3.0?
>>
> As of now, there probably is not. IMHO it would be useful to add an
> option to DHT to use a load-balancing approach instead of the hash function.
> Combined with no-hashed-lookup, this would effectively restore the
> functionality of unify, at a cost of stat'ing each filesystem before file
> creation. I understand that this approach does not scale, but the additional
> cost is acceptable for a small number of subvolumes.
>
I'm not really sure what the best option would be. However, IMHO too,
this limitation defeats the whole purpose of glusterfs. What use do I
have for a distributed filesystem which is (eventually) unable to store
a file when it actually has free space to allocate it? In an
environment where a lot of small files are stored mixed with some
other (not so many) huge ones, you will most probably run into a
situation where the cluster reports no free space even though there is.
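Just to illustrate what the difference boils down to, here is a rough
sketch (pseudo-Python, made-up brick paths, nothing to do with the real
GlusterFS code): with DHT the hash of the file name alone picks the
subvolume, so a create can fail as soon as that one brick is full, while
a scheduler that stats every subvolume first can keep using whichever
brick still has room.

# Rough illustration only, NOT GlusterFS code; brick paths are made up.
import os
import zlib

SUBVOLUMES = ["/export/brick1", "/export/brick2", "/export/brick3"]

def free_bytes(path):
    st = os.statvfs(path)
    return st.f_bavail * st.f_frsize

def pick_by_hash(filename):
    # DHT-style: the file name alone decides the brick; if that brick
    # is full the create fails, no matter how empty the others are.
    return SUBVOLUMES[zlib.crc32(filename.encode()) % len(SUBVOLUMES)]

def pick_by_free_space(filename):
    # unify/ALU-style: stat every brick first (that is the extra cost)
    # and place the new file on the one with the most free space.
    return max(SUBVOLUMES, key=free_bytes)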
>> Does the Unify translator work properly in 2.0.x?
>>
> Seems to work, I'm using it ;).
>
I have just downgraded back to 2.0 and I'm now trying Unify.
However, copying all the data back into the cluster (500+ Gb) over the
net is a real pain and will take a lot of time given the read/write
performance (I have all the data on another glusterfs volume, and
reading from one while copying to the new one results in ~2.5 Mb/s
effective speed).
The worst point of using Unify, for me, is the need for the namespace
child. As I can't risk having a SPOF there, I had to take 2 nodes out
to build the namespace node, thus losing ~40 Gb of effective storage
size. Any better config suggestion is more than welcome :-)
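For reference, the kind of client-side layout I have in mind is roughly
the following (volume and brick names are placeholders, and the ALU
options are from memory, so please check them against the 2.0 docs
before relying on them): mirror two small bricks for the namespace so it
is not a SPOF, and let unify's ALU scheduler place new files on
whichever brick has the most free space.

volume ns-replica
  type cluster/replicate          # namespace mirrored on two bricks -> no SPOF
  subvolumes ns1 ns2
end-volume

volume unify0
  type cluster/unify
  option namespace ns-replica
  option scheduler alu            # place new files by free space, not by a hash
  option alu.order disk-usage
  option alu.limits.min-free-disk 5%
  subvolumes brick1 brick2 brick3 brick4
end-volume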
-kali-