[Gluster-users] "No space left on device" problem

Julien Cornuwel julien at cornuwel.net
Wed Apr 29 19:05:50 UTC 2009


Hi,

Maybe it's just a typo, but your nufa volume refers to 'replicate3',
which doesn't seem to exist...
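Presumably it should point at 'replicate2' instead, i.e. something like:

volume nufa
  type cluster/nufa
  option local-volume-name replicate1
  subvolumes replicate1 replicate2
end-volume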

Anyway, I was wondering whether it was possible to use nufa over
replicated volumes, and it seems it is. I might set up such a cluster
soon. Can you tell me about the performance you get with it?
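Also, about the "No space left on device" error itself: nufa (like
distribute) puts a whole file on a single subvolume, chosen when the
file is created, so a write won't spill over to replicate2 once
replicate1 fills up in the middle of the copy. If I remember correctly,
recent distribute/nufa translators have a min-free-disk option that
makes the scheduler avoid creating new files on nearly-full subvolumes;
I'm not sure it exists in the release you are running, so treat this as
a guess and check the translator options for your version:

volume nufa
  type cluster/nufa
  option local-volume-name replicate1
  # guess: create new files elsewhere when a subvolume has <10% free
  option min-free-disk 10%
  subvolumes replicate1 replicate2
end-volume

Note that even with that, a single file bigger than the free space on
its subvolume will still fail, since files are never split across
subvolumes.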

Regards,


wenaideyu wenaideyu wrote:
> hi
> 
>   I set up 4 servers for glusterfs: sev1 and sev2 were used for
> replicate1, sev3 and sev4 were used for replicate2, and replicate1 and
> replicate2 were combined with nufa. Every server has the following
> server.vol:
> 
> server.vol
> 
> volume posix
>   type storage/posix
>   option directory /mnt/data1
> end-volume
> 
> volume locks
>   type features/locks
>   subvolumes posix
> end-volume
> 
> volume brick
>   type performance/io-threads
>   option thread-count 8
>   subvolumes locks
> end-volume
> 
> volume server
>   type protocol/server
>   option transport-type tcp
>   option auth.addr.brick.allow *
>   subvolumes brick
> end-volume
> 
> 
> client.vol in sev1
> volume sev1
>   type protocol/client
>   option transport-type tcp
>   option remote-host sev1
>   option remote-subvolume brick
> end-volume
> 
> volume sev2
>   type protocol/client
>   option transport-type tcp
>   option remote-host sev2
>   option remote-subvolume brick
> end-volume
> 
> volume sev3
>   type protocol/client
>   option transport-type tcp
>   option remote-host sev3
>   option remote-subvolume brick
> end-volume
> 
> volume sev4
>   type protocol/client
>   option transport-type tcp
>   option remote-host sev4
>   option remote-subvolume brick
> end-volume
> 
> volume replicate1
>   type cluster/replicate
>   subvolumes sev1 sev2
> end-volume
> 
> volume replicate2
>   type cluster/replicate
>   subvolumes sev3 sev4
> end-volume
> 
> volume nufa
>   type cluster/nufa
>   option local-volume-name replicate1
>   subvolumes replicate1 replicate3
> end-volume
> 
> volume writebehind
>   type performance/write-behind
>   option page-size 128KB
>   option cache-size 1MB
>   subvolumes nufa
> end-volume
> 
> volume cache
>   type performance/io-cache
>   option cache-size 512MB
>   subvolumes writebehind
> end-volume
> 
> Every server has a disk mounted on /mnt/data1 with a capacity of 40GB,
> so the total capacity of the glusterfs is 80GB (two replicated pairs).
> 
> The client is on sev1 and mounts the glusterfs on /mnt/data.
> 
> Now the problem is this: when replicate1 is nearly full, for example
> when replicate1 has only 50MB free, and I copy a file A bigger than
> 50MB into /mnt/data, file A is created on replicate1. Once replicate1
> fills up, the rest of file A cannot be written to replicate2, and the
> log on sev1 or sev2 shows:
> 
> [posix.c:1736:posix_writev] posix: writev failed: No space left on device
> 
> Can anybody help me? Thanks very much.
> 
> 




