[Gluster-devel] How namespace works in mainline-2.5?

Dale Dude dale at oc3networks.com
Thu Jun 21 18:31:54 UTC 2007


Thanks, Amar, for the confirmations. I just checked out up to patch 185 
and haven't had an issue yet while rsyncing to the volume. It's been 
running for about half an hour now. Love the new debug output.

As for the doc/example: I see that cluster-client.vol was fixed, but 
bricks-ns isn't configured in any of the cluster-server#.vol files.
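
If I understand the new layout, one of the cluster-server#.vol files
would need to export a namespace brick as well, roughly like this (the
directory path and the auth line are just my guesses):

volume bricks-ns
  type storage/posix                 # plain posix backend holding only the namespace
  option directory /export/namespace
end-volume

volume server
  type protocol/server
  option transport-type tcp/server
  option auth.ip.bricks-ns.allow *   # wide open; fine for testing only
  subvolumes bricks-ns
end-volume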

Huge Regards,
Dale

Amar S. Tumballi wrote:
> Hi,
>  Sorry for the sparse info regarding the changes made to unify in the 
> 2.5 branch. We haven't updated the wiki yet, as the release is not yet 
> made, but the wrong example in doc/* is my mistake :(
>
>  Yes, you have figured it right: namespace is NOT a subvolume of 
> unify. Also, about the schedulers complaining about not enough 
> space: all the schedulers now use a percentage-based min-disk-size 
> (option alu.limits.min-disk-size 10 # don't consider the node for 
> scheduling if its free disk space is less than 10%).
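>
>  To make that concrete, the client side would look roughly like this
> (the volume names, remote-host, and the alu.order value here are only
> illustrative):
>
> volume client-ns
>   type protocol/client
>   option transport-type tcp/client
>   option remote-host 192.168.0.1      # server that exports the namespace brick
>   option remote-subvolume bricks-ns
> end-volume
>
> volume unify
>   type cluster/unify
>   option namespace client-ns          # named here, NOT listed in subvolumes
>   option scheduler alu
>   option alu.limits.min-disk-size 10  # skip nodes with < 10% free disk
>   option alu.order disk-usage         # other alu.* options omitted for brevity
>   subvolumes client1 client2 client3
> end-volume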
>
>  About the 'ls' failing, I may need a little more info. First, which 
> patch are you using? And, an output of 'bt' after doing gdb glusterfs 
> -c /core.<pid>.
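>
> That is, something like this (keep <pid> as whatever the kernel named
> the core file):
>
>     # load the crashed binary together with its core dump
>     gdb glusterfs -c /core.<pid>
>     # then at the (gdb) prompt, print the stack trace
>     (gdb) bt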
>
> Regards,
> Amar
>
> On 6/21/07, *Dale Dude* <dale at oc3networks.com> wrote:
>
>     How do I use the namespace option correctly? In the docs/examples I
>     think you're using it wrong.
>
>     In cluster-client.vol you have volumes client1, client2, and client3
>     configured, and this line in cluster/unify:
>     option namespace client1 # this will not be storage child of unify.
>     subvolumes client1 client2 client3
>
>     I get the impression that namespace shouldn't be configured as a
>     subvolume of unify. If I use a namespace volume that is also a
>     subvolume of unify, it complains as such. If I create a volume on
>     the server specifically for the namespace (i.e. one storage/posix
>     volume called volumenamespace) and use that in the client config,
>     it doesn't complain anymore. But I can't even ls on the mounted
>     volume; I get the debug output found below. A 'df -h' looks correct
>     (glusterfs 5.9T  400M  5.9T 1% /volumes).
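>
>     Concretely, the extra server-side volume I mean is nothing more
>     than this (the backing directory is made up):
>
>     volume volumenamespace
>       type storage/posix
>       option directory /export/namespace
>     end-volume
>
>     with the client's unify then pointing option namespace at a
>     protocol/client volume for it.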
>
>     Btw, I have to use the ALU scheduler because any other scheduler keeps
>     saying that there isn't enough space on any of the "bricks".
>


