[Gluster-devel] ls: .: no such file or directory

Amar S. Tumballi amar at zresearch.com
Thu Jul 12 05:22:22 UTC 2007


Daniel,
 That's the reason we say the namespace is the 'single point' of failure :|
The FUSE filesystem works on inode numbers, and (when unify is used) the
inode number handed to the FUSE layer comes from the namespace brick. So if
the namespace brick is down, we return -1 with a 'file not found' error.
One solution right now is to AFR the namespace.
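
A rough sketch of what that could look like in the client spec, assuming a
second namespace brick 'brick-ns2' exported on port 7000 (those names and
ports are hypothetical -- adapt them to your setup):

volume client-ns-1
        type protocol/client
        option transport-type tcp/client
        option remote-host 127.0.0.1
        option remote-port 6999
        option remote-subvolume brick-ns
end-volume

volume client-ns-2
        type protocol/client
        option transport-type tcp/client
        option remote-host 127.0.0.1
        option remote-port 7000
        option remote-subvolume brick-ns2
end-volume

volume afr-ns
        type cluster/afr
        subvolumes client-ns-1 client-ns-2
        option replicate *:2
end-volume

Then point unify at the replicated namespace with 'option namespace afr-ns'
instead of a single client volume, so one namespace brick can go down
without losing the namespace.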

 In the future we are planning to introduce a distributed namespace, so
that both the single point of failure and the shortage of inodes can be
solved. But with the 1.3.x releases it will behave like this :|

-amar

On 7/12/07, Daniel van Ham Colchete <daniel.colchete at gmail.com> wrote:
>
> On 7/11/07, DeeDee Park <deedee6905 at hotmail.com> wrote:
> >
> > If all the bricks are not up at the time of the GlusterFS client
> > startup, I get the above error message. If all bricks are up, things
> > are fine. If a brick goes down after a client is up, things are fine --
> > it is only at startup.
> > I'm still seeing this in the latest patch-299.
> >
>
> I was able to reproduce the problem here.
>
> I get the error message if, and only if, the namespace brick is offline.
> I get the error even if the directory is full of files. If I try to
> open() a file while the namespace brick is down, I get a
> "Transport endpoint is not connected" error.
>
> Also with patch-299.
>
> Client spec file:
>
> volume client-1
>         type protocol/client
>         option transport-type tcp/client
>         option remote-host 127.0.0.1
>         option remote-port 6991
>         option remote-subvolume brick1
> end-volume
>
> volume client-2
>         type protocol/client
>         option transport-type tcp/client
>         option remote-host 127.0.0.1
>         option remote-port 6992
>         option remote-subvolume brick2
> end-volume
>
> volume client-ns
>         type protocol/client
>         option transport-type tcp/client
>         option remote-host 127.0.0.1
>         option remote-port 6999
>         option remote-subvolume brick-ns
> end-volume
>
> volume afr
>         type cluster/afr
>         subvolumes client-1 client-2
>         option replicate *:2
>         option self-heal on
>         option debug off
> end-volume
>
> volume unify
>         type cluster/unify
>         subvolumes afr
>         option namespace client-ns
>         option scheduler rr
>         option rr.limits.min-free-disk 5
> end-volume
>
> volume writebehind
>         type performance/write-behind
>         option aggregate-size 131072
>         subvolumes unify
> end-volume
>
> Best regards,
> Daniel Colchete
>



-- 
Amar Tumballi
http://amar.80x25.org
[bulde on #gluster/irc.gnu.org]