[Gluster-devel] FIRST_CHILD(frame->this)->fops->create

Ian Latter ian.latter at midnightcode.org
Fri Aug 7 09:08:36 UTC 2009


> For reasons explained further below, it is not "right" to create your
> inodes from a globally-reachable inode table (which does not exist
> anyway). Almost all the time, you would be creating these new
> files/directories in the context of a particular call, or have it
> triggered by a similar call. So most of the time, the right inode
> table should be taken from loc->inode->itable, or according to the
> particular fop in picture.

Okay, I believe I understand your reasoning, but this
would not appear to alleviate my problem of trying to
access a directory that is not seen as related to the 
call of the parent xlator/brick.

i.e.  parent xlator; 
         write(/x/y/target.txt, data)
      my xlator; 
         alter that data, making notes
         write(/x/y/target.txt, altered)
         write(/a/b/c/file.txt, notes)

Meaning that I can readily retrieve context for
the /x/y and /x/y/target.txt relationship, but not
for the /a/b/c and /a/b/c/file.txt relationship.

This makes sense for almost every case; I don't
understand the path-translator - how does it 
avoid the need to play with the inode tables of
the parent/child to achieve its outcome?

Maybe it didn't .. hmm .. ok ... That aside;

> There is a reason why just a few @this have itable while others do
> not. On the client side, only the fuse's @this has a proper itable
> initialized at mount time. On the server side, each subvolume of
> protocol/server has a different itable of its own. Since two posix
> exports from a single backend cannot share the same itable, each of
> their itables is stored in their respective @this structures. And
> this itable is initialized only when the first client attaches to
> this as its remote-subvolume (i.e., during the setvolume MOP, which
> is the handshake + authentication).

... am I right to believe that if I set up my own 
mop->subvolume that I would then be gifted with a
populated itable?

Would that be the appropriate way for me to obtain
a populated itable, even in the case where my xlator
is not an immediate child of the server xlator?

I.e. - this is my test glusterfs.vol;

volume posix
  type storage/posix
  option directory /gluster-test-mount
end-volume

volume locks
  type features/locks
  subvolumes posix
end-volume

volume testfeature
  type features/testfeature
  subvolumes locks
end-volume

volume brick
  type performance/io-threads
  option thread-count 8
  subvolumes testfeature
end-volume

volume server
  type protocol/server
  option transport-type tcp
  option auth.addr.brick.allow *
  subvolumes brick
end-volume

Thanks for your help,

Ian Latter
Late night coder ..
