[Gluster-devel] When inode table is populated?

Raghavendra G raghavendra at gluster.com
Fri Aug 1 01:05:20 UTC 2014


On Wed, Jul 30, 2014 at 12:43 PM, Anoop C S <anoopcs9 at gmail.com> wrote:

>
> On 07/30/2014 12:29 PM, Raghavendra Gowdappa wrote:
>
>>
>> ----- Original Message -----
>>
>>> From: "Jiffin Thottan" <jthottan at redhat.com>
>>> To: gluster-devel at gluster.org
>>> Sent: Wednesday, July 30, 2014 12:22:30 PM
>>> Subject: [Gluster-devel] When  inode table is populated?
>>>
>>> Hi,
>>>
>>> When we were trying to call rename from a translator (in reconfigure) using
>>> STACK_WIND, the inode table (this->itable) seems to be null.
>>>
>>> Since an inode is required for performing rename, when does the inode table
>>> get populated, and why is it not populated in reconfigure or init?
>>>
>> Not every translator has an inode table (nor is it required to). Only the
>> translators which do inode management (like fuse-bridge, protocol/server,
>> libgfapi, possibly nfsv3 server??) will have an inode table associated with
>> them.
>
>
I was not entirely correct in the statement above. Though inode management is
done by the translators mentioned there (fuse-bridge, server, libgfapi etc.),
the inode tables are _not associated_ with those translators themselves (the
nfsv3 server being the exception: it does have an inode table associated with
it).

For a client (fuse mount, libgfapi etc.), the top-level xlator in the graph
holds the inode table. If you used the --volume-name option on the client, the
xlator named by that option is set as the top. Otherwise, whatever happens to
be the topmost xlator in the graph is set as the top; usually this is one of
acl, meta, worm etc. (see graph.c, where it is the "first" xlator in the
graph).
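To make this concrete, here is a minimal (not compile-tested) sketch of how a
client-side xlator could reach that itable; it assumes the graph->top and
xlator->itable members as I read them in the current headers, so please
double-check against your branch:

    #include "xlator.h"

    /* minimal sketch: fetch the itable owned by the top xlator of the
     * graph this xlator belongs to (returns NULL if not yet set up) */
    static inode_table_t *
    client_itable_get (xlator_t *this)
    {
            xlator_t *top = NULL;

            if (!this->graph)
                    return NULL;

            /* the mount/gfapi code hands the itable to the top xlator of
             * the active graph, as described above */
            top = this->graph->top;

            return top ? top->itable : NULL;
    }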

On the brick process, itables are associated with the xlators "bound" to
protocol/server. The concept of a bound xlator was introduced so that a client
has the flexibility to connect to any xlator in the server graph, based on
what functionality it wants. The bound xlators of a server are specified
through the "subvolumes" line of the protocol/server block in a brick volfile
(see the fragment below). IIRC, a single protocol/server can have multiple
bound xlators, though the volfiles generated by the gluster CLI use only one.
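For illustration, the relevant part of a brick volfile looks roughly like this
(the names here are made up; the real ones come from volgen):

    volume /bricks/brick1
        type debug/io-stats
        subvolumes testvol-index        # rest of the brick stack below
    end-volume

    volume testvol-server
        type protocol/server
        option transport-type tcp
        subvolumes /bricks/brick1       # <-- the bound xlator(s)
    end-volume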

If you need to access the itable and you already have an inode in hand, you
can get it through inode->table.
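For example, in a lookup callback (the function name here is hypothetical,
just to show the access path):

    int32_t
    my_lookup_cbk (call_frame_t *frame, void *cookie, xlator_t *this,
                   int32_t op_ret, int32_t op_errno, inode_t *inode,
                   struct iatt *buf, dict_t *xdata, struct iatt *postparent)
    {
            inode_table_t *itable = NULL;

            if (op_ret == 0)
                    itable = inode->table;  /* the table this inode belongs to */

            /* ... use itable as needed ... */

            STACK_UNWIND_STRICT (lookup, frame, op_ret, op_errno, inode, buf,
                                 xdata, postparent);
            return 0;
    }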
>>
>
> Here the old location was created by assigning a fixed gfid through a
> STACK_WIND (mkdir) from the translator's notify(). The inode was null inside
> its cbk function, so the old location doesn't have an inode value.


I assume you are speaking in the context of the trash translator and probably
want to create a ".trash" directory on each of the bricks. It is a bit tricky
to initiate an mkdir fop from the notify() of an xlator. For mkdir, the
pargfid, name and inode members of the loc_t structure have to be populated.
The inode would be a new inode, obtained by calling inode_new(); but
inode_new() needs a pointer to the itable as its argument. Depending on
whether you load the trash translator on the client or the server, you can get
at the itable as explained above. A rough sketch follows.
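Something along these lines should work (a rough, not compile-tested sketch;
trash_create_dir and trash_mkdir_cbk are made-up names, and using the
"gfid-req" key to request a fixed gfid is my assumption of the usual
mechanism, so please verify against storage/posix):

    #include <errno.h>
    #include <string.h>
    #include "xlator.h"

    static int
    trash_mkdir_cbk (call_frame_t *frame, void *cookie, xlator_t *this,
                     int32_t op_ret, int32_t op_errno, inode_t *inode,
                     struct iatt *buf, struct iatt *preparent,
                     struct iatt *postparent, dict_t *xdata)
    {
            if (op_ret < 0 && op_errno != EEXIST)
                    gf_log (this->name, GF_LOG_WARNING,
                            "mkdir of .trash failed: %s", strerror (op_errno));

            STACK_DESTROY (frame->root);
            return 0;
    }

    /* assumes the itable has already been located as described above */
    static int
    trash_create_dir (xlator_t *this, inode_table_t *itable)
    {
            static uuid_t trash_gfid = {0, 0, 0, 0, 0, 0, 0, 0,
                                        0, 0, 0, 0, 0, 0, 0, 5}; /* any fixed gfid */
            call_frame_t *frame = NULL;
            dict_t       *xdata = NULL;
            loc_t         loc   = {0, };

            frame = create_frame (this, this->ctx->pool);
            xdata = dict_new ();
            if (!frame || !xdata)
                    goto err;

            /* populate pargfid, name and inode as described above;
             * the parent here is the root of the itable */
            loc.name   = ".trash";
            loc.parent = inode_ref (itable->root);
            memcpy (loc.pargfid, loc.parent->gfid, sizeof (uuid_t));
            loc.inode  = inode_new (itable);
            if (!loc.inode)
                    goto err;

            /* request a fixed gfid for the new directory (assumption:
             * "gfid-req" is the key honoured for this purpose) */
            if (dict_set_static_bin (xdata, "gfid-req", trash_gfid,
                                     sizeof (uuid_t)))
                    goto err;

            STACK_WIND (frame, trash_mkdir_cbk, FIRST_CHILD (this),
                        FIRST_CHILD (this)->fops->mkdir,
                        &loc, 0755, 0022, xdata);

            dict_unref (xdata);
            loc_wipe (&loc);
            return 0;
    err:
            if (xdata)
                    dict_unref (xdata);
            if (frame)
                    STACK_DESTROY (frame->root);
            loc_wipe (&loc);
            return -1;
    }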


>
>
>>  Or should we create a private inode table and generate inode using it?
>>>
>>
If you do that, your ".trash" directory wouldn't be accessible through the
standard inode-resolution code in fuse-bridge/libgfapi/server. In short, I
would advise against using a private inode table, though you could probably
make it work with some hacks.


>
>>> -Jiffin
>
> --
> Anoop C S
> +91 740 609 8342
>
>



-- 
Raghavendra G