[Gluster-devel] inode lru limit

Raghavendra Bhat rabhat at redhat.com
Mon Jun 2 13:11:30 UTC 2014


On Monday 02 June 2014 11:06 AM, Raghavendra G wrote:
>
>
> On Fri, May 30, 2014 at 2:24 PM, Raghavendra Bhat <rabhat at redhat.com> wrote:
>
>
>     Hi,
>
>     Currently the lru-limit of the inode table in brick processes is
>     16384. There is an option to configure it to some other value.
>     protocol/server uses the inode_lru_limit variable present in its
>     private structure while creating the inode table (whose default
>     value is 16384). When the option is reconfigured via a volume set
>     command, the inode_lru_limit variable in protocol/server's private
>     structure is changed, but the actual lru limit of the inode table
>     remains the same as before; the newly set value takes effect only
>     when the brick is restarted. Is that ok? Should we change the
>     inode table's lru_limit variable as well as part of reconfigure?
>     If so, then we would probably also have to remove the extra
>     inodes present in the lru list by calling inode_table_prune.
>
>
> Yes, I think we should change the inode table's lru limit too and call 
> inode_table_prune. From what I know, I don't think this change would 
> cause any problems.
>

But as of now the inode table is bound to bound_xl, which is associated 
with the client_t object for the connected client. In the fop path we can 
get bound_xl (and thus the inode table) from the rpc request 
(req->trans->xl_private). In reconfigure, however, we get only the xlator 
pointer of protocol/server and the dict containing the new options.
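
To make that concrete, this is roughly how the table is reached in the 
fop path (a sketch from memory; the helper name is only for illustration, 
and field names such as client->bound_xl and bound_xl->itable are 
assumptions worth verifying against the tree):

/* Sketch only: xl_private of the request's transport points at the
 * client_t for this connection, and all clients of the brick end up
 * sharing the same inode table via bound_xl. */
static inode_table_t *
itable_from_request (rpcsvc_request_t *req)
{
        client_t *client = req->trans->xl_private;

        return client->bound_xl->itable;
}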

So what I am planning is this. If xprt_list (the list of transports 
corresponding to the mounted clients) is empty, then just set the lru 
limit variable in the private structure (it will be used to create the 
inode table when a client mounts). If xprt_list in protocol/server's 
private structure is not empty, then take one of the transports from that 
list and get the client_t object corresponding to it, from which bound_xl 
is obtained (all the client_t objects share the same inode table). Then 
get the inode table pointer from bound_xl, set its lru limit variable to 
the value specified via the cli as well, and call inode_table_prune to 
purge the extra inodes.
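
In code, the plan would look roughly like this (a minimal sketch, not 
compiled; the helper name is just for illustration, and the list member 
on the transport, the exact conf fields, and whether inode_table_prune is 
callable from protocol/server or needs to be exported from inode.c are 
assumptions on my part):

static int
server_reconfigure_lru_limit (xlator_t *this, uint32_t new_limit)
{
        server_conf_t   *conf   = this->private;
        rpc_transport_t *xprt   = NULL;
        client_t        *client = NULL;
        inode_table_t   *itable = NULL;

        /* Always record the new value; it is used when the inode
         * table is created on the first mount. */
        conf->inode_lru_limit = new_limit;

        /* No clients connected yet, so there is no table to resize. */
        if (list_empty (&conf->xprt_list))
                return 0;

        /* All client_t objects share one inode table, so any transport
         * from the list will do. */
        xprt   = list_entry (conf->xprt_list.next, rpc_transport_t, list);
        client = xprt->xl_private;
        itable = client->bound_xl->itable;

        itable->lru_limit = new_limit;
        inode_table_prune (itable);  /* purge inodes beyond the new limit */

        return 0;
}

The new limit would reach this path through the usual volume set, 
something like "gluster volume set <volname> network.inode-lru-limit 
32768" (option name from memory, please correct me if it is different).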

Does it sound OK?

Regards,
Raghavendra Bhat

>
>
>     Please provide feedback
>
>
>     Regards,
>     Raghavendra Bhat
>
>
>
>
> -- 
> Raghavendra G
>
>
