[Gluster-devel] inode lru limit

Raghavendra Gowdappa rgowdapp at redhat.com
Tue Jun 3 07:36:42 UTC 2014


> >> Hi,
> >>
> >> But as of now the inode table is bound to bound_xl, which is associated
> >> with the client_t object for the client being connected. As part of fops
> >> we can get the bound_xl (and thus the inode table) from the rpc request
> >> (req->trans->xl_private). But in reconfigure we get just the xlator
> >> pointer of protocol/server and a dict containing the new options.
> >>
> >> So what I am planning is this. If the xprt_list (the transport list
> >> corresponding to the clients mounted) is empty, then just set the
> >> private structure's variable for the lru limit (which will be used to
> >> create the inode table when a client mounts). If the xprt_list of
> >> protocol/server's private structure is not empty, then get one of the
> >> transports from that list and the client_t object corresponding to it,
> >> from which bound_xl is obtained (all the client_t objects share the same
> >> inode table). Then the pointer to the inode table is obtained from
> >> bound_xl, its lru limit variable is set to the value specified via the
> >> CLI, and inode_table_prune is called to purge the extra inodes.
> > In the above proposal, if there are no active clients the lru limit of
> > the itable is not reconfigured. Here are two options to improve the
> > correctness of your proposal.


> If there are no active clients, then there will not be any itable. The
> itable will be created when the 1st client connects to the brick. And
> while creating the itable we use the inode_lru_limit variable present in
> protocol/server's private structure, and the inode table that is created
> also saves the same value.

A zero current client count doesn't mean that itables are absent in bound_xl. There can be previous connections which resulted in itable creation.

> > 1. On a successful handshake, check whether the lru_limit of the itable
> > is equal to the configured value. If not, set it to the configured
> > value and prune the itable. The cost is that you check the inode
> > table's lru limit on every client connection.
> On a successful handshake, for the 1st client the inode table will be
> created with the lru_limit value saved in protocol/server's private. For
> further handshakes, since the inode table is already there, new inode
> tables will not be created. So instead of waiting for a new handshake to
> happen to set the lru_limit and purge the inode table, I think it's
> better to do it as part of reconfigure itself.
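Option 1 above amounts to a reconcile step on each connect. A minimal sketch, again with illustrative stand-in types rather than the real GlusterFS declarations:

```c
#include <assert.h>
#include <stdint.h>

/* Illustrative stand-ins, not the real GlusterFS declarations. */
typedef struct inode_table {
    uint32_t lru_limit;
    uint32_t lru_size;
} inode_table_t;

/* Simplified stand-in for inode_table_prune(). */
static void inode_table_prune (inode_table_t *itable)
{
    if (itable->lru_size > itable->lru_limit)
        itable->lru_size = itable->lru_limit;
}

/* Called on every successful handshake: reconcile the existing itable's
 * lru limit with the configured value. Returns 1 if the limit had
 * drifted and was fixed up, 0 otherwise. */
static int reconcile_on_handshake (inode_table_t *itable, uint32_t configured)
{
    if (itable->lru_limit == configured)
        return 0;            /* common case: nothing to do */
    itable->lru_limit = configured;
    inode_table_prune (itable);
    return 1;
}
```

The per-connection cost mentioned above is the comparison in the common case; the reply's objection is that the fix-up is deferred until the next handshake instead of taking effect at reconfigure time.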
> >
> > 2. Traverse the list of all xlators (since there is no easy way of
> > finding potential candidates for bound_xl other than peeking into
> > options specific to authentication) and, if there is an itable
> > associated with an xlator, set its lru limit and prune it. The cost
> > here is traversing the list of xlators. However, since the xlator list
> > in a brick process is relatively small, this shouldn't have much
> > performance impact.
> >
> > Comments are welcome.
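Option 2 can be sketched as a walk over the xlator list; the structs here are illustrative stand-ins (the list linkage and `itable` field are assumptions for the sketch):

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>

/* Illustrative stand-ins, not the real GlusterFS declarations. */
typedef struct inode_table {
    uint32_t lru_limit;
    uint32_t lru_size;
} inode_table_t;

typedef struct xlator {
    struct xlator *next;     /* brick-side xlator list */
    inode_table_t *itable;   /* NULL for xlators without an itable */
} xlator_t;

/* Simplified stand-in for inode_table_prune(). */
static void inode_table_prune (inode_table_t *itable)
{
    if (itable->lru_size > itable->lru_limit)
        itable->lru_size = itable->lru_limit;
}

/* Apply the new limit to every xlator that owns an itable and prune it;
 * returns how many itables were updated. */
static int set_lru_limit_all (xlator_t *list, uint32_t limit)
{
    int updated = 0;
    for (xlator_t *xl = list; xl != NULL; xl = xl->next) {
        if (xl->itable == NULL)
            continue;        /* not a candidate for bound_xl */
        xl->itable->lru_limit = limit;
        inode_table_prune (xl->itable);
        updated++;
    }
    return updated;
}
```

This sidesteps guessing bound_xl from auth options entirely: every itable that exists gets the new limit, including ones created by clients that have since disconnected.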
> 
> Regards,
> Raghavendra Bhat
