[Gluster-devel] [RFC] inode table locking contention reduction experiment

Xavi Hernandez jahernan at redhat.com
Wed Oct 30 11:01:49 UTC 2019


Hi Changwei,

On Tue, Oct 29, 2019 at 7:56 AM Changwei Ge <chge at linux.alibaba.com> wrote:

> Hi,
>
> I have recently been working on reducing inode_[un]ref() locking
> contention by getting rid of the inode table lock and using only the
> per-inode lock to protect the inode REF count. I have already discussed
> this over a couple of rounds with several Glusterfs developers via email
> and Gerrit, and I now have a basic understanding of the major logic
> involved.
>
> Currently, an inode whose REF is ZERO can be reused by increasing its
> REF back to ONE. This, IMO, is why we have to do so much work on the
> inode table during REF/UNREF: it makes it hard for inode_[un]ref() to
> run concurrently with inode table and dentry (alias) searches.
>
> So my question is: in what cases, and how, can we find an inode whose
> REF is ZERO?
>
> Since Glusterfs stores its inode memory addresses in the kernel via
> FUSE, can we conclude that only fuse_ino_to_inode() can bring back a
> REF=0 inode?
>

Yes, when an inode reaches refs = 0, it means that gluster code is no
longer using it anywhere, so it cannot be referenced again unless the
kernel sends new requests on the same inode. Once refs = 0 and
nlookup = 0, the inode can be destroyed.
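To make the lifetime rule concrete, here is a minimal sketch in C. The struct and function names (demo_inode_t, demo_inode_destroyable) are illustrative, not Gluster's actual struct _inode or inode.c API; the point is only the two-counter condition under a per-inode lock.

```c
#include <pthread.h>
#include <stdbool.h>

/* Hypothetical, simplified inode: field names are illustrative,
 * not Gluster's actual struct _inode. */
typedef struct demo_inode {
    pthread_mutex_t lock; /* per-inode lock, not the table lock */
    int ref;              /* in-memory references held by gluster code */
    int nlookup;          /* lookups the kernel still remembers */
} demo_inode_t;

/* An inode may only be destroyed once both counters reach zero:
 * ref == 0 means no gluster code is using it, and nlookup == 0 means
 * the kernel can no longer hand its address back via FUSE. */
static bool demo_inode_destroyable(demo_inode_t *inode)
{
    bool destroy;

    pthread_mutex_lock(&inode->lock);
    destroy = (inode->ref == 0) && (inode->nlookup == 0);
    pthread_mutex_unlock(&inode->lock);

    return destroy;
}
```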

Inode code is quite complex right now and I haven't had time to
investigate this further, but I think we could simplify inode management
significantly (especially unref) if we added a reference when nlookup
becomes > 0 and removed that reference when nlookup drops back to 0.
With this approach we might be able to avoid the inode table lock in
many cases. However, we need to make sure we handle the invalidation
logic correctly to keep the inode table size under control.
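The idea above can be sketched as follows. This is not Gluster's actual inode.c code: all names (demo_inode_t, demo_inode_lookup, demo_inode_forget) are hypothetical, and destruction of the inode at ref = 0 is left out. The key property is that the kernel's nlookup holds exactly one reference, so an inode with refs = 0 can never be resurrected, and ref/unref only need the per-inode lock.

```c
#include <pthread.h>

/* Hypothetical sketch: a non-zero nlookup holds one inode reference on
 * behalf of the kernel, so REF can never be raised from zero again. */
typedef struct demo_inode {
    pthread_mutex_t lock;
    int ref;
    int nlookup;
} demo_inode_t;

static void demo_inode_ref(demo_inode_t *inode)
{
    pthread_mutex_lock(&inode->lock);
    inode->ref++;
    pthread_mutex_unlock(&inode->lock);
}

/* Returns the new ref count; destruction at 0 is omitted here. */
static int demo_inode_unref(demo_inode_t *inode)
{
    int ref;

    pthread_mutex_lock(&inode->lock);
    ref = --inode->ref;
    pthread_mutex_unlock(&inode->lock);

    return ref;
}

/* Called when the kernel looks up the inode: the 0 -> 1 transition of
 * nlookup takes one extra reference on the kernel's behalf. */
static void demo_inode_lookup(demo_inode_t *inode)
{
    pthread_mutex_lock(&inode->lock);
    if (inode->nlookup++ == 0) {
        inode->ref++;
    }
    pthread_mutex_unlock(&inode->lock);
}

/* Called on FUSE FORGET: when nlookup drops back to 0, the kernel's
 * reference is released, possibly making the inode destroyable.
 * Returns the resulting ref count, or -1 if the kernel still holds
 * lookups. */
static int demo_inode_forget(demo_inode_t *inode, int count)
{
    int drop;

    pthread_mutex_lock(&inode->lock);
    inode->nlookup -= count;
    drop = (inode->nlookup == 0);
    pthread_mutex_unlock(&inode->lock);

    return drop ? demo_inode_unref(inode) : -1;
}
```

With this scheme, unref no longer has to consult the inode table at all: the last unref can only happen after the kernel has already forgotten the inode, which is where the invalidation logic mentioned above would plug in.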

Regards,

Xavi


>
> Thanks,
> Changwei