[Gluster-devel] Integrating liburcu source into the glusterfs source tree
Kaushal M
kshlmster at gmail.com
Tue Feb 3 05:41:50 UTC 2015
liburcu is licensed under LGPLv2.1, and can be used by any software
compatible with the LGPL. IBM, the owner of the patent, provided its
approval for this licensing [1]. We are good with regard to this.
The liburcu homepage mentions that it has been tested on Linux and FreeBSD
[2], but it should work on NetBSD as well. NetBSD has an actively maintained
package of liburcu [3], required by KnotDNS (another project which uses
liburcu), so I'm assuming there aren't any problems there either. We will
test our changes on all three platforms to verify that it indeed works.
We've been referring to Paul McKenney's PhD dissertation on RCU [4]
for help with the implementation. Sections 5 and 6 of the dissertation
discuss RCU design patterns and give examples of converting non-RCU code
to RCU. It has been a good reference for us so far.
~kaushal
[1]: https://github.com/urcu/userspace-rcu/blob/master/lgpl-relicensing.txt
[2]: http://urcu.so/ (under 'Architectures supported')
[3]: http://cvsweb.netbsd.org/bsdweb.cgi/pkgsrc/devel/userspace-rcu/
[4]: http://www.rdrop.com/~paulmck/RCU/RCUdissertation.2004.07.14e1.pdf
On Tue, Feb 3, 2015 at 10:24 AM, Anand Avati <avati at gluster.org> wrote:
> Apologies for the top post.
>
> Adopting RCU is a good step. Some questions and thoughts -
>
> Does urcu work on non-Linux systems, e.g. NetBSD? IIRC there were Linux-
> specific permissions on the RCU patent? Maybe only for the kernel? Would be
> good to confirm.
>
> Glusterd is a good place for the first prototype adoption of RCU, especially
> for figuring out the nuances of liburcu (in my view). The perfect use case
> for liburcu is still brewing in the form of epoll multithreading. That patch
> creates the perfect conditions on the server side, with many threads
> servicing many clients bouncing the cacheline on so many shared objects and
> locks - where RCU comes to the rescue. Starting with the events.c shared FD
> registry, the client_t registry, the call-pool registry, and the inode
> table, each of these is a candidate for RCU conversion. The unfortunate part
> is that cacheline-bouncing fixes are all or nothing: as long as there is at
> least one shared lock in the hot path, the hard work that went into all the
> previous shared-lock fixes remains latent. However, the end result is well
> worth all the effort.
>
> Thanks
>
> On Thu, Jan 29, 2015, 03:35 Kaushal M <kshlmster at gmail.com> wrote:
>
> Hi all,
>
> I had started a thread previously on the efforts we are undertaking to
> improve thread synchronization in GlusterD [1]. I had mentioned that we
> will be using RCU for synchronization and the userspace RCU library
> (liburcu) [2] for implementation.
>
> I am now almost in a position to submit changes to Gerrit for review.
> But I have the obstacle of making liburcu available on the Jenkins slaves.
>
> I have begun development using the 0.8.6 version of liburcu, which is the
> latest stable release. EPEL has liburcu packages for CentOS 6 and 7, but
> they are of the older 0.7.* versions. Fedora has more recent packages, but
> they are still older than 0.8.6, at 0.8.1 [3].
>
> Given the above situation with binary packages, I'm considering
> adding liburcu into the GlusterFS tree as a part of /contrib. This will be
> in a similar vein to the argp-standalone library.
>
> liburcu is licensed under LGPL-v2.1, so I don't think there is going to be
> any problem including it. But IANAL, so I would like to know if this is
> okay from a legal perspective.
>
> I'll add the liburcu source to our tree and push the change for review.
> I'm not really familiar with autotools, so I'll need some help integrating
> it into our build system. I'll update the list when I have pushed the
> change for review.
>
> In the meantime, I'd like to know if anyone has any objections to this
> plan. I would also like to hear of any alternative approaches.
>
> ~kaushal
>
> [1]: http://www.gluster.org/pipermail/gluster-devel/2014-December/043382.html
>
> [2]: http://urcu.so/
>
> [3]: https://apps.fedoraproject.org/packages/userspace-rcu
>
> _______________________________________________
> Gluster-devel mailing list
> Gluster-devel at gluster.org
> http://www.gluster.org/mailman/listinfo/gluster-devel