[Gluster-users] [Gluster-devel] Network Block device (NBD) on top of glusterfs

Xiubo Li xiubli at redhat.com
Thu Mar 21 13:01:02 UTC 2019

On 2019/3/21 18:09, Prasanna Kalever wrote:
> On Thu, Mar 21, 2019 at 9:00 AM Xiubo Li <xiubli at redhat.com> wrote:
>     All,
>     I am one of the contributors to the gluster-block [1] project, and I
>     also contribute to the Linux kernel and the open-iscsi [2] project.
>     NBD has been around for some time, but recently the Linux kernel's
>     Network Block Device (NBD) driver has been enhanced to work with more
>     devices, and the option to integrate with netlink has been added.
>     So, I recently tried to provide a glusterfs-client-based NBD driver.
>     Please refer to github issue #633 [3]; the good news is that I have
>     working code, with the most basic things in place, at the nbd-runner
>     project [4].
>     While this email is about announcing the project and asking for
>     more collaboration, I would also like to discuss the placement of
>     the project itself. Currently the nbd-runner project is expected to
>     be shared by our friends at the Ceph project too, to provide an NBD
>     driver for Ceph. I have personally worked closely with some of them
>     while contributing to the open-iSCSI project, and we would like to
>     take this project to great success.
>     Now, a few questions:
>      1. Can I continue to use http://github.com/gluster/nbd-runner as the
>         home for this project, even if it is shared by other filesystem
>         projects?
>       * I personally am fine with this.
>      2. Should there be a separate organization for this repo?
>       * While it may make sense in the future, for now I am not planning
>         to start anything new.
>     It would be great if we could reach some consensus on this soon, as
>     nbd-runner is a new repository. If there are no concerns, I will
>     continue to contribute to the existing repository.
> Thanks Xiubo Li, for finally sending this email out. Since this email
> is out on the gluster mailing list, I would like to take a stand from
> the gluster community's point of view *only* and share my views.
> My honest answer is: "If we want to maintain this within the gluster
> org, then 80% of the effort is common with, or duplicates, what we have
> already done with gluster-block",
The great idea came from Mike Christie some days ago, and the nbd-runner
project's framework was initially modeled on tcmu-runner. This is why I
named this project nbd-runner; it is meant to work for all the other
distributed storage systems, such as Gluster/Ceph/Azure, as discussed with
Mike.

nbd-runner (NBD protocol) and tcmu-runner (iSCSI protocol) are almost the
same: both work at the lower IO (READ/WRITE/...) layer, not at the
management layer where ceph-iscsi-gateway and gluster-block currently sit.

Currently I have only implemented the Gluster handler, and the RPC code is
similar to that of glusterfs and gluster-block. Most of the other code
(about 70%) in nbd-runner is for the NBD protocol, which is very different
from the tcmu-runner/glusterfs/gluster-block projects, and there are many
new features in the kernel NBD module that are not yet supported, so the
projects will diverge even more in the future.
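
For a rough idea of what the Gluster handler's data path looks like, here
is a minimal, self-contained libgfapi sketch that opens a file on a volume
and serves a single read from offset 0. The volume, server and file names
are placeholders, and this is not the actual nbd-runner handler code:

    /* Build (assuming glusterfs-api-devel is installed):
     *   gcc gfapi_read_sketch.c -o gfapi_read_sketch -lgfapi */
    #include <stdio.h>
    #include <stdlib.h>
    #include <fcntl.h>
    #include <glusterfs/api/glfs.h>

    int main(void)
    {
        char buf[4096];

        /* "testvol", "server1" and "/block0" are placeholders. */
        glfs_t *fs = glfs_new("testvol");
        if (!fs)
            return EXIT_FAILURE;
        glfs_set_volfile_server(fs, "tcp", "server1", 24007);
        if (glfs_init(fs) < 0) {
            glfs_fini(fs);
            return EXIT_FAILURE;
        }

        glfs_fd_t *fd = glfs_open(fs, "/block0", O_RDWR);
        if (!fd) {
            glfs_fini(fs);
            return EXIT_FAILURE;
        }

        /* Serve one block request: seek to the requested offset and read. */
        glfs_lseek(fd, 0, SEEK_SET);
        ssize_t ret = glfs_read(fd, buf, sizeof(buf), 0);
        printf("read %zd bytes\n", ret);

        glfs_close(fd);
        glfs_fini(fs);
        return ret < 0 ? EXIT_FAILURE : EXIT_SUCCESS;
    }

In the real handler the offset and length would come from each NBD request,
and writes and flushes would use the corresponding gfapi calls.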

The framework coding is done, and the nbd-runner project is already stable
and works well for me now.

> like:
> * rpc/socket code
> * cli/daemon parser/helper logic
> * gfapi util functions
> * logger framework
> * inotify & dyn-config threads

Yeah, these features originally came from the tcmu-runner project; Mike and
I coded them two years ago. nbd-runner has now copied them from tcmu-runner
as well.

Your great ideas here are very much appreciated, Prasanna, and I hope
nbd-runner can be used more generically and successfully in the future.


Xiubo Li

> * configure/Makefile/specfiles
> * docs about Gluster, etc.
> The gluster-block repository is actually the home for all the block
> related stuff within gluster, and it is designed to accommodate similar
> functionality. If I were you, I would have simply copied nbd-runner.c
> into https://github.com/gluster/gluster-block/tree/master/daemon/ just
> like ceph does it here
> https://github.com/ceph/ceph/blob/master/src/tools/rbd_nbd/rbd-nbd.cc
> and be done.
> Advantages of keeping nbd client within gluster-block:
> -> No worry about the code maintenance burden
> -> No worry about monitoring a new component
> -> shipping packages to fedora/centos/rhel is handled
> -> This helps improve and stabilize the current gluster-block framework
> -> We can build a common CI
> -> We can reuse the common test framework, etc.
> If you are under the impression that gluster-block is only for
> management, then I would really like to correct you on that point.
> Some of my near future plans for gluster-block:
> * Allow exporting blocks with FUSE access via a fileIO backstore to
> improve large-file workloads, draft:
> https://github.com/gluster/gluster-block/pull/58
> * Accommodate kernel loopback handling for local-only applications
> * In the same way, we can accommodate an nbd app/client, and IMHO this
> effort shouldn't take more than 1 or 2 days to get it merged within
> gluster-block and ready for a release.
> Hope that clarifies it.
> Best Regards,
> --
> Prasanna
>     Regards,
>     Xiubo Li (@lxbsz)
>     [1] - https://github.com/gluster/gluster-block
>     [2] - https://github.com/open-iscsi
>     [3] - https://github.com/gluster/glusterfs/issues/633
>     [4] - https://github.com/gluster/nbd-runner
