[Gluster-devel] [Gluster-users] Network Block device (NBD) on top of glusterfs

Xiubo Li xiubli at redhat.com
Sat Mar 23 00:47:59 UTC 2019


On 2019/3/21 11:29, Xiubo Li wrote:
>
> All,
>
> I am one of the contributors to the gluster-block
> <https://github.com/gluster/gluster-block> [1] project, and I also
> contribute to the Linux kernel and the open-iscsi
> <https://github.com/open-iscsi> [2] projects.
>
> NBD has been around for some time, but recently the Linux kernel's
> Network Block Device (NBD) driver was enhanced to work with more
> devices, and the option to integrate with netlink was added. So I
> recently tried to provide a glusterfs-client-based NBD driver. Please
> refer to github issue #633
> <https://github.com/gluster/glusterfs/issues/633> [3]; the good news
> is that I have working code, with the most basic things, at the
> nbd-runner project <https://github.com/gluster/nbd-runner> [4].
>
As mentioned above, nbd-runner (NBD protocol) will work at the same
layer as tcmu-runner (iSCSI protocol); it is not trying to replace the
great gluster-block/ceph-iscsi-gateway projects.

It just provides the common library that does the low-level work: the
sysfs/netlink operations and the IOs from the NBD kernel socket, much
as the tcmu-runner project handles the sysfs/uio operations and the
IOs from the kernel SCSI/iSCSI layer.

The nbd-cli tool will play the same role as iscsi-initiator-utils, and
the nbd-runner daemon will play the same role as the tcmu-runner
daemon; that's all.

In tcmu-runner, each backend storage has its own handler: the glfs.c
handler for Gluster, the rbd.c handler for Ceph, and so on. These
handlers do the actual IOs against the backend storage services once
the IO paths have been set up by ceph-iscsi-gateway/gluster-block.

In the same way, we can support any kind of backend storage (Gluster,
Ceph, Azure, ...) as a separate handler in nbd-runner, and the handler
will not need to care about updates and changes in the low-level NBD
plumbing.
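To make the handler idea concrete, here is a rough sketch of what such
a per-backend ops table could look like. The struct and registration
call are hypothetical illustrations in the spirit of tcmu-runner's
handler model, not the actual nbd-runner API:

    /* Hypothetical sketch, not the actual nbd-runner API. */
    #include <stddef.h>
    #include <sys/types.h>

    struct nbd_handler_ops {
        const char *name;                      /* "gluster", "rbd", ... */
        void *(*open)(const char *cfgstring);  /* connect to the backend */
        ssize_t (*pread)(void *bdev, void *buf, size_t count, off_t offset);
        ssize_t (*pwrite)(void *bdev, const void *buf, size_t count,
                          off_t offset);
        int (*flush)(void *bdev);
        void (*close)(void *bdev);
    };

    /* Hypothetical registration entry point in the nbd-runner core. */
    int nbd_register_handler(const struct nbd_handler_ops *ops);

A Gluster handler would implement these with libgfapi (open via
glfs_new()/glfs_set_volfile_server()/glfs_init()/glfs_open(), pread
via glfs_pread(), pwrite via glfs_pwrite(), flush via glfs_fsync()),
while a Ceph handler would use rbd_open()/rbd_read()/rbd_write()
instead; the NBD socket/netlink plumbing stays in the core.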

Thanks.


> While this email is about announcing the project and asking for more
> collaboration, I would also like to discuss the placement of the
> project itself. Currently the nbd-runner project is expected to be
> shared by our friends at the Ceph project too, to provide an NBD
> driver for Ceph. I have personally worked closely with some of them
> while contributing to the open-iscsi project, and we would like to
> take this project to great success.
>
> Now a few questions:
>
>  1. Can I continue to use http://github.com/gluster/nbd-runner as
>     the home for this project, even if it's shared by other
>     filesystem projects?
>
>   * I personally am fine with this.
>
>  2. Should there be a separate organization for this repo?
>
>   * While it may make sense in the future, for now I am not planning
>     to start anything new.
>
> It would be great if we reach some consensus on this soon, as
> nbd-runner is a new repository. If there are no concerns, I will
> continue to contribute to the existing repository.
>
> Regards,
> Xiubo Li (@lxbsz)
>
> [1] - https://github.com/gluster/gluster-block
> [2] - https://github.com/open-iscsi
> [3] - https://github.com/gluster/glusterfs/issues/633
> [4] - https://github.com/gluster/nbd-runner
>
>



