[Gluster-devel] [Gluster-users] Network Block device (NBD) on top of glusterfs

Vijay Bellur vbellur at redhat.com
Mon Mar 25 06:36:46 UTC 2019


Hi Xiubo,

On Fri, Mar 22, 2019 at 5:48 PM Xiubo Li <xiubli at redhat.com> wrote:

> On 2019/3/21 11:29, Xiubo Li wrote:
>
> All,
>
> I am one of the contributors to the gluster-block
> <https://github.com/gluster/gluster-block>[1] project, and I also
> contribute to the Linux kernel and the open-iscsi
> <https://github.com/open-iscsi>[2] project.
>
> NBD has been around for some time, but recently the Linux kernel's Network
> Block Device (NBD) driver was enhanced to work with more devices, and the
> option to integrate with netlink was added. So I recently tried to provide
> a glusterfs client based NBD driver. Please refer to github issue #633
> <https://github.com/gluster/glusterfs/issues/633>[3]; the good news is that
> I have working code covering the most basic things at the nbd-runner project
> <https://github.com/gluster/nbd-runner>[4].
>
>
This is nice. Thank you for your work!
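
For readers who have not followed the kernel side, the sketch below shows the
classic ioctl-based setup a userspace NBD server performs on /dev/nbdX. The
netlink interface mentioned above replaces these ioctls with
NBD_CMD_CONNECT/NBD_CMD_DISCONNECT generic netlink messages and avoids the
blocking NBD_DO_IT call, which is part of what makes a long-running daemon
like nbd-runner practical. This is illustrative only, not nbd-runner code.

/* Minimal sketch: attach a userspace server to an NBD device via the
 * classic ioctl interface from <linux/nbd.h>. Error handling is mostly
 * omitted. */
#include <fcntl.h>
#include <linux/nbd.h>
#include <sys/ioctl.h>
#include <sys/socket.h>
#include <unistd.h>

static int nbd_attach(const char *dev, unsigned long long size_bytes,
                      int *server_sock)
{
    int sv[2], nbd;

    /* One end of the socketpair goes to the kernel, the other stays
     * in userspace and carries the NBD protocol. */
    if (socketpair(AF_UNIX, SOCK_STREAM, 0, sv) < 0)
        return -1;

    nbd = open(dev, O_RDWR);                    /* e.g. "/dev/nbd0" */
    if (nbd < 0)
        return -1;

    ioctl(nbd, NBD_SET_BLKSIZE, 4096UL);
    ioctl(nbd, NBD_SET_SIZE_BLOCKS, size_bytes / 4096);
    ioctl(nbd, NBD_SET_SOCK, sv[0]);            /* kernel end */

    *server_sock = sv[1];   /* serve NBD requests on this end */

    /* The caller must now invoke ioctl(nbd, NBD_DO_IT) in a dedicated
     * thread; it blocks until the device is disconnected. */
    return nbd;
}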


> As mentioned, nbd-runner (NBD protocol) will work in the same layer as
> tcmu-runner (iSCSI protocol); it is not trying to replace the great
> gluster-block/ceph-iscsi-gateway projects.
>
> It just provides the common library for the low-level work, such as the
> sysfs/netlink operations and the IOs on the NBD kernel socket, much as the
> tcmu-runner project handles the sysfs/uio operations and the IOs from the
> kernel SCSI/iSCSI layer.
>
> The nbd-cli tool will work like iscsi-initiator-utils, and the
> nbd-runner daemon will work like the tcmu-runner daemon; that's all.
>
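
To make "the IOs on the NBD kernel socket" concrete, here is roughly the loop
such a daemon runs on the socket handed to the kernel. The structures and
magic values come from <linux/nbd.h>; read_backend() and write_backend() are
hypothetical stand-ins for a real handler (a Gluster handler would presumably
call glfs_pread()/glfs_pwrite() from libgfapi here). Short reads and error
handling are omitted; this is a sketch, not nbd-runner code.

#include <arpa/inet.h>
#include <endian.h>
#include <linux/nbd.h>
#include <stdint.h>
#include <string.h>
#include <unistd.h>

/* Hypothetical backend hooks; a real handler does the actual IO. */
static void read_backend(char *buf, uint64_t off, uint32_t len)
{ (void)buf; (void)off; (void)len; }
static void write_backend(char *buf, uint64_t off, uint32_t len)
{ (void)buf; (void)off; (void)len; }

static void serve_nbd(int sock)
{
    struct nbd_request req;
    struct nbd_reply reply;
    static char buf[128 * 1024];    /* assumes requests fit in here */

    memset(&reply, 0, sizeof(reply));
    reply.magic = htonl(NBD_REPLY_MAGIC);   /* error stays 0 = success */

    while (read(sock, &req, sizeof(req)) == sizeof(req)) {
        uint64_t off = be64toh(req.from);   /* fields are big-endian */
        uint32_t len = ntohl(req.len);

        memcpy(reply.handle, req.handle, sizeof(req.handle));

        switch (ntohl(req.type)) {
        case NBD_CMD_READ:
            read_backend(buf, off, len);
            write(sock, &reply, sizeof(reply)); /* header, then payload */
            write(sock, buf, len);
            break;
        case NBD_CMD_WRITE:
            read(sock, buf, len);       /* payload follows the header */
            write_backend(buf, off, len);
            write(sock, &reply, sizeof(reply));
            break;
        case NBD_CMD_DISC:
            return;                     /* client asked to disconnect */
        }
    }
}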

Do you have thoughts on how nbd-runner currently differs, or would differ,
from tcmu-runner? It might be useful to document the differences on GitHub
(or elsewhere) so that users can make an informed choice between nbd-runner
& tcmu-runner.

> In tcmu-runner there are separate handlers for the different backend
> storages: the glfs.c handler for Gluster, the rbd.c handler for Ceph, etc.
> The handlers do the actual IOs against the backend storage services once
> the IO paths have been set up by ceph-iscsi-gateway/gluster-block.
>
> In the same way we can support all kinds of backend storages, like
> Gluster/Ceph/Azure..., each as a separate handler in nbd-runner, and the
> handlers need not care about updates and changes in the low-level NBD
> machinery.
>
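
To make the handler split concrete, here is a hypothetical sketch of what a
per-backend handler interface in nbd-runner could look like, modeled loosely
on tcmu-runner's handler concept; the actual nbd-runner API may differ. Each
backend fills in one of these, and the daemon's NBD-facing code never needs
to know what storage sits behind it.

#include <stddef.h>
#include <sys/types.h>

struct nbd_device;                  /* opaque per-export state */

/* Hypothetical interface, for illustration only. */
struct nbd_handler {
    const char *name;               /* "Gluster", "Ceph", ... */

    /* set up / tear down the backend connection for one export */
    int  (*open)(struct nbd_device *dev, const char *config);
    void (*close)(struct nbd_device *dev);

    /* the actual IO against the backend storage service; a Gluster
     * handler would implement these with glfs_pread()/glfs_pwrite() */
    ssize_t (*read)(struct nbd_device *dev, void *buf,
                    size_t count, off_t offset);
    ssize_t (*write)(struct nbd_device *dev, const void *buf,
                     size_t count, off_t offset);
    int     (*flush)(struct nbd_device *dev);
};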

Given that the charter for this project is to support multiple backend
storage projects, would it not be better to host the project in the GitHub
repository associated with nbd [5]? Doing it that way could provide a more
neutral (as perceived by users) venue for hosting nbd-runner and help you
gain more adoption for your work.

Thanks,
Vijay

[5] https://github.com/NetworkBlockDevice/nbd




> Thanks.
>
>
> While this email is about announcing the project and asking for more
> collaboration, I would also like to discuss the placement of the project
> itself. Currently the nbd-runner project is expected to be shared by our
> friends at the Ceph project too, to provide an NBD driver for Ceph. I have
> personally worked closely with some of them while contributing to the
> open-iSCSI project, and we would like to take this project to great success.
>
> Now a few questions:
>
>    1. Can I continue to use http://github.com/gluster/nbd-runner as the
>    home for this project, even if it's shared by other filesystem projects?
>
>       - I personally am fine with this.
>
>    2. Should there be a separate organization for this repo?
>
>       - While it may make sense in the future, for now I am not planning
>       to start anything new.
>
> It would be great if we can reach consensus on this soon, as nbd-runner is
> a new repository. If there are no concerns, I will continue contributing
> to the existing repository.
>
> Regards,
> Xiubo Li (@lxbsz)
>
> [1] - https://github.com/gluster/gluster-block
> [2] - https://github.com/open-iscsi
> [3] - https://github.com/gluster/glusterfs/issues/633
> [4] - https://github.com/gluster/nbd-runner
>
> _______________________________________________
> Gluster-users mailing list
> Gluster-users at gluster.org
> https://lists.gluster.org/mailman/listinfo/gluster-users