[Gluster-Maintainers] [Gluster-users] Proposal to mark few features as Deprecated / SunSet from Version 5.0

Jim Kinney jim.kinney at gmail.com
Thu Jul 19 12:36:38 UTC 2018


Too bad RDMA will be abandoned. It's the perfect transport for inter-node processing and data sync.

I currently use RDMA on a computational cluster, between the nodes and gluster storage. The older IB cards will support 10G IP and 40G IB. I've had some success with connectivity but am still struggling with FUSE performance. As soon as some retired gear is reconnected, I'll have a test bed for HA NFS over RDMA to the computational cluster and 10G IP to the non-cluster systems.

But it looks like Gluster 6 is a ways away, so maybe I'll get more hardware, or time to pitch in some code after grokking enough IB.

Thanks for the heads up and all the work to date. 

On July 19, 2018 2:56:35 AM EDT, Amar Tumballi <atumball at redhat.com> wrote:
> Hi all,
>
> Over the last 12 years of Gluster we have developed many features, and we continue to support most of them. But along the way we have figured out better methods of doing things, and some of these features are no longer actively maintained.
>
> We are now thinking of cleaning up some of these ‘unsupported’ features and marking them as ‘SunSet’ (i.e., to be totally taken out of the codebase in following releases) in the next upcoming release, v5.0. The release notes will provide options for smoothly migrating to the supported configurations. If you are using any of these features, do let us know, so that we can help you with the migration. Also, we are happy to guide new developers to work on those components which are not actively being maintained by the current set of developers.
>
> List of features hitting SunSet:
>
> ‘cluster/stripe’ translator:
>
> This translator was developed very early in the evolution of GlusterFS, and addressed one of the most common questions about a distributed FS: “What happens if one of my files is bigger than the available brick? Say I have a 2 TB hard drive exported in glusterfs, and my file is 3 TB.” While it served that purpose, it was very hard to handle failure scenarios and give users a really good experience with this feature. Over time, Gluster solved the problem with its ‘Shard’ feature, which addresses the same need in a much better way on the existing, well-supported stack. Hence the proposal for deprecation.
>
> If you are using this feature, then do write to us, as it needs a proper migration from the existing volume to a new, fully supported volume type before you upgrade.
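>
> For illustration only, a migration of that kind would look roughly like the sketch below; the volume, server, and path names are placeholders, and the v5.0 release notes will carry the actual supported procedure.
>
>   # create a new, fully supported volume (distributed-replicated here) and enable sharding
>   gluster volume create newvol replica 3 server1:/bricks/newvol server2:/bricks/newvol server3:/bricks/newvol
>   gluster volume set newvol features.shard on
>   gluster volume set newvol features.shard-block-size 64MB
>   gluster volume start newvol
>   # mount both volumes and copy the data off the old striped volume
>   rsync -a /mnt/oldstripevol/ /mnt/newvol/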
>
> ‘storage/bd’ translator:
>
> This feature got into the code base five years back with this patch [1]. The plan was to use a block device directly as a brick, which would make handling disk-image storage much easier in glusterfs. As the feature is not attracting contributions, and we are not seeing any user traction on it, we would like to propose it for deprecation.
>
> If you are using the feature, plan your move to a supported gluster volume configuration, and have your setup ‘supported’ before upgrading to your new gluster version.
>
> ‘RDMA’ transport support:
>
> Gluster started supporting RDMA while ib-verbs was still new, and the very high-end infrastructure of that time was built on InfiniBand. Engineers did work with Mellanox and got the technology into GlusterFS for better data migration and data copy. Current-day kernels deliver very good speed with the IPoIB module itself, and the experts in this area no longer have bandwidth to maintain the feature, so we recommend migrating your volume over to a TCP (IP based) network.
>
> If you are successfully using the RDMA transport, do get in touch with us to prioritize the migration plan for your volume. The plan is to work on this after the release, so that by version 6.0 we will have cleaner transport code which needs to support only one type.
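>
> As a rough sketch of that migration (assuming the volume was created with an rdma or tcp,rdma transport, and that the config.transport volume option is available in your version; ‘myvol’ and the paths are placeholders):
>
>   # switch the volume’s transport to TCP while it is stopped
>   gluster volume stop myvol
>   gluster volume set myvol config.transport tcp
>   gluster volume start myvol
>   # remount clients over IP (IPoIB addresses work here too)
>   mount -t glusterfs -o transport=tcp server1:/myvol /mnt/myvol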
>
> ‘Tiering’ feature:
>
> Gluster’s tiering feature was planned to provide an option to keep your ‘hot’ data in a different location than your cold data, so one can get better performance. While we saw some users for the feature, it needs much more attention to become completely bug free, and at this time we do not have any active maintainers for it; hence the suggestion to take it out of the ‘supported’ tag. If you are willing to take it up and maintain it, do let us know, and we are happy to assist you.
>
> If you are already using the tiering feature, make sure to do a ‘gluster volume tier detach’ of all the tier bricks before upgrading to the next release. Also, we recommend using features like dm-cache on your LVM setup to get the best performance from bricks.
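>
> Roughly, the pre-upgrade detach and the dm-cache alternative would look like the lines below; the volume, VG, and LV names are placeholders, and this is a sketch rather than a verified procedure.
>
>   # detach the hot tier before upgrading
>   gluster volume tier myvol detach start
>   gluster volume tier myvol detach status
>   gluster volume tier myvol detach commit
>   # afterwards, an lvmcache (dm-cache) pool on the brick LV can take over the ‘hot’ role
>   lvcreate --type cache-pool -L 100G -n cpool vg_bricks
>   lvconvert --type cache --cachepool vg_bricks/cpool vg_bricks/brick1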
>
> ‘Quota’:
>
> This is a call out for the ‘Quota’ feature, to let you all know that it will move to a ‘no new development’ state. While this feature is actively in use by many people, the challenges in the accounting mechanisms involved have made it hard to achieve good performance, and the amount of extended attribute get/set operations while using the feature is not ideal. Hence we recommend our users to move towards setting quota on the backend bricks directly (i.e., XFS project quota), or to use different volumes for different directories, etc.
>
> As the feature wouldn’t be deprecated immediately, it doesn’t need a migration plan when you upgrade to a newer version, but if you are a new user, we wouldn’t recommend enabling the quota feature. By the release dates, we will be publishing a guide to the best alternatives to gluster’s current quota feature.
>
> Note that if you want to contribute to the feature, we have a project-quota based issue open [2]. We are happy to get contributions, and to help in getting a newer approach to Quota.
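>
> To make the backend-brick alternative concrete, XFS project quota on a brick filesystem works along these lines (paths, project ID, and limits are placeholders; this is only a sketch, not the promised alternatives guide):
>
>   # the brick filesystem must be mounted with project quota accounting enabled
>   mount -o prjquota /dev/vg_bricks/brick1 /bricks/brick1
>   # tag the brick directory tree with a project ID and set a hard block limit
>   xfs_quota -x -c 'project -s -p /bricks/brick1/vol1 100' /bricks/brick1
>   xfs_quota -x -c 'limit -p bhard=500g 100' /bricks/brick1
>   # check usage against the limit
>   xfs_quota -x -c 'report -p' /bricks/brick1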
>
> ------------------------------
>
> These are the initial set of features which we propose to take out of the ‘fully supported’ set. As we work on making the user/developer experience of the project much better with a well maintained codebase, we may come up with a few more features that we consider moving out of support, so keep watching this space.
>
> [1] - http://review.gluster.org/4809
> [2] - https://github.com/gluster/glusterfs/issues/184
>
> Regards,
> Vijay, Shyam, Amar

-- 
Sent from my Android device with K-9 Mail. All tyopes are thumb related and reflect authenticity.