[Gluster-Maintainers] [Gluster-users] Proposal to mark few features as Deprecated / SunSet from Version 5.0

Amar Tumballi Suryanarayan atumball at redhat.com
Tue Mar 19 13:10:26 UTC 2019


Hi Hans,

Thanks for the honest feedback. Appreciate this.

On Tue, Mar 19, 2019 at 5:39 PM Hans Henrik Happe <happe at nbi.dk> wrote:

> Hi,
>
> Looking into something else, I fell over this proposal. Being a shop that
> is going into "Leaving GlusterFS" mode, I thought I would give my two
> cents.
>
> While being partially an HPC shop with a few Lustre filesystems,  we chose
> GlusterFS for an archiving solution (2-3 PB), because we could find files
> in the underlying ZFS filesystems if GlusterFS went sour.
>
> We have used the access to the underlying files plenty, because of the
> continuous instability of GlusterFS. Meanwhile, Lustre has been almost
> effortless to run, and mainly for that reason we are planning to move away
> from GlusterFS.
>
> Reading this proposal kind of underlined that "Leaving GlusterFS" is the
> right thing to do. While I never understood why GlusterFS has been in
> feature-crazy mode instead of stabilizing mode, taking away crucial
> features I don't get. With RoCE, RDMA is getting mainstream. Quotas are
> very useful, even though the current implementation is not perfect.
> Tiering also makes so much sense, but, for large files, not on a per-file
> level.
>
>
It is a valid concern to raise, and removing existing features is rarely a
good thing. But one thing we have noticed over the years is that the
features we develop but never take to completion cause the most heartburn.
People assume a feature is there, since it has been around for a few years,
but if developers are not actively working on it, users end up feeling that
the whole product doesn't work, because that one feature didn't work.

Other than Quota, for all the other features in the proposal email, even
though we have *some* users, we are inclined towards deprecating them,
considering the project's overall goal of stability in the longer run.


> To be honest, we only use quotas. We got scared of trying out new
> performance features that potentially would open up a new bag of issues.
>
About Quota, we have heard enough voices, so we will make sure we keep it.
The original email was a 'Proposal', and hence these opinions matter for
the decision.

> Sorry for being such a buzzkill. I really wanted it to be different.
>
We hear you. Please let us know one thing: which versions did you try?

We hope that, in the coming months, our recent focus on stability and
technical-debt reduction will make it worth taking another look at Gluster.


> Cheers,
> Hans Henrik
> On 19/07/2018 08.56, Amar Tumballi wrote:
>
>
> Hi all,
>
> Over the last 12 years of Gluster we have developed many features, and we
> continue to support most of them. Along the way we have figured out better
> ways of doing some things, and some of these features are no longer
> actively maintained. We are now thinking of cleaning up some of these
> ‘unsupported’ features and marking them as ‘SunSet’ (i.e., to be removed
> from the codebase entirely in following releases) in the next upcoming
> release, v5.0. The release notes will provide options for smoothly
> migrating to the supported configurations. If you are using any of these
> features, do let us know, so that we can help you with the migration.
> Also, we are happy to guide new developers who want to work on components
> that are not actively maintained by the current set of developers.
>
> List of features hitting SunSet:
> ‘cluster/stripe’ translator:
>
> This translator was developed very early in the evolution of GlusterFS and
> addressed one of the very common questions about a distributed FS: “What
> happens if one of my files is bigger than the available brick? Say I have
> a 2 TB hard drive exported in GlusterFS and my file is 3 TB.” While it
> served that purpose, it was very hard to handle failure scenarios and give
> our users a really good experience with this feature. Over time, Gluster
> solved the problem with its ‘Shard’ feature, which handles it in a much
> better way on the existing, well-supported stack. Hence the proposal for
> deprecation. If you are using this feature, do write to us, as it needs a
> proper migration from the existing volume to a fully supported volume type
> before you upgrade.
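
For anyone planning the stripe-to-shard move described above, a rough
sketch of what the migration could look like (volume names, brick paths and
the shard block size are only placeholders; follow the release notes for
the exact steps):

    # Create a new, fully supported volume with sharding enabled
    # (replica 3 and the brick paths are only examples).
    gluster volume create newvol replica 3 \
        server1:/bricks/newvol server2:/bricks/newvol server3:/bricks/newvol
    gluster volume set newvol features.shard on
    gluster volume set newvol features.shard-block-size 64MB
    gluster volume start newvol

    # Mount both volumes and copy the data off the old striped volume.
    mkdir -p /mnt/old /mnt/new
    mount -t glusterfs server1:/oldstripevol /mnt/old
    mount -t glusterfs server1:/newvol /mnt/new
    rsync -aHAX /mnt/old/ /mnt/new/
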
> ‘storage/bd’ translator:
>
> This feature got into the codebase 5 years back with this patch
> <http://review.gluster.org/4809> [1]. The plan was to use a block device
> directly as a brick, which would make handling disk-image storage in
> GlusterFS much easier. As the feature is not receiving further
> contributions, and we are not seeing any user traction on it, we would
> like to propose it for deprecation. If you are using the feature, plan to
> move to a supported Gluster volume configuration, and have your setup
> ‘supported’ before upgrading to your new Gluster version.
> ‘RDMA’ transport support:
>
> Gluster started supporting RDMA while ib-verbs was still new, and the very
> high-end infrastructure of that time used InfiniBand. Engineers worked
> with Mellanox and got the technology into GlusterFS for better data
> migration and data copy. Current-day kernels achieve very good speed with
> the IPoIB module itself, and there is no longer bandwidth among the
> experts in this area to maintain the feature, so we recommend migrating
> your volume over to a TCP (IP-based) network. If you are successfully
> using the RDMA transport, do get in touch with us to prioritize the
> migration plan for your volume. The plan is to work on this after the
> release, so that by version 6.0 we will have cleaner transport code which
> only needs to support one type.
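
For a volume created with the rdma or tcp,rdma transport, the switch to TCP
is roughly a stop/set/start cycle. The sketch below assumes the documented
config.transport volume option and a hypothetical volume name; please
verify against the admin guide for your version before relying on it:

    # Unmount all clients first, then switch the transport and remount.
    gluster volume stop myvol
    gluster volume set myvol config.transport tcp
    gluster volume start myvol
    mount -t glusterfs server1:/myvol /mnt/myvol
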
> ‘Tiering’ feature:
>
> Gluster’s tiering feature was planned to provide an option to keep your
> ‘hot’ data in a different location than your cold data, so one can get
> better performance. While we saw some users for the feature, it needs much
> more attention to be completely bug free. At this time we do not have any
> active maintainers for the feature, and hence we suggest taking it out of
> the ‘supported’ tag. If you are willing to take it up and maintain it, do
> let us know, and we are happy to assist you. If you are already using the
> tiering feature, make sure to do ‘gluster volume tier detach’ on all the
> bricks before upgrading to the next release. Also, we recommend using
> features like dm-cache on your LVM setup to get the best performance from
> the bricks.
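
To expand on the detach step above: tier detach is a start/status/commit
sequence, and dm-cache can be layered onto the brick LVs through lvmcache.
The commands below are an illustrative sketch with placeholder volume, VG
and device names, not a tested procedure:

    # Detach the hot tier before upgrading (wait for 'detach status'
    # to show completion before committing).
    gluster volume tier myvol detach start
    gluster volume tier myvol detach status
    gluster volume tier myvol detach commit

    # Optional: use lvmcache (dm-cache) on the brick LV, with a fast
    # SSD PV already in the same volume group (names are examples).
    lvcreate --type cache-pool -L 100G -n brickcache myvg /dev/nvme0n1
    lvconvert --type cache --cachepool myvg/brickcache myvg/brick1
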
> ‘Quota’:
>
> This is a call-out for the ‘Quota’ feature, to let you all know that it
> will be in a ‘no new development’ state. While this feature is actively in
> use by many people, the challenges in the accounting mechanisms involved
> have made it hard to achieve good performance with the feature. Also, the
> number of extended-attribute get/set operations required while using the
> feature is not ideal. Hence we recommend our users to move towards setting
> quota on the backend bricks directly (i.e., XFS project quota), or to use
> different volumes for different directories, etc. As the feature will not
> be deprecated immediately, it does not need a migration plan when you
> upgrade to a newer version, but if you are a new user, we would not
> recommend enabling the quota feature. By the release dates, we will
> publish a guide to the best alternatives to Gluster’s current quota
> feature. Note that if you want to contribute to the feature, we have a
> project-quota based issue open
> <https://github.com/gluster/glusterfs/issues/184> [2]. We are happy to get
> contributions and to help in getting a newer approach to Quota.
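
Since XFS project quota is the main alternative named above, here is a
small sketch of what a per-directory quota on a brick could look like
(project name, ID, paths and limits are placeholders; the brick filesystem
has to be mounted with the prjquota option):

    # Map a brick directory to an XFS project.
    echo "42:/bricks/brick1/archive" >> /etc/projects
    echo "archive:42" >> /etc/projid

    # Initialise the project, set a 1 TB hard limit, then report usage.
    xfs_quota -x -c 'project -s archive' /bricks/brick1
    xfs_quota -x -c 'limit -p bhard=1t archive' /bricks/brick1
    xfs_quota -x -c 'report -p' /bricks/brick1
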
> ------------------------------
>
> These are the initial set of features which we propose to take out of the
> ‘fully supported’ set. While we are in the process of making the
> user/developer experience of the project much better by providing a
> well-maintained codebase, we may come up with a few more features that we
> may consider moving out of support, so keep watching this space.
>
> [1] - http://review.gluster.org/4809
> [2] - https://github.com/gluster/glusterfs/issues/184
>
> Regards,
> Vijay, Shyam, Amar
>
>
>
> _______________________________________________
> Gluster-users mailing list
> Gluster-users at gluster.org
> https://lists.gluster.org/mailman/listinfo/gluster-users
>
>

-- 
Amar Tumballi (amarts)

