<div dir="ltr">Hi Amar,<div><br></div><div>we are also going to start using glusterfs with quotas for home folders. Quotas is one of the main requirements and i'd like to add a +1 to keep the quota feature, as already said maintaining quotas for each brick at Xfs level does not seem really practical.</div><div><br></div><div>thanks</div></div><br><div class="gmail_quote"><div dir="ltr">On Mon, Jul 23, 2018 at 6:38 PM Amar Tumballi <<a href="mailto:atumball@redhat.com">atumball@redhat.com</a>> wrote:<br></div><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex"><div dir="ltr"><br><div class="gmail_extra"><br><div class="gmail_quote">On Mon, Jul 23, 2018 at 8:21 PM, Gudrun Mareike Amedick <span dir="ltr"><<a href="mailto:g.amedick@uni-luebeck.de" target="_blank">g.amedick@uni-luebeck.de</a>></span> wrote:<br><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex">Hi,<br>

we're planning a dispersed volume with at least 50 project directories. Each of those has its own quota, ranging between 0.1 TB and 200 TB. Comparing XFS project quotas over several servers and bricks to make sure their total matches the desired value doesn't really sound practical. It would probably be possible to create and maintain 50 volumes or more, but that doesn't seem to be a desirable solution. The quotas aren't fixed, and resizing a volume is not as trivial as changing a quota.

Quota was in the past, and still is, a very comfortable way to solve this.
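
For example (the volume and directory names here are just placeholders), resizing a project under Gluster quota is a single command, and the limit is enforced across all bricks of the volume:

  gluster volume quota projects enable
  gluster volume quota projects limit-usage /project01 200TB
  gluster volume quota projects list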

But what is the new recommended way for such a setting when quota is going to be deprecated?

Thanks for the feedback. Helps us to prioritize. Will get back on this.

-Amar

Kind regards

Gudrun

On Thursday, 2018-07-19 at 12:26 +0530, Amar Tumballi wrote:
> Hi all,
> 
> Over the last 12 years of Gluster, we have developed many features, and we continue to support most of them. But along the way, we have figured out better methods of doing things, and some of these features are no longer actively maintained.
> 
> We are now thinking of cleaning up some of these ‘unsupported’ features and marking them as ‘sunset’ (i.e., to be taken out of the codebase entirely in following releases) in the next upcoming release, v5.0. The release notes will provide options for migrating smoothly to the supported configurations.
> 
> If you are using any of these features, do let us know, so that we can help you with the migration. Also, we are happy to guide new developers to work on those components which are not actively maintained by the current set of developers.
> 
> List of features hitting sunset:
> 
> ‘cluster/stripe’ translator:
> 
> This translator was developed very early in the evolution of GlusterFS, and it addressed one of the most common questions about distributed filesystems: “What happens if one of my files is bigger than the available brick? Say I have a 2 TB hard drive exported in GlusterFS, and my file is 3 TB.” While it served its purpose, it was very hard to handle failure scenarios and give our users a really good experience with this feature. Over time, Gluster solved the problem with its ‘shard’ feature, which addresses the same need in a much better way on the existing, well-supported stack. Hence the proposal for deprecation.
> 
> If you are using this feature, do write to us, as it needs a proper migration from the existing volume to a fully supported volume type before you upgrade.
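> As a minimal sketch (volume names, brick paths, and the shard block size below are placeholders; the exact steps depend on your layout), such a migration could look like:
> 
>   # Create a replacement volume with sharding enabled:
>   gluster volume create newvol replica 3 server1:/bricks/newvol server2:/bricks/newvol server3:/bricks/newvol
>   gluster volume set newvol features.shard on
>   gluster volume set newvol features.shard-block-size 64MB
>   gluster volume start newvol
> 
>   # Mount both volumes and copy the data across:
>   mount -t glusterfs server1:/stripevol /mnt/old
>   mount -t glusterfs server1:/newvol /mnt/new
>   rsync -aHAX /mnt/old/ /mnt/new/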
> 
> ‘storage/bd’ translator:
> 
> This feature entered the codebase 5 years back with this patch[1]. The plan was to use a block device directly as a brick, which would make handling disk-image storage in GlusterFS much easier.
> 
> As the feature is not receiving further contributions, and we are not seeing any user traction on it, we would like to propose it for deprecation.
> 
> If you are using the feature, plan a move to a supported Gluster volume configuration, and have your setup ‘supported’ before upgrading to your new Gluster version.
> 
> ‘RDMA’ transport support:
> 
> Gluster started supporting RDMA while ib-verbs was still new, and the very high-end infrastructure of that era used InfiniBand. Engineers worked with Mellanox and got the technology into GlusterFS for better data migration and data copying. Current-day kernels deliver very good speed with the IPoIB module itself, and the experts in this area no longer have bandwidth to maintain the feature, so we recommend migrating your volume over to a TCP (IP-based) network.
> 
> If you are successfully using the RDMA transport, do get in touch with us to prioritize the migration plan for your volume. The plan is to work on this after the release, so that by version 6.0 we will have cleaner transport code which needs to support just one type.
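> For a volume currently on RDMA, switching the transport might look roughly like this (the volume name is a placeholder; the volume must be stopped first, and clients re-mounted afterwards):
> 
>   gluster volume stop myvol
>   gluster volume set myvol config.transport tcp
>   gluster volume start myvol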
> 
> ‘Tiering’ feature
> 
> Gluster’s tiering feature was planned to provide an option to keep your ‘hot’ data in a different location than your cold data, so one can get better performance. While we saw some users for the feature, it needs much more attention to become completely bug-free. At this time, we don’t have any active maintainers for the feature, and hence we suggest taking it out of the ‘supported’ tag.
> 
> If you are willing to take it up and maintain it, do let us know, and we are happy to assist you.
> 
> If you are already using the tiering feature, make sure to run ‘gluster volume tier detach’ for all the bricks before upgrading to the next release, as sketched below. Also, we recommend using features like dm-cache on your LVM setup to get the best performance from bricks.
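> Detaching is a multi-step operation; assuming a volume named ‘myvol’, it could look like:
> 
>   # Start migrating data off the hot tier:
>   gluster volume tier myvol detach start
> 
>   # Poll until the data migration is complete:
>   gluster volume tier myvol detach status
> 
>   # Then remove the hot-tier bricks from the volume:
>   gluster volume tier myvol detach commit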
> 
> ‘Quota’
> 
> This is a call-out for the ‘Quota’ feature, to let you all know that it will move to a ‘no new development’ state. While this feature is actively in use by many people, the challenges in the accounting mechanisms involved have made it hard to achieve good performance with the feature. Also, the number of extended-attribute get/set operations performed while using the feature is far from ideal. Hence we recommend our users to move towards setting quota on the backend bricks directly (i.e., XFS project quota), or to use different volumes for different directories, etc.
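> A minimal sketch of an XFS project quota on one brick (the project name, ID, limit, and paths are placeholders; note that on a multi-brick volume, per-brick limits only approximate an overall limit if data is spread evenly):
> 
>   # Map the project in /etc/projects and /etc/projid:
>   echo "42:/bricks/brick1/projectA" >> /etc/projects
>   echo "projectA:42" >> /etc/projid
> 
>   # Initialize the project and set a hard block limit; the brick
>   # filesystem must be mounted with the ‘prjquota’ option:
>   xfs_quota -x -c 'project -s projectA' /bricks/brick1
>   xfs_quota -x -c 'limit -p bhard=4t projectA' /bricks/brick1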
> 
> As the feature won’t be deprecated immediately, it doesn’t require a migration plan when you upgrade to a newer version, but if you are a new user, we wouldn’t recommend enabling the quota feature. By the release dates, we will be publishing a guide to the best alternatives to Gluster’s current quota feature.
> 
> Note that if you want to contribute to the feature, we have an issue open for a project-quota-based approach[2]. We are happy to get contributions and help in getting a newer approach to Quota.
> 
> This is our initial set of features which we propose to take out of ‘fully supported’ status. While we are in the process of making the user/developer experience of the project much better by providing a well-maintained codebase, we may come up with a few more features which we may consider moving out of support, so keep watching this space.
> 
> [1] - http://review.gluster.org/4809
> 
> [2] - https://github.com/gluster/glusterfs/issues/184
> 
> Regards,
> 
> Vijay, Shyam, Amar
> 

-- 
Amar Tumballi (amarts)

_______________________________________________
Gluster-users mailing list
Gluster-users@gluster.org
<a href="https://lists.gluster.org/mailman/listinfo/gluster-users" rel="noreferrer" target="_blank">https://lists.gluster.org/mailman/listinfo/gluster-users</a></blockquote></div><br clear="all"><div><br></div>-- <br><div dir="ltr" class="gmail_signature" data-smartmail="gmail_signature"><span style="display:block;font-size:11.0pt;font-family:Century Gothic;color:#003580"><div style="color:rgb(0,53,128);font-family:Arial,Helvetica,sans-serif;font-weight:bold;font-size:15px"><div>Davide Obbi</div><div style="font-weight:normal;font-size:13px;color:rgb(0,174,239)">System Administrator<br><br></div><div style="font-weight:normal;font-size:13px;color:rgb(102,102,102)">Booking.com B.V.<br>Vijzelstraat 66-80 Amsterdam 1017HL Netherlands</div><div style="font-weight:normal;font-size:13px;color:rgb(102,102,102)"><span style="color:rgb(0,174,239)">Direct </span>+31207031558<br></div><div style="font-weight:normal;font-size:13px;color:rgb(102,102,102)"><div style="font-weight:bold;font-size:16px;color:rgb(0,53,128)"><a href="https://www.booking.com/" style="color:rgb(0,127,255);background-image:initial;background-position:initial;background-repeat:initial" target="_blank"><img src="https://bstatic.com/static/img/siglogo.jpg" alt="Booking.com" title="Booking.com"></a></div><span style="font-size:11px">The world's #1 accommodation site <br>43 languages, 198+ offices worldwide, 120,000+ global destinations, 1,550,000+ room nights booked every day <br>No booking fees, best price always guaranteed <br></span><span style="font-size:11px">Subsidiary of Booking Holdings Inc. (NASDAQ: BKNG)</span><span style="font-size:11px"><br></span></div></div></span></div>