[Gluster-devel] [Gluster-Maintainers] Please pause merging patches to 3.9 waiting for just one patch
Manikandan Selvaganesh
manikandancs333 at gmail.com
Thu Nov 10 16:03:09 UTC 2016
Raghavendra,
No problem. As you said, glusterd_quota_limit_usage invokes the function
that regenerates the conf file. Though I do not remember the details
exactly, when I tried it, it did not work properly in my setup. The
apparent reason is that in the later function, where we regenerate
quota.conf for versions greater than or equal to 3.7, setting or
resetting a limit searches for the gfid whose limit needs to be
set/reset and modifies only that entry to 17 bytes, leaving the
remaining entries untouched, which again results in unexpected
behavior. In the case of enable or disable, the entire file is newly
generated. With this patch, we do the same during an upgrade as well.
Even so, I am not completely sure; it is better to test and confirm.
I can test this over the weekend if that is fine.
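
To make the record-size issue concrete, here is a rough sketch of the
two on-disk entry formats (illustrative only; these are not the actual
glusterd structures or constants, just the shape of the entries as
described above):

    /* Illustrative only; not the real glusterd definitions. */
    #include <stdint.h>
    #include <stdio.h>

    /* Entry written by versions < 3.7: the raw gfid alone, 16 bytes. */
    struct quota_conf_entry_old {
        uint8_t gfid[16];
    };

    /* Entry written by versions >= 3.7: gfid plus a limit-type byte,
     * 17 bytes. */
    struct quota_conf_entry_new {
        uint8_t gfid[16];
        uint8_t type;
    };

    int main (void)
    {
        /* As I understand it, the reader walks quota.conf in fixed-size
         * strides picked from the version in the header.  If limit-usage
         * rewrites only the matching entry to the 17-byte form and leaves
         * the other 16-byte entries from the 3.6-era file untouched, every
         * entry after the rewritten one is read at the wrong offset, which
         * is the unexpected behavior mentioned above.  Enable/disable, by
         * contrast, regenerates the whole file in one format. */
        printf ("old entry: %zu bytes, new entry: %zu bytes\n",
                sizeof (struct quota_conf_entry_old),
                sizeof (struct quota_conf_entry_new));
        return 0;
    }
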
On Nov 10, 2016 9:00 PM, "Raghavendra G" <raghavendra at gluster.com> wrote:
>
>
> On Thu, Nov 10, 2016 at 8:46 PM, Manikandan Selvaganesh <
> manikandancs333 at gmail.com> wrote:
>
>> Enabling/disabling quota or removing limits are the ways in which
>> quota.conf is regenerated to the later version, and that works properly.
>> As Pranith said, though, both enable and disable take a lot of time to
>> crawl (even if it is now much faster with the enhanced quota
>> enable/disable process), so we cannot suggest them to users with a lot of
>> quota configuration. Resetting a limit using limit-usage does not work
>> properly; I have tested that. The workaround depends on the user's setup
>> here, i.e. the exact order of steps they used matters, so the workaround
>> is not very generic.
>>
>
> Thanks, Manikandan, for the reply :). I've not tested this, but I went
> through the code. If I am not wrong, the function glusterd_store_quota_config
> writes a quota.conf that is compatible with versions >= 3.7, and this
> function is invoked by glusterd_quota_limit_usage unconditionally in the
> success path. What am I missing here?
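>
> For reference, here is a toy sketch of the flow I mean (the two function
> names are from this thread; the types, arguments and bodies below are
> simplified stand-ins, not the actual glusterd code):
>
>     /* Toy model only; names from the thread, everything else made up. */
>     #include <stdio.h>
>
>     /* stand-in for the routine that rewrites quota.conf in the
>      * >= 3.7 (17-byte entry) format */
>     static int glusterd_store_quota_config (const char *volname)
>     {
>         printf ("rewriting quota.conf of %s in the >= 3.7 format\n",
>                 volname);
>         return 0;
>     }
>
>     static int glusterd_quota_limit_usage (const char *volname,
>                                            const char *path,
>                                            const char *limit)
>     {
>         printf ("setting limit %s on %s:%s\n", limit, volname, path);
>
>         /* invoked unconditionally on the success path, which is why
>          * limit-usage looks like it should rewrite the whole file */
>         return glusterd_store_quota_config (volname);
>     }
>
>     int main (void)
>     {
>         return glusterd_quota_limit_usage ("testvol", "/", "10GB");
>     }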
>
> @Pranith,
>
> Since Manikandan says his tests didn't always succeed, we should probably
> do one of the following:
> 1. Hold back the release till we successfully test limit-usage as a way to
> rewrite quota.conf (I can do this tomorrow).
> 2. Get the patch in question into 3.9.
> 3. If 1 fails, debug why it is not working and fix that.
>
> regards,
> Raghavendra
>
>> However, quota enable/disable would regenerate the file in any case.
>>
>> IMO, this bug is critical. I am not sure, though, how often users would
>> hit it: it affects updates from 3.6 to the latest versions. From 3.7 to
>> the latest it is fine; that path has nothing to do with this patch.
>>
>> On Nov 10, 2016 8:03 PM, "Pranith Kumar Karampuri" <pkarampu at redhat.com>
>> wrote:
>>
>>>
>>>
>>> On Thu, Nov 10, 2016 at 7:43 PM, Raghavendra G <raghavendra at gluster.com>
>>> wrote:
>>>
>>>>
>>>>
>>>> On Thu, Nov 10, 2016 at 2:14 PM, Pranith Kumar Karampuri <
>>>> pkarampu at redhat.com> wrote:
>>>>
>>>>>
>>>>>
>>>>> On Thu, Nov 10, 2016 at 1:11 PM, Atin Mukherjee <amukherj at redhat.com>
>>>>> wrote:
>>>>>
>>>>>>
>>>>>>
>>>>>> On Thu, Nov 10, 2016 at 1:04 PM, Pranith Kumar Karampuri <
>>>>>> pkarampu at redhat.com> wrote:
>>>>>>
>>>>>>> I am trying to understand the criticality of these patches.
>>>>>>> Raghavendra's patch is crucial because gfapi workloads (for Samba and
>>>>>>> QEMU) are affected severely. I waited for Krutika's patch because the VM
>>>>>>> use case can lead to disk corruption on replace-brick. If you let us know
>>>>>>> the criticality and we agree that they are that severe, we can definitely
>>>>>>> take them in. Otherwise, the next release is better, IMO. Thoughts?
>>>>>>>
>>>>>>
>>>>>> If you are asking how critical they are, the first two are definitely
>>>>>> not, but the third one actually is: if a user upgrades from 3.6 to the
>>>>>> latest with quota enabled, further peer probes get rejected, and the only
>>>>>> workaround is to disable quota and re-enable it.
>>>>>>
>>>>>
>>>>> Let me take Raghavendra G's input here as well.
>>>>>
>>>>> Raghavendra, what do you think we should do? Merge it or live with it
>>>>> till 3.9.1?
>>>>>
>>>>
>>>> The commit says quota.conf is rewritten to the compatible version during
>>>> three operations:
>>>> 1. enable/disable quota
>>>>
>>>
>>> This will involve crawling the whole FS, won't it?
>>>
>>> 2. limit usage
>>>>
>>>
>>> This is a good way, IMO. Could you or Sanoj confirm that this works by
>>> testing it once?
>>>
>>>
>>>> 3. remove quota limit
>>>>
>>>
>>> I guess you added this for completeness. We can't really suggest this to
>>> users as a workaround.
>>>
>>>
>>>>
>>>> I checked the code, and it works as stated in the commit message. We can
>>>> probably list the above three operations as workarounds and take this
>>>> patch in for 3.9.1.
>>>>
>>>
>>>>
>>>>>
>>>>>>
>>>>>> On a different note, the 3.9 head is not static; it is moving forward.
>>>>>> So if you really expect that only critical patches will go in, that is
>>>>>> not what is happening. Just a word of caution!
>>>>>>
>>>>>>
>>>>>>> On Thu, Nov 10, 2016 at 12:56 PM, Atin Mukherjee <
>>>>>>> amukherj at redhat.com> wrote:
>>>>>>>
>>>>>>>> Pranith,
>>>>>>>>
>>>>>>>> I'd like to see the following patches getting in:
>>>>>>>>
>>>>>>>> http://review.gluster.org/#/c/15722/
>>>>>>>> http://review.gluster.org/#/c/15714/
>>>>>>>> http://review.gluster.org/#/c/15792/
>>>>>>>>
>>>>>>>
>>>>>>>>
>>>>>>>>
>>>>>>>>
>>>>>>>> On Thu, Nov 10, 2016 at 7:12 AM, Pranith Kumar Karampuri <
>>>>>>>> pkarampu at redhat.com> wrote:
>>>>>>>>
>>>>>>>>> hi,
>>>>>>>>> The only problem left was EC taking more time, which should affect
>>>>>>>>> small files a lot more. The best way to solve it is to use
>>>>>>>>> compound-fops. So for now I think going ahead with the release is best.
>>>>>>>>>
>>>>>>>>> We are waiting for Raghavendra Talur's
>>>>>>>>> http://review.gluster.org/#/c/15778 before going ahead with the
>>>>>>>>> release. If we missed any other crucial patch, please let us know.
>>>>>>>>>
>>>>>>>>> Will make the release as soon as this patch is merged.
>>>>>>>>>
>>>>>>>>> --
>>>>>>>>> Pranith & Aravinda
>>>>>>>>>
>>>>>>>>>
>>>>>>>>>
>>>>>>>>
>>>>>>>>
>>>>>>>> --
>>>>>>>>
>>>>>>>> ~ Atin (atinm)
>>>>>>>>
>>>>>>>
>>>>>>>
>>>>>>>
>>>>>>> --
>>>>>>> Pranith
>>>>>>>
>>>>>>
>>>>>>
>>>>>>
>>>>>> --
>>>>>>
>>>>>> ~ Atin (atinm)
>>>>>>
>>>>>
>>>>>
>>>>>
>>>>> --
>>>>> Pranith
>>>>>
>>>>>
>>>>
>>>>
>>>>
>>>> --
>>>> Raghavendra G
>>>>
>>>
>>>
>>>
>>> --
>>> Pranith
>>>
>>>
>>>
>>
>
>
>
> --
> Raghavendra G
>