[Gluster-devel] [Gluster-users] On making ctime generator enabled by default in stack
Raghavendra Gowdappa
rgowdapp at redhat.com
Wed Jan 2 08:00:20 UTC 2019
On Mon, Nov 12, 2018 at 10:48 AM Amar Tumballi <atumball at redhat.com> wrote:
>
>
> On Mon, Nov 12, 2018 at 10:39 AM Vijay Bellur <vbellur at redhat.com> wrote:
>
>>
>>
>> On Sun, Nov 11, 2018 at 8:25 PM Raghavendra Gowdappa <rgowdapp at redhat.com>
>> wrote:
>>
>>>
>>>
>>> On Sun, Nov 11, 2018 at 11:41 PM Vijay Bellur <vbellur at redhat.com>
>>> wrote:
>>>
>>>>
>>>>
>>>> On Mon, Nov 5, 2018 at 8:31 PM Raghavendra Gowdappa <
>>>> rgowdapp at redhat.com> wrote:
>>>>
>>>>>
>>>>>
>>>>> On Tue, Nov 6, 2018 at 9:58 AM Vijay Bellur <vbellur at redhat.com>
>>>>> wrote:
>>>>>
>>>>>>
>>>>>>
>>>>>> On Mon, Nov 5, 2018 at 7:56 PM Raghavendra Gowdappa <
>>>>>> rgowdapp at redhat.com> wrote:
>>>>>>
>>>>>>> All,
>>>>>>>
>>>>>>> There is a patch [1] from Kotresh which enables the ctime generator
>>>>>>> by default in the stack. Currently the ctime generator is recommended
>>>>>>> only for use cases where ctime is important (like Elasticsearch).
>>>>>>> However, a reliable (c)(m)time can fix many consistency issues within
>>>>>>> the glusterfs stack too, namely issues with caching layers holding
>>>>>>> stale (meta)data [2][3][4]. Just like applications, components within
>>>>>>> the glusterfs stack need a timestamp to determine which among racing
>>>>>>> ops (write, stat, etc.) carries the latest (meta)data.
>>>>>>>
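To make the point above concrete, here is a small illustrative sketch in
Python -- not glusterfs code; the names and values are made up. A caching
layer that records the mtime/ctime it last saw can compare timestamps to
decide which of two racing replies carries the newer (meta)data, instead of
trusting arrival order:

    # Illustrative sketch only (hypothetical names): keep cached attributes
    # together with their times, and only accept a reply that is not older
    # than what is already cached.
    from dataclasses import dataclass

    @dataclass
    class CachedAttr:
        size: int
        mtime: float  # data modification time
        ctime: float  # metadata change time

    cache = {}

    def on_reply(path, size, mtime, ctime):
        cached = cache.get(path)
        if cached and (mtime, ctime) < (cached.mtime, cached.ctime):
            return cached  # racing reply is older; keep cached (meta)data
        cache[path] = CachedAttr(size, mtime, ctime)
        return cache[path]

    # A stale stat reply arriving after a write is ignored:
    on_reply("/f", size=4096, mtime=100.0, ctime=100.0)  # reply to the write
    on_reply("/f", size=1024, mtime=90.0, ctime=90.0)    # late, stale stat
    assert cache["/f"].size == 4096
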
>>>>>>> Also note that a consistent (c)(m)time is not an optional feature;
>>>>>>> it forms part of the core infrastructure. So, I am proposing to merge
>>>>>>> this patch. If you have any objections, please voice them before
>>>>>>> Nov 13, 2018 (a week from today).
>>>>>>>
>>>>>>> As to the existing known issues/limitations with the ctime generator,
>>>>>>> my conversations with Kotresh revealed the following:
>>>>>>> * Potential performance degradation (we don't yet have data to prove
>>>>>>> it conclusively; preliminary basic tests from Kotresh didn't indicate
>>>>>>> a significant perf drop).
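For context on where that cost would come from: the ctime generator persists
the consistent time on the brick backend as an extra xattr (assumed here to
be trusted.glusterfs.mdata, stored as an opaque blob). A minimal sketch,
under that assumption and with a made-up brick path, of inspecting it
directly on a brick:

    # Minimal sketch, assuming the consistent time is stored under the
    # "trusted.glusterfs.mdata" xattr on the brick's backend file. Reading
    # trusted.* xattrs typically requires root on the brick host; the brick
    # path below is a placeholder.
    import os

    def read_mdata(backend_file):
        try:
            # Opaque blob encoding the (a|m|c)time; the exact on-disk
            # format is an implementation detail.
            return os.getxattr(backend_file, "trusted.glusterfs.mdata")
        except OSError:
            return None  # xattr absent: not in use for this file

    print(read_mdata("/bricks/brick1/vol/file.txt"))
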
>>>>>>>
>>>>>>
>>>>>> Do we have this data captured somewhere? If not, would it be possible
>>>>>> to share that data here?
>>>>>>
>>>>>
>>>>> I misquoted Kotresh. He had measured the impact of gfid2path and said
>>>>> both features might have a similar impact, since the major perf cost
>>>>> comes from storing xattrs on the backend fs. I am in the process of
>>>>> getting a fresh set of numbers and will post them when available.
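For anyone who wants a rough, out-of-band feel for that xattr cost while the
real benchmark is pending, here is a hypothetical micro-benchmark sketch.
The directory, file names, and xattr name are placeholders, and it only
approximates one extra setxattr per create, not the actual glusterfs code
path:

    # Hypothetical micro-benchmark: time N small-file creates with and
    # without one extra setxattr per file, to approximate the overhead of
    # persisting an additional xattr on the backend fs. Run it on an
    # xattr-capable filesystem (e.g. ext4/xfs).
    import os, tempfile, time

    def create_files(n, with_xattr):
        d = tempfile.mkdtemp(prefix="xattr-bench-", dir=".")
        start = time.monotonic()
        for i in range(n):
            path = os.path.join(d, "f%d" % i)
            with open(path, "wb") as f:
                f.write(b"x")
            if with_xattr:
                # Small opaque payload standing in for the mdata blob.
                os.setxattr(path, "user.bench.mdata", b"\x00" * 24)
        return time.monotonic() - start

    n = 10000
    print("plain  : %.3fs" % create_files(n, with_xattr=False))
    print("+xattr : %.3fs" % create_files(n, with_xattr=True))
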
>>>>>
>>>>>
>>>>
>>>> I observe that the patch under discussion has now been merged [1]. A
>>>> quick search did not yield any performance data. Do we have the
>>>> performance numbers posted somewhere?
>>>>
>>>
>>> No. Perf benchmarking is a task pending on me.
>>>
>>
>> When can we expect this task to be complete?
>>
>> In any case, I don't think it is ideal for us to merge a patch without
>> completing our due diligence on it. How do we want to handle this scenario
>> since the patch is already merged?
>>
>> We could:
>>
>> 1. Revert the patch now
>> 2. Review the performance data and revert the patch if performance
>> characterization indicates a significant dip. It would be preferable to
>> complete this activity before we branch off for the next release.
>>
>
> I am for option 2. Considering that branching for the next release is
> another 2 months away, and no one is expected to use a 'release' off the
> master branch yet, it makes sense to use that buffer time to complete this
> activity.
>
It's unlikely I'll have time to carry out the perf benchmark. Hence I've
posted a revert here: https://review.gluster.org/#/c/glusterfs/+/21975/
> Regards,
> Amar
>
>> 3. Think of some other option?
>>
>> Thanks,
>> Vijay
>>
>>
>
>
>
> --
> Amar Tumballi (amarts)
>