[Gluster-users] On making ctime generator enabled by default in stack
Raghavendra Gowdappa
rgowdapp at redhat.com
Mon Nov 12 04:25:44 UTC 2018
On Sun, Nov 11, 2018 at 11:41 PM Vijay Bellur <vbellur at redhat.com> wrote:
>
>
> On Mon, Nov 5, 2018 at 8:31 PM Raghavendra Gowdappa <rgowdapp at redhat.com>
> wrote:
>
>>
>>
>> On Tue, Nov 6, 2018 at 9:58 AM Vijay Bellur <vbellur at redhat.com> wrote:
>>
>>>
>>>
>>> On Mon, Nov 5, 2018 at 7:56 PM Raghavendra Gowdappa <rgowdapp at redhat.com>
>>> wrote:
>>>
>>>> All,
>>>>
>>>> There is a patch [1] from Kotresh which makes the ctime generator the
>>>> default in the stack. Currently, the ctime generator is recommended only
>>>> for use cases where ctime is important (like Elasticsearch). However, a
>>>> reliable (c)(m)time can fix many consistency issues within the glusterfs
>>>> stack too, namely caching layers serving stale (meta)data [2][3][4].
>>>> Basically, just like applications, components within the glusterfs stack
>>>> need a timestamp to determine which among racing ops (like write, stat,
>>>> etc.) has the latest (meta)data.
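>>>>
>>>> To illustrate the point (a hypothetical sketch, not actual glusterfs
>>>> code): a caching layer holding on to (meta)data can only resolve racing
>>>> callbacks safely if it has a consistent timestamp to compare against.
>>>>
>>>>     # Hypothetical sketch of a metadata cache that keeps whichever
>>>>     # callback carries the newest ctime, so a slower, racing reply
>>>>     # cannot overwrite fresh (meta)data with stale (meta)data.
>>>>     import threading
>>>>
>>>>     class MdCache:
>>>>         def __init__(self):
>>>>             self._lock = threading.Lock()
>>>>             self._cache = {}  # gfid -> (ctime, metadata)
>>>>
>>>>         def update(self, gfid, ctime, metadata):
>>>>             """Called from each op callback (write, stat, ...)."""
>>>>             with self._lock:
>>>>                 cached = self._cache.get(gfid)
>>>>                 # Without a consistent (c)(m)time across bricks and
>>>>                 # clients, this comparison is meaningless and the
>>>>                 # cache can go stale.
>>>>                 if cached is None or ctime >= cached[0]:
>>>>                     self._cache[gfid] = (ctime, metadata)
>>>>
>>>>         def lookup(self, gfid):
>>>>             with self._lock:
>>>>                 entry = self._cache.get(gfid)
>>>>                 return entry[1] if entry else None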
>>>>
>>>> Also note that a consistent (c)(m)time is not an optional feature; it
>>>> forms the core of the infrastructure. So, I am proposing to merge this
>>>> patch. If you have any objections, please voice them before Nov 13, 2018
>>>> (a week from today).
>>>>
>>>> As to the existing known issues/limitations with the ctime generator, my
>>>> conversations with Kotresh revealed the following:
>>>> * Potential performance degradation (we don't yet have data to prove it
>>>> conclusively; preliminary basic tests from Kotresh didn't indicate a
>>>> significant perf drop).
>>>>
>>>
>>> Do we have this data captured somewhere? If not, would it be possible to
>>> share that data here?
>>>
>>
>> I misquoted Kotresh. He had measured the impact of gfid2path and said both
>> features might have a similar impact, since the major perf cost is related
>> to storing xattrs on the backend fs. I am in the process of getting a fresh
>> set of numbers and will post them when available.
>>
>>
>
> I observe that the patch under discussion has now been merged [1]. A quick
> search did not yield any performance data. Do we have the performance
> numbers posted somewhere?
>
No. Perf benchmarking is still a pending task on my end.
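
For what it is worth, one way to approximate the overhead in isolation
would be a micro-benchmark along these lines (a hypothetical sketch, not
the actual test plan; the xattr name and size below are stand-ins, since
the main cost is the extra time xattr written to the backend fs):

    # Compare file creates with and without one extra xattr write per
    # file. Linux-only (os.setxattr); run on the same fs as the bricks.
    import os, time, tempfile

    N = 10000
    XATTR = b"user.mdata.test"   # stand-in name, not the real xattr
    VALUE = b"\x00" * 24         # rough size of packed (c)(m)(a)times

    def create_files(dirpath, with_xattr):
        start = time.monotonic()
        for i in range(N):
            path = os.path.join(dirpath, "f%d" % i)
            with open(path, "wb") as f:
                f.write(b"x")
            if with_xattr:
                os.setxattr(path, XATTR, VALUE)
        return time.monotonic() - start

    with tempfile.TemporaryDirectory(dir=".") as d1, \
         tempfile.TemporaryDirectory(dir=".") as d2:
        base = create_files(d1, with_xattr=False)
        extra = create_files(d2, with_xattr=True)
        print("baseline %.2fs, with xattr %.2fs, overhead %.1f%%"
              % (base, extra, 100.0 * (extra - base) / base))
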
>
> Thanks,
> Vijay
>
> [1] https://review.gluster.org/#/c/glusterfs/+/21060/
>
>