[Gluster-Maintainers] Fwd: [Gluster-users] performance.cache-size for high-RAM clients/servers, other tweaks for performance, and improvements to Gluster docs

Amar Tumballi atumball at redhat.com
Wed Apr 18 12:45:43 UTC 2018


FYI. This is a good example of why we need the 'DocApproved' and
'SpecApproved' flags. Let's get more serious about our docs for features, IMO.

-Amar


---------- Forwarded message ----------
From: Artem Russakovskii <archon810 at gmail.com>
Date: Wed, Apr 18, 2018 at 12:23 PM
Subject: Re: [Gluster-users] performance.cache-size for high-RAM
clients/servers, other tweaks for performance, and improvements to Gluster
docs


OK, thank you. I'll try that.

The reason I was confused about its status is these things in the doc:

> How To Test
> TBD.
> Documentation
> TBD
> Status
> Design complete. Implementation done. The only thing pending is the
> compounding of two fops in shd code.



Sincerely,
Artem

--
Founder, Android Police <http://www.androidpolice.com>, APK Mirror
<http://www.apkmirror.com/>, Illogical Robot LLC
beerpla.net | +ArtemRussakovskii
<https://plus.google.com/+ArtemRussakovskii> | @ArtemR
<http://twitter.com/ArtemR>

On Tue, Apr 17, 2018 at 11:49 PM, Ravishankar N <ravishankar at redhat.com>
wrote:

>
>
> On 04/18/2018 11:59 AM, Artem Russakovskii wrote:
>
> Btw, I've now noticed at least 5 variations for toggling binary option
> values. Are they all interchangeable, or could using the wrong one fail in
> some cases?
>
> yes/no
> true/false
> True/False
> on/off
> enable/disable
>
> It's quite a confusing/inconsistent practice, especially given that many
> options will accept any value without validation or erroring out.
>
>
> All these values are okay; they are interchangeable.
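>
> For example, all of the following would be equivalent ways of turning a
> boolean option on (performance.flush-behind is just an illustrative choice
> of option here):
>
>   gluster volume set <volname> performance.flush-behind on
>   gluster volume set <volname> performance.flush-behind true
>   gluster volume set <volname> performance.flush-behind yes
>   gluster volume set <volname> performance.flush-behind enable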
>
>
>
> Sincerely,
> Artem
>
> --
> Founder, Android Police <http://www.androidpolice.com>, APK Mirror
> <http://www.apkmirror.com/>, Illogical Robot LLC
> beerpla.net | +ArtemRussakovskii
> <https://plus.google.com/+ArtemRussakovskii> | @ArtemR
> <http://twitter.com/ArtemR>
>
> On Tue, Apr 17, 2018 at 11:22 PM, Artem Russakovskii <archon810 at gmail.com>
> wrote:
>
>> Thanks for the link. Looking at the status of that doc, it isn't quite
>> ready yet, and there's no mention of the option.
>>
>
> No, this is a completed feature available since 3.8 IIRC. You can use it
> safely. There is a difference in how to enable it though. Instead of using
> 'gluster volume set ...', you need to use 'gluster volume heal <volname>
> granular-entry-heal enable' to turn it on. If there are no pending heals,
> it will run successfully. Otherwise you need to wait until heals are over
> (i.e. heal info shows zero entries). Just follow what the CLI says and you
> should be fine.
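>
> For instance, the sequence would look something like this (<volname> is a
> placeholder for your volume name):
>
>   # first confirm there are no pending heals (heal info shows zero entries)
>   gluster volume heal <volname> info
>
>   # then enable granular entry self-heal -- note it is done via the heal
>   # command, not 'gluster volume set'
>   gluster volume heal <volname> granular-entry-heal enable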
>
> -Ravi
>
>
>> Does it mean that whatever is ready now in 4.0.1 is incomplete but can be
>> enabled via granular-entry-heal=on, and when it is complete, it'll become
>> the default and the flag will simply go away?
>>
>> Is there any risk enabling the option now in 4.0.1?
>>
>>
>> Sincerely,
>>
>