[Gluster-Maintainers] Release 5: Missing option documentation (need inputs)

Krutika Dhananjay kdhananj at redhat.com
Thu Oct 11 15:47:15 UTC 2018


On Wed, Oct 10, 2018 at 8:30 PM Shyam Ranganathan <srangana at redhat.com>
wrote:

> The following options were added after 4.1, making 5.0 the first
> release to ship them. They were added as part of bug fixes rather
> than enhancements, so scanning github issues for enhancements did
> not catch them.
>
> We need to document them in the release notes (and ideally also on
> the gluster doc site), so I would like some details on what to write
> for each (or release-notes commits).
>
> Option: cluster.daemon-log-level
> Attention: @atin
> Review: https://review.gluster.org/c/glusterfs/+/20442
>
> Option: ctime-invalidation
> Attention: @Du
> Review: https://review.gluster.org/c/glusterfs/+/20286
>
> Option: shard-lru-limit
> Attention: @krutika
> Review: https://review.gluster.org/c/glusterfs/+/20544


I added this option solely to make it easier to hit the shard
translator's in-memory LRU limit, so that the different cases that arise
when the limit is reached can be tested.
For this reason, the option is also marked "NO_DOC" in the code, so we
don't need to document it in the release notes.
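(For anyone who does want to exercise those code paths in testing: a
minimal sketch, assuming the option is exposed through the usual
volume-set interface as features.shard-lru-limit, which is my reading of
the review above; <VOLNAME> is a placeholder for your test volume:

    # Shrink the in-memory LRU list so evictions happen quickly;
    # a small value makes it easy to open more shards than the cache
    # can hold and so trigger the eviction/reload paths.
    gluster volume set <VOLNAME> features.shard-lru-limit 25

The value 25 is purely illustrative.)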


>
> Option: shard-deletion-rate
> Attention: @krutika
> Review: https://review.gluster.org/c/glusterfs/+/19970
>
> Please send in the required text ASAP, as we are almost at the end
> of the release.
>

This option configures the number of shards deleted in parallel when the
original file is removed. The default value is 100, but it can be raised
to delete more shards in parallel and free space faster. An upper limit
has yet to be fixed, so use the option with caution: a very large value
will cause serious lock contention on the bricks (in the locks
translator). In our testing, for example, a value of 125000 was enough to
cause timeouts and hangs in the gluster processes due to lock contention.
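To make the knob concrete, here is a minimal sketch, assuming the option
is exposed through the standard volume-set interface as
features.shard-deletion-rate (the fully qualified name is my assumption
based on the review above); <VOLNAME> is a placeholder:

    # Delete up to 500 shards in parallel when a sharded file is
    # removed. 500 is an illustrative value, staying well below the
    # 125000 that caused lock contention in our testing.
    gluster volume set <VOLNAME> features.shard-deletion-rate 500

    # Revert to the default (100) if the bricks show lock contention.
    gluster volume reset <VOLNAME> features.shard-deletion-rate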

-Krutika


> Thanks,
> Shyam
>