[Gluster-users] New 3.7.13 settings

Krutika Dhananjay kdhananj at redhat.com
Sun Jul 24 02:59:19 UTC 2016


The option is useful in preventing spurious heals from being reported in
`volume heal info` output.
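
If you want to try it, the option is set with the usual volume-set command. A minimal sketch (the volume name "gv0" below is just a placeholder, not from this thread):

```shell
# Switch self-heal to granular entry locks. Note the description's caveat:
# this is not compatible with afr-v1, so all clients should be upgraded
# before enabling it. "gv0" is a placeholder volume name.
gluster volume set gv0 cluster.locking-scheme granular

# Confirm the option took effect
gluster volume get gv0 cluster.locking-scheme
```

The same pattern applies to the heal-tuning options (`cluster.shd-max-threads`, `cluster.shd-wait-qlength`) if you do decide to raise them later.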

-Krutika

On Sat, Jul 23, 2016 at 10:05 PM, Lindsay Mathieson <
lindsay.mathieson at gmail.com> wrote:

> 3.7.13 has been running well for several weeks now for me on a rep 3
> sharded volume (VM hosting), but I'm still on op-version 30710 - after the
> issues with 3.7.12 I have held off making any changes until confidence was
> restored :)
>
>
> Scrolling through the code revealed the following for 3.7.13
>
>
> *cluster.shd-max-threads*
> Default Value: 1
> Description: Maximum number of threads SHD can use per local brick.  This
> can substantially lower heal times, but can also crush your bricks if you
> don't have the storage hardware to support this.
>
> *cluster.shd-wait-qlength*
> Default Value: 1024
> Description: This option can be used to control number of heals that can
> wait in SHD per subvolume
>
> *cluster.locking-scheme*
> Default Value: full
> Description: If this option is set to granular, self-heal will stop being
> compatible with afr-v1, which helps afr be more granular while self-healing
>
> The first two are, I believe, to do with improving heal performance.
> However I'm quite happy with the existing defaults and performance, so no
> need to tweak them.
>
>
> But I'm not sure what setting cluster.locking-scheme to "granular" will
> achieve - I seem to recall that it reduces the locks needed to establish
> what needs to be healed, and improves the speed of "heal info"?
>
>
> thanks,
>
> --
> Lindsay Mathieson
>
>
> _______________________________________________
> Gluster-users mailing list
> Gluster-users at gluster.org
> http://www.gluster.org/mailman/listinfo/gluster-users
>