[Gluster-users] AFR Version used for self-heal
Kyle Maas
kyle at virtualinterconnect.com
Fri Feb 26 04:40:28 UTC 2016
On 02/25/2016 08:25 PM, Krutika Dhananjay wrote:
>
>
> ------------------------------------------------------------------------
>
> *From: *"Kyle Maas" <kyle at virtualinterconnect.com>
> *To: *gluster-users at gluster.org
> *Sent: *Thursday, February 25, 2016 11:36:53 PM
> *Subject: *[Gluster-users] AFR Version used for self-heal
>
> How can I tell what AFR version a cluster is using for self-heal?
>
> The reason I ask is that I have a two-node replicated 3.7.8 cluster
> (no arbiters) whose locking behavior during self-heal looks very
> similar to that of AFRv1 (it only heals one file at a time per
> self-heal daemon, appears to lock the full inode while healing it
> instead of just ranges, etc.), but I don't know how I would check the
> version to confirm that suspicion. I've seen mention of needing to
> explicitly enable AFRv2 when upgrading Gluster from an older version,
> and this cluster started at an older Gluster version and was upgraded
> in accordance with the upgrade docs, but I cannot seem to find any
> documentation on enabling newer versions of AFR, or even on checking
> which one I'm running. cluster.op-version for this cluster is
> currently at 30603, and both nodes are CentOS 7 running Gluster 3.7.8.
>
> Any help would be appreciated. Thanks!
>
>
> So if you bring one of the replicas down, create a file, and check its
> extended attributes from the backend
> (`getfattr -d -m . -e hex <path-to-the-file-from-the-brick-that-is-online>`),
> do you see this appearing in the list:
>
> trusted.afr.dirty=0x000000000000000000000000
>
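For reference, the op-version figure above came straight from glusterd's
info file on each node (the CLI in 3.7 has no direct query for it that I
know of):

    # glusterd records the cluster-wide operating version in its info file
    grep operating-version /var/lib/glusterd/glusterd.info
    operating-version=30603
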
The cluster I'm having problems with has been around for quite a while
and is very actively used. When this cluster gets out of sync, a
self-heal can take up to 24 hours due to the amount of write activity,
so any test or check that requires triggering a self-heal is difficult
for me to run there. I suspect I will need to spin up a new cluster as a
small-scale duplicate of the one I'm having problems with in order to
try this, so I will have to get back to you with the results of that
experiment.
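
If I follow the suggestion correctly, the sequence on the test cluster
would look roughly like this (the volume name testvol, mount point
/mnt/testvol, and brick path /bricks/b1 are placeholders for whatever I
end up building):

    # On node1: degrade the replica by stopping one brick process
    gluster volume status testvol      # note the PID of node1's brick
    kill <PID-of-node1-brick>

    # From a client mount: create a file while one replica is down
    touch /mnt/testvol/afr-test

    # On node2, whose brick stayed online: dump the new file's extended
    # attributes directly from the brick backend
    getfattr -d -m . -e hex /bricks/b1/afr-test

    # Per the check described above, an AFRv2 volume should include:
    # trusted.afr.dirty=0x000000000000000000000000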
Warm Regards,
Kyle Maas