[Gluster-devel] glfs-fini issue in upstream master

RAGHAVENDRA TALUR raghavendra.talur at gmail.com
Fri Mar 6 08:29:38 UTC 2015


On Thu, Mar 5, 2015 at 10:23 PM, Ravishankar N <ravishankar at redhat.com>
wrote:

>  tests/basic/afr/split-brain-healing.t is failing in upstream master:
>
>
> ------------------------------------------------------------------------------------------------------------
> ok 52
> ok 53
> glfsheal: quick-read.c:1052: qr_inode_table_destroy: Assertion
> `list_empty (&priv->table.lru[i])' failed.
> Healing /file1 failed: File not in split-brain.
> not ok 54 Got "0" instead of "1"
> FAILED COMMAND: 1 echo 0
> glfsheal: quick-read.c:1052: qr_inode_table_destroy: Assertion
> `list_empty (&priv->table.lru[i])' failed.
> Healing /file3 failed: File not in split-brain.
> not ok 55 Got "0" instead of "1"
> FAILED COMMAND: 1 echo 0
> /root/workspace/glusterfs
> ok 56
> Failed 2/56 subtests
>
> ------------------------------------------------------------------------------------------------------------
>
>
> If I comment out the calls to glfs_fini() in glfs-heal.c, the test passes.
>
> ------------------------------------------------------------------------------------------------------------
>
> ok 52
> ok 53
> Healing /file1 failed: File not in split-brain.
> Volume heal failed.
> ok 54
> Healing /file3 failed: File not in split-brain.
> Volume heal failed.
> ok 55
> /root/workspace/glusterfs
> ok 56
>
>
> ------------------------------------------------------------------------------------------------------------
>

>
> Help!
>
>
I think this is an issue similar to the one Poornima fixed in the io-cache xlator.
Refer:
http://review.gluster.org/#/c/7642/25/xlators/performance/io-cache/src/io-cache.c
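
As I understand that fix, the idea is to drain whatever is still sitting on
the per-priority LRU lists in the table-destroy path, rather than asserting
that they are already empty by the time glfs_fini() gets there. A rough,
standalone sketch of that pattern (the list helpers mirror
libglusterfs/src/list.h; cache_table, cached_inode and PRIO_LEVELS are
illustrative stand-ins, not the real quick-read structures, and this is not
the actual patch):

/*
 * Sketch of the fix pattern, not the actual patch: drain leftover cached
 * entries in the destroy path instead of asserting the lists are empty.
 */
#include <stdio.h>
#include <stdlib.h>
#include <stddef.h>

struct list_head { struct list_head *next, *prev; };

#define INIT_LIST_HEAD(h) do { (h)->next = (h); (h)->prev = (h); } while (0)
#define container_of(ptr, type, member) \
        ((type *)((char *)(ptr) - offsetof(type, member)))

static void list_add(struct list_head *item, struct list_head *head) {
        item->next = head->next; item->prev = head;
        head->next->prev = item; head->next = item;
}

static void list_del_init(struct list_head *item) {
        item->prev->next = item->next; item->next->prev = item->prev;
        INIT_LIST_HEAD(item);
}

static int list_empty(const struct list_head *h) { return h->next == h; }

#define PRIO_LEVELS 3

struct cached_inode {                   /* illustrative stand-in for a cached inode */
        struct list_head lru;
        char *data;
};

struct cache_table {                    /* illustrative stand-in for priv->table */
        struct list_head lru[PRIO_LEVELS];
};

/* destroy path: prune leftovers instead of asserting list_empty() */
static void cache_table_destroy(struct cache_table *table) {
        for (int i = 0; i < PRIO_LEVELS; i++) {
                while (!list_empty(&table->lru[i])) {
                        struct cached_inode *ci =
                                container_of(table->lru[i].next,
                                             struct cached_inode, lru);
                        list_del_init(&ci->lru);    /* unlink, then free */
                        free(ci->data);
                        free(ci);
                }
        }
}

int main(void) {
        struct cache_table table;
        for (int i = 0; i < PRIO_LEVELS; i++)
                INIT_LIST_HEAD(&table.lru[i]);

        /* simulate an inode that is still cached when fini runs */
        struct cached_inode *ci = calloc(1, sizeof(*ci));
        if (!ci)
                return 1;
        list_add(&ci->lru, &table.lru[1]);

        cache_table_destroy(&table);    /* no assert trip, no leak */
        printf("LRU lists drained cleanly\n");
        return 0;
}

The real fix would also have to drop the inode refs/contexts the way io-cache
does; the sketch only shows the list-draining part.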

Adding Poornima.
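
On why commenting out glfs_fini() hides the problem: going by the output
above, the assertion fires from the graph teardown that glfs_fini() drives
(quick-read's fini path ends up in qr_inode_table_destroy()), so skipping
fini simply skips that destroy code, at the cost of leaking the whole graph.
Roughly the lifecycle a glfsheal-like gfapi client goes through (illustrative
only, not the actual glfs-heal.c code; the volume name and host are
placeholders):

/* Illustrative gfapi lifecycle, not the actual glfs-heal.c code.
 * "testvol" and "localhost" are placeholders. Build: cc heal.c -lgfapi */
#include <stdio.h>
#include <glusterfs/api/glfs.h>

int main(void) {
        glfs_t *fs = glfs_new("testvol");
        if (!fs)
                return 1;

        glfs_set_volfile_server(fs, "tcp", "localhost", 24007);
        if (glfs_init(fs) != 0) {
                glfs_fini(fs);
                return 1;
        }

        /* ... lookups/heal work happen here; xlators such as quick-read
         * may still hold cached inodes on their LRU lists afterwards ... */

        /* teardown: this is where qr_inode_table_destroy() was hitting the
         * list_empty() assertion */
        if (glfs_fini(fs) != 0)
                fprintf(stderr, "glfs_fini failed\n");
        return 0;
}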




-- 
Raghavendra Talur

