[Gluster-users] [ovirt-users] ovirt 4.1 hosted engine hyper converged on glusterfs 3.8.10 : "engine" storage domain always complains about "unsynced" elements
Sahina Bose
sabose at redhat.com
Wed Jul 19 14:32:47 UTC 2017
[Adding gluster-users]
On Wed, Jul 19, 2017 at 2:52 PM, yayo (j) <jaganz at gmail.com> wrote:
> Hi all,
>
> We have a hyperconverged oVirt cluster with a hosted engine on 3 fully
> replicated nodes. This cluster has 2 gluster volumes:
>
> - data: volume for the Data (Master) Domain (for VMs)
> - engine: volume for the hosted_storage Domain (for the hosted engine)
>
> We have this problem: the "engine" gluster volume always has unsynced
> elements and we can't fix the problem. On the command line we have tried the
> "heal" command, but the elements always remain unsynced ....
>
> Below is the "status" output of the heal command:
>
> [root@node01 ~]# gluster volume heal engine info
> Brick node01:/gluster/engine/brick
> /.shard/8aa74564-6740-403e-ad51-f56d9ca5d7a7.48
> /.shard/8aa74564-6740-403e-ad51-f56d9ca5d7a7.64
> /.shard/8aa74564-6740-403e-ad51-f56d9ca5d7a7.60
> /.shard/8aa74564-6740-403e-ad51-f56d9ca5d7a7.2
> /.shard/8aa74564-6740-403e-ad51-f56d9ca5d7a7.68
> /8f215dd2-8531-4a4f-b6ed-ea789dd8821b/images/19d71267-52a4-42a3-bb1e-e3145361c0c2/7a215635-02f3-47db-80db-8b689c6a8f01
> /8f215dd2-8531-4a4f-b6ed-ea789dd8821b/images/88d41053-a257-4272-9e2e-2f3de0743b81/6573ed08-d3ed-4d12-9227-2c95941e1ad6
> /.shard/8aa74564-6740-403e-ad51-f56d9ca5d7a7.61
> /.shard/8aa74564-6740-403e-ad51-f56d9ca5d7a7.1
> /8f215dd2-8531-4a4f-b6ed-ea789dd8821b/dom_md/ids
> /.shard/8aa74564-6740-403e-ad51-f56d9ca5d7a7.20
> /__DIRECT_IO_TEST__
> Status: Connected
> Number of entries: 12
>
> Brick node02:/gluster/engine/brick
> /8f215dd2-8531-4a4f-b6ed-ea789dd8821b/images/19d71267-52a4-42a3-bb1e-e3145361c0c2/7a215635-02f3-47db-80db-8b689c6a8f01
> <gfid:9a601373-bbaa-44d8-b396-f0b9b12c026f>
> /8f215dd2-8531-4a4f-b6ed-ea789dd8821b/dom_md/ids
> <gfid:1e309376-c62e-424f-9857-f9a0c3a729bf>
> <gfid:e3565b50-1495-4e5b-ae88-3bceca47b7d9>
> <gfid:4e33ac33-dddb-4e29-b4a3-51770b81166a>
> /__DIRECT_IO_TEST__
> <gfid:67606789-1f34-4c15-86b8-c0d05b07f187>
> <gfid:9ef88647-cfe6-4a35-a38c-a5173c9e8fc0>
> /8f215dd2-8531-4a4f-b6ed-ea789dd8821b/images/88d41053-a257-4272-9e2e-2f3de0743b81/6573ed08-d3ed-4d12-9227-2c95941e1ad6
> <gfid:9ad720b2-507d-4830-8294-ec8adee6d384>
> <gfid:d9853e5d-a2bf-4cee-8b39-7781a98033cf>
> Status: Connected
> Number of entries: 12
>
> Brick node04:/gluster/engine/brick
> Status: Connected
> Number of entries: 0
>
>
>
> Running "gluster volume heal engine" doesn't solve the problem...
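One way to dig a bit deeper (assuming the bricks are mounted at the paths shown in the heal output above) is to check whether any of these entries are in actual split-brain, and to look at the AFR changelog xattrs of one of the listed shards directly on each brick:

    # report only entries that are in split-brain
    gluster volume heal engine info split-brain

    # on each node, inspect the replication xattrs of one affected shard
    # (the shard name is taken from the heal output above)
    getfattr -d -m . -e hex /gluster/engine/brick/.shard/8aa74564-6740-403e-ad51-f56d9ca5d7a7.48

If the trusted.afr.engine-client-* counters on that file stay non-zero run after run, the self-heal daemon is probably not managing to heal it at all, rather than the data being genuinely divergent.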
>
> Some extra info:
>
> We have recently changed the gluster setup from 2 (fully replicated) + 1
> arbiter to a 3-way fully replicated cluster, but I don't know if this is the problem...
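For what it's worth, that kind of conversion is usually a remove-brick / add-brick sequence followed by a full heal, roughly along these lines (node03 and the arbiter brick path below are only placeholders; only node04's brick appears in the output above):

    # drop the arbiter brick, temporarily leaving a plain 2-way replica
    gluster volume remove-brick engine replica 2 node03:/gluster/engine/arbiter force

    # add the new full data brick to get a 3-way replica
    gluster volume add-brick engine replica 3 node04:/gluster/engine/brick

    # populate the new brick
    gluster volume heal engine full

If the full heal after the add-brick step never completed, the unsynced entries could simply be leftovers of that initial sync.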
>
> The "data" volume is good and healty and have no unsynced entry.
>
> oVirt refuses to put node02 and node01 into "maintenance mode" and
> complains about "unsynced elements".
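A quick way to see whether the pending-heal count is moving at all is:

    gluster volume heal engine statistics heal-count

If the numbers stay the same across several runs, the entries are stuck rather than slowly healing, and the glustershd logs on node01/node02 (/var/log/glusterfs/glustershd.log) are the next place to look.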
>
> How can I fix this?
> Thank you
>