<div dir="ltr">Hi Strahil<div><br></div><div>Thanks for the quick answer. I will try to rsync them manually like you suggested. </div><div>I am still on 4.2.x. I am in the process of moving my cluster to 4.3 but need to move to 4.2.8 first. But moving to 4.2.8 is not an easy task since I need to pin the base os to 7.6 before moving to 4.2.8. </div><div>Hope moving to 4.3 will be easy :-) ... I suspect 4.4 to be a pain to upgrade since there is no upgrade path from 7.8 -> 8 ... :-(</div><div>Anyway thanks for the hints.</div><div><br></div><div>Regards</div><div>Carl</div><div><br></div></div><br><div class="gmail_quote"><div dir="ltr" class="gmail_attr">On Fri, Aug 7, 2020 at 2:00 PM Strahil Nikolov <<a href="mailto:hunter86_bg@yahoo.com">hunter86_bg@yahoo.com</a>> wrote:<br></div><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex">I think Ravi made a change to prevent that in gluster v6.6<br>
<br>
You can rsync the 2 files from ovhost1 and run a full heal (I don't know why heal without 'full' doesn't clean up the entries).<br>
<br>
Anyway, oVirt can live without these two, but since you don't want to risk any downtime, just rsync them from ovhost1 and run a 'gluster volume heal data full'.<br>
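<br>
Something like this should do it (untested sketch - I'm assuming ovhost2 is the brick with the stale copies, and the paths and domain UUID below are taken from your heal info output, so double-check them before writing into a brick directly):<br>
<br>
# on ovhost2, pull the healthy copies from ovhost1's brick (run as root so ownership is preserved)<br>
rsync -av --inplace ovhost1:/gluster_bricks/data/data/4e59777c-5b7b-4bf1-8463-1c818067955e/dom_md/ids /gluster_bricks/data/data/4e59777c-5b7b-4bf1-8463-1c818067955e/dom_md/ids<br>
rsync -av --inplace ovhost1:/gluster_bricks/data/data/__DIRECT_IO_TEST__ /gluster_bricks/data/data/__DIRECT_IO_TEST__<br>
<br>
# then trigger a full heal and verify the entries are gone<br>
gluster volume heal data full<br>
gluster volume heal data info<br>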
<br>
By the way, which version of oVirt do you use? Gluster v3 was used in 4.2.X.<br>
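(If you're not sure, something like 'rpm -qa | grep ovirt-release' on a host and 'gluster --version' should tell you - just a suggestion, exact package names may differ on your setup.)<br>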
<br>
Best Regards,<br>
Strahil Nikolov<br>
<br>
<br>
<br>
На 7 август 2020 г. 20:14:07 GMT+03:00, carl langlois <<a href="mailto:crl.langlois@gmail.com" target="_blank">crl.langlois@gmail.com</a>> написа:<br>
>Hi all,<br>
><br>
>I am currently upgrading my oVirt cluster and, after doing the upgrade on<br>
>one node, I ended up having unsynced entries that the heal command does not clean up.<br>
>My setup is a 2+1 (replica 2 + arbiter) with 4 volumes.<br>
>Here is the volume info for one of them:<br>
>Volume Name: data<br>
>Type: Replicate<br>
>Volume ID: 71c999a4-b769-471f-8169-a1a66b28f9b0<br>
>Status: Started<br>
>Snapshot Count: 0<br>
>Number of Bricks: 1 x (2 + 1) = 3<br>
>Transport-type: tcp<br>
>Bricks:<br>
>Brick1: ovhost1:/gluster_bricks/data/data<br>
>Brick2: ovhost2:/gluster_bricks/data/data<br>
>Brick3: ovhost3:/gluster_bricks/data/data (arbiter)<br>
>Options Reconfigured:<br>
>server.allow-insecure: on<br>
>nfs.disable: on<br>
>transport.address-family: inet<br>
>performance.quick-read: off<br>
>performance.read-ahead: off<br>
>performance.io-cache: off<br>
>performance.low-prio-threads: 32<br>
>network.remote-dio: enable<br>
>cluster.eager-lock: enable<br>
>cluster.quorum-type: auto<br>
>cluster.server-quorum-type: server<br>
>cluster.data-self-heal-algorithm: full<br>
>cluster.locking-scheme: granular<br>
>cluster.shd-max-threads: 8<br>
>cluster.shd-wait-qlength: 10000<br>
>features.shard: on<br>
>user.cifs: off<br>
>storage.owner-uid: 36<br>
>storage.owner-gid: 36<br>
>network.ping-timeout: 30<br>
>performance.strict-o-direct: on<br>
>cluster.granular-entry-heal: enable<br>
>features.shard-block-size: 64MB<br>
><br>
>Also the output of 'v heal data info':<br>
><br>
>gluster> v heal data info<br>
>Brick ovhost1:/gluster_bricks/data/data<br>
>/4e59777c-5b7b-4bf1-8463-1c818067955e/dom_md/ids<br>
>/__DIRECT_IO_TEST__<br>
>Status: Connected<br>
>Number of entries: 2<br>
><br>
>Brick ovhost2:/gluster_bricks/data/data<br>
>Status: Connected<br>
>Number of entries: 0<br>
><br>
>Brick ovhost3:/gluster_bricks/data/data<br>
>/4e59777c-5b7b-4bf1-8463-1c818067955e/dom_md/ids<br>
>/__DIRECT_IO_TEST__<br>
>Status: Connected<br>
>Number of entries: 2<br>
><br>
>It does not seem to be split-brain either:<br>
>gluster> v heal data info split-brain<br>
>Brick ovhost1:/gluster_bricks/data/data<br>
>Status: Connected<br>
>Number of entries in split-brain: 0<br>
><br>
>Brick ovhost2:/gluster_bricks/data/data<br>
>Status: Connected<br>
>Number of entries in split-brain: 0<br>
><br>
>Brick ovhost3:/gluster_bricks/data/data<br>
>Status: Connected<br>
>Number of entries in split-brain: 0<br>
><br>
>Not sure how to resolve this issue.<br>
>Gluster version is 3.2.15.<br>
><br>
>Regards<br>
><br>
>Carl<br>
</blockquote></div>