[Gluster-users] Keep having unsync entries

Strahil Nikolov hunter86_bg at yahoo.com
Fri Aug 7 18:00:15 UTC 2020


I think Ravi made a change to prevent that in Gluster v6.6.

You can rsync the 2 files from ovhost1 and run a full heal (I don't know why a heal without 'full' doesn't clean up the entries).

Anyways, oVirt can live without these 2 files, but as you don't want to risk any downtime - just rsync them from ovhost1 and run 'gluster volume heal data full'.
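In case it helps, here is a rough sketch of what I mean. The file paths come from your heal-info output below; the mount point /mnt/data is hypothetical - substitute wherever the 'data' volume is FUSE-mounted on your cluster. Writing the good copies back through the Gluster mount (rather than straight into the brick) lets Gluster update all replicas and the arbiter's metadata itself:

```shell
# Sketch only - verify paths and mount point on your own cluster first.
BRICK=/gluster_bricks/data/data   # brick path from 'volume info'
MNT=/mnt/data                     # hypothetical FUSE mount of the 'data' volume

# Stage the known-good copies from ovhost1's brick:
rsync -av ovhost1:$BRICK/4e59777c-5b7b-4bf1-8463-1c818067955e/dom_md/ids /tmp/ids.good
rsync -av ovhost1:$BRICK/__DIRECT_IO_TEST__ /tmp/direct_io_test.good

# Write them back through the Gluster mount so every replica is updated:
cp /tmp/ids.good $MNT/4e59777c-5b7b-4bf1-8463-1c818067955e/dom_md/ids
cp /tmp/direct_io_test.good $MNT/__DIRECT_IO_TEST__

# Trigger a full self-heal and confirm the entries are gone:
gluster volume heal data full
gluster volume heal data info
```

Do this during a quiet period, and check both copies with a checksum (e.g. md5sum on each brick) before overwriting anything, so you are sure ovhost1 really holds the good copy.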

By the way, which version of oVirt do you use? Gluster v3 was used in 4.2.X.

Best Regards,
Strahil Nikolov



On 7 August 2020 20:14:07 GMT+03:00, carl langlois <crl.langlois at gmail.com> wrote:
>Hi all,
>
>I am currently upgrading my oVirt cluster, and after doing the upgrade
>on one node I end up with unsynced entries that are not cleaned up by
>the heal command. My setup is a 2+1 (replica with arbiter) with 4
>volumes. Here is the volume info for one of them:
>Volume Name: data
>Type: Replicate
>Volume ID: 71c999a4-b769-471f-8169-a1a66b28f9b0
>Status: Started
>Snapshot Count: 0
>Number of Bricks: 1 x (2 + 1) = 3
>Transport-type: tcp
>Bricks:
>Brick1: ovhost1:/gluster_bricks/data/data
>Brick2: ovhost2:/gluster_bricks/data/data
>Brick3: ovhost3:/gluster_bricks/data/data (arbiter)
>Options Reconfigured:
>server.allow-insecure: on
>nfs.disable: on
>transport.address-family: inet
>performance.quick-read: off
>performance.read-ahead: off
>performance.io-cache: off
>performance.low-prio-threads: 32
>network.remote-dio: enable
>cluster.eager-lock: enable
>cluster.quorum-type: auto
>cluster.server-quorum-type: server
>cluster.data-self-heal-algorithm: full
>cluster.locking-scheme: granular
>cluster.shd-max-threads: 8
>cluster.shd-wait-qlength: 10000
>features.shard: on
>user.cifs: off
>storage.owner-uid: 36
>storage.owner-gid: 36
>network.ping-timeout: 30
>performance.strict-o-direct: on
>cluster.granular-entry-heal: enable
>features.shard-block-size: 64MB
>
>Also the output of 'v heal data info':
>
>gluster> v heal data info
>Brick ovhost1:/gluster_bricks/data/data
>/4e59777c-5b7b-4bf1-8463-1c818067955e/dom_md/ids
>/__DIRECT_IO_TEST__
>Status: Connected
>Number of entries: 2
>
>Brick ovhost2:/gluster_bricks/data/data
>Status: Connected
>Number of entries: 0
>
>Brick ovhost3:/gluster_bricks/data/data
>/4e59777c-5b7b-4bf1-8463-1c818067955e/dom_md/ids
>/__DIRECT_IO_TEST__
>Status: Connected
>Number of entries: 2
>
>It does not seem to be a split-brain either:
>gluster> v heal data info split-brain
>Brick ovhost1:/gluster_bricks/data/data
>Status: Connected
>Number of entries in split-brain: 0
>
>Brick ovhost2:/gluster_bricks/data/data
>Status: Connected
>Number of entries in split-brain: 0
>
>Brick ovhost3:/gluster_bricks/data/data
>Status: Connected
>Number of entries in split-brain: 0
>
>Not sure how to resolve this issue.
>Gluster version is 3.2.15.
>
>Regards
>
>Carl

