[Gluster-users] Keep having unsync entries

Strahil Nikolov hunter86_bg at yahoo.com
Sat Aug 8 06:57:16 UTC 2020


Keep in mind that 4.3 is using Gluster v6.
I'm on the latest 4.3.10, but with Gluster v7.

I was hit by a very rare ACL bug (reported by some other guys here and in the oVirt ML), so I recommend testing functionality after every Gluster major upgrade (start, stop, create snapshot, remove snapshot, etc.).

In my case 6.6+ and 7.1+ were problematic, but there is no way to skip them.
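
A rough sketch of the gluster-side part of such a check (the storage domain mount path below is just an example, adjust it to your setup; the start/stop/snapshot tests you run on a test VM from the engine):

# all bricks and self-heal daemons back up after the upgrade?
gluster volume status data
gluster volume heal data info

# the ACL bug showed up as the vdsm user (uid 36) losing access to the
# gluster-backed storage domain, so check it can still read and write there
sudo -u vdsm touch /rhev/data-center/mnt/glusterSD/ovhost1:_data/.upgrade_check
sudo -u vdsm rm /rhev/data-center/mnt/glusterSD/ovhost1:_data/.upgrade_check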


Best Regards,
Strahil Nikolov

On Friday, 7 August 2020 at 21:34:39 GMT+3, carl langlois <crl.langlois at gmail.com> wrote:

Hi Strahil

Thanks for the quick answer. I will try to rsync them manually as you suggested.
I am still on 4.2.x and in the process of moving my cluster to 4.3, but I need to get to 4.2.8 first. That is not an easy task, since I have to pin the base OS to 7.6 before moving to 4.2.8.
I hope moving to 4.3 will be easy :-) ... I suspect 4.4 will be a pain to upgrade to, since there is no upgrade path from 7.8 -> 8 ... :-(
Anyway thanks for the hints.

Regards
Carl


On Fri, Aug 7, 2020 at 2:00 PM Strahil Nikolov <hunter86_bg at yahoo.com> wrote:
> I think Ravi made a change to prevent that in gluster v6.6
> 
> You can rsync the 2 files from ovhost1 and run a full heal (I don't know why heal without 'full' doesn't clean up the entries).
> 
> Anyway, oVirt can live without these 2, but since you don't want to risk any downtime, just rsync them from ovhost1 and run a 'gluster volume heal data full'.
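> 
> A rough sketch of that (run it on the node that is missing the files; the exact rsync flags are just a suggestion, the point is to keep ownership, ACLs and xattrs intact when copying brick data):
> 
> rsync -aAX ovhost1:/gluster_bricks/data/data/4e59777c-5b7b-4bf1-8463-1c818067955e/dom_md/ids /gluster_bricks/data/data/4e59777c-5b7b-4bf1-8463-1c818067955e/dom_md/ids
> rsync -aAX ovhost1:/gluster_bricks/data/data/__DIRECT_IO_TEST__ /gluster_bricks/data/data/__DIRECT_IO_TEST__
> gluster volume heal data full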
> 
> By the way, which version of oVirt do you use? Gluster v3 was used in 4.2.X.
> 
> Best Regards,
> Strahil Nikolov
> 
> 
> 
> On 7 August 2020 at 20:14:07 GMT+03:00, carl langlois <crl.langlois at gmail.com> wrote:
>>Hi all,
>>
>>I am currently upgrading my oVirt cluster, and after doing the upgrade
>>on one node I end up with unsynced entries that do not heal with the
>>heal command. My setup is 2+1 (two data bricks plus an arbiter) with 4
>>volumes. Here is the volume info for one of them:
>>Volume Name: data
>>Type: Replicate
>>Volume ID: 71c999a4-b769-471f-8169-a1a66b28f9b0
>>Status: Started
>>Snapshot Count: 0
>>Number of Bricks: 1 x (2 + 1) = 3
>>Transport-type: tcp
>>Bricks:
>>Brick1: ovhost1:/gluster_bricks/data/data
>>Brick2: ovhost2:/gluster_bricks/data/data
>>Brick3: ovhost3:/gluster_bricks/data/data (arbiter)
>>Options Reconfigured:
>>server.allow-insecure: on
>>nfs.disable: on
>>transport.address-family: inet
>>performance.quick-read: off
>>performance.read-ahead: off
>>performance.io-cache: off
>>performance.low-prio-threads: 32
>>network.remote-dio: enable
>>cluster.eager-lock: enable
>>cluster.quorum-type: auto
>>cluster.server-quorum-type: server
>>cluster.data-self-heal-algorithm: full
>>cluster.locking-scheme: granular
>>cluster.shd-max-threads: 8
>>cluster.shd-wait-qlength: 10000
>>features.shard: on
>>user.cifs: off
>>storage.owner-uid: 36
>>storage.owner-gid: 36
>>network.ping-timeout: 30
>>performance.strict-o-direct: on
>>cluster.granular-entry-heal: enable
>>features.shard-block-size: 64MB
>>
>>Also, the output of 'v heal data info':
>>
>>gluster> v heal data info
>>Brick ovhost1:/gluster_bricks/data/data
>>/4e59777c-5b7b-4bf1-8463-1c818067955e/dom_md/ids
>>/__DIRECT_IO_TEST__
>>Status: Connected
>>Number of entries: 2
>>
>>Brick ovhost2:/gluster_bricks/data/data
>>Status: Connected
>>Number of entries: 0
>>
>>Brick ovhost3:/gluster_bricks/data/data
>>/4e59777c-5b7b-4bf1-8463-1c818067955e/dom_md/ids
>>/__DIRECT_IO_TEST__
>>Status: Connected
>>Number of entries: 2
>>
>>It does not seem to be a split-brain either:
>>gluster> v heal data info split-brain
>>Brick ovhost1:/gluster_bricks/data/data
>>Status: Connected
>>Number of entries in split-brain: 0
>>
>>Brick ovhost2:/gluster_bricks/data/data
>>Status: Connected
>>Number of entries in split-brain: 0
>>
>>Brick ovhost3:/gluster_bricks/data/data
>>Status: Connected
>>Number of entries in split-brain: 0
>>
>>Not sure how to resolve this issue.
>>The gluster version is 3.2.15.
>>
>>Regards
>>
>>Carl
> 

