[Gluster-users] qemu raw image file - qemu and grub2 can't find boot content from VM

Strahil Nikolov hunter86_bg at yahoo.com
Wed Jan 27 04:16:45 UTC 2021


Are you sure there are no pending heals at the time of the power-up?
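
A quick way to check (a minimal sketch, assuming the volume name
'adminvm' from your output below):

    # list entries that still need healing, per brick
    gluster volume heal adminvm info

    # per-brick counts only
    gluster volume heal adminvm info summary

If anything shows up, let it drain (or kick it with 'gluster volume
heal adminvm') before powering the VM on.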
> 
> 
> nano-1:/adminvm/images # gluster volume info
> 
> Volume Name: adminvm
> Type: Replicate
> Volume ID: 67de902c-8c00-4dc9-8b69-60b93b5f6104
> Status: Started
> Snapshot Count: 0
> Number of Bricks: 1 x 3 = 3
> Transport-type: tcp
> Bricks:
> Brick1: 172.23.255.151:/data/brick_adminvm
> Brick2: 172.23.255.152:/data/brick_adminvm
> Brick3: 172.23.255.153:/data/brick_adminvm
> Options Reconfigured:
> performance.client-io-threads: on
> nfs.disable: on
> storage.fips-mode-rchecksum: on
> transport.address-family: inet
> performance.quick-read: off
> performance.read-ahead: off
> performance.io-cache: off
> performance.low-prio-threads: 32
> network.remote-dio: enable
> cluster.eager-lock: enable
> cluster.quorum-type: auto
> cluster.server-quorum-type: server
> cluster.data-self-heal-algorithm: full
> cluster.locking-scheme: granular
> cluster.shd-max-threads: 8
> cluster.shd-wait-qlength: 10000
> features.shard: on
> user.cifs: off
> cluster.choose-local: off
> client.event-threads: 4
> server.event-threads: 4
> cluster.granular-entry-heal: enable
> storage.owner-uid: 439
> storage.owner-gid: 443
> 
I checked my oVirt-based gluster and the only difference is:
cluster.granular-entry-heal: enable
The options seem fine.
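
If you want to compare the full effective option set yourself
(defaults included, not just what 'volume info' shows), this helps,
assuming the same volume name:

    gluster volume get adminvm all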
> 
> 
> libglusterfs0-7.2-4723.1520.210122T1700.a.sles15sp2hpe.x86_64
> glusterfs-7.2-4723.1520.210122T1700.a.sles15sp2hpe.x86_64
> python3-gluster-7.2-4723.1520.210122T1700.a.sles15sp2hpe.noarch
This one is quite old, although it never caused any trouble with my
oVirt VMs. Either try the latest v7 or even v8.3.
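
If you do upgrade, the usual approach is rolling, one node at a time,
letting heals drain in between. A rough sketch, assuming SLES with
newer packages already in your repositories (the package pattern is a
guess, match it to your repo):

    gluster volume heal adminvm info   # wait until no entries remain
    systemctl stop glusterd
    pkill glusterfs                    # bricks keep running after glusterd stops
    zypper update 'glusterfs*'
    systemctl start glusterd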

Best Regards,
Strahil Nikolov


