[Gluster-users] qemu raw image file - qemu and grub2 can't find boot content from VM

Strahil Nikolov hunter86_bg at yahoo.com
Thu Jan 28 04:23:00 UTC 2021


I mean having 50 * 100GB qemu images as disks for the same VM, with each virtual disk acting as a PV in that big VM's VG.
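
Something along these lines (a rough sketch; all image names, device nodes, and the VG/LV layout below are illustrative, not from an actual setup):

    # On the gluster mount, create 50 smaller images instead of one 5T file
    # (names and sizes are illustrative only):
    for i in $(seq -w 1 50); do
        qemu-img create -f raw -o preallocation=falloc \
            "/adminvm/images/adminvm-pv${i}.img" 100G
    done

    # Inside the guest, after attaching the images as virtio disks, each
    # disk becomes a PV and all of them join a single VG:
    pvcreate /dev/vdb /dev/vdc        # ...repeat for the remaining disks
    vgcreate adminvm_vg /dev/vdb /dev/vdc
    lvcreate -n adminlv -l 100%FREE adminvm_vg
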
Best Regards,
Strahil Nikolov

On Wed, Jan 27, 2021 at 16:28, Erik Jacobson <erik.jacobson at hpe.com> wrote:

> > Shortly after the sharded volume is made, there are some fuse mount
> > messages. I'm not 100% sure if this was just before or during the
> > big qemu-img command to make the 5T image
> > (qemu-img create -f raw -o preallocation=falloc
> > /adminvm/images/adminvm.img 5T)
> Any reason to have a single disk with this size?

> Usually in any virtualization I have used, it is always recommended to
> keep it lower. Have you thought about multiple disks with smaller size?

Yes, because the actual virtual machine is an admin node/head node cluster
manager for a supercomputer that hosts big OS images and drives
multi-thousand-node clusters (boot, monitoring, image creation,
distribution, sometimes NFS roots, etc.). So this VM is a biggie.

We could make multiple smaller images, but it would be very painful since
it differs from the normal non-VM setup.

So unlike many solutions where you have lots of small VMs with their
small images, this solution is one giant VM with one giant image.
We're essentially using gluster in this use case (as opposed to others I
have posted about in the past) for head node failover (combined with
pacemaker).
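
For reference, a minimal sketch of that kind of failover wiring (the
resource name, domain XML path, and options are assumptions for
illustration, not our actual config):

    # Hypothetical pacemaker resource managing the admin VM through the
    # ocf:heartbeat:VirtualDomain agent; names and paths are illustrative.
    pcs resource create adminvm ocf:heartbeat:VirtualDomain \
        hypervisor="qemu:///system" \
        config="/adminvm/adminvm.xml" \
        migration_transport=ssh \
        op monitor interval=30s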

> Also worth noting is that RHHI is supported only when the shard size is
> 512MB, so it's worth trying a bigger shard size.

I have put a larger shard size and a newer gluster version on the list of
things to try. Thank you! Hoping to get it failing again so I can try them!
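
For anyone following along, the shard size is a per-volume option, and it
only applies to files created after it is set; a sketch, assuming a volume
named adminvm:

    # Assumed volume name; 512MB matches the size suggested above.
    # Existing files keep their old shard size, so the image would need
    # to be re-created after changing the option.
    gluster volume set adminvm features.shard-block-size 512MB
    gluster volume get adminvm features.shard-block-size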
  

