[Gluster-users] Problem with qemu-img convert between gluster storages
Dominic Jäger
d.jaeger at proxmox.com
Thu May 7 10:54:13 UTC 2020
Dear Gluster users,
I am currently testing access to Gluster storages in combination with Proxmox
VE 6.1 and have run into some problems with the QEMU integration.
Proxmox VE 6.1 is based on Debian Buster and contains the package
glusterfs-client 5.5-3. The problem has appeared with this version too, but I
upgraded the package to 7.5-1 for the following tests.
On a Proxmox VE host (= the client for these tests), I set up two virtual machines
that also run Proxmox VE and installed Gluster 7.5-1 in them as a simulation of
a hyper-converged setup (details at the end of the mail).
Locally, I have a file source.raw
# du -h source.raw
801M source.raw
and from here I can successfully create empty images on the Gluster storages.
# qemu-img create -f raw gluster://192.168.25.135/gv0/test.raw 1G
# qemu-img create -f raw gluster://192.168.25.135/gv1/test.raw 1G
# qemu-img create -f raw gluster://192.168.25.136/gv0/test.raw 1G
# qemu-img create -f raw gluster://192.168.25.136/gv1/test.raw 1G
As expected, running du -h on the Gluster servers shows size 0.
Copying the local file source.raw to each of the Gluster storages works, for example
# qemu-img convert -p -n -f raw -O raw source.raw gluster://192.168.25.136/gv1/test.raw
and du -h on the server afterwards shows 800M as expected.
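As a side note, the client and server view can be cross-checked roughly like this (a sketch; the brick path is taken from the volume info at the end of the mail, so it fits the 192.168.25.135 volumes):
# qemu-img info gluster://192.168.25.135/gv1/test.raw   # client view via gfapi: virtual size and allocated disk size
# du -h /data/brick2/gv1/test.raw                       # server view, directly on the brick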
However, copying between different Gluster storages does not always work. The
exact commands I've tried look like this
# qemu-img convert -p -n -f raw -O raw gluster://192.168.25.135/gv0/test.raw gluster://192.168.25.135/gv1/test.raw
The progress bar of the qemu-img command goes up to 100% and the return value of
the command is 0. However, the size of the target test.raw remains 0.
To investigate this, I copied the local source.raw to one volume (the first
qemu-img convert from above) and from there to the other volumes (variations of
the second qemu-img convert).
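For anyone trying to reproduce this, a comparison along these lines should show whether the target really stayed empty and is not just reported as sparse by du (a sketch, not output from my tests; qemu-img compare treats unallocated areas as zero, and the md5sum line assumes direct access to the bricks on the server):
# qemu-img compare -f raw -F raw gluster://192.168.25.135/gv0/test.raw gluster://192.168.25.135/gv1/test.raw
# md5sum /data/brick1/gv0/test.raw /data/brick2/gv1/test.raw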
In addition to the Proxmox VE client, I did a few tests using a Fedora 32
machine as client, with glusterfs 7.5 and qemu-img 4.2 (the defaults). Unfortunately,
I have not been able to identify a pattern yet. The individual results are listed below:
source ... this volume received the local source.raw (the first qemu-img convert from above)
no ... the file size remained zero => failure
yes ... the file size became 800M => success
      Server1   Server2
      -------------------
gv0 | source    no
gv1 | yes       yes
      ------------------- qemu-img create again
gv0 | no        source
gv1 | yes       yes
      ------------------- Reboot everything & qemu-img create again
gv0 | source    no
gv1 | no        no
      ------------------- qemu-img create again
gv0 | yes       yes
gv1 | no        source
      ------------------- qemu-img create again, Fedora client
gv0 | yes       yes
gv1 | source    no
      ------------------- qemu-img create again, Fedora client
gv0 | yes       no
gv1 | yes       source
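In case someone wants to run through the matrix above systematically, a loop like the following should cover all source/target combinations (an untested sketch with my volume URLs; the file names test-src.raw and test-dst.raw are only for this sketch, and success is judged purely by the allocated size that qemu-img info reports):
#!/bin/sh
SRC=source.raw
VOLS="gluster://192.168.25.135/gv0 gluster://192.168.25.135/gv1 gluster://192.168.25.136/gv0 gluster://192.168.25.136/gv1"
for s in $VOLS; do
    for d in $VOLS; do
        [ "$s" = "$d" ] && continue
        # create empty targets, copy the local file to the source volume,
        # then copy between the two Gluster volumes
        qemu-img create -q -f raw "$s/test-src.raw" 1G
        qemu-img create -q -f raw "$d/test-dst.raw" 1G
        qemu-img convert -n -f raw -O raw "$SRC" "$s/test-src.raw"
        qemu-img convert -n -f raw -O raw "$s/test-src.raw" "$d/test-dst.raw"
        # actual-size should be around 800M on success and stay near 0 on failure
        echo "$s -> $d:"
        qemu-img info --output=json "$d/test-dst.raw" | grep actual-size
    done
done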
Strange side note: in the Fedora tests the file size for "yes" became 1.0G, and
after some converts the initial copy of the source file also got size 1.0G.
In this state, running a virtual machine in Proxmox VE on a Gluster volume is
still possible, and even the high availability and live migration features remain
functional. However, storage migration scenarios are severely affected: for
example, building a second Gluster storage on new hardware and moving virtual
machines to it seems unreliable in the current situation.
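A possible workaround, which I have not evaluated in depth, would be to avoid gfapi on the destination side and go through a FUSE mount instead, roughly like this (mount point chosen arbitrarily; whether this actually avoids the problem is something I have not verified):
# mkdir -p /mnt/gv1
# mount -t glusterfs 192.168.25.135:/gv1 /mnt/gv1
# qemu-img convert -p -f raw -O raw gluster://192.168.25.135/gv0/test.raw /mnt/gv1/test.raw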
I am not a heavy Gluster user. Have I missed something or messed up the setup?
Do you have any advice on where to continue looking for the problem? Some more
details about the setup are at the end of the mail.
Best
Dominic
# dpkg -l | grep gluster
ii  glusterfs-client     7.5-1  amd64  clustered file-system (client package)
ii  glusterfs-common     7.5-1  amd64  GlusterFS common libraries and translator modules
ii  glusterfs-server     7.5-1  amd64  clustered file-system (server package)
ii  libglusterfs-dev     7.5-1  amd64  Development files for GlusterFS libraries
ii  libglusterfs0:amd64  7.5-1  amd64  GlusterFS shared library
# xfs_info /data/brick1
meta-data=/dev/sdb               isize=512    agcount=4, agsize=13762560 blks
         =                       sectsz=512   attr=2, projid32bit=1
         =                       crc=1        finobt=1, sparse=1, rmapbt=0
         =                       reflink=0
data     =                       bsize=4096   blocks=55050240, imaxpct=25
         =                       sunit=0      swidth=0 blks
naming   =version 2              bsize=4096   ascii-ci=0, ftype=1
log      =internal log           bsize=4096   blocks=26880, version=2
         =                       sectsz=512   sunit=0 blks, lazy-count=1
realtime =none                   extsz=4096   blocks=0, rtextents=0
# gluster volume info all
Volume Name: gv0
Type: Distribute
Volume ID: e40e39b3-1853-4049-9c38-939df9f8e00d
Status: Started
Snapshot Count: 0
Number of Bricks: 1
Transport-type: tcp
Bricks:
Brick1: 192.168.25.135:/data/brick1/gv0
Options Reconfigured:
nfs.disable: on
storage.fips-mode-rchecksum: on
transport.address-family: inet
Volume Name: gv1
Type: Distribute
Volume ID: 088cb777-fc2a-4488-bc7c-9e9f3db0ce69
Status: Started
Snapshot Count: 0
Number of Bricks: 1
Transport-type: tcp
Bricks:
Brick1: 192.168.25.135:/data/brick2/gv1
Options Reconfigured:
nfs.disable: on
storage.fips-mode-rchecksum: on
transport.address-family: inet
# gluster volume status all
Status of volume: gv0
Gluster process                            TCP Port  RDMA Port  Online  Pid
------------------------------------------------------------------------------
Brick 192.168.25.135:/data/brick1/gv0      49152     0          Y       851
Task Status of Volume gv0
------------------------------------------------------------------------------
There are no active volume tasks
Status of volume: gv1
Gluster process                            TCP Port  RDMA Port  Online  Pid
------------------------------------------------------------------------------
Brick 192.168.25.135:/data/brick2/gv1      49153     0          Y       873
Task Status of Volume gv1
------------------------------------------------------------------------------
There are no active volume tasks
# gluster volume status all detail
Status of volume: gv0
------------------------------------------------------------------------------
Brick : Brick 192.168.25.135:/data/brick1/gv0
TCP Port : 49152
RDMA Port : 0
Online : Y
Pid : 851
File System : xfs
Device : /dev/sdb
Mount Options : rw,relatime,attr2,inode64,logbufs=8,logbsize=32k,noquota
Inode Size : N/A
Disk Space Free : 209.7GB
Total Disk Space : 209.9GB
Inode Count : 110100480
Free Inodes : 110100452
Status of volume: gv1
------------------------------------------------------------------------------
Brick : Brick 192.168.25.135:/data/brick2/gv1
TCP Port : 49153
RDMA Port : 0
Online : Y
Pid : 873
File System : xfs
Device : /dev/sdc
Mount Options : rw,relatime,attr2,inode64,logbufs=8,logbsize=32k,noquota
Inode Size : N/A
Disk Space Free : 199.7GB
Total Disk Space : 199.9GB
Inode Count : 104857600
Free Inodes : 104857579