[Gluster-users] KVM guest I/O errors with xfs backed gluster volumes

Bryan Whitehead driver at megahappy.net
Tue Jul 16 18:00:35 UTC 2013


No, I've never used raw; I've used LVM (local block devices) and qcow2.
I think you should use the libvirt tools to run VMs rather than invoking
qemu-kvm directly.
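
Roughly, the libvirt-managed flow would look something like the below.
(The domain name "kvm2" and the XML path are just placeholders I'm
making up; the disk path is the one from your error message, attached as
a second virtio disk.)

virsh define /etc/libvirt/qemu/kvm2.xml
virsh start kvm2
virsh attach-disk kvm2 /var/lib/libvirt/images/xfs/kvm2.img vdb \
    --driver qemu --subdriver qcow2 --persistent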

Are you creating the qcow2 file with qemu-img first? For example:
qemu-img create -f qcow2 /var/lib/libvirt/images/xfs/kvm2.img 200G
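
You can also sanity-check the image afterwards (same path as in your
error message):

qemu-img info /var/lib/libvirt/images/xfs/kvm2.img

If that already fails or reports an unexpected format, the problem is
with the image (or the mount it lives on) rather than with libvirt.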

[root@ ~]# virsh pool-dumpxml d34c701f-275c-49d1-92f7-a952d7d5e967
<pool type='dir'>
  <name>d34c701f-275c-49d1-92f7-a952d7d5e967</name>
  <uuid>d34c701f-275c-49d1-92f7-a952d7d5e967</uuid>
  <capacity unit='bytes'>6593848541184</capacity>
  <allocation unit='bytes'>505990348800</allocation>
  <available unit='bytes'>6087858192384</available>
  <source>
  </source>
  <target>
    <path>/gluster/qcow2</path>
    <permissions>
      <mode>0700</mode>
      <owner>-1</owner>
      <group>-1</group>
    </permissions>
  </target>
</pool>
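
If you want to define a similar dir-type pool on top of the fuse mount
yourself, something along these lines should work (the pool name
"qcow2-pool" is just an example, not what I actually use):

virsh pool-define-as qcow2-pool dir --target /gluster/qcow2
virsh pool-build qcow2-pool
virsh pool-start qcow2-pool
virsh pool-autostart qcow2-pool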

On Tue, Jul 16, 2013 at 4:30 AM, Jacob Yundt <jyundt at gmail.com> wrote:
>> I'm using Gluster 3.3.0 and 3.3.1 with XFS bricks and KVM-based VMs
>> using qcow2 files on Gluster volume FUSE mounts, on CentOS 6.2 through
>> 6.4 with CloudStack 3.0.2 - 4.1.0.
>>
>> I haven't had any problems. Here is one host in a small 3-host
>> cluster (using the CloudStack terminology). About 30 VMs are running
>> across these 3 hosts, which all contribute to the volume with 2 bricks
>> each. I'll also attach a virsh dumpxml for you to take a look at.
>>
>> [root ~]# w
>>  06:21:53 up 320 days,  7:23,  1 user,  load average: 1.41, 1.07, 0.79
>> USER     TTY      FROM              LOGIN@   IDLE   JCPU   PCPU WHAT
>> root     pts/9    10.100.0.100     06:21    0.00s  0.00s  0.00s w
>> [root ~]# cat /etc/redhat-release
>> CentOS release 6.3 (Final)
>> [root ~]# rpm -qa | grep gluster
>> glusterfs-server-3.3.0-1.el6.x86_64
>> glusterfs-fuse-3.3.0-1.el6.x86_64
>> glusterfs-3.3.0-1.el6.x86_64
>> [root@ ~]# cat /etc/fstab | grep glust
>> /dev/storage/glust0    /gluster/0        xfs    defaults,inode64 0 0
>> /dev/storage/glust1    /gluster/1        xfs    defaults,inode64 0 0
>> 172.16.0.11:qcow2-share /gluster/qcow2        glusterfs    defaults,_netdev 0 0
>> [root@cs0.la.vorstack.net ~]# df -h
>> [cut.....]
>> /dev/mapper/storage-glust0
>>                       2.0T  217G  1.8T  11% /gluster/0
>> /dev/mapper/storage-glust1
>>                       2.0T  148G  1.9T   8% /gluster/1
>> 172.16.0.11:qcow2-share
>>                       6.0T  472G  5.6T   8% /gluster/qcow2
>> [root@ ~]# virsh list
>>  Id    Name                           State
>> ----------------------------------------------------
>>  10    i-2-19-VM                      running
>>  21    i-3-44-VM                      running
>>  22    i-2-12-VM                      running
>>  28    i-4-58-VM                      running
>>  37    s-5-VM                         running
>>  38    v-2-VM                         running
>>  39    i-2-56-VM                      running
>>  41    i-7-59-VM                      running
>>  46    i-4-87-VM                      running
>> [root@ ~]# gluster volume info
>>
>> Volume Name: qcow2-share
>> Type: Distributed-Replicate
>> Volume ID: 22fcbaa9-4b2d-4d84-9353-eb77abcaf0db
>> Status: Started
>> Number of Bricks: 3 x 2 = 6
>> Transport-type: tcp
>> Bricks:
>> Brick1: 172.16.0.10:/gluster/0
>> Brick2: 172.16.0.11:/gluster/0
>> Brick3: 172.16.0.12:/gluster/0
>> Brick4: 172.16.0.10:/gluster/1
>> Brick5: 172.16.0.11:/gluster/1
>> Brick6: 172.16.0.12:/gluster/1
>> [root@ ~]# gluster volume status
>> Status of volume: qcow2-share
>> Gluster process                               Port    Online  Pid
>> ------------------------------------------------------------------------------
>> Brick 172.16.0.10:/gluster/0                  24009   Y       1873
>> Brick 172.16.0.11:/gluster/0                  24009   Y       1831
>> Brick 172.16.0.12:/gluster/0                  24009   Y       1938
>> Brick 172.16.0.10:/gluster/1                  24010   Y       1878
>> Brick 172.16.0.11:/gluster/1                  24010   Y       1837
>> Brick 172.16.0.12:/gluster/1                  24010   Y       1953
>> NFS Server on localhost                       38467   Y       1899
>> Self-heal Daemon on localhost                 N/A     Y       1909
>> NFS Server on 172.16.0.12                     38467   Y       1959
>> Self-heal Daemon on 172.16.0.12               N/A     Y       1964
>> NFS Server on 172.16.0.11                     38467   Y       1843
>> Self-heal Daemon on 172.16.0.11               N/A     Y       1848
>>
>>
>
> This information (including the attached XML) is very helpful, thank
> you!  Can you provide the XML of your (KVM) gluster storage pool
> ("virsh pool-dumpxml <pool>")?
>
> Have you ever tried using "raw" virtio disk images?  When I try to use
> qcow2, I get errors when trying to start my VM:
>
> qemu-kvm: -drive
> file=/var/lib/libvirt/images/xfs/kvm2.img,if=none,id=drive-virtio-disk1,format=qcow2,cache=none:
> could not open disk image /var/lib/libvirt/images/xfs/kvm2.img:
> Invalid argument
>
> -Jacob
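
One more thought on that "Invalid argument" error: cache=none makes qemu
open the image with O_DIRECT, and if the fuse mount doesn't support
O_DIRECT the open can fail exactly like that. A quick way to check from
the hypervisor (the test file name is just a throwaway):

dd if=/dev/zero of=/var/lib/libvirt/images/xfs/directio-test bs=4k count=1 oflag=direct
rm -f /var/lib/libvirt/images/xfs/directio-test

If the dd itself fails with "Invalid argument", try cache=writethrough
or cache=writeback on that disk instead of cache=none.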


