[Gluster-users] KVM guest I/O errors with xfs backed gluster volumes
Bryan Whitehead
driver@megahappy.net
Tue Jul 16 06:43:42 UTC 2013
I'm using Gluster 3.3.0 and 3.3.1 with XFS bricks and KVM-based VMs
using qcow2 files on GlusterFS FUSE mounts, on CentOS 6.2 through 6.4
with CloudStack 3.0.2 - 4.1.0.
I've not had any problems. Here is one host in a small three-host
cluster (using the CloudStack terminology). About 30 VMs are running
across these three hosts, each of which contributes two bricks to the
volume. I'll also attach a virsh dumpxml for you to take a look at.
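For reference, the disk stanza for a qcow2 image on the FUSE mount
looks roughly like this in the domain XML (a sketch only - the path,
cache mode, and target device are illustrative; the attached dumpxml
has the real values):

<disk type='file' device='disk'>
  <!-- qcow2 file living on the glusterfs fuse mount; path illustrative -->
  <driver name='qemu' type='qcow2' cache='none'/>
  <source file='/gluster/qcow2/i-4-87-VM.qcow2'/>
  <target dev='vda' bus='virtio'/>
</disk>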
[root ~]# w
06:21:53 up 320 days, 7:23, 1 user, load average: 1.41, 1.07, 0.79
USER     TTY      FROM              LOGIN@   IDLE   JCPU   PCPU  WHAT
root     pts/9    10.100.0.100      06:21    0.00s  0.00s  0.00s  w
[root ~]# cat /etc/redhat-release
CentOS release 6.3 (Final)
[root ~]# rpm -qa | grep gluster
glusterfs-server-3.3.0-1.el6.x86_64
glusterfs-fuse-3.3.0-1.el6.x86_64
glusterfs-3.3.0-1.el6.x86_64
[root@ ~]# cat /etc/fstab | grep glust
/dev/storage/glust0 /gluster/0 xfs defaults,inode64 0 0
/dev/storage/glust1 /gluster/1 xfs defaults,inode64 0 0
172.16.0.11:qcow2-share /gluster/qcow2 glusterfs defaults,_netdev 0 0
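In other words, each brick is plain XFS mounted with inode64.
Recreated from scratch it would look something like this (the mkfs
flags are assumed to be defaults; only the mount option matters here):

mkfs.xfs /dev/storage/glust0          # default mkfs.xfs options assumed
mkfs.xfs /dev/storage/glust1
mount -o inode64 /dev/storage/glust0 /gluster/0
mount -o inode64 /dev/storage/glust1 /gluster/1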
[root@cs0.la.vorstack.net ~]# df -h
[cut.....]
/dev/mapper/storage-glust0
2.0T 217G 1.8T 11% /gluster/0
/dev/mapper/storage-glust1
2.0T 148G 1.9T 8% /gluster/1
172.16.0.11:qcow2-share
6.0T 472G 5.6T 8% /gluster/qcow2
[root@ ~]# virsh list
 Id    Name                           State
----------------------------------------------------
 10    i-2-19-VM                      running
 21    i-3-44-VM                      running
 22    i-2-12-VM                      running
 28    i-4-58-VM                      running
 37    s-5-VM                         running
 38    v-2-VM                         running
 39    i-2-56-VM                      running
 41    i-7-59-VM                      running
 46    i-4-87-VM                      running
[root@ ~]# gluster volume info
Volume Name: qcow2-share
Type: Distributed-Replicate
Volume ID: 22fcbaa9-4b2d-4d84-9353-eb77abcaf0db
Status: Started
Number of Bricks: 3 x 2 = 6
Transport-type: tcp
Bricks:
Brick1: 172.16.0.10:/gluster/0
Brick2: 172.16.0.11:/gluster/0
Brick3: 172.16.0.12:/gluster/0
Brick4: 172.16.0.10:/gluster/1
Brick5: 172.16.0.11:/gluster/1
Brick6: 172.16.0.12:/gluster/1
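A volume shaped like that would have been created along these lines (a
reconstruction, not the exact command - with replica 2, consecutive
bricks on the command line form the replica pairs):

gluster volume create qcow2-share replica 2 \
    172.16.0.10:/gluster/0 172.16.0.11:/gluster/0 \
    172.16.0.12:/gluster/0 172.16.0.10:/gluster/1 \
    172.16.0.11:/gluster/1 172.16.0.12:/gluster/1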
[root@ ~]# gluster volume status
Status of volume: qcow2-share
Gluster process                                 Port    Online  Pid
------------------------------------------------------------------------------
Brick 172.16.0.10:/gluster/0                    24009   Y       1873
Brick 172.16.0.11:/gluster/0                    24009   Y       1831
Brick 172.16.0.12:/gluster/0                    24009   Y       1938
Brick 172.16.0.10:/gluster/1                    24010   Y       1878
Brick 172.16.0.11:/gluster/1                    24010   Y       1837
Brick 172.16.0.12:/gluster/1                    24010   Y       1953
NFS Server on localhost                         38467   Y       1899
Self-heal Daemon on localhost                   N/A     Y       1909
NFS Server on 172.16.0.12                       38467   Y       1959
Self-heal Daemon on 172.16.0.12                 N/A     Y       1964
NFS Server on 172.16.0.11                       38467   Y       1843
Self-heal Daemon on 172.16.0.11                 N/A     Y       1848
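If you want to check whether the self-heal daemon has anything
pending, 3.3 also has:

gluster volume heal qcow2-share info

which lists the files awaiting heal on each brick; on a healthy
cluster the lists are empty.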
On Mon, Jul 15, 2013 at 7:37 PM, Jacob Yundt <jyundt@gmail.com> wrote:
> Unfortunately I'm hitting the same problem with 3.4.0 GA. In case it
> helps, I increased both the client and server brick logs to TRACE.
> I've updated the BZ[1] and attached both logs + an strace.
>
> Anyone else using XFS backed bricks for hosting KVM images? If so,
> what xfs mkfs/mount options are you using? Additionally, what format
> are your KVM images?
>
> I'm trying to figure out what is unique about my config. It seems
> pretty vanilla (one gluster client, one gluster server, one brick, one
> volume, etc.), but I wonder if I'm missing something obvious.
>
> -Jacob
>
> [1] https://bugzilla.redhat.com/show_bug.cgi?id=958781
-------------- next part --------------
A non-text attachment was scrubbed...
Name: i-4-87-VM.xml
Type: text/xml
Size: 2586 bytes
Desc: not available
URL: <http://supercolony.gluster.org/pipermail/gluster-users/attachments/20130715/1065f404/attachment.xml>