[Gluster-users] gluster and kvm livemigration

Bernhard Glomm bernhard.glomm at ecologic.eu
Mon Jan 20 15:14:51 UTC 2014


On 17.01.2014 16:45:20, Samuli Heinonen wrote:
> Hello Bernhard,
> Can you test if setting option network.remote-dio to enable allows you to use cache=none?
> 

Hi Samuli,

nope, network.remote-dio enable didn't change it. My system has become too unclean in the meantime; I will do a full reinstall of the hosts and then come back to this topic.

thnx
Bernhard
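For reference, the option is applied with the standard volume-set command; a minimal sketch, using the volume name that appears further down this thread:

    # set the option on the volume, then confirm it shows up under "Options Reconfigured"
    gluster volume set fs_vm_atom01 network.remote-dio enable
    gluster volume info fs_vm_atom01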
> -samuli
> 
> > Bernhard Glomm <bernhard.glomm at ecologic.eu> wrote on 17.1.2014 at 16.41:
> 
> > Pranith,
> > 
> > I stopped the volume,
> > started it again,
> > mounted it on both hosts,
> > started the VM,
> > did the livemigration,
> > and collected the logs:
> > - etc-glusterfs-glusterd.vol.log
> > - glustershd.log
> > - srv-vm_infrastructure-vm-atom01.log
> > - cli.log
> > from the beginning of the gluster volume start.
> > You can find them here (parts 1 to 3):
> > http://pastebin.com/mnATm2BE
> > http://pastebin.com/RYZFP3E9
> > http://pastebin.com/HAXEGd54
> > 
> > Furthermore:
> > gluster --version: glusterfs 3.4.2 built on Jan 11 2014 03:21:47
> > ubuntu: raring
> > filesystem on the gluster bricks: zfs-0.6.2
> > 
> > gluster volume info fs_vm_atom01
> > 
> > Volume Name: fs_vm_atom01
> > Type: Replicate
> > Volume ID: fea9bdcf-783e-442a-831d-f564f8dbe551
> > Status: Started
> > Number of Bricks: 1 x 2 = 2
> > Transport-type: tcp
> > Bricks:
> > Brick1: 172.24.1.11:/zp_ping_1/fs_vm_atom01
> > Brick2: 172.24.1.13:/zp_pong_1/fs_vm_atom01
> > Options Reconfigured:
> > diagnostics.client-log-level: DEBUG
> > server.allow-insecure: on
> > 
> > 
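A side note on the server.allow-insecure option above: for libgfapi clients it is usually paired with the matching glusterd-level setting; a sketch, assuming the stock config location:

    # /etc/glusterfs/glusterd.vol -- add inside the "volume management" block,
    # then restart glusterd on each host
    option rpc-auth-allow-insecure on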
> > Disk section of the VM configuration:
> > 
> > <emulator>/usr/bin/kvm-spice</emulator>
> > <disk type='file' device='disk'>
> >   <driver name='qemu' type='raw' cache='writethrough'/>
> >   <source file='/srv/vm_infrastructure/vm/atom01/atom01.img'/>
> >   <target dev='vda' bus='virtio'/>
> >   <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0'/>
> > </disk>
> > 
> > 
> > I can't use <source protocol='gluster' ...>
> > as Josh suggested, because I couldn't get
> > my qemu recompiled with gluster support yet.
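For reference, once qemu is built with gluster support, the disk section would switch from a file source to a network source; a sketch reusing the volume and image names from this thread (the host/port and the image's path inside the volume are assumptions):

    <disk type='network' device='disk'>
      <driver name='qemu' type='raw' cache='none'/>
      <!-- host/port assumed: any server of the volume works; 24007 is the default management port -->
      <source protocol='gluster' name='fs_vm_atom01/atom01.img'>
        <host name='172.24.1.11' port='24007'/>
      </source>
      <target dev='vda' bus='virtio'/>
    </disk>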
> > 
> > Are there other special tuning parameters for kvm/qemu to set on gluster?
> > As mentioned: everything works except the livemigration (the disk image file becomes read-only)
> > and I have to use something other than cache=none...
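On the tuning question: the settings commonly recommended for hosting VM images on a replicated volume (later bundled by the project as the "virt" profile) are along these lines; a sketch, not verified against 3.4.2:

    gluster volume set fs_vm_atom01 performance.quick-read off
    gluster volume set fs_vm_atom01 performance.read-ahead off
    gluster volume set fs_vm_atom01 performance.io-cache off
    gluster volume set fs_vm_atom01 performance.stat-prefetch off
    gluster volume set fs_vm_atom01 cluster.eager-lock enable
    gluster volume set fs_vm_atom01 network.remote-dio enable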
> > 
> > TIA
> > 
> > Bernhard
> > 

> > On 17.01.2014 05:04:52, Pranith Kumar Karampuri wrote:
> > > Bernhard,
> > > Configuration seems ok. Could you please provide the log files of the bricks and the mount? If repeating the live migration is not too big a procedure, could you set client-log-level to DEBUG and provide the log files of that run?
> > > 
> > > Pranith
> > > 
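Raising the client log level, as requested above, is a single volume-set call; a sketch with the volume from this thread (it matches the diagnostics.client-log-level: DEBUG already visible in the volume info elsewhere in the exchange):

    gluster volume set fs_vm_atom01 diagnostics.client-log-level DEBUG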
> > > ----- Original Message -----
> > > > From: "Bernhard Glomm" <bernhard.glomm at ecologic.eu>
> > > > To: pkarampu at redhat.com
> > > > Cc: gluster-users at gluster.org
> > > > Sent: Thursday, January 16, 2014 5:58:17 PM
> > > > Subject: Re: [Gluster-users] gluster and kvm livemigration
> > > > 
> > > > 
> > > > hi Pranith
> > > > 
> > > > # gluster volume info fs_vm_atom01
> > > >  
> > > > Volume Name: fs_vm_atom01
> > > > Type: Replicate
> > > > Volume ID: fea9bdcf-783e-442a-831d-f564f8dbe551
> > > > Status: Started
> > > > Number of Bricks: 1 x 2 = 2
> > > > Transport-type: tcp
> > > > Bricks:
> > > > Brick1: 172.24.1.11:/zp_ping_1/fs_vm_atom01
> > > > Brick2: 172.24.1.13:/zp_pong_1/fs_vm_atom01
> > > > Options Reconfigured:
> > > > diagnostics.client-log-level: ERROR
> > > > 
> > > > 
> > > > TIA
> > > > Bernhard
> > > > 
> > > > 
> > > > On 16.01.2014 13:05:12, Pranith Kumar Karampuri wrote:
> > > > > hi Bernhard,
> > > > > Could you give gluster volume info output?
> > > > > 
> > > > > Pranith
> > > > > 
> > > > > ----- Original Message -----
> > > > > > From: "Bernhard Glomm" <bernhard.glomm at ecologic.eu>
> > > > > > To: gluster-users at gluster.org
> > > > > > Sent: Thursday, January 16, 2014 4:22:36 PM
> > > > > > Subject: [Gluster-users] gluster and kvm livemigration
> > > > > > 
> > > > > > I experienced a strange behavior of glusterfs during livemigration
> > > > > > of a qemu-kvm guest
> > > > > > using a 10GB file on a mirrored gluster 3.4.2 volume
> > > > > > (both hosts on ubuntu 13.04).
> > > > > > I run
> > > > > > virsh migrate --verbose --live --unsafe --p2p --domain atom01 --desturi qemu+ssh://<target_ip>/system
> > > > > > and the migration works;
> > > > > > the running machine is pingable and keeps sending pings.
> > > > > > Nevertheless, when I let the machine touch a file during migration,
> > > > > > it stops, complaining that its filesystem is read-only (from the moment
> > > > > > the migration finished).
> > > > > > A reboot from inside the machine fails;
> > > > > > the machine goes down and comes up with an error
> > > > > > "unable to write to sector xxxxxx on hd0"
> > > > > > (then falling into the initrd).
> > > > > > A
> > > > > > virsh destroy VM && virsh start VM
> > > > > > leads to a perfectly running VM again,
> > > > > > no matter on which of the two hosts I start the machine.
> > > > > > Has anybody had better experience with livemigration?
> > > > > > Any hint on a procedure to debug this?
> > > > > > TIA
> > > > > > Bernhard
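One way to start on the debugging question above is to check whether the image file accumulated pending self-heal state on either brick around the migration; a sketch, using the brick paths from the volume info in this thread (the image's location inside the bricks is an assumption):

    # on each host: non-zero trusted.afr.* values indicate pending heals
    getfattr -d -m . -e hex /zp_ping_1/fs_vm_atom01/atom01.img   # on 172.24.1.11
    getfattr -d -m . -e hex /zp_pong_1/fs_vm_atom01/atom01.img   # on 172.24.1.13
    # and from either host:
    gluster volume heal fs_vm_atom01 info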
> > > > > > 
> > > > > > --
> > > > > > 
> > > > > > 
> > > > > > Bernhard Glomm
> > > > > > IT Administration
> > > > > > 
> > > > > > Phone: +49 (30) 86880 134
> > > > > > Fax: +49 (30) 86880 100
> > > > > > Skype: bernhard.glomm.ecologic
> > > > > > 
> > > > > > Ecologic Institut gemeinnützige GmbH | Pfalzburger Str. 43/44 | 10717 Berlin | Germany
> > > > > > GF: R. Andreas Kraemer | AG: Charlottenburg HRB 57947 | USt/VAT-IdNr.: DE811963464
> > > > > > Ecologic is a Trade Mark (TM) of Ecologic Institut gemeinnützige GmbH
> > > > > > 
> > > > > > 
> > > > > > _______________________________________________
> > > > > > Gluster-users mailing list
> > > > > > Gluster-users at gluster.org
> > > > > > http://supercolony.gluster.org/mailman/listinfo/gluster-users
