[Gluster-users] gluster and kvm livemigration
Pranith Kumar Karampuri
pkarampu at redhat.com
Fri Jan 17 04:04:52 UTC 2014
Bernhard,
The configuration looks fine. Could you please provide the log files of the bricks and of the mount? If repeating the live migration is not too big a procedure, could you also set client-log-level to DEBUG and provide the log files of that run.
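For reference, raising the client log level is done with `gluster volume set`. This is a sketch using the volume name that appears later in this thread; the log paths shown follow the standard GlusterFS layout and may differ per installation:

```shell
# Raise the client (mount) log verbosity for the affected volume.
gluster volume set fs_vm_atom01 diagnostics.client-log-level DEBUG

# Default log locations in a standard GlusterFS install:
#   client/mount log: /var/log/glusterfs/<mount-point-with-dashes>.log
#   brick logs:       /var/log/glusterfs/bricks/*.log  (on each brick host)

# After reproducing the problem, restore the previous level:
gluster volume set fs_vm_atom01 diagnostics.client-log-level ERROR
```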
Pranith
----- Original Message -----
> From: "Bernhard Glomm" <bernhard.glomm at ecologic.eu>
> To: pkarampu at redhat.com
> Cc: gluster-users at gluster.org
> Sent: Thursday, January 16, 2014 5:58:17 PM
> Subject: Re: [Gluster-users] gluster and kvm livemigration
>
>
> hi Pranith
>
> # gluster volume info fs_vm_atom01
>
> Volume Name: fs_vm_atom01
> Type: Replicate
> Volume ID: fea9bdcf-783e-442a-831d-f564f8dbe551
> Status: Started
> Number of Bricks: 1 x 2 = 2
> Transport-type: tcp
> Bricks:
> Brick1: 172.24.1.11:/zp_ping_1/fs_vm_atom01
> Brick2: 172.24.1.13:/zp_pong_1/fs_vm_atom01
> Options Reconfigured:
> diagnostics.client-log-level: ERROR
>
>
> TIA
> Bernhard
>
>
> On 16.01.2014 13:05:12, Pranith Kumar Karampuri wrote:
> > hi Bernhard,
> > Could you give gluster volume info output?
> >
> > Pranith
> >
> > ----- Original Message -----
> > > From: "Bernhard Glomm" <bernhard.glomm at ecologic.eu>
> > > To: gluster-users at gluster.org
> > > Sent: Thursday, January 16, 2014 4:22:36 PM
> > > Subject: [Gluster-users] gluster and kvm livemigration
> > >
> > > I experienced strange behavior of glusterfs during live migration
> > > of a qemu-kvm guest
> > > using a 10GB image file on a mirrored gluster 3.4.2 volume
> > > (both hosts on ubuntu 13.04).
> > > I run
> > > virsh migrate --verbose --live --unsafe --p2p --domain atom01 --desturi
> > > qemu+ssh://<target_ip>/system
> > > and the migration works:
> > > the running machine is pingable and keeps sending pings.
> > > Nevertheless, when I let the machine touch a file during the migration,
> > > it stops, complaining that its filesystem is read-only (from the
> > > moment the migration finished).
> > > A reboot from inside the machine fails;
> > > the machine goes down and comes up with the error
> > > unable to write to sector xxxxxx on hd0
> > > (then falling into the initrd).
> > > A
> > > virsh destroy VM && virsh start VM
> > > leads to a perfectly running VM again,
> > > no matter on which of the two hosts I start the machine.
> > > Has anybody had better experience with live migration?
> > > Any hint on a procedure to debug this?
> > > TIA
> > > Bernhard
> > >
> > > --
> > >
> > >
> > > Bernhard Glomm
> > > IT Administration
> > >
> > > Phone: +49 (30) 86880 134
> > > Fax: +49 (30) 86880 100
> > > Skype: bernhard.glomm.ecologic
> > >
> > > Ecologic Institut gemeinnützige GmbH | Pfalzburger Str. 43/44 | 10717
> > > Berlin | Germany
> > > GF: R. Andreas Kraemer | AG: Charlottenburg HRB 57947 | USt/VAT-IdNr.:
> > > DE811963464
> > > Ecologic™ is a Trade Mark (TM) of Ecologic Institut gemeinnützige GmbH
> > >
> > >
> > > _______________________________________________
> > > Gluster-users mailing list
> > > Gluster-users at gluster.org
> > > http://supercolony.gluster.org/mailman/listinfo/gluster-users
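The failure mode Bernhard describes (writes succeeding until the instant the migration completes, then a read-only filesystem) can be narrowed down with a continuous write probe inside the guest. A minimal sketch; the probe file name is hypothetical, and in a real run the loop would cover the whole migration window:

```shell
# Hypothetical write-probe to run inside the guest while the host runs
# virsh migrate: a steady stream of small writes pinpoints the moment
# the guest filesystem goes read-only.
probe="/tmp/migration-probe.$$"

for i in 1 2 3 4 5; do            # in a real run: loop until the migration ends
    if date >> "$probe" 2>/dev/null; then
        echo "write $i ok"
    else
        echo "write $i failed: filesystem read-only?"
        break
    fi
    # sleep 1                     # uncomment to pace the writes during a real migration
done

writes=$(wc -l < "$probe")
echo "completed $writes writes"
rm -f "$probe"
```

If the writes start failing exactly when `virsh migrate` returns, that points at the storage layer (the gluster-backed image) rather than the guest itself, which matches the observation that destroy/start recovers the VM on either host.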