[Gluster-users] glusterfs 3.4

Justin Dossey jbd at podomatic.com
Mon Dec 2 03:08:59 UTC 2013


Bernhard,
Just based on my *nix admin experience, "Address already in use" means that
some process is already listening on the listed address and port.  In your
case, are you sure the old GlusterFS processes are stopped before the new
ones start? A "sudo netstat -lnp | grep 49153" would show what is holding
the port and preventing GlusterFS 3.4.x from binding to it.  If nothing
shows up there, it's more likely you've hit an upgrade bug.
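
For example (hypothetical output; the PID and process name will vary):

  $ sudo netstat -lnp | grep 49153
  tcp   0   0 0.0.0.0:49153   0.0.0.0:*   LISTEN   12345/glusterfsd

If a leftover glusterfsd from the old version is still holding the port,
stop it (or the whole glusterd service) before retrying.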


On Sun, Dec 1, 2013 at 8:41 AM, Josh Boon <gluster at joshboon.com> wrote:

> Hey Bernhard,
>
> That was just enough to jog my memory :) Gluster 3.4 changed the ports it
> uses: bricks moved from ports 24007-24012 in Gluster 3.3 to 49153 and up
> in 3.4. The problem is that libvirt and gluster aren't aware of each
> other, so when qemu doesn't get its port it thinks the world is ending and
> gives up. I found that trying the migration a second time can help. You
> can also track the full problem at
> https://bugzilla.redhat.com/show_bug.cgi?id=987555, which has both teams
> discussing how to handle the port conflict. If you absolutely need live
> migration, you can do it by hand using qemu directly and pick your own
> ports so they don't collide, until the port range problem is solved.
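>
> A rough sketch of doing that while staying inside libvirt (untested here;
> the free port 50000 is an arbitrary assumption on my part):
>
>   # see which ports the gluster bricks are actually holding
>   gluster volume status <vol_name>
>
>   # pin the migration stream to an explicit port instead of letting
>   # qemu pick one that collides with a brick
>   virsh migrate --live --verbose --p2p \
>       --migrateuri tcp://<dest_ip>:50000 \
>       <dom_name> qemu+ssh://<dest_ip>/system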
>
> Also, you should consider moving your guests' images to gfapi instead of
> the gluster FUSE mount you're using if you need more performance. It
> requires a couple of hacks and a recompile on Ubuntu, but we use it over
> here and couldn't live without it.  I'll be posting a writeup soon if
> you're interested.
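>
> (For reference, a minimal sketch of what a gfapi-backed disk looks like
> once qemu is built with glusterfs support; the host, volume, and image
> names below are placeholders:
>
>   qemu-system-x86_64 ... \
>       -drive file=gluster://<gluster_host>/<vol_name>/images/vm1.qcow2,if=virtio
>
> qemu then talks to the volume directly instead of going through the FUSE
> mountpoint.)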
>
> Best,
> Josh
>
> ------------------------------
> *From: *"Bernhard Glomm" <bernhard.glomm at ecologic.eu>
> *To: *gluster at joshboon.com
> *Sent: *Sunday, December 1, 2013 11:25:07 AM
> *Subject: *Re: [Gluster-users] glusterfs 3.4
>
> Hey Josh,
>
> thanks for the reply.
> Well, to be honest... nothing special at all.
> I didn't set any special gluster options.
> I created the mirror just like before in 3.3:
> gluster volume create <vol_name> replica 2 <ip>:/<path> <ip>:/<path>
> EXACTLY the same command both times (thanks, copy'n'paste ;-)
> I mounted the volume exactly the same way on both machines:
> mount -t glusterfs <my_own_ip>:/<vol-name> <mountpoint>
> (assuming this is a FUSE mount?)
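> (A quick way to confirm, for what it's worth: the native client registers
> as type fuse.glusterfs, so
>
>   mount | grep fuse.glusterfs
>
> should list <my_own_ip>:/<vol-name> on <mountpoint> if it is indeed a
> FUSE mount.)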
> I (tried to) migrate the VM with the same command:
> migrate --verbose --live --unsafe --p2p --domain <dom_name> --desturi
> qemu+ssh://<ip>/system
>
> with version 3.3 no problem
> with version 3.4.1 I got
>
> "-incoming tcp:0.0.0.0:49153: Failed to bind socket: Address already in
> use"
>
> any hints on this?
>
> I can live with 3.3 for now; it works like a charm,
> but I would be happy to feel more comfortable about future upgrades ;-)
>
> best
> Bernhard
>
>
> On 30.11.2013 23:27:14, Josh Boon wrote:
>
> Hey Bernhard,
>
> I'm running a very similar setup to yours.  What are your gluster options?
> What's the migration command you're using? Are you using gfapi or a FUSE
> mount? I think I've hit this error before, but I'll need some info to jog
> my brain on how I fixed it.
>
> Best,
> Josh
>
> ------------------------------
> *From: *"Bernhard Glomm" <bernhard.glomm at ecologic.eu>
> *To: *gluster-users at gluster.org
> *Sent: *Saturday, November 30, 2013 8:13:26 AM
> *Subject: *[Gluster-users] glusterfs 3.4
>
> Hi all,
>
> I just stumbled over a possible bug in glusterfs-3.4.1. Since I used the
> package from ppa:semiosis/ubuntu-glusterfs-3.4 for Ubuntu (running 13.04,
> up to date), I'd like to report it here first and ask for confirmation.
>
> I had glusterfs 3.3 installed and it worked fine. I used it as the storage
> backend for the image files of my KVM-virtualized instances (a two-sided
> mirror, two KVM hosts running four VMs as a test environment). I upgraded
> to 3.4 and all seemed okay at first glance, but live migration failed with
> the error:
>
> "-incoming tcp:0.0.0.0:49153: Failed to bind socket: Address already in use"
>
> Reverting back to glusterfs 3.3 made the problem go away.
>
> Does anybody know about this problem? I couldn't find anything on the net
> about it yet.
>
> Best regards,
> Bernhard
> --
> sysadmin
> www.ecologic.eu
>
>
>
>
> --
> ------------------------------
> Bernhard Glomm
> IT Administration, Ecologic Institute
>
> Phone: +49 (30) 86880 134 | Fax: +49 (30) 86880 100 | Skype: bernhard.glomm.ecologic
> Web: http://ecologic.eu
>
> Ecologic Institut gemeinnützige GmbH | Pfalzburger Str. 43/44 | 10717
> Berlin | Germany
> GF: R. Andreas Kraemer | AG: Charlottenburg HRB 57947 | USt/VAT-IdNr.:
> DE811963464
> Ecologic™ is a Trade Mark (TM) of Ecologic Institut gemeinnützige GmbH
> ------------------------------
>
>
>



-- 
Justin Dossey
CTO, PodOmatic