[Gluster-users] gluster 3.2.0 - totally broken?
mohitanchlia at gmail.com
Thu May 19 17:54:49 UTC 2011
I also request users facing issues to open bugs if they think it is a
bug. This will help in keeping track of the bugs so that they don't
go unnoticed, at least. It will also help others when they face similar issues.
On Thu, May 19, 2011 at 10:30 AM, Anthony J. Biacco
<abiacco at formatdynamics.com> wrote:
> My downgrade to 3.1.4 went ok; I did do the volume reset from the start.
> Like you said, not as easy as an upgrade, but I wasn't expecting it to be.
> The key for me was stopping the daemon on the primary server, removing
> the peer files, restarting the daemon. Then shut down the daemon on the
> secondary servers, remove all the glusterd config files, restart the
> daemon, then do a peer probe from the primary for all the secondaries (I
> had only one).
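The peer-reset sequence Anthony describes can be sketched as the script below. The hostname "server2" and the /etc/glusterd config path are assumptions (the 3.1.x-era default location), not taken from the mail; the script only prints each command (dry run), so remove the echo wrapper to actually execute them.

```shell
#!/bin/sh
# Dry-run wrapper: prints each command instead of running it.
# Swap the echo for "$@" to execute for real.
run() { echo "+ $*"; }

# 1. Primary server: stop glusterd, drop its cached peer state, restart.
run service glusterd stop
run rm -f /etc/glusterd/peers/*      # assumed 3.1.x config path
run service glusterd start

# 2. Each secondary: stop glusterd, wipe its glusterd config, restart.
run service glusterd stop
run rm -rf /etc/glusterd/*
run service glusterd start

# 3. From the primary, re-probe every secondary.
run gluster peer probe server2       # hypothetical hostname
```

Wiping the secondaries' config entirely (step 2) matches the mail's "remove all the glusterd config files"; the probe then rebuilds the peer relationship from the primary's side.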
> Manager, IT Operations
> Format Dynamics, Inc.
> P: 303-228-7327
> F: 303-228-7305
> abiacco at formatdynamics.com
>> -----Original Message-----
>> From: gluster-users-bounces at gluster.org [mailto:gluster-users-
>> bounces at gluster.org] On Behalf Of Dan Bretherton
>> Sent: Thursday, May 19, 2011 11:20 AM
>> To: gluster-users at gluster.org
>> Subject: Re: [Gluster-users] gluster 3.2.0 - totally broken?
>> > Message: 2
>> > Date: Wed, 18 May 2011 19:00:30 +0200
>> > From: Udo Waechter <udo.waechter at uni-osnabrueck.de>
>> > Subject: Re: [Gluster-users] gluster 3.2.0 - totally broken?
>> > To: Gluster Users <gluster-users at gluster.org>
>> >
>> > On 18.05.2011, at 18:56, Anthony J. Biacco wrote:
>> >> I'm actually thinking of downgrading to 3.1.3 from 3.2.0. Wonder if I'd
>> >> have any ill-effects on the volume with a simple rpm downgrade and
>> >> daemon restart.
>> >
>> > I read somewhere in the docs that you need to reset the volume:
>> > gluster volume reset <volname>
>> > Good luck. Would be nice to hear if it worked for you.
>> > --udo.
>> > --
>> > :: udo waechter - root at zoide.net :: N 52°16'30.5" E 8°3'10.1" ::
>> > genuine input for your ears: http://auriculabovinari.de
>> > :: your http://ezag.zoide.net :: your brain: http://zoide.net
>> Hello All- A few words of warning about downgrading, after what happened
>> to me when I tried it.
>> I downgraded from 3.2 to 3.1.4, but I am back on 3.2 again now because
>> the downgrade broke the rebalancing feature. I thought this might have
>> been due to version 3.2 having done something to the xattrs. I tried
>> downgrading to 3.1.3 and 3.1.2 as well, but rebalance was also not
>> working in those versions, having worked successfully in the past.
>> I found that the downgrade didn't go as smoothly as the upgrades
>> do. After downgrading the RPMs on the servers and restarting glusterd,
>> I couldn't mount the volumes, and the client logs were flooded with
>> errors like these for each server.
>> [2011-05-03 18:05:26.563591] E
>> [client-handshake.c:1101:client_query_portmap_cbk] 0-atmos-client-1:
>> failed to get the port number for remote subvolume
>> [2011-05-03 18:05:26.564543] I [client.c:1601:client_rpc_notify]
>> 0-atmos-client-1: disconnected
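The portmap error above means the client asked glusterd where the brick process was listening and got no answer. A quick way to narrow that down (a sketch, not from the original mail; it assumes the stock ports of that era, with glusterd on 24007 and bricks numbered from 24009 up) is to confirm something is actually listening on the server before blaming the client:

```shell
#!/bin/sh
# Report whether anything is listening on a given TCP port.
check_port() {
    if netstat -ltn 2>/dev/null | grep -q ":$1 "; then
        echo "port $1: listening"
    else
        echo "port $1: nothing listening"
    fi
}

check_port 24007   # glusterd (management daemon)
check_port 24009   # first brick (assumed default numbering)
```

If the brick ports show nothing listening, the brick processes never came back after the downgrade, which would produce exactly the client_query_portmap_cbk failures in the log.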
>> I didn't need to reset the volumes after downgrading because none of the
>> volume files had been created or reset under version 3.2. Despite that,
>> I did try doing "gluster volume reset <volname>" for all the volumes,
>> but it didn't stop the client log errors or solve the mounting problems.
>> In desperation I unmounted all the volumes from the clients and shut down
>> all the gluster-related processes on all the servers. After waiting a
>> few minutes for any locked ports to clear (in case locked ports had been
>> causing the problems after the RPM downgrades) I restarted glusterd on
>> the servers, and then a few minutes later I was able to mount the
>> volumes again. I discovered that I could no longer rebalance
>> (fix-layout or migrate-data) a few days later.
>> To answer an earlier question, I am using 3.2 in a production
>> environment, although in the light of recent discussions on this list
>> I wish I wasn't. Having said that, my users haven't reported any
>> problems nearly a week after the upgrade, so I am hoping that we won't
>> be affected by any of the issues that have been causing problems at
>> other sites.
>> Gluster-users mailing list
>> Gluster-users at gluster.org
More information about the Gluster-users mailing list