[Gluster-users] recommended upgrade procedure from gluster-3.2.7 to gluster-3.5.0

Pranith Kumar Karampuri pkarampu at redhat.com
Sat May 31 00:54:32 UTC 2014



----- Original Message -----
> From: "Todd Pfaff" <pfaff at rhpcs.mcmaster.ca>
> To: gluster-users at gluster.org
> Sent: Saturday, May 31, 2014 1:58:33 AM
> Subject: Re: [Gluster-users] recommended upgrade procedure from gluster-3.2.7 to gluster-3.5.0
> 
> On Sat, 24 May 2014, Todd Pfaff wrote:
> 
> > I have a gluster distributed volume that has been running nicely with
> > gluster-3.2.7 for the past two years and I now want to upgrade this to
> > gluster-3.5.0.
> >
> > What is the recommended procedure for such an upgrade?  Is it necessary to
> > upgrade from 3.2.7 to 3.3 to 3.4 to 3.5, or can I safely transition from
> > 3.2.7 directly to 3.5.0?
> 
> Nobody responded, so I decided to wing it and hope for the best.
> 
> I also decided to go directly from 3.2.7 to 3.4.3 and not bother with
> 3.5 yet.
> 
> The volume is distributed across 13 bricks.  Formerly these were in 13
> nodes, 1 brick per node, but I recently lost one of these nodes.
> I've moved the brick from the dead node to be a second brick in one of
> the remaining 12 nodes.  I currently have this state:
> 
>    gluster volume status
>    Status of volume: scratch
>    Gluster process                                 Port    Online  Pid
>    ------------------------------------------------------------------------------
>    Brick 172.16.1.1:/1/scratch                     49152   Y       6452
>    Brick 172.16.1.2:/1/scratch                     49152   Y       10783
>    Brick 172.16.1.3:/1/scratch                     49152   Y       10164
>    Brick 172.16.1.4:/1/scratch                     49152   Y       10465
>    Brick 172.16.1.5:/1/scratch                     49152   Y       10186
>    Brick 172.16.1.6:/1/scratch                     49152   Y       10388
>    Brick 172.16.1.7:/1/scratch                     49152   Y       10386
>    Brick 172.16.1.8:/1/scratch                     49152   Y       10215
>    Brick 172.16.1.9:/1/scratch                     49152   Y       11059
>    Brick 172.16.1.10:/1/scratch                    49152   Y       9238
>    Brick 172.16.1.11:/1/scratch                    49152   Y       9466
>    Brick 172.16.1.12:/1/scratch                    49152   Y       10777
>    Brick 172.16.1.1:/2/scratch                     49153   Y       6461
> 
> 
> What I want to do next is remove Brick 172.16.1.1:/2/scratch and have
> all the files it contains redistributed across the other 12 bricks.
> 
> What's the correct procedure for this?  Is it as simple as:
> 
>    gluster volume remove-brick scratch 172.16.1.1:/2/scratch start
> 
> and then wait for all files to be moved off that brick?  Or do I also
> have to do:
> 
>    gluster volume remove-brick scratch 172.16.1.1:/2/scratch commit
> 
> and then wait for all files to be moved off that brick?  Or do I also
> have to do something else, such as a rebalance, to cause the files to
> be moved?

'gluster volume remove-brick scratch 172.16.1.1:/2/scratch start' does start the process of migrating all the files to the other bricks. You then need to monitor the progress of the migration with 'gluster volume remove-brick scratch 172.16.1.1:/2/scratch status'. Once that command reports 'completed', execute 'gluster volume remove-brick scratch 172.16.1.1:/2/scratch commit' to remove the brick from the volume completely. I am a bit paranoid, so I would also check that no files were left behind by running a find on the brick 172.16.1.1:/2/scratch just before issuing the 'commit' :-).
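
For reference, a minimal sketch of the whole sequence. The exact find invocation is my own suggestion for the paranoid check, assuming the brick lives at /2/scratch on 172.16.1.1; the -prune keeps find out of gluster's internal .glusterfs directory:

    # on any node in the trusted pool: start draining the brick
    gluster volume remove-brick scratch 172.16.1.1:/2/scratch start

    # poll until the status column reports 'completed'
    gluster volume remove-brick scratch 172.16.1.1:/2/scratch status

    # paranoid check, run on 172.16.1.1 itself: list any regular files
    # still on the brick, skipping the internal .glusterfs directory
    find /2/scratch -name .glusterfs -prune -o -type f -print

    # once the brick is drained, remove it from the volume for good
    gluster volume remove-brick scratch 172.16.1.1:/2/scratch commit

In 3.4 the status output should also show per-node counts of scanned and rebalanced files plus any failures, which helps in judging how far along the migration is.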

Pranith.

> 
> How do I know when everything has been moved safely to other bricks and
> the then-empty brick is no longer involved in the cluster?
> 
> thanks,
> tp
> 
> _______________________________________________
> Gluster-users mailing list
> Gluster-users at gluster.org
> http://supercolony.gluster.org/mailman/listinfo/gluster-users
> 


