[Gluster-users] recommended upgrade procedure from gluster-3.2.7 to gluster-3.5.0
Todd Pfaff
pfaff at rhpcs.mcmaster.ca
Mon Jun 2 15:56:11 UTC 2014
On Sun, 1 Jun 2014, Pranith Kumar Karampuri wrote:
>
>
> ----- Original Message -----
>> From: "Todd Pfaff" <pfaff at rhpcs.mcmaster.ca>
>> To: "Pranith Kumar Karampuri" <pkarampu at redhat.com>
>> Cc: gluster-users at gluster.org
>> Sent: Saturday, May 31, 2014 7:18:23 PM
>> Subject: Re: [Gluster-users] recommended upgrade procedure from gluster-3.2.7 to gluster-3.5.0
>>
>> Thanks, Pranith, that was very helpful!
>>
>> I followed your advice; the remove-brick ran and completed, and now I'm
>> left with these results on the removed brick (before commit):
>>
>> find /2/scratch/ | wc -l
>> 83083
>>
>> find /2/scratch/ -type f | wc -l
>> 16
>>
>> find /2/scratch/ -type d | wc -l
>> 70243
>>
>> find /2/scratch/ -type l | wc -l
>> 12824
>>
>> find /2/scratch/ ! -type d -a ! -type f | wc -l
>> 12824
>>
>> find /2/scratch/.glusterfs -type l | wc -l
>> 12824
>>
>> find /2/scratch/* | wc -l
>> 12873
>>
>> find /2/scratch/* -type d | wc -l
>> 12857
>>
>> So it looks like I have 16 files and 12857 directories left in /2/scratch,
>> and 12824 links under /2/scratch/.glusterfs/. The counts add up: 70243
>> directories + 16 files + 12824 links = 83083 entries in total, with the
>> extra directories relative to 12857 living under .glusterfs.
>>
>> My first instinct is to ignore (and remove) the many remaining directories
>> that are empty, and to only look closer at those that contain the 16
>> remaining files.
>>
>> Can I ignore the links under /2/scratch/.glusterfs?
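>>
>> In case it matters, here's how I'd spot-check where a few of those links
>> point (just a quick sketch against my brick path):
>>
>> find /2/scratch/.glusterfs -type l | head -5 | xargs -n1 ls -l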
>>
>> As for the 16 files that remain, I can migrate them manually if necessary,
>> but I'll first look at all the brick filesystems to see whether they
>> already exist elsewhere in some form.
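>>
>> For example, a rough sweep like this (just a sketch; it uses the /1/scratch
>> brick paths from the volume status quoted below, and it ignores filenames
>> with spaces):
>>
>> # list leftover regular files, skipping the .glusterfs tree,
>> # and look for each one on the 12 remaining bricks
>> find /2/scratch -path /2/scratch/.glusterfs -prune -o -type f -print |
>> while read f; do
>>     rel=${f#/2/scratch/}
>>     for h in 172.16.1.{1..12}; do
>>         ssh $h ls -l "/1/scratch/$rel" 2>/dev/null
>>     done
>> done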
>>
>> Do you recommend I do anything else?
>
> Your solutions are good :-). Could you please send us the configuration
> and logs of the setup so that we can debug why those files didn't move?
> It would be good if we could find the reason and fix it in the next
> release so that this issue is prevented.
Sure, I'd be happy to help. What exactly should I send you in terms of
configuration? Just my /etc/glusterfs/glusterd.vol? The output of some
gluster commands? Something else?

In terms of logs, what do you want to see? Do you want this file in its
entirety?
-rw------- 1 root root 145978018 May 31 08:10
/var/log/glusterfs/scratch-rebalance.log
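
In case it's useful, I could first pull just the error and warning lines out
of that log, something like this (assuming the usual "[timestamp] E [...]"
gluster log line format):

grep -E '\] (E|W) \[' /var/log/glusterfs/scratch-rebalance.log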
Anything else?
>
> CC developers who work on this feature to look into the issue.
>
> Just curious, did the remove-brick status output say whether any failures
> happened?
I don't recall seeing anything in the remove-brick status output that
indicated any failures.
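
To double-check, I can re-run

gluster volume remove-brick scratch 172.16.1.1:/2/scratch status

and look again for any reported failures, just to be sure.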
tp
>
> Pranith
>
>>
>> Thanks,
>> tp
>>
>>
>> On Fri, 30 May 2014, Pranith Kumar Karampuri wrote:
>>
>>>
>>>
>>> ----- Original Message -----
>>>> From: "Todd Pfaff" <pfaff at rhpcs.mcmaster.ca>
>>>> To: gluster-users at gluster.org
>>>> Sent: Saturday, May 31, 2014 1:58:33 AM
>>>> Subject: Re: [Gluster-users] recommended upgrade procedure from
>>>> gluster-3.2.7 to gluster-3.5.0
>>>>
>>>> On Sat, 24 May 2014, Todd Pfaff wrote:
>>>>
>>>>> I have a gluster distributed volume that has been running nicely with
>>>>> gluster-3.2.7 for the past two years and I now want to upgrade this to
>>>>> gluster-3.5.0.
>>>>>
>>>>> What is the recommended procedure for such an upgrade? Is it necessary to
>>>>> upgrade from 3.2.7 to 3.3 to 3.4 to 3.5, or can I safely transition from
>>>>> 3.2.7 directly to 3.5.0?
>>>>
>>>> Nobody responded, so I decided to wing it and hope for the best.
>>>>
>>>> I also decided to go directly from 3.2.7 to 3.4.3 and not bother with
>>>> 3.5 yet.
>>>>
>>>> The volume is distributed across 13 bricks. Formerly these were on 13
>>>> nodes, one brick per node, but I recently lost one of those nodes.
>>>> I've moved the brick from the dead node to be a second brick on one of
>>>> the remaining 12 nodes. I currently have this state:
>>>>
>>>> gluster volume status
>>>> Status of volume: scratch
>>>> Gluster process                                 Port    Online  Pid
>>>> ------------------------------------------------------------------------------
>>>> Brick 172.16.1.1:/1/scratch                     49152   Y       6452
>>>> Brick 172.16.1.2:/1/scratch                     49152   Y       10783
>>>> Brick 172.16.1.3:/1/scratch                     49152   Y       10164
>>>> Brick 172.16.1.4:/1/scratch                     49152   Y       10465
>>>> Brick 172.16.1.5:/1/scratch                     49152   Y       10186
>>>> Brick 172.16.1.6:/1/scratch                     49152   Y       10388
>>>> Brick 172.16.1.7:/1/scratch                     49152   Y       10386
>>>> Brick 172.16.1.8:/1/scratch                     49152   Y       10215
>>>> Brick 172.16.1.9:/1/scratch                     49152   Y       11059
>>>> Brick 172.16.1.10:/1/scratch                    49152   Y       9238
>>>> Brick 172.16.1.11:/1/scratch                    49152   Y       9466
>>>> Brick 172.16.1.12:/1/scratch                    49152   Y       10777
>>>> Brick 172.16.1.1:/2/scratch                     49153   Y       6461
>>>>
>>>>
>>>> What I want to do next is remove brick 172.16.1.1:/2/scratch and have
>>>> all the files it contains redistributed across the other 12 bricks.
>>>>
>>>> What's the correct procedure for this? Is it as simple as:
>>>>
>>>> gluster volume remove-brick scratch 172.16.1.1:/2/scratch start
>>>>
>>>> and then wait for all files to be moved off that brick? Or do I also
>>>> have to do:
>>>>
>>>> gluster volume remove-brick scratch 172.16.1.1:/2/scratch commit
>>>>
>>>> and then wait again for files to be moved? Or do I have to do something
>>>> else, such as a rebalance, to cause the files to be moved?
>>>
>>> 'gluster volume remove-brick scratch 172.16.1.1:/2/scratch start' does
>>> start the process of migrating all the files to the other bricks. You need
>>> to observe the progress of the process using 'gluster volume remove-brick
>>> scratch 172.16.1.1:/2/scratch status'. Once this command says 'completed',
>>> you should execute 'gluster volume remove-brick scratch
>>> 172.16.1.1:/2/scratch commit' to completely remove this brick from the
>>> volume. I am a bit paranoid, so I would check that no files are left
>>> behind by doing a find on the brick 172.16.1.1:/2/scratch just before
>>> issuing the 'commit' :-).
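>>>
>>> A minimal check could be something like this, which skips the .glusterfs
>>> housekeeping directory and lists any regular files left behind:
>>>
>>> find /2/scratch -type f -not -path '*/.glusterfs/*'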
>>>
>>> Pranith.
>>>
>>>>
>>>> How do I know when everything has been moved safely to other bricks and
>>>> the then-empty brick is no longer involved in the cluster?
>>>>
>>>> Thanks,
>>>> tp
>>>>
>>>> _______________________________________________
>>>> Gluster-users mailing list
>>>> Gluster-users at gluster.org
>>>> http://supercolony.gluster.org/mailman/listinfo/gluster-users
>>>>
>>>
>>>
>>
>
>