[Gluster-users] glusterFS 3.6.2 migrate data by remove brick command
    Sander Zijlstra 
    sander.zijlstra at surfsara.nl
       
    Tue Apr 14 12:14:00 UTC 2015
    
    
  
Jiri,
Thanks for the information; I just commented on a question about op-version….
I upgraded all systems to 3.6.2; does this mean they will all use the correct op-version and not revert to old-style behaviour?
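
For reference, a minimal sketch of how to check this, assuming the default glusterd state directory and that the whole cluster should end up at the 3.6 op-version (30600):

    # on each node, see which op-version glusterd is currently operating at
    grep operating-version /var/lib/glusterd/glusterd.info

    # if it still shows an older value, it can be raised cluster-wide
    gluster volume set all cluster.op-version 30600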
Met vriendelijke groet / kind regards,
Sander Zijlstra
| Linux Engineer | SURFsara | Science Park 140 | 1098XG Amsterdam | T +31 (0)6 43 99 12 47 | sander.zijlstra at surfsara.nl | www.surfsara.nl |
Regular day off on friday
> On 14 Apr 2015, at 14:11, Jiri Hoogeveen <j.hoogeveen at bluebillywig.com> wrote:
> 
> Hi Sander,
> 
> 
>> Since version 3.6 the remove brick command migrates the data away from the brick being removed, right?
> It should :)
> https://access.redhat.com/documentation/en-US/Red_Hat_Storage/3/pdf/Administration_Guide/Red_Hat_Storage-3-Administration_Guide-en-US.pdf (page 100) is a good place to start.
> I think it is the most complete documentation.
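> As a rough sketch of the workflow (volume and brick names below are placeholders, not your actual layout):
> 
>     # start migrating data off the brick; the migration runs in the background
>     gluster volume remove-brick myvol server1:/export/brick1 start
> 
>     # once the migration has completed, make the removal final
>     gluster volume remove-brick myvol server1:/export/brick1 commit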
> 
>> When I have replicated bricks (replica 2), I also need to do "remove brick <volume> replica 2 brick1 brick2 …", right?
> 
> Yes, you need to remove both replicas (the two bricks of a replica pair) at the same time.
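> For a replica 2 volume that would look roughly like this, with the two bricks being one replica pair (again hypothetical names):
> 
>     gluster volume remove-brick myvol replica 2 server1:/export/brick1 server2:/export/brick1 start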
> 
> 
>> Last but not least, is there any way to tell how long a “remove brick” will take when it’s moving the data? I have dual 10 Gb Ethernet between the cluster members, and the brick storage is a RAID-6 set which can read 400-600 MB/s without any problems.
> 
> 
> It depends on the size of the disk, the number of files, and the type of files. Network speed is less of an issue than the I/O on the disks/bricks.
> To migrate data from one disk to another (much like self-healing), GlusterFS will scan all the files on the disk, which can cause high I/O on that disk.
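> You can follow the migration with the status sub-command (same placeholder names as above); as far as I know, 3.6 reports scanned and rebalanced file counts plus run time rather than an estimated time remaining:
> 
>     gluster volume remove-brick myvol server1:/export/brick1 server2:/export/brick1 status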
> 
> Because you also had some performance issues when you added bricks, I would expect the same with remove-brick, so do this at night if possible.
> 
> 
> Grtz, Jiri
> 
> 
>> On 14 Apr 2015, at 12:53, Sander Zijlstra <sander.zijlstra at surfsara.nl> wrote:
>> 
>> LS,
>> 
>> I’m planning to decommission a few servers from my cluster, so to confirm:
>> 
>> Since version 3.6 the remove brick command migrates the data away from the brick being removed, right?
>> When I have replicated bricks (replica 2), I also need to do "remove brick <volume> replica 2 brick1 brick2 …", right?
>> 
>> Last but not least, is there any way to tell how long a “remove brick” will take when it’s moving the data? I have dual 10 Gb Ethernet between the cluster members, and the brick storage is a RAID-6 set which can read 400-600 MB/s without any problems.
>> 
>> Met vriendelijke groet / kind regards,
>> 
>> Sander Zijlstra
>> 
>> | Linux Engineer | SURFsara | Science Park 140 | 1098XG Amsterdam | T +31 (0)6 43 99 12 47 | sander.zijlstra at surfsara.nl | www.surfsara.nl |
>> 
>> Regular day off on friday
>> 
>> _______________________________________________
>> Gluster-users mailing list
>> Gluster-users at gluster.org
>> http://www.gluster.org/mailman/listinfo/gluster-users
> 