[Gluster-users] 3.6: Migrate one brick's data to multiple remaining bricks

Anatoly Pugachev matorola at gmail.com
Fri Jan 16 11:04:23 UTC 2015


Iain,

if you don't want glusterfs to migrate your data, there is always the
force option for remove-brick, which removes the brick without any data
migration. You can then back up/transfer/move the data yourself before
reformatting to ZFS. I actually prefer to migrate the data myself, since
glusterfs does not give you much control over the migration process
(only a status command is available).
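
For example, a rough sketch of that manual path (the volume name
"myvol", brick path and mount point below are made up for illustration,
not taken from your setup):

    # drop the brick immediately, with no data migration
    gluster volume remove-brick myvol server1:/bricks/xfs1 force

    # then copy the old brick's files back in through a client mount of
    # the volume (not brick to brick), skipping gluster's internal
    # .glusterfs metadata directory
    rsync -av --exclude=.glusterfs /bricks/xfs1/ /mnt/myvol/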

On Fri, Jan 16, 2015 at 12:59 PM, Iain Milne <glusterfs at noognet.org> wrote:

> We're using 3.6 with three servers; one brick each in a (100TB+)
> distributed volume. They all use XFS.
>
> We'd like to move to ZFS, without user interruption.
>
> Is it as simple (with 3.6) as issuing the remove-brick command for the
> first server, waiting for its data to migrate to the other two
> (automatically?), reformatting as ZFS, then adding the brick again?
> Rinse and repeat for the other two servers?
>
> Any two servers currently have enough capacity to hold the data from all
> three.
>
> I've struggled to find much documentation on this, except for the
> following snippet which came from
>
> https://github.com/gluster/glusterfs/blob/release-3.6/doc/release-notes/3.6.0.md
>
> "Prior to 3.6, volume remove-brick <volname> CLI would remove the brick
> from the volume without performing any data migration. Now the default
> behavior has been changed to perform data migration when this command is
> issued."
>
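And for the migration-based flow you ask about, per server it would
look roughly like this (same made-up names as above, one server at a
time):

    # start migrating this brick's data off to the remaining bricks
    gluster volume remove-brick myvol server1:/bricks/xfs1 start

    # poll until the migration is reported as completed
    gluster volume remove-brick myvol server1:/bricks/xfs1 status

    # detach the now-empty brick from the volume
    gluster volume remove-brick myvol server1:/bricks/xfs1 commit

    # reformat as ZFS, then add the brick back and rebalance
    gluster volume add-brick myvol server1:/bricks/zfs1
    gluster volume rebalance myvol start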