[Gluster-users] 3.6: Migrate one brick's data to multiple remaining bricks
Iain Milne
glusterfs at noognet.org
Fri Jan 16 11:32:56 UTC 2015
> If you don't want glusterfs to migrate your data, there's always an option
> --force for remove-brick, which will remove the brick without data
> migration. So you will be able to backup/transfer/move the data yourself
> before reformatting to zfs. I actually prefer to migrate data myself, since
> glusterfs does not offer much control over the migration process (only a
> status command is available).
If the migration were done manually, the users would lose access to a third
of their data while it happened (?), and that wouldn't go down too well :-)
The volume itself is backed up (incrementally) on a nightly basis.
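
For reference, the no-migration variant referred to above is the force form
of the same command. A minimal sketch, where the volume name and brick path
are only placeholders:

  # Removes the brick immediately, with no data migration; files that live
  # on it stop being reachable through the volume until copied back by hand.
  gluster volume remove-brick myvol server1:/data/brick1 force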
>
>
> On Fri, Jan 16, 2015 at 12:59 PM, Iain Milne <glusterfs at noognet.org> wrote:
>
>
> We're using 3.6 with three servers, one brick each, in a (100TB+)
> distributed volume. They all use XFS.
>
> We'd like to move to ZFS, without user interruption.
>
> Is it as simple (with 3.6) as issuing the remove-brick command for the
> first server, waiting for its data to migrate (automatically?) to the
> other two, reformatting as ZFS, then adding the brick again? Rinse and
> repeat for the other two servers?
>
> Any two servers currently have enough capacity to hold the data from
> all three.
>
> I've struggled to find much documentation on this, except for the
> following snippet, which came from
> https://github.com/gluster/glusterfs/blob/release-3.6/doc/release-notes/3.6.0.md
>
> "Prior to 3.6, volume remove-brick <volname> CLI would remove the brick
> from the volume without performing any data migration. Now the default
> behavior has been changed to perform data migration when this command
> is issued."
>
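
For concreteness, the per-brick cycle described above would look roughly
like this on 3.6, using the explicit start/status/commit form rather than
relying on the new default. The volume name, server names and brick paths
are only placeholders:

  # 1. Drain the first server's brick; its files are migrated onto the
  #    remaining bricks in the background while clients stay online.
  gluster volume remove-brick myvol server1:/data/brick1 start

  # 2. Poll until the migration shows "completed" for that brick.
  gluster volume remove-brick myvol server1:/data/brick1 status

  # 3. Commit the removal once migration has finished.
  gluster volume remove-brick myvol server1:/data/brick1 commit

  # 4. Reformat the freed disk as ZFS (outside gluster), then re-add it.
  gluster volume add-brick myvol server1:/data/brick1

  # 5. Spread data back onto the re-added brick, then repeat the whole
  #    cycle for the second and third servers.
  gluster volume rebalance myvol start
  gluster volume rebalance myvol status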