[Gluster-users] migration operations: Stopping a migration

Eric epretorious at yahoo.com
Thu Sep 6 00:05:37 UTC 2012


I've created a distributed replicated volume:

> gluster> volume info
>  
> Volume Name: Repositories
> Type: Distributed-Replicate
> Volume ID: 926262ae-2aa6-4bf7-b19e-cf674431b06c
> Status: Started
> Number of Bricks: 2 x 2 = 4
> Transport-type: tcp
> Bricks:
> Brick1: 192.168.1.1:/srv/sda7
> Brick2: 192.168.1.2:/srv/sda7
> Brick3: 192.168.1.1:/srv/sdb7
> Brick4: 192.168.1.2:/srv/sdb7


...and have begun migrating data from one brick to another as a proof of concept (PoC):

> gluster> volume replace-brick Repositories 192.168.1.1:/srv/sda7 192.168.1.1:/srv/sda8 start
> replace-brick started successfully
> 
> gluster> volume replace-brick Repositories 192.168.1.1:/srv/sda7 192.168.1.1:/srv/sda8 status
> Number of files migrated = 5147       Current file= /centos/5.8/os/x86_64/CentOS/gnome-pilot-conduits-2.0.13-7.el5.x86_64.rpm 
> gluster> volume replace-brick Repositories 192.168.1.1:/srv/sda7 192.168.1.1:/srv/sda8 status
> Number of files migrated = 24631        Migration complete 

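For reference, here is the full replace-brick life-cycle as I understand it from the CLI help -- start/status plus the commit and abort sub-commands (a dry-run sketch that echoes the commands rather than executing them; the brick paths are the ones from my volume above):

```shell
# Dry-run sketch of the gluster replace-brick life-cycle.
# Drop the leading "echo" on each line to actually run the commands.
VOL=Repositories
OLD=192.168.1.1:/srv/sda7
NEW=192.168.1.1:/srv/sda8

echo gluster volume replace-brick "$VOL" "$OLD" "$NEW" start    # begin migration
echo gluster volume replace-brick "$VOL" "$OLD" "$NEW" status   # poll progress
echo gluster volume replace-brick "$VOL" "$OLD" "$NEW" commit   # finalize after "Migration complete"
echo gluster volume replace-brick "$VOL" "$OLD" "$NEW" abort    # or: stop a migration mid-way
```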

After the migration finishes, though, the heal output still lists the old brick (192.168.1.1:/srv/sda7) rather than the new one (192.168.1.1:/srv/sda8):

> gluster> volume heal Repositories info                                                          
> Heal operation on volume Repositories has been successful
> 
> Brick 192.168.1.1:/srv/sda7
> Number of entries: 0
> 
> Brick 192.168.1.2:/srv/sda7
> Number of entries: 0
> 
> Brick 192.168.1.1:/srv/sdb7
> Number of entries: 0
> 
> Brick 192.168.1.2:/srv/sdb7
> Number of entries: 0

...and the GlusterFS extended attributes (xattrs) are still intact on the old brick:

> [eric at sn1 ~]$ for x in sda7 sdb7 sda8 ; do sudo getfattr -m - /srv/$x 2> /dev/null ; done
> # file: srv/sda7
> trusted.afr.Repositories-client-0
> trusted.afr.Repositories-client-1
> trusted.afr.Repositories-io-threads
> trusted.afr.Repositories-replace-brick
> trusted.gfid
> trusted.glusterfs.dht
> trusted.glusterfs.pump-path
> trusted.glusterfs.volume-id
> 
> # file: srv/sdb7
> trusted.afr.Repositories-client-2
> trusted.afr.Repositories-client-3
> trusted.gfid
> trusted.glusterfs.dht
> trusted.glusterfs.volume-id
> 
> # file: srv/sda8
> trusted.afr.Repositories-io-threads
> trusted.afr.Repositories-replace-brick
> trusted.gfid
> trusted.glusterfs.volume-id

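(I realize that I could probably strip the leftover attributes from the old brick by hand with setfattr -x -- a dry-run sketch, echoing rather than executing so nothing is actually modified; the attribute names are the ones shown in the getfattr output above:)

```shell
# Dry-run sketch: strip the stale GlusterFS xattrs from the old brick.
# Drop the leading "echo" to actually run it.
BRICK=/srv/sda7   # the old brick from the volume layout above

for attr in trusted.glusterfs.volume-id \
            trusted.glusterfs.dht \
            trusted.afr.Repositories-client-0 \
            trusted.afr.Repositories-client-1; do
    echo sudo setfattr -x "$attr" "$BRICK"
done
```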

Have I missed a step? Or is this clean-up a bug, or functionality that hasn't been implemented yet?

Eric Pretorious
Truckee, CA
