[Gluster-users] Re-provisioning a node and its bricks

Eric epretorious at yahoo.com
Thu Sep 6 05:48:12 UTC 2012


I've created a distributed replicated volume:


> gluster> volume info
>  
> Volume Name: Repositories
> Type: Distributed-Replicate
> Volume ID: 926262ae-2aa6-4bf7-b19e-cf674431b06c
> Status: Started
> Number of Bricks: 2 x 2 = 4
> Transport-type: tcp
> Bricks:
> Brick1: 192.168.1.1:/srv/sda7
> Brick2: 192.168.1.2:/srv/sda7
> Brick3: 192.168.1.1:/srv/sdb7
> Brick4: 192.168.1.2:/srv/sdb7

...by allocating physical partitions on each HDD of each node for the volumes' bricks, e.g.:


> [eric at sn1 srv]$ mount | grep xfs
> /dev/sda7 on /srv/sda7 type xfs (rw)
> /dev/sdb7 on /srv/sdb7 type xfs (rw)
> /dev/sda8 on /srv/sda8 type xfs (rw)
> /dev/sdb8 on /srv/sdb8 type xfs (rw)
> /dev/sda9 on /srv/sda9 type xfs (rw)
> /dev/sdb9 on /srv/sdb9 type xfs (rw)
> /dev/sda10 on /srv/sda10 type xfs (rw)
> /dev/sdb10 on /srv/sdb10 type xfs (rw)
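
(For reference, IIRC the volume was created with the bricks listed in replica pairs, something along these lines:)

    # Distributed-Replicate, 2 x 2 = 4: adjacent bricks form a replica pair
    gluster volume create Repositories replica 2 transport tcp \
        192.168.1.1:/srv/sda7 192.168.1.2:/srv/sda7 \
        192.168.1.1:/srv/sdb7 192.168.1.2:/srv/sdb7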

I plan to re-provision both nodes (i.e., convert them from CentOS to SLES) and need to preserve the data on the current bricks.

It seems to me that the procedure for this endeavor would be to: (1) detach the node that will be re-provisioned; (2) re-provision the node; (3) add the node back to the trusted storage pool; and then (4) add the bricks back to the volume. *But* this plan fails at Step #1 (a rough transcript of what I tried follows the list below), i.e.:

 * When attempting to detach the second node from the trusted storage pool, the Console
   Manager complains "Brick(s) with the peer 192.168.1.2 exist in cluster".
 * When attempting to remove the second node's bricks from the volume, the Console
   Manager complains "Bricks not from same subvol for replica".

Is this even feasible? I've already verified that bricks can be *added* to the volume (by adding two additional local partitions), but I'm not sure where to begin preparing the nodes for re-provisioning.
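
E.g., this kind of add-brick was accepted without complaint (the brick paths here are only illustrative; I used two of the spare partitions shown above):

    # Bricks have to be added in multiples of the replica count (2)
    gluster volume add-brick Repositories \
        192.168.1.1:/srv/sda8 192.168.1.1:/srv/sdb8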

Eric Pretorious
Truckee, CA