[Gluster-users] Re-provisioning a node and its bricks

Kent Liu kurlez at outlook.com
Fri Sep 7 02:00:50 UTC 2012


It would be great if any suggestions from IRC could be shared on this list. Eric’s question addresses a common requirement.


Thanks,

Kent


From: gluster-users-bounces at gluster.org [mailto:gluster-users-bounces at gluster.org] On Behalf Of John Mark Walker
Sent: Thursday, September 06, 2012 3:02 PM
To: Eric
Cc: gluster-users at gluster.org
Subject: Re: [Gluster-users] Re-provisioning a node and its bricks


Eric - was good to see you in San Diego. Glad to see you on the list. 


I would recommend trying the IRC channel tomorrow morning. There should be someone there who can help you.


-JM


  _____  

I've created a distributed replicated volume:

> gluster> volume info
>  
> Volume Name: Repositories
> Type: Distributed-Replicate
> Volume ID: 926262ae-2aa6-4bf7-b19e-cf674431b06c
> Status: Started
> Number of Bricks: 2 x 2 = 4
> Transport-type: tcp
> Bricks:
> Brick1: 192.168.1.1:/srv/sda7
> Brick2: 192.168.1.2:/srv/sda7
> Brick3: 192.168.1.1:/srv/sdb7
> Brick4: 192.168.1.2:/srv/sdb7


...by allocating physical partitions on each HDD of each node for the volumes' bricks, e.g.:


> [eric at sn1 srv]$ mount | grep xfs
> /dev/sda7 on /srv/sda7 type xfs (rw)
> /dev/sdb7 on /srv/sdb7 type xfs (rw)
> /dev/sda8 on /srv/sda8 type xfs (rw)
> /dev/sdb8 on /srv/sdb8 type xfs (rw)
> /dev/sda9 on /srv/sda9 type xfs (rw)
> /dev/sdb9 on /srv/sdb9 type xfs (rw)
> /dev/sda10 on /srv/sda10 type xfs (rw)
> /dev/sdb10 on /srv/sdb10 type xfs (rw)
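
If I'm reading the "2 x 2 = 4" layout correctly, GlusterFS pairs bricks in the order they were listed at volume-creation time, so the replica subvolumes should be:

    replica subvolume 0:  192.168.1.1:/srv/sda7  <->  192.168.1.2:/srv/sda7
    replica subvolume 1:  192.168.1.1:/srv/sdb7  <->  192.168.1.2:/srv/sdb7

i.e., each brick on node 1 is mirrored by the matching partition on node 2, and node 2's two bricks belong to two different replica subvolumes.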

I plan to re-provision both nodes (e.g., convert them from CentOS -> SLES) and need to preserve the data on the current bricks.

It seems to me that the procedure for this endeavor would be to: (1) detach the node that will be re-provisioned; (2) re-provision the node; (3) add the node back to the trusted storage pool; and then (4) add the bricks back to the volume - *but* this plan fails at Step #1, i.e.:

 * When attempting to detach the second node from the volume, the Console Manager 
   complains "Brick(s) with the peer 192.168.1.2 exist in cluster".
 * When attempting to remove the second node's bricks from the volume, the Console 
   Manager complains "Bricks not from same subvol for replica".
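
For clarity, the failed attempts were roughly of the form below (a rough sketch; the exact CLI invocations may have differed):

    # Attempted Step #1: detach the peer that will be re-provisioned
    gluster peer detach 192.168.1.2
    # => "Brick(s) with the peer 192.168.1.2 exist in cluster"

    # Attempted alternative: remove that peer's bricks from the volume first
    gluster volume remove-brick Repositories 192.168.1.2:/srv/sda7 192.168.1.2:/srv/sdb7
    # => "Bricks not from same subvol for replica"

The second error presumably appears because 192.168.1.2:/srv/sda7 and 192.168.1.2:/srv/sdb7 sit in different replica subvolumes (each is mirrored by a brick on 192.168.1.1), so together they don't form a complete replica set that remove-brick will accept.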

Is this even feasible? I've already verified that bricks can be *added* to the volume (by adding two additional local partitions), but I'm not sure where to begin preparing the nodes for re-provisioning.
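
For reference, the add-brick test was something like the following (brick paths purely illustrative; on a replica-2 volume, bricks must be added in multiples of two):

    gluster volume add-brick Repositories 192.168.1.1:/srv/sda8 192.168.1.2:/srv/sda8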

Eric Pretorious
Truckee, CA


_______________________________________________
Gluster-users mailing list
Gluster-users at gluster.org
http://gluster.org/cgi-bin/mailman/listinfo/gluster-users

