[Gluster-users] Re-provisioning a node and its bricks
epretorious at yahoo.com
Sat Sep 8 17:56:32 UTC 2012
There's a document describing the procedure for Gluster 3.2.x: http://www.gluster.org/community/documentation/index.php/Gluster_3.2:_Brick_Restoration_-_Replace_Crashed_Server
The procedure for Gluster 3.3.0 is _remarkably_ simple:
1. Start the glusterd daemon on the newly re-provisioned server node.
2. Probe the surviving server node from the recovered/re-provisioned server node.
3. Restart the glusterd daemon on the recovered/re-provisioned server node (a command sketch follows the notes below).
A few notes:
1. Do NOT remove the extended file system attributes from the bricks on the server node while it is being recovered/re-provisioned.
2. Verify that any/all partitions that are used as bricks are mounted before performing these steps.
3. Verify that any/all iptables firewall rules that are necessary for Gluster to communicate have been added before performing these steps.
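For example, on the newly re-provisioned node the sequence would look roughly like this (a sketch only: the hostname sn2, the sysvinit "service" invocation, and 192.168.1.1 as the surviving peer are assumptions based on the layout quoted below):

  # On the recovered/re-provisioned node, with the brick partitions mounted
  # and the firewall rules already in place:
  [root at sn2 ~]# mount | grep /srv                # bricks mounted?
  [root at sn2 ~]# service glusterd start           # step 1: start glusterd
  [root at sn2 ~]# gluster peer probe 192.168.1.1   # step 2: probe the surviving node
  [root at sn2 ~]# service glusterd restart         # step 3: restart glusterd
  [root at sn2 ~]# gluster peer status              # confirm the pool/volume info synced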
> From: Kent Liu <kurlez at outlook.com>
>To: 'John Mark Walker' <johnmark at redhat.com>; 'Eric' <epretorious at yahoo.com>
>Cc: gluster-users at gluster.org
>Sent: Thursday, September 6, 2012 7:00 PM
>Subject: RE: [Gluster-users] Re-provisioning a node and its bricks
>It would be great if any suggestions from IRC could be shared on this list. Eric's question describes a common requirement.
>From: gluster-users-bounces at gluster.org [mailto:gluster-users-bounces at gluster.org] On Behalf Of John Mark Walker
>Sent: Thursday, September 06, 2012 3:02 PM
>Cc: gluster-users at gluster.org
>Subject: Re: [Gluster-users] Re-provisioning a node and its bricks
>Eric - was good to see you in San Diego. Glad to see you on the list.
>I would recommend trying the IRC channel tomorrow morning. Should be someone there who can help you.
>>I've created a distributed replicated volume:
>>> gluster> volume info
>>> Volume Name: Repositories
>>> Type: Distributed-Replicate
>>> Volume ID: 926262ae-2aa6-4bf7-b19e-cf674431b06c
>>> Status: Started
>>> Number of Bricks: 2 x 2 = 4
>>> Transport-type: tcp
>>> Brick1: 192.168.1.1:/srv/sda7
>>> Brick2: 192.168.1.2:/srv/sda7
>>> Brick3: 192.168.1.1:/srv/sdb7
>>> Brick4: 192.168.1.2:/srv/sdb7
>>...by allocating physical partitions on each HDD of each node for the volumes' bricks: e.g.,
>>> [eric at sn1 srv]$ mount | grep xfs
>>> /dev/sda7 on /srv/sda7 type xfs (rw)
>>> /dev/sdb7 on /srv/sdb7 type xfs (rw)
>>> /dev/sda8 on /srv/sda8 type xfs (rw)
>>> /dev/sdb8 on /srv/sdb8 type xfs (rw)
>>> /dev/sda9 on /srv/sda9 type xfs (rw)
>>> /dev/sdb9 on /srv/sdb9 type xfs (rw)
>>> /dev/sda10 on /srv/sda10 type xfs (rw)
>>> /dev/sdb10 on /srv/sdb10 type xfs (rw)
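(For reference, a 2 x 2 distributed-replicate volume over that brick layout would have been created with something along these lines; this is a reconstruction from the volume info above, not necessarily the exact command used:)

  [eric at sn1 srv]$ sudo gluster volume create Repositories replica 2 \
        192.168.1.1:/srv/sda7 192.168.1.2:/srv/sda7 \
        192.168.1.1:/srv/sdb7 192.168.1.2:/srv/sdb7
  [eric at sn1 srv]$ sudo gluster volume start Repositories

With replica 2, bricks are paired in the order listed, so Brick1/Brick2 and Brick3/Brick4 form the two replica sets.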
>>I plan to re-provision both nodes (e.g., convert them from CentOS -> SLES) and need to preserve the data on the current bricks.
>>It seems to me that the procedure for this endeavor would be to: detach the node that will be re-provisioned; re-provision the node; add the node back to the trusted storage pool; and then add the bricks back to the volume - *but* this plan fails at Step #1, i.e.,
>> * When attempting to detach the second node from the volume, the Console Manager
>> complains "Brick(s) with the peer 192.168.1.2 exist in cluster".
>> * When attempting to remove the second node's bricks from the volume, the Console
>> Manager complains "Bricks not from same subvol for replica".
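(For context, those failing attempts presumably looked something like the following; the exact command lines are assumptions, but the error messages are the ones quoted above:)

  # Detaching the peer fails while its bricks still belong to a volume:
  [eric at sn1 srv]$ sudo gluster peer detach 192.168.1.2
  #   -> "Brick(s) with the peer 192.168.1.2 exist in cluster"

  # Removing only that peer's bricks fails because each one is half of a
  # replica pair rather than a complete replica subvolume:
  [eric at sn1 srv]$ sudo gluster volume remove-brick Repositories \
        192.168.1.2:/srv/sda7 192.168.1.2:/srv/sdb7
  #   -> "Bricks not from same subvol for replica"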
>>Is this even feasible? I've already verified that bricks can be *added* to the volume (by adding two additional local partitions to the volume) but I'm not sure where to begin preparing the nodes for re-provisioning.