[Gluster-users] Re-provisioning a node and its bricks

Eric epretorious at yahoo.com
Sat Sep 8 23:50:11 UTC 2012


FYI: I don't know why, but the second server node required that the volume 
be stopped and restarted before the bricks would be marked as active.
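
In case it saves someone else the head-scratching, the cycle was just the
standard stop/start (the volume name comes from the thread below):

    gluster volume stop Repositories
    gluster volume start Repositories

...after which "gluster volume status Repositories" reported the bricks as active.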


HTH,

Eric P.
Truckee, CA



>________________________________
> From: Eric <epretorious at yahoo.com>
>To: "gluster-users at gluster.org" <gluster-users at gluster.org> 
>Sent: Saturday, September 8, 2012 10:56 AM
>Subject: Re: [Gluster-users] Re-provisioning a node and its bricks
> 
>
>There's a document describing the procedure for Gluster 3.2.x: http://www.gluster.org/community/documentation/index.php/Gluster_3.2:_Brick_Restoration_-_Replace_Crashed_Server
>
>The procedure for Gluster 3.3.0 is _remarkably_ simple:
>
>1. Start the glusterd daemon on the newly re-provisioned server node.
>2. Probe the surviving server node from the recovered/re-provisioned server node.
>3. Restart the glusterd daemon on the recovered/re-provisioned server node.
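>
>Concretely, on 3.3 this amounts to something like the following, run on the
>re-provisioned node (assuming 192.168.1.1 is the surviving peer, per the
>volume info quoted below):
>
>    service glusterd start
>    gluster peer probe 192.168.1.1
>    service glusterd restart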
>
>NOTES:
>1. Do NOT remove the extended filesystem attributes from the bricks while the server node is
>   being recovered/re-provisioned.
>2. Verify that any/all partitions that are used as bricks are mounted before performing these steps.
>3. Verify that any/all iptables firewall rules that are necessary for Gluster to communicate
>   have been added before performing these steps (see the sketch below).
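>
>A minimal iptables sketch for note #3 (Gluster 3.3 uses TCP 24007/24008 for
>management and assigns each brick its own port counting up from 24009, so
>widen the second range to match your brick count; NFS clients additionally
>need 111 and 38465-38467):
>
>    iptables -I INPUT -p tcp --dport 24007:24008 -j ACCEPT
>    iptables -I INPUT -p tcp --dport 24009:24016 -j ACCEPT
>
>And for note #1: the attributes in question can be inspected (not removed!)
>with "getfattr -d -m . -e hex /srv/sda7" - look for trusted.glusterfs.volume-id.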
>
>HTH,
>Eric Pretorious
>
>Truckee, CA
>
>
>
>
>>________________________________
>> From: Kent Liu <kurlez at outlook.com>
>>To: 'John Mark Walker' <johnmark at redhat.com>; 'Eric' <epretorious at yahoo.com> 
>>Cc: gluster-users at gluster.org 
>>Sent: Thursday, September 6, 2012 7:00 PM
>>Subject: RE: [Gluster-users] Re-provisioning a node and its bricks
>> 
>>
>>It would be great if any suggestions from IRC could be shared on this list - Eric's question reflects a common requirement.
>> 
>>Thanks,
>>Kent
>> 
>>From: gluster-users-bounces at gluster.org [mailto:gluster-users-bounces at gluster.org] On Behalf Of John Mark Walker
>>Sent: Thursday, September 06, 2012 3:02 PM
>>To: Eric
>>Cc: gluster-users at gluster.org
>>Subject: Re: [Gluster-users] Re-provisioning a node and its bricks
>> 
>>Eric - was good to see you in San Diego. Glad to see you on the list. 
>> 
>>I would recommend trying the IRC channel tomorrow morning. Should be someone there who can help you. 
>> 
>>-JM
>> 
>>
>>________________________________
>>
>>>I've created a distributed replicated volume:
>>>> gluster> volume info
>>>>  
>>>> Volume Name: Repositories
>>>> Type: Distributed-Replicate
>>>> Volume ID: 926262ae-2aa6-4bf7-b19e-cf674431b06c
>>>> Status: Started
>>>> Number of Bricks: 2 x 2 = 4
>>>> Transport-type: tcp
>>>> Bricks:
>>>> Brick1: 192.168.1.1:/srv/sda7
>>>> Brick2: 192.168.1.2:/srv/sda7
>>>> Brick3: 192.168.1.1:/srv/sdb7
>>>> Brick4: 192.168.1.2:/srv/sdb7
>>>
>>>...by allocating physical partitions on each HDD of each node for the volume's bricks: e.g.,
>>>
>>>> [eric at sn1 srv]$ mount | grep xfs
>>>> /dev/sda7 on /srv/sda7 type xfs (rw)
>>>> /dev/sdb7 on /srv/sdb7 type xfs (rw)
>>>> /dev/sda8 on /srv/sda8 type xfs (rw)
>>>> /dev/sdb8 on /srv/sdb8 type xfs (rw)
>>>> /dev/sda9 on /srv/sda9 type xfs (rw)
>>>> /dev/sdb9 on /srv/sdb9 type xfs (rw)
>>>> /dev/sda10 on /srv/sda10 type xfs (rw)
>>>> /dev/sdb10 on /srv/sdb10 type xfs (rw)
>>>
>>>I plan to re-provision both nodes (e.g., convert them from CentOS -> SLES) and need to preserve the data on the current bricks.
>>>
>>>It seems to me that the procedure for this endeavor would be to: detach the node that will be re-provisioned; re-provision the node; add the node back to the trusted storage pool; and then add the bricks back to the volume - *but* this plan fails at Step #1, i.e.:
>>>
>>> * When attempting to detach the second node from the volume, the Console Manager 
>>>   complains "Brick(s) with the peer 192.168.1.2 exist in cluster".
>>> * When attempting to remove the second node's bricks from the volume, the Console 
>>>   Manager complains "Bricks not from same subvol for replica".
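>>>
>>>(Reconstructed for reference - not the verbatim invocations, but the failing
>>>commands were presumably along these lines:
>>>
>>>    gluster peer detach 192.168.1.2
>>>    gluster volume remove-brick Repositories 192.168.1.2:/srv/sda7 192.168.1.2:/srv/sdb7
>>>
>>>The second error follows from the layout above: Brick2 and Brick4 sit in
>>>different replica subvolumes, so they can't be removed as a pair without
>>>also lowering the replica count.)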
>>>
>>>Is this even feasible? I've already verified that bricks can be *added* to the volume (by adding two additional local partitions) but I'm not sure where to begin preparing the nodes for re-provisioning.
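>>>
>>>(That add test was presumably something like the following - the exact
>>>partitions are a guess from the mount listing above:
>>>
>>>    gluster volume add-brick Repositories 192.168.1.1:/srv/sda8 192.168.1.2:/srv/sda8
>>>
>>>i.e., one brick per node, so the new pair forms its own replica subvolume.)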
>>>
>>>Eric Pretorious
>>>Truckee, CA
>>>