[Gluster-users] Fwd: Gluster Volume Replication using 2 AWS instances on Autoscaling
Vijay Bellur
vbellur at redhat.com
Thu Mar 13 17:11:32 UTC 2014
On 03/13/2014 10:39 PM, bernhard glomm wrote:
>
> Begin forwarded message:
>
> *From: *bernhard glomm <bernhard.glomm at ecologic.eu>
> *Subject: **Re: [Gluster-users] Gluster Volume Replication using 2 AWS
> instances on Autoscaling*
> *Date: *March 13, 2014 6:08:10 PM GMT+01:00
> *To: *Vijay Bellur <vbellur at redhat.com>
>
> I thought replace-brick was not recommended at the moment?
> In 3.4.2, on a replica 2 volume, I successfully use:
replace-brick is not recommended for data migration. "commit force" only
performs a volume topology update and does not migrate any data.
-Vijay
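
As a minimal sketch of that approach, assuming a hypothetical volume "gv0",
brick path /export/brick1, and hosts "old-node"/"new-node" (the replacement
instance must already be part of the trusted pool):

    # hypothetical names; adjust volume, hosts and brick paths to your setup
    gluster peer probe new-node
    gluster volume replace-brick gv0 old-node:/export/brick1 \
        new-node:/export/brick1 commit force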
>
> volume remove-brick <vol-name> replica 1 <brick-name> force
> # replace the old brick with the new one (mount another disk or whatever), then
> volume add-brick <vol-name> replica 2 <brick-name> force
> volume heal <vol-name>
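
Filled in with hypothetical names (volume "gv0", the replacement brick on
"node2"), the sequence quoted above would look roughly like this:

    # drop the dead brick, shrinking the volume to replica 1
    gluster volume remove-brick gv0 replica 1 node2:/export/brick1 force
    # prepare the replacement brick (new disk, new instance, ...), then re-add it
    gluster volume add-brick gv0 replica 2 node2:/export/brick1 force
    # trigger self-heal so data is copied onto the new brick
    gluster volume heal gv0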
>
> hth
>
> Bernhard
>
>
> On Mar 13, 2014, at 5:48 PM, Vijay Bellur <vbellur at redhat.com> wrote:
>
> On 03/13/2014 09:18 AM, Alejandro Planas wrote:
>> Hello,
>>
>> We have 2 AWS instances, 1 brick on each instance, and one volume
>> replicated across both instances. When one of the instances fails
>> completely and autoscaling replaces it with a new one, we have trouble
>> re-establishing the replicated volume.
>>
>> Can anyone shed some light on the gluster commands required to
>> include this new replacement instance (with one brick) as a member of
>> the replicated volume?
>>
>
> You can probably use:
>
> volume replace-brick <volname> <old-brick> <new-brick> commit force
>
> This will remove the old brick from the volume and bring the new brick
> into the volume. Self-healing can then synchronize data to the new brick.
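
For the self-heal step, a hedged sketch with a hypothetical volume name "gv0":
after the replace-brick, a full heal can be triggered so the empty new brick
gets populated, and its progress checked afterwards:

    gluster volume heal gv0 full    # crawl the volume and populate the new brick
    gluster volume heal gv0 info    # list entries that still need healing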
>
> Regards,
> Vijay
>