[Gluster-users] Fwd: Gluster Volume Replication using 2 AWS instances on Autoscaling
bernhard glomm
bernhard.glomm at ecologic.eu
Thu Mar 13 17:25:08 UTC 2014
I meant: volume heal <vol-name> full
(note the "full" at the end)
(and very much sorry for the silly footer in my last post! :-/)
Sorry Vijay, I don't understand what you mean.
I had the wrong filesystem under one brick recently
(I sort of "forgot" to mount the right disk before creating the gluster volume).
I did the steps: remove brick, change the brick, add brick, heal full
(see the sketch below), and the new brick got populated quickly.
All fine…
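
A minimal sketch of that sequence, run from the gluster CLI, with hypothetical
names (volume "myvol", replacement brick node2:/export/brick1), assuming a
replica 2 volume on 3.4.x:

    # drop the replica count to 1 and remove the brick sitting on the bad disk
    volume remove-brick myvol replica 1 node2:/export/brick1 force
    # mount the correct disk under the brick path, then re-add the brick
    volume add-brick myvol replica 2 node2:/export/brick1 force
    # trigger a full self-heal so the re-added brick gets repopulated
    volume heal myvol full
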
Best regards
Bernhard
On Mar 13, 2014, at 6:11 PM, Vijay Bellur <vbellur at redhat.com> wrote:
>
> Begin forwarded message:
>
> From: bernhard glomm <bernhard.glomm at ecologic.eu>
> Subject: Re: [Gluster-users] Gluster Volume Replication using 2 AWS instances on Autoscaling
> Date: March 13, 2014 6:08:10 PM GMT+01:00
> To: Vijay Bellur <vbellur at redhat.com>
>
> ??? I thought replace-brick was not recommended at the moment.
> In 3.4.2, on a replica 2 volume, I successfully use:
replace-brick is not recommended for data migration. commit force just updates the volume topology and does not perform any data migration.
-Vijay
>
> volume remove-brick <vol-name> replica 1 <brick-name> force
> # replace the old brick with the new one, mount another disk or whatever, then
> volume add-brick <vol-name> replica 2 <brick-name> force
> volume heal <vol-name>
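>
> A quick way to check when the re-added brick has caught up (a sketch, assuming
> gluster 3.4.x; <vol-name> as above):
>
> volume heal <vol-name> info
> # lists entries still pending self-heal per brick; "Number of entries: 0"
> # on every brick means the new brick is fully populated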
>
> hth
>
> Bernhard
>
>
> On Mar 13, 2014, at 5:48 PM, Vijay Bellur <vbellur at redhat.com> wrote:
>
> On 03/13/2014 09:18 AM, Alejandro Planas wrote:
>> Hello,
>>
>> We have 2 AWS instances, 1 brick on each instance, and one replicated volume
>> across both instances. When one of the instances fails completely and
>> autoscaling replaces it with a new one, we have trouble rebuilding the
>> replicated volume.
>>
>> Can anyone shed some light on the gluster commands required to
>> include this new replacement instance (with one brick) as a member of
>> the replicated volume?
>>
>
> You can probably use:
>
> volume replace-brick <volname> <old-brick> <new-brick> commit force
>
> This will remove old-brick from the volume and bring new-brick into the
> volume. Self-healing can then synchronize data to the new brick.
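>
> A minimal sketch with hypothetical names (volume "myvol", failed brick on
> node1, the autoscaled replacement on node2; the new instance must already be
> in the trusted storage pool, e.g. via "peer probe node2"):
>
> volume replace-brick myvol node1:/export/brick1 node2:/export/brick1 commit force
> # trigger self-heal so the data is copied onto node2's brick
> volume heal myvol full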
>
> Regards,
> Vijay
>