[Gluster-users] FailOver
Punit Dambiwal
hypunit at gmail.com
Tue Oct 28 02:33:04 UTC 2014
Hi Vijay,
Thanks. Can "replace-brick commit force" also be run through the oVirt
Admin Portal, or do I need to use the command line for it?

I am also looking into HA for GlusterFS; let me explain
my architecture in more detail:
1. I have a 4-node GlusterFS cluster, and the same 4 nodes serve as oVirt
hypervisor nodes (storage + compute).
2. When I mount the volume in oVirt, it is mounted against one node's IP
address, and if that node goes down all my VMs pause.
3. I want to use HA or load balancing through CTDB or HAProxy, so that if
any node goes down the VMs are not affected.
4. I am ready to add two additional nodes for HA/LB purposes.
Please suggest a good way to achieve this.
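For the native FUSE mount, one common approach (independent of CTDB or
HAProxy) is to pass backup volfile servers at mount time, so the mount does
not depend on a single node's IP being up. A rough sketch, where the host
names "node1" through "node4" and the volume name "vmstore" are placeholders:

```shell
# Mount a GlusterFS volume with fallback volfile servers.
# If node1 is unreachable at mount time, the client tries node2..node4.
mount -t glusterfs \
  -o backup-volfile-servers=node2:node3:node4 \
  node1:/vmstore /mnt/vmstore
```

Note that backup-volfile-servers only protects the initial volfile fetch;
once mounted, the FUSE client talks to all bricks directly, so a single
node going down should not stall the whole mount.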
On Mon, Oct 27, 2014 at 6:19 PM, Vijay Bellur <vbellur at redhat.com> wrote:
> On 10/23/2014 01:35 PM, Punit Dambiwal wrote:
>
>> On Mon, Oct 13, 2014 at 11:54 AM, Punit Dambiwal <hypunit at gmail.com
>> <mailto:hypunit at gmail.com>> wrote:
>>
>> Hi,
>>
>> I have one question regarding Gluster failover; let me
>> explain my current architecture. I am using oVirt with Gluster:
>>
>> 1. One oVirt Engine (oVirt 3.4)
>> 2. 4 oVirt nodes that are also Gluster storage nodes, with 12
>> bricks per node (Gluster version 3.5)
>> 3. All 4 nodes in a distributed-replicated volume with replica
>> count = 2
>> 4. One spare node with 12 bricks for failover purposes
>>
>> Now I have two questions:
>>
>> 1. If a brick fails, how can I fail it over? How do I remove
>> the failed brick and replace it with another one? Do I need
>> to replace the whole node, or can I replace the single
>> brick?
>>
>>
> The failure of a single brick can be addressed by running "replace-brick
> commit force" to replace the failed brick with a new one, and then
> triggering self-heal to rebuild the data on the replacement brick.
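As a sketch, the sequence for a single failed brick looks roughly like the
following; the volume name "vmstore", host names, and brick paths are
placeholders for your own layout:

```shell
# Replace the failed brick with a new, empty brick path.
gluster volume replace-brick vmstore \
  node1:/bricks/brick1 node1:/bricks/brick1_new commit force

# Trigger a full self-heal so the replica rebuilds data onto the new brick.
gluster volume heal vmstore full

# Watch the heal progress.
gluster volume heal vmstore info
```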
>
> 2. If a whole node with 12 bricks goes down and cannot come
>> back up, how can I replace it with a new one? Do I need to add
>> two nodes to maintain the replication level?
>>
>>
> You can add a replacement node to the cluster and use "replace-brick
> commit force" to adjust the volume topology. Self-heal will rebuild the
> data on the new node. You may want to replace one or a few bricks at a
> time so that your servers do not get bogged down by self-healing.
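The node-replacement flow described above could look roughly like this;
the host names ("node4" failed, "node5" is the replacement), volume name,
and brick paths are illustrative only:

```shell
# Add the replacement node to the trusted storage pool.
gluster peer probe node5

# Move each brick of the failed node to the new node, one or a few at a
# time, so self-heal traffic stays manageable.
gluster volume replace-brick vmstore \
  node4:/bricks/brick1 node5:/bricks/brick1 commit force

# Check that self-heal has caught up before replacing the next brick.
gluster volume heal vmstore info
```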
>
> -Vijay
>
>