[Gluster-users] Need help for production setup
Punit Dambiwal
hypunit at gmail.com
Tue Aug 19 01:45:54 UTC 2014
Hi Vijay,
The architecture is based on replica 2, not replica 3... and yes, it is better;
I will raise this issue on the oVirt user list. Thanks.
Thanks,
punit
On Mon, Aug 18, 2014 at 8:07 PM, Vijay Bellur <vbellur at redhat.com> wrote:
> On 08/18/2014 11:51 AM, Punit Dambiwal wrote:
>
>> Hi Vijay,
>>
>> Thanks for the update... does that mean that if we use replica 3, there is
>> no need to use HW RAID?
>>
>>
> Yes, HW RAID is not essential with replica 3.
>
>
>> As I want to use it with oVirt with HA, would you mind letting me know
>> how I can achieve this?
>>
>> I have some HA-related concerns about GlusterFS with oVirt... let's say I
>> have 4 storage nodes with gluster bricks, as below:
>>
>> 1. 10.10.10.1 to 10.10.10.4 with 2 bricks each, in a distributed
>> replicated architecture...
>>
>
> How do you plan to have replica 3 with 8 bricks?
>
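> For reference, in a distribute-replicate volume the brick count must be a
> multiple of the replica count, so replica 3 needs 9 bricks (3, 6, 9, ...)
> rather than 8. A rough sketch, with hostnames and brick paths assumed from
> your layout rather than taken from a real setup (consecutive bricks form
> one replica set):
>
>   # replica 3, distributed over 3 replica sets (9 bricks total)
>   gluster volume create vol1 replica 3 \
>     10.10.10.1:/bricks/b1 10.10.10.2:/bricks/b1 10.10.10.3:/bricks/b1 \
>     10.10.10.2:/bricks/b2 10.10.10.3:/bricks/b2 10.10.10.4:/bricks/b2 \
>     10.10.10.3:/bricks/b3 10.10.10.4:/bricks/b3 10.10.10.1:/bricks/b3
>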
>> 2. Now I attach this gluster storage to ovirt-engine with the following
>> mount point: 10.10.10.2:/vol1
>>
>> 3. In my cluster I have 3 hypervisor hosts (10.10.10.5 to 10.10.10.7);
>> SPM is on 10.10.10.5...
>> 4. What happens if 10.10.10.2 goes down? Can the hypervisor hosts still
>> access the storage?
>>
>
> If the mount has already happened, the hypervisor hosts can still access
> the storage. To provide HA for the mount operation itself, you can use the
> backup-volfile-server option as described in the man page for
> mount.glusterfs [1].
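>
> A minimal sketch of such a mount, assuming the volume name vol1 and using
> 10.10.10.3 and 10.10.10.4 as backup servers (the exact option spelling
> varies across glusterfs versions, e.g. backupvolfile-server vs.
> backup-volfile-servers, so check your mount.glusterfs man page):
>
>   mount -t glusterfs \
>     -o backup-volfile-servers=10.10.10.3:10.10.10.4 \
>     10.10.10.2:/vol1 /mnt/gluster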
>
>
>> 5. What happens if the SPM goes down?
>>
>>
> I am not too familiar with the implications of the SPM going down. This
> question seems more appropriate for the oVirt mailing lists.
>
>
>> Note: what happens for points 4 & 5 if storage and compute are both
>> running on the same server?
>>
>
> If storage & compute are on the same server, VM migration before a server
> goes offline would be necessary. A VM can continue to operate as long as
> the mount point on the compute node can reach other bricks that are online
> in the gluster volume. I would also recommend testing the implications of
> self-healing in your setup after a failed node comes back online, as the
> self-healing process can compete for compute cycles.
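>
> For example, once a failed node rejoins you can watch the pending
> self-heal queue drain with (volume name assumed):
>
>   gluster volume heal vol1 info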
>
> -Vijay
>
>
> [1] https://github.com/gluster/glusterfs/blob/master/doc/mount.glusterfs.8
>
>
>
>> Thanks,
>> Punit
>>
>>
>> On Mon, Aug 18, 2014 at 1:44 PM, Vijay Bellur <vbellur at redhat.com> wrote:
>>
>> On 08/18/2014 09:11 AM, Punit Dambiwal wrote:
>>
>> Hi Juan,
>>
>> Understood... but what if I am using replica 3? Using HW RAID with
>> commodity HDDs would not be a good choice, and if I choose HW RAID
>> with enterprise-grade HDDs the cost will be higher, and then there
>> would be no point in choosing GlusterFS for storage...
>>
>>
>> For replica 3, I don't think hardware RAID would be beneficial. HW
>> RAID is recommended for replica 2 scenarios with gluster, to provide
>> an additional degree of redundancy.
>>
>> -Vijay