[Gluster-users] deployment

Raghavendra G raghavendra.hg at gmail.com
Wed Sep 17 11:43:02 UTC 2008


Paolo,
Comments are inlined.

On Wed, Sep 17, 2008 at 3:25 PM, Paolo Supino <paolo.supino at gmail.com> wrote:

> Hi Keith
>
>   There's a section on the website that gives the configuration for a
> unify/AFR setup but doesn't say whether AFR goes above or below unify. At
> the moment I don't need the whole 2TB and can live with half of it, but I
> might need the extra space down the road. If and when that happens, is it
> possible to break the unify/AFR and move everything to unify only, without
> deleting data (not that it would be an obstacle, see below)?
>

It's recommended that the back-end filesystems of children being newly added
to unify be empty, hence you would have to delete the data.


>   Can anyone answer the question: does AFR go above or below unify?
>

It can go either way.
1. If you are using AFR over unify, you should have two unify translators,
each with n/2 children, and AFR should have these two unify translators as
its children.
2. If you are using unify over AFR, you should have n/2 AFR translators,
each with 2 children, and unify should have all the n/2 AFRs as its
children.

where n is the number of nodes available for glusterfs. Note that option 2
leads to a much larger volume specification file than option 1. I am not
sure about the performance difference between these two configurations.
Krishna/Avati, can you confirm this?
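To make the two layouts concrete, here is a rough client-side sketch of
option 2 (unify over AFR) for just 4 nodes. The volume and host names
(node1..node4, brick) are hypothetical, and a real cluster/unify volume also
needs a namespace subvolume, which is omitted here for brevity:

```
# One protocol/client volume per server (node2..node4 defined the same way)
volume node1
  type protocol/client
  option transport-type tcp/client
  option remote-host node1
  option remote-subvolume brick
end-volume

# n/2 = 2 AFR translators, each mirroring a pair of nodes
volume afr1
  type cluster/afr
  subvolumes node1 node2
end-volume

volume afr2
  type cluster/afr
  subvolumes node3 node4
end-volume

# unify on top, with the AFR pairs as its children
volume unify0
  type cluster/unify
  option scheduler rr
  subvolumes afr1 afr2
end-volume
```

For option 1 (AFR over unify) you would instead define two unify volumes,
each over half of the protocol/client volumes, and a single cluster/afr
volume with those two unifies as its subvolumes.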


>
>   I don't think that the data stored on the gluster volume will be mission
> critical: it's genomic data that is being processed on the cluster. I think
> the worst-case scenario if a brick is lost is that a few hours of
> processing will be lost.
>
>
>
> --
> TIA
> Paolo
>
>
>
>
> On Wed, Sep 17, 2008 at 12:22 PM, Keith Freedman <freedman at freeformit.com> wrote:
>
>> Some other things to consider:
>>
>> the unify is a good idea to make use of all your space. However, with
>> that many nodes, your probability of a node failing is high.
>> So just be aware that if one of the nodes fails, whatever data is stored
>> on that node will be lost.
>>
>> If you don't need the full 2TB, then I'd suggest using AFR.
>>
>> I *think* you can run AFR under unify, so you would create one unify brick
>> with half the machines, another with the other half, and AFR across them.
>> But I'm not sure; it may be that AFR has to be above unify.
>>
>> Of course, if you don't really care about the data, i.e. it's all backup
>> or working space or temp files, etc., then there's no need to AFR them.
>>
>> Keith
>>
>> At 01:52 AM 9/17/2008, Paolo Supino wrote:
>>
>>> Hi Raghavendra
>>>
>>>  I like your reply and will definitely give it a try. There's nothing I
>>> hate more than wasted infrastructure ...
>>>
>>>
>>>
>>>
>>>
>>>
>>> --
>>> TIA
>>> Paolo
>>>
>>>
>>> On Wed, Sep 17, 2008 at 8:13 AM, Raghavendra G <raghavendra.hg at gmail.com> wrote:
>>> Hi Paolo,
>>>
>>> One possible configuration is to run glusterfs as a server on each of
>>> the nodes, exporting a brick. Each node should also run glusterfs as a
>>> client with a unify translator, unifying all the servers.
>>>
>>> regards,
>>>
>>> On Tue, Sep 16, 2008 at 10:34 PM, Paolo Supino <paolo.supino at gmail.com> wrote:
>>> Hi
>>>
>>>  I have a small HPC cluster of 36 nodes (1 head, 35 compute). Each of
>>> the nodes has a 65GB (~2.2TB combined) volume that isn't being used. I
>>> thought of using a parallel filesystem in order to put this unused space
>>> to good use. The configuration I had in mind is: all nodes will act as
>>> bricks and all nodes will act as clients. I have no experience with
>>> Gluster and want to know what people on the mailing list think of the
>>> idea, deployment scenario, pros and cons, etc. Any reply will help :-)
>>>
>>>
>>>
>>> --
>>> TIA
>>> Paolo
>>>
>>>
>>> _______________________________________________
>>> Gluster-users mailing list
>>> Gluster-users at gluster.org
>>> http://zresearch.com/cgi-bin/mailman/listinfo/gluster-users
>>>
>>>
>>>
>>>
>>> --
>>> Raghavendra G
>>>
>>> A centipede was happy quite, until a toad in fun,
>>> Said, "Prey, which leg comes after which?",
>>> This raised his doubts to such a pitch,
>>> He fell flat into the ditch,
>>> Not knowing how to run.
>>> -Anonymous
>>>
>>>
>>>
>>
>>
>
>
>


-- 
Raghavendra G


