[Gluster-users] deployment

Chandranshu . chandranshu at gmail.com
Wed Sep 17 11:48:09 UTC 2008


Hi Paolo,

Take a look at
http://www.gluster.org/docs/index.php/Understanding_Unify_Translator .
The diagram on that page shows the AFR/stripe translators sitting just
below unify. However, I *think* that is more of a suggestion than a
binding rule, as I don't see anything in the volume descriptor syntax
to prevent you from doing it the other way round.
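
For what it's worth, below is the shape I would try for the layering
that diagram shows (AFR pairs below a single unify). It's written from
memory and untested, so treat it as a sketch rather than a working
config: node1..node4, /data/export and the namespace export are
placeholders, and some option spellings (e.g. auth.ip vs auth.addr)
differ between glusterfs releases, so check them against the docs for
your version.

# server side (glusterfsd.vol), the same on every node
volume brick
  type storage/posix
  option directory /data/export       # the otherwise unused disk
end-volume

volume brick-ns
  type storage/posix
  option directory /data/export-ns    # namespace export; unify needs one
end-volume

volume server
  type protocol/server
  option transport-type tcp/server
  option auth.addr.brick.allow *      # wide open; restrict to your cluster subnet
  option auth.addr.brick-ns.allow *
  subvolumes brick brick-ns
end-volume

# client side (glusterfs.vol), mounted on every node
volume node1
  type protocol/client
  option transport-type tcp/client
  option remote-host node1
  option remote-subvolume brick
end-volume

# node2, node3 and node4 are declared the same way,
# changing only remote-host

volume afr1
  type cluster/afr
  subvolumes node1 node2              # each AFR mirrors one pair of nodes
end-volume

volume afr2
  type cluster/afr
  subvolumes node3 node4
end-volume

volume ns
  type protocol/client
  option transport-type tcp/client
  option remote-host node1            # the node that exports the namespace
  option remote-subvolume brick-ns
end-volume

volume unify0
  type cluster/unify
  option scheduler rr                 # round-robin placement across the pairs
  option namespace ns
  subvolumes afr1 afr2
end-volume

Flipping the stack (one unify over each half of the machines, then AFR
across the two unifies, as Keith describes below) also seems to parse;
whether it behaves correctly is exactly the open question.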

Regarding your observation of poor performance on a 100Mbps
interconnect: I'm facing the same issues. In particular, performance
starts degrading very fast once file sizes drop below 64K.

We'll be doing file system tweaks some time this week and will post the
results if they are any good.
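
In case anyone wants to compare notes, the tweaks we plan to try are
just the stock performance translators stacked over unify on the
client side, something like the sketch below. The values are starting
guesses to be tuned, not recommendations, and wb/ioc/unify0 are names
carried over from my sketch above:

volume wb
  type performance/write-behind
  option aggregate-size 128KB         # batch small writes before they hit the wire
  subvolumes unify0
end-volume

volume ioc
  type performance/io-cache
  option cache-size 64MB              # keep hot small files in client memory
  subvolumes wb
end-volume

On 100Mbps the per-file round trips dominate below 64K, so batching
writes and caching reads on the client seemed like the obvious first
things to try.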

Regards
Chandranshu

On Wed, Sep 17, 2008 at 4:55 PM, <gluster-users-request at gluster.org> wrote:

> Date: Wed, 17 Sep 2008 13:25:11 +0200
> From: "Paolo Supino" <paolo.supino at gmail.com>
> Subject: Re: [Gluster-users] deployment
> To: "Keith Freedman" <freedman at freeformit.com>
> Cc: gluster-users at gluster.org
>
> Hi Keith
>
>  There's a section on the website that gives the configuration for a
> unify/AFR setup, but it doesn't say whether AFR goes above or below
> unify. At the moment I don't need the whole 2TB and can live with half
> of it, but I might need the extra space down the road. If and when that
> happens, is it possible to break the unify/AFR setup and move everything
> to unify alone without deleting data (not that this would be an
> obstacle; see below)?
>  Can anyone answer the question: does AFR go above or below unify?
>
>  I don't think the data stored on the gluster volume will be mission
> critical: it's genomic data that is being processed on the cluster. I
> think the worst-case scenario for losing a brick is that a few hours of
> processing would be lost.
>
> --
> TIA
> Paolo
>
> On Wed, Sep 17, 2008 at 12:22 PM, Keith Freedman
> <freedman at freeformit.com> wrote:
>
> > Some other things to consider:
> >
> > Unify is a good idea to make use of all your space. However, with that
> > many nodes, the probability of a node failing is high, so just be
> > aware: if one of the nodes fails, whatever data is stored on that node
> > will be lost.
> >
> > If you don't need the full 2TB, then I'd suggest using AFR.
> >
> > I *think* you can run AFR under unify, so you would create one unify
> > brick with half the machines, another with the other half, and AFR
> > across them. But I'm not sure; it may be that AFR has to be above
> > unify.
> >
> > Of course, if you don't really care about the data, i.e. it's all
> > backup or working space or temp files, then there's no need to AFR it.
> >
> > Keith
> >
> > At 01:52 AM 9/17/2008, Paolo Supino wrote:
> >
> >> Hi Raghavendra
> >>
> >>  I like your reply and will definitely give it a try. There's nothing
> >> I hate more than wasted infrastructure ...
> >>
> >> --
> >> TIA
> >> Paolo
> >>
> >>
> >> On Wed, Sep 17, 2008 at 8:13 AM, Raghavendra G
> >> <raghavendra.hg at gmail.com> wrote:
> >> Hi Paolo,
> >>
> >> One possible configuration is to run glusterfs as a server on each of
> >> the nodes, exporting a brick. Each node should also run glusterfs as a
> >> client with a unify translator, unifying all the servers.
> >>
> >> regards,
> >>
> >> On Tue, Sep 16, 2008 at 10:34 PM, Paolo Supino
> >> <paolo.supino at gmail.com> wrote:
> >> Hi
> >>
> >>  I have a small HPC cluster of 36 nodes (1 head, 35 compute). Each of
> >> the nodes has a 65GB volume (~2.2TB combined) that isn't being used. I
> >> thought of using a parallel filesystem to put this unused space to good
> >> use. The configuration I had in mind is: all nodes will act as bricks
> >> and all nodes will act as clients. I have no experience with Gluster
> >> and want to know what people on the mailing list think of the idea, the
> >> deployment scenario, pros and cons, etc. Any reply will help :-)
> >>
> >> --
> >> TIA
> >> Paolo
> >>
> >>
> >> --
> >> Raghavendra G
> >>
> >> A centipede was happy quite, until a toad in fun,
> >> Said, "Pray, which leg comes after which?",
> >> This raised his doubts to such a pitch,
> >> He fell flat into the ditch,
> >> Not knowing how to run.
> >> -Anonymous