[Gluster-users] Advice for sizing a POC

Jake Davis jake at imapenguin.com
Tue Feb 21 21:12:55 UTC 2017


Thanks very much for the advice. I hadn't really considered disperse
volumes, as I really liked the idea that recovery is much simpler in the
scenario where you're distributing/replicating whole files. I guess I need
to test both, as you suggest.

Does memory size become an issue with a large number of bricks on a single
node? Is there an optimum memory/brick ratio?
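While testing both layouts, it can help to compare the raw-capacity cost of each candidate configuration up front. A minimal sketch of that arithmetic (the configurations listed are just illustrative examples, not recommendations):

```python
# Compare the usable-capacity fraction of replicated vs dispersed
# (erasure-coded) Gluster volume layouts. Pure arithmetic; no Gluster
# installation required.

def replica_efficiency(replica_count):
    """Fraction of raw disk that is usable in a replicated volume."""
    return 1.0 / replica_count

def disperse_efficiency(data_bricks, redundancy_bricks):
    """Fraction of raw disk usable in a disperse volume:
    data / (data + redundancy)."""
    return data_bricks / (data_bricks + redundancy_bricks)

# Example layouts (data+redundancy counts are hypothetical candidates):
configs = {
    "replica 2":    replica_efficiency(2),
    "replica 3":    replica_efficiency(3),
    "disperse 4+2": disperse_efficiency(4, 2),
    "disperse 8+2": disperse_efficiency(8, 2),
    "disperse 8+3": disperse_efficiency(8, 3),
}

for name, eff in configs.items():
    print(f"{name:14s} usable = {eff:.0%} of raw capacity")
```

An 8+2 disperse layout keeps 80% of raw capacity usable while surviving two brick failures, versus 33% usable for replica 3; throughput and rebuild behavior still have to be measured, of course.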

On Sat, Feb 18, 2017 at 8:14 AM, Serkan Çoban <cobanserkan at gmail.com> wrote:

> With ~1 GB file sizes you should definitely try JBOD with disperse volumes.
> Gluster can easily reach 1 GB/s of per-node network throughput using
> disperse volumes.
>
> We use 26 disks/node without problems and are planning to use 90 disks/node.
>
> I don't think you'll need SSD caching for a sequential, read-heavy workload...
>
> Just test the workload with different disperse configurations to find
> the optimum for your workload.
>
>
> On Fri, Feb 17, 2017 at 7:54 PM, Jake Davis <jake at imapenguin.com> wrote:
> > Greetings, I'm trying to spec hardware for a proof of concept. I'm hoping
> > for a sanity check to see if I'm asking the right questions and making the
> > right assumptions.
> > I don't have real numbers for expected workload, but for our main use
> > case, we're likely talking a few hundred thousand files, read heavy, with
> > average file size around 1 GB. Fairly parallel access pattern.
> >
> > I've read elsewhere that the max recommended disk count for a RAID6
> > array is twelve. Is that per node, or per brick? i.e. if I have a number
> > of 24- or 36-disk arrays attached to a single node, would it make sense
> > to divide the larger array into 2 or 3 bricks with 12-disk stripes, or do
> > I want to limit the brick count to one per node in this case?
> >
> > For FUSE clients, assuming one 12-disk RAID6 brick per node, in general,
> > how many nodes do I need in my cluster before I start meeting/exceeding
> > the throughput of a direct-attached RAID via NFS mount?
> >
> > RAM: is it always a case of the more, the merrier? Or is there some rule
> > of thumb for calculating return on investment there?
> >
> > Is there a scenario where adding a few SSDs to a node can increase the
> > performance of a spinning-disk brick by acting as a read cache or some
> > such? Assuming non-ZFS.
> >
> > I've read that for highly parallel access, it might make more sense to
> > use JBOD with one brick per disk. Is that advice file-size dependent? And
> > what questions do I need to ask myself to determine how many of these
> > single-disk bricks I want per node?
> >
> > Many thanks!
> > -Jake
> >
> >
> > _______________________________________________
> > Gluster-users mailing list
> > Gluster-users at gluster.org
> > http://lists.gluster.org/mailman/listinfo/gluster-users
>
