<div dir="ltr"><div>Thanks very much for the advice. I hadn't really considered
disperse volumes as I really liked the idea that recovery is much
simpler in the scenario were you're distributing/replicating whole
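To make the comparison concrete, here is a minimal sketch of the two
layouts I'd be testing; the hostnames, volume names, and brick paths
are made up for illustration:

    # Distributed-replicated (2x2): whole files are mirrored, so a
    # failed brick heals by copying intact files from its replica pair.
    gluster volume create pocvol replica 2 \
        node1:/bricks/b1 node2:/bricks/b1 \
        node3:/bricks/b1 node4:/bricks/b1

    # Dispersed (4+2): files are erasure-coded into 4 data + 2
    # redundancy fragments, so any 2 of the 6 bricks can fail.
    gluster volume create pocvol-ec disperse 6 redundancy 2 \
        node{1..6}:/bricks/b1

    # Start a volume before mounting it.
    gluster volume start pocvol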
Does memory size become an issue with a large number of bricks on a
single node? Is there an optimum memory-to-brick ratio?

On Sat, Feb 18, 2017 at 8:14 AM, Serkan Çoban <cobanserkan@gmail.com> wrote:

With 1 GB file sizes you should definitely try JBOD with disperse
volumes.
Gluster can easily reach 1 GB/s of network throughput per node using
disperse volumes.

We use 26 disks per node without problems and are planning to move to
90 disks per node.

I don't think you'll need SSD caching for a sequential, read-heavy
workload.

Just test the workload with different disperse configurations to find
the optimum for your use case; a sketch of two candidate geometries
follows below.
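For example, two geometries with the same 20% capacity overhead but
different failure tolerance and stripe width might look like this
(hostnames and brick paths are hypothetical; each brick sits on its
own JBOD disk):

    # 8+2: each file is split into 8 data + 2 redundancy fragments;
    # tolerates 2 failed bricks per subvolume, 20% overhead.
    gluster volume create testvol1 disperse 10 redundancy 2 \
        server{1..10}:/bricks/disk1/brick

    # 16+4: tolerates 4 failed bricks at the same 20% overhead, but
    # every read touches more disks per file.
    gluster volume create testvol2 disperse 20 redundancy 4 \
        server{1..20}:/bricks/disk1/brick

Load the same sample of ~1 GB files into each and measure aggregate
read throughput from several parallel clients.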
<div><div class="h5"><br>
<br>
On Fri, Feb 17, 2017 at 7:54 PM, Jake Davis <jake@imapenguin.com> wrote:
> Greetings, I'm trying to spec hardware for a proof of concept. I'm
> hoping for a sanity check to see if I'm asking the right questions
> and making the right assumptions.
> I don't have real numbers for the expected workload, but for our main
> use case we're likely talking a few hundred thousand files,
> read-heavy, with an average file size around 1 GB. Fairly parallel
> access pattern.
>
> I've read elsewhere that the maximum recommended disk count for a
> RAID6 array is twelve. Is that per node, or per brick? I.e., if I
> have a number of 24- or 36-disk arrays attached to a single node,
> would it make sense to divide the larger array into two or three
> bricks with 12-disk stripes, or do I want to limit the brick count to
> one per node in this case?
>
> For FUSE clients, assuming one 12-disk RAID6 brick per node, how many
> nodes do I need in my cluster, in general, before I start
> meeting/exceeding the throughput of a direct-attached RAID via an NFS
> mount?
>
> RAM: is it always a case of the more, the merrier? Or is there some
> rule of thumb for calculating the return on investment there?
>
> Is there a scenario where adding a few SSDs to a node can increase
> the performance of a spinning-disk brick by acting as a read cache or
> some such? Assuming non-ZFS.
>
> I've read that for highly parallel access, it might make more sense
> to use JBOD with one brick per disk. Is that advice file-size
> dependent? And what questions do I need to ask myself to determine
> how many of these single-disk bricks I want per node?
>
> Many thanks!
> -Jake
> _______________________________________________
> Gluster-users mailing list
> Gluster-users@gluster.org
> http://lists.gluster.org/mailman/listinfo/gluster-users