<div dir="ltr">
<p>
Greetings, I'm trying to spec hardware for a proof of concept, and I'm hoping for a sanity check to see if I'm asking the right questions and making the right assumptions.<br>
I don't have real numbers for the expected workload, but for our main use case we're likely talking a few hundred thousand files, read-heavy, with an average file size around 1 GB and a fairly parallel access pattern.
</p>
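<p>
For context, here's the back-of-envelope sizing I've been working from; every number in it is a placeholder assumption, since I don't have real workload figures yet:
</p>
<pre>
# Rough capacity estimate -- all inputs are assumptions.
file_count = 300_000          # "a few hundred thousand files"
avg_file_gb = 1               # average file size ~1 GB
replica_factor = 2            # assuming a replica-2 volume for the PoC

raw_data_tb = file_count * avg_file_gb / 1000
total_tb = raw_data_tb * replica_factor

print(f"Logical data: ~{raw_data_tb:.0f} TB")                            # ~300 TB
print(f"Brick capacity at replica {replica_factor}: ~{total_tb:.0f} TB")  # ~600 TB
</pre>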
<p>
I've read elsewhere that the maximum recommended disk count for a RAID6 array is twelve. Is that per node, or per brick? I.e., if I have a number of 24- or 36-disk arrays attached to a single node, would it make sense to divide each larger array into two or three bricks with 12-disk stripes, or do I want to limit the brick count to one per node in this case?
</p>
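<p>
To make that trade-off concrete, here's how I've been comparing the two layouts for a 36-disk shelf (the 8 TB disk size is a hypothetical, not something we've actually priced):
</p>
<pre>
# Usable capacity of RAID6 layouts: each array gives up 2 disks to parity.
disk_tb = 8                   # hypothetical disk size

def usable_tb(arrays, disks_per_array):
    return arrays * (disks_per_array - 2) * disk_tb

print("1 x 36-disk RAID6:", usable_tb(1, 36), "TB")  # 272 TB, one big rebuild domain
print("3 x 12-disk RAID6:", usable_tb(3, 12), "TB")  # 240 TB, three bricks per node
</pre>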
<p>
For FUSE clients, assuming one 12-disk RAID6 brick per node, how many nodes do I generally need in my cluster before I start meeting or exceeding the throughput of a direct-attached RAID exposed via an NFS mount?
</p>
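<p>
My naive framing of that crossover is below; both throughput figures are guesses I'd obviously need to benchmark:
</p>
<pre>
# Crossover estimate: nodes needed before aggregate Gluster reads match DAS.
import math

local_raid_mbps = 2000        # assumed streaming reads from the DAS RAID6
per_node_mbps = 1100          # assumed per-brick rate, capped by a 10 GbE NIC

print(math.ceil(local_raid_mbps / per_node_mbps), "nodes")  # ~2 nodes,
# and only if there are enough parallel clients to drive both bricks at once
</pre>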
<p>
RAM: is it always a case of "the more, the merrier", or is there some rule of thumb for calculating the return on investment there?
</p>
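<p>
The mental model I have so far is that extra RAM mostly becomes page cache, so it only pays off while the hot working set still fits; please correct me if that's wrong. Illustrative guesses:
</p>
<pre>
# RAM ROI sketch -- both numbers are made up for illustration.
node_ram_gb = 256             # candidate RAM per node
hot_working_set_gb = 2000     # guessed hot data per node, read-heavy

hit_fraction = min(1.0, node_ram_gb / hot_working_set_gb)
print(f"Cacheable fraction of hot set: ~{hit_fraction:.0%}")  # ~13%
# Past the point where the hot set fits, more RAM looks like diminishing returns.
</pre>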
<p>
Is there a scenario where adding a few SSDs to a node can increase the performance of a spinning-disk brick by acting as a read cache or some such? Assuming non-ZFS.
</p>
<p>
I've read that for highly parallel access, it might make more sense to use JBOD with one brick per disk. Is that advice file-size dependent? And what questions do I need to ask myself to determine how many of these single-disk bricks I want per node?
</p>
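<p>
For that last question, my starting guess is that the NIC sets the ceiling on useful bricks per node; placeholder numbers again:
</p>
<pre>
# Once the spindles can stream faster than the network, I assume extra
# single-disk bricks on the same node just add contention.
per_disk_mbps = 150           # assumed sequential read per spinning disk
nic_mbps = 1200               # ~10 GbE

print(nic_mbps // per_disk_mbps, "single-disk bricks saturate one 10 GbE link")  # ~8
</pre>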
<p>
Many thanks!<br>
-Jake
</p>
</div>