<html>
<head>
<meta http-equiv="Content-Type" content="text/html; charset=UTF-8">
</head>
<body bgcolor="#FFFFFF" text="#000000">
<p>Yes to the 10 GbE NICs (they are already in the servers).</p>
<p>Nice idea with the SSDs, but I do not have a HW RAID card in
these servers, nor the possibility to get / install one.</p>
<p>What I do have is an extra SSD per server, which I plan to
use as LVM cache for the bricks (maybe just one disk, maybe two in
SW RAID 1). I still need to test how LVM and gluster will
handle a failure of the cache disk.</p>
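<p>Roughly what I have in mind (volume group, LV, device names and
the cache size below are placeholders, not my actual layout):</p>
<pre>
# Optional: mirror the two SSDs with mdadm first and use /dev/md0 as
# the cache device
mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sdk /dev/sdl

# Add the SSD (or the md mirror) to the brick volume group
pvcreate /dev/md0
vgextend vg_bricks /dev/md0

# Create a cache pool on the SSD and attach it to the brick LV.
# writethrough keeps the origin LV consistent, so losing the cache
# device should only cost performance, not data (writeback could).
lvcreate --type cache-pool -L 400G -n brick1_cache vg_bricks /dev/md0
lvconvert --type cache --cachemode writethrough \
    --cachepool vg_bricks/brick1_cache vg_bricks/brick1

# To detach the cache again (e.g. before failure testing):
# lvconvert --uncache vg_bricks/brick1
</pre>
<p>The writethrough vs. writeback choice is exactly what I want to
verify with the failure tests.</p>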
<p>Thanks!<br>
</p>
<div class="moz-cite-prefix">On 6/6/19 19:07, Vincent Royer wrote:<br>
</div>
<blockquote type="cite"
cite="mid:CAL4nCoOMkNb2OE1CFAPW21oa-qdr96TAwTuqu6agO5Yi_zstOQ@mail.gmail.com">
<div dir="ltr">What if you have two fast 2TB SSDs per server in
hardware RAID 1, 3 hosts in replica 3. Dual 10gb enterprise
nics. This would end up being a single 2TB volume, correct?
Seems like that would offer great speed and have pretty decent
survivability. </div>
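<div dir="ltr"><br>
Something like this is what I mean (hostnames and brick paths are
placeholders; each host's RAID 1 pair is mounted as the brick):</div>
<pre>
# One 2 TB RAID 1 set mounted as the brick on each of the 3 hosts
gluster volume create fastvol replica 3 \
    host1:/bricks/ssd1/brick \
    host2:/bricks/ssd1/brick \
    host3:/bricks/ssd1/brick
gluster volume start fastvol

# Usable capacity = size of one brick (~2 TB), since every file
# is stored on all three hosts.
</pre>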
<br>
<div class="gmail_quote">
<div dir="ltr" class="gmail_attr">On Wed, Jun 5, 2019 at 11:54
PM Hu Bert <<a href="mailto:revirii@googlemail.com"
moz-do-not-send="true">revirii@googlemail.com</a>> wrote:<br>
</div>
<blockquote class="gmail_quote" style="margin:0px 0px 0px
0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex">Good
morning,<br>
<br>
my comment won't help you directly, but i thought i'd send it
anyway...<br>
<br>
Our first glusterfs setup had 3 servers withs 4 disks=bricks
(10TB,<br>
JBOD) each. Was running fine in the beginning, but then 1 disk
failed.<br>
The following heal took ~1 month, with a bad performance
(quite high<br>
IO). Shortly after the heal hat finished another disk failed
-> same<br>
problems again. Not funny.<br>
<br>
For our new system we decided to use 3 servers with 10 disks (10 TB)
each, but now with the 10 disks in SW RAID 10 (well, we split the 10
disks into 2 SW RAID 10 arrays; each of them is a brick, so we have 2
gluster volumes). A lot of disk space is "wasted" with this type of SW
RAID and a replica 3 setup, but we wanted to avoid the "healing takes
a long time with bad performance" problem. Now mdadm takes care of
replicating the data, so glusterfs should always see "good" bricks.<br>
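<br>
In case it is useful, roughly what that layout looks like on each
server (device names, sizes and paths are just examples, not our
exact setup):<br>
<pre>
# Two SW RAID 10 arrays per server, 5 disks each (Linux md RAID 10
# accepts an odd device count with the default near-2 layout)
mdadm --create /dev/md10 --level=10 --raid-devices=5 /dev/sd[b-f]
mdadm --create /dev/md11 --level=10 --raid-devices=5 /dev/sd[g-k]

# Each array becomes one brick
mkfs.xfs -i size=512 /dev/md10
mkfs.xfs -i size=512 /dev/md11
mkdir -p /gluster/brick1 /gluster/brick2
mount /dev/md10 /gluster/brick1
mount /dev/md11 /gluster/brick2

# One replica 3 volume per brick set, across the 3 servers
gluster volume create vol1 replica 3 \
    srv1:/gluster/brick1/data srv2:/gluster/brick1/data srv3:/gluster/brick1/data
gluster volume create vol2 replica 3 \
    srv1:/gluster/brick2/data srv2:/gluster/brick2/data srv3:/gluster/brick2/data
</pre>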
<br>
And the decision may depend on what kind of data you have. Many small
files, like tens of millions? Or not that many, but bigger files? I
once watched a video (I think it was this one:
<a href="https://www.youtube.com/watch?v=61HDVwttNYI"
rel="noreferrer" target="_blank" moz-do-not-send="true">https://www.youtube.com/watch?v=61HDVwttNYI</a>).
The recommendation there was RAID 6 or 10 for small files; for big
files... well, the talk is already 2 years "old" ;-)<br>
<br>
As I said, this won't help you directly. You have to identify what's
most important for your scenario. As you said, high performance is not
an issue - if that still holds even when you have slight performance
issues after a disk failure, then OK. My experience so far: the bigger
and slower the disks are, and the more data you have, the more the
healing will hurt -> try to avoid this. If the disks are small and
fast (SSDs), healing will be faster -> JBOD is an option.<br>
<br>
<br>
hth,<br>
Hubert<br>
<br>
On Wed, 5 Jun 2019 at 11:33, Eduardo Mayoral <<a
href="mailto:emayoral@arsys.es" target="_blank"
moz-do-not-send="true">emayoral@arsys.es</a>> wrote:<br>
><br>
> Hi,<br>
><br>
> I am looking into a new gluster deployment to replace
an ancient one.<br>
><br>
> For this deployment I will be using some repurposed servers I<br>
> already have in stock. The disks are 12 * 3 TB SATA drives; no HW<br>
> RAID controller. They also have some SSDs, which would be nice to<br>
> leverage as cache or similar to improve performance, since they are<br>
> already there. Advice on how to leverage the SSDs would be greatly<br>
> appreciated.<br>
><br>
> One of the design choices I have to make is between using 3 nodes for<br>
> a replica-3 with JBOD, or using 2 nodes with a replica-2 and SW RAID 6<br>
> for the disks, maybe adding a 3rd node with a smaller amount of disk<br>
> as a metadata node for the replica set. I would love to hear advice on<br>
> the pros and cons of each setup from the gluster experts.<br>
><br>
> The data will be accessed from 4 to 6 systems with the native gluster<br>
> client; not sure if that makes any difference.<br>
><br>
> The amount of data I have to store there is currently 20 TB, with<br>
> moderate growth. IOPS are quite low, so high performance is not an<br>
> issue. The data will fit in either of the two setups.<br>
><br>
> Thanks in advance for your advice!<br>
><br>
> --<br>
> Eduardo Mayoral Jimeno<br>
> Systems engineer, platform department. Arsys Internet.<br>
> <a href="mailto:emayoral@arsys.es" target="_blank"
moz-do-not-send="true">emayoral@arsys.es</a> - +34 941 620
105 - ext 2153<br>
><br>
><br>
_______________________________________________<br>
Gluster-users mailing list<br>
<a href="mailto:Gluster-users@gluster.org" target="_blank"
moz-do-not-send="true">Gluster-users@gluster.org</a><br>
<a
href="https://lists.gluster.org/mailman/listinfo/gluster-users"
rel="noreferrer" target="_blank" moz-do-not-send="true">https://lists.gluster.org/mailman/listinfo/gluster-users</a><br>
</blockquote>
</div>
</blockquote>
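<p>By the way, if I go for the "2 data nodes + smaller 3rd node"
option from my original question, as far as I understand it maps to
gluster's arbiter volumes. A rough sketch with placeholder hostnames
and brick paths:</p>
<pre>
# Two full data bricks plus a small arbiter brick
gluster volume create datavol replica 3 arbiter 1 \
    node1:/bricks/raid6/brick \
    node2:/bricks/raid6/brick \
    node3:/bricks/arbiter/brick
gluster volume start datavol

# The arbiter brick stores only file names and metadata, so the 3rd
# node needs little disk space while still providing quorum.
</pre>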
<pre class="moz-signature" cols="72">--
Eduardo Mayoral Jimeno
Systems engineer, platform department. Arsys Internet.
<a class="moz-txt-link-abbreviated" href="mailto:emayoral@arsys.es">emayoral@arsys.es</a> - +34 941 620 105 - ext 2153</pre>
</body>
</html>