<html>
<head>
<meta http-equiv="Content-Type" content="text/html; charset=iso-8859-1">
<style type="text/css" style="display:none;"> P {margin-top:0;margin-bottom:0;} </style>
</head>
<body dir="ltr">
<div style="font-family: Calibri, Arial, Helvetica, sans-serif; font-size: 12pt; color: rgb(0, 0, 0);">
Replication would be better, yes, but HA isn't a hard requirement, and the most likely way we'd lose a brick is a power failure.&nbsp; In that case we could stop the entire file system and bring the brick back up, should users complain about poor I/O performance.</div>
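<div style="font-family: Calibri, Arial, Helvetica, sans-serif; font-size: 12pt; color: rgb(0, 0, 0);">
<br>
</div>
<div style="font-family: Calibri, Arial, Helvetica, sans-serif; font-size: 12pt; color: rgb(0, 0, 0);">
For what it's worth, the recovery procedure I have in mind is roughly the following sketch, where &quot;archive&quot; is just a placeholder volume name:</div>
<pre style="font-family: Consolas, 'Courier New', monospace; font-size: 10pt;">
# Take the whole volume offline while the failed brick node is repaired
gluster volume stop archive

# After the node is powered back on and glusterd is running again,
# restart the volume; "force" also respawns any brick process that died
gluster volume start archive force

# Verify every brick is back online before telling users
gluster volume status archive
</pre>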
<div style="font-family: Calibri, Arial, Helvetica, sans-serif; font-size: 12pt; color: rgb(0, 0, 0);">
<br>
</div>
<div style="font-family: Calibri, Arial, Helvetica, sans-serif; font-size: 12pt; color: rgb(0, 0, 0);">
Could you share more about your configuration at that time?&nbsp; What CPUs were you running on the bricks, how many spindles per brick, etc.?</div>
<div style="font-family: Calibri, Arial, Helvetica, sans-serif; font-size: 12pt; color: rgb(0, 0, 0);">
<br>
</div>
<div id="Signature">
<div id="divtagdefaultwrapper" dir="ltr" style="font-size:12pt; color:#000000; font-family:Calibri,Helvetica,sans-serif">
<p style="margin-top: 0px; margin-bottom: 0px;margin-top:0; margin-bottom:0"><span id="ms-rterangepaste-start"></span><span style="color:rgb(33,33,33); font-family:wf_segoe-ui_normal,&quot;Segoe UI&quot;,&quot;Segoe WP&quot;,Tahoma,Arial,sans-serif,serif,EmojiFont; font-size:14.6667px">--&nbsp;</span><br style="color:rgb(33,33,33); font-family:wf_segoe-ui_normal,&quot;Segoe UI&quot;,&quot;Segoe WP&quot;,Tahoma,Arial,sans-serif,serif,EmojiFont; font-size:14.6667px">
<span style="color:rgb(33,33,33); font-family:wf_segoe-ui_normal,&quot;Segoe UI&quot;,&quot;Segoe WP&quot;,Tahoma,Arial,sans-serif,serif,EmojiFont; font-size:14.6667px">Thanks,</span><br style="color:rgb(33,33,33); font-family:wf_segoe-ui_normal,&quot;Segoe UI&quot;,&quot;Segoe WP&quot;,Tahoma,Arial,sans-serif,serif,EmojiFont; font-size:14.6667px">
<br style="color:rgb(33,33,33); font-family:wf_segoe-ui_normal,&quot;Segoe UI&quot;,&quot;Segoe WP&quot;,Tahoma,Arial,sans-serif,serif,EmojiFont; font-size:14.6667px">
<span style="color:rgb(33,33,33); font-family:wf_segoe-ui_normal,&quot;Segoe UI&quot;,&quot;Segoe WP&quot;,Tahoma,Arial,sans-serif,serif,EmojiFont; font-size:14.6667px">Douglas Duckworth, MSc, LFCS</span><br style="color:rgb(33,33,33); font-family:wf_segoe-ui_normal,&quot;Segoe UI&quot;,&quot;Segoe WP&quot;,Tahoma,Arial,sans-serif,serif,EmojiFont; font-size:14.6667px">
<span style="color:rgb(33,33,33); font-family:wf_segoe-ui_normal,&quot;Segoe UI&quot;,&quot;Segoe WP&quot;,Tahoma,Arial,sans-serif,serif,EmojiFont; font-size:14.6667px">HPC System Administrator</span><br style="color:rgb(33,33,33); font-family:wf_segoe-ui_normal,&quot;Segoe UI&quot;,&quot;Segoe WP&quot;,Tahoma,Arial,sans-serif,serif,EmojiFont; font-size:14.6667px">
<span style="color:rgb(33,33,33); font-family:wf_segoe-ui_normal,&quot;Segoe UI&quot;,&quot;Segoe WP&quot;,Tahoma,Arial,sans-serif,serif,EmojiFont; font-size:14.6667px"><a href="https://scu.med.cornell.edu/" class="OWAAutoLink">Scientific Computing Unit</a></span><br style="color:rgb(33,33,33); font-family:wf_segoe-ui_normal,&quot;Segoe UI&quot;,&quot;Segoe WP&quot;,Tahoma,Arial,sans-serif,serif,EmojiFont; font-size:14.6667px">
<span style="color:rgb(33,33,33); font-family:wf_segoe-ui_normal,&quot;Segoe UI&quot;,&quot;Segoe WP&quot;,Tahoma,Arial,sans-serif,serif,EmojiFont; font-size:14.6667px">Weill Cornell Medicine</span><br style="color:rgb(33,33,33); font-family:wf_segoe-ui_normal,&quot;Segoe UI&quot;,&quot;Segoe WP&quot;,Tahoma,Arial,sans-serif,serif,EmojiFont; font-size:14.6667px">
<span style="color:rgb(33,33,33); font-family:wf_segoe-ui_normal,&quot;Segoe UI&quot;,&quot;Segoe WP&quot;,Tahoma,Arial,sans-serif,serif,EmojiFont; font-size:14.6667px">E:&nbsp;doug@med.cornell.edu</span><br style="color:rgb(33,33,33); font-family:wf_segoe-ui_normal,&quot;Segoe UI&quot;,&quot;Segoe WP&quot;,Tahoma,Arial,sans-serif,serif,EmojiFont; font-size:14.6667px">
<span style="color:rgb(33,33,33); font-family:wf_segoe-ui_normal,&quot;Segoe UI&quot;,&quot;Segoe WP&quot;,Tahoma,Arial,sans-serif,serif,EmojiFont; font-size:14.6667px">O: 212-746-6305</span><br style="color:rgb(33,33,33); font-family:wf_segoe-ui_normal,&quot;Segoe UI&quot;,&quot;Segoe WP&quot;,Tahoma,Arial,sans-serif,serif,EmojiFont; font-size:14.6667px">
<span style="color:rgb(33,33,33); font-family:wf_segoe-ui_normal,&quot;Segoe UI&quot;,&quot;Segoe WP&quot;,Tahoma,Arial,sans-serif,serif,EmojiFont; font-size:14.6667px">F: 212-746-8690</span><span id="ms-rterangepaste-end"></span><br>
</p>
</div>
</div>
<div id="appendonsend"></div>
<hr style="display:inline-block;width:98%" tabindex="-1">
<div id="divRplyFwdMsg" dir="ltr"><font face="Calibri, sans-serif" style="font-size:11pt" color="#000000"><b>From:</b> Serkan Çoban &lt;cobanserkan@gmail.com&gt;<br>
<b>Sent:</b> Thursday, February 13, 2020 12:38 PM<br>
<b>To:</b> Douglas Duckworth &lt;dod2014@med.cornell.edu&gt;<br>
<b>Cc:</b> gluster-users@gluster.org &lt;gluster-users@gluster.org&gt;<br>
<b>Subject:</b> [EXTERNAL] Re: [Gluster-users] multi petabyte gluster dispersed for archival?</font>
<div>&nbsp;</div>
</div>
<div class="BodyFragment"><font size="2"><span style="font-size:11pt;">
<div class="PlainText">Do not use EC with small files. You cannot tolerate losing a 300TB<br>
brick, reconstruction will take ages. When I was using glusterfs<br>
reconstruction speed of ec was 10-15MB/sec. If you do not loose bricks<br>
you will be ok.<br>
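To see why that rate is a problem, here is a quick back-of-the-envelope<br>
check (decimal units, so 300 TB ≈ 300,000,000 MB):<br>
<pre>
# Days to rebuild one 300 TB brick at the observed EC rates
echo $(( 300000000 / 15 / 86400 ))   # 15 MB/s -> ~231 days
echo $(( 300000000 / 10 / 86400 ))   # 10 MB/s -> ~347 days
</pre>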
<br>
On Thu, Feb 13, 2020 at 7:38 PM Douglas Duckworth<br>
&lt;dod2014@med.cornell.edu&gt; wrote:<br>
&gt;<br>
&gt; Hello<br>
&gt;<br>
&gt; I am thinking of building a Gluster file system for archival data.&nbsp; Initially it will start as a 6-brick dispersed volume, then expand to distributed-dispersed as we increase capacity.<br>
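&gt; To make that concrete, the initial layout I'm sketching would be created along these lines (hostnames, brick paths, and the 4+2 data/redundancy split are all placeholders):<br>
<pre>
# 6 bricks, any 2 of which can be lost (4 data + 2 redundancy)
gluster volume create archive disperse 6 redundancy 2 \
    server{1..6}:/data/brick1
gluster volume start archive
</pre>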
&gt;<br>
&gt; Since metadata in Gluster isn't centralized, it will eventually stop performing well at scale.&nbsp; So I am wondering if anyone can help identify that point?&nbsp; Ceph can scale to extremely high levels, though the complexity required to manage it seems much greater than Gluster's.<br>
&gt;<br>
&gt; The first six bricks would be a little over 2PB of raw space.&nbsp; Each server will have 24 7200-RPM NL-SAS drives, sans RAID.&nbsp; I estimate we would max out at about 100 million files within these first six servers, though that number can be reduced by having users tar their small files before writing to Gluster.&nbsp; I/O patterns would be sequential upon initial copy, with very infrequent reads thereafter.&nbsp; Given the demands of erasure coding, especially if we lose a brick, the CPUs will be high-thread-count AMD Rome.&nbsp; The back-end network would be EDR InfiniBand, so I will mount via RDMA, with all bricks local to the same leaf switch.<br>
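&gt; On the clients I expect the mount to look something like this (volume name and mount point are placeholders, and it assumes the volume is created with RDMA transport enabled):<br>
<pre>
# Mount the dispersed volume over the EDR fabric using RDMA transport
mount -t glusterfs -o transport=rdma server1:/archive /mnt/archive
</pre>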
&gt;<br>
&gt; Given these variables, can anyone say whether Gluster would be able to operate at this level of metadata and continue to scale?&nbsp; If so, where could it break: 4PB, 12PB?&nbsp; By &quot;break&quot; I mean I/O degrading dramatically, with all bricks still online.<br>
&gt;<br>
&gt; Thank you!<br>
&gt; Doug<br>
&gt;<br>
&gt; --<br>
&gt; Thanks,<br>
&gt;<br>
&gt; Douglas Duckworth, MSc, LFCS<br>
&gt; HPC System Administrator<br>
&gt; Scientific Computing Unit<br>
&gt; Weill Cornell Medicine<br>
&gt; E: doug@med.cornell.edu<br>
&gt; O: 212-746-6305<br>
&gt; F: 212-746-8690<br>
&gt;<br>
&gt; ________<br>
&gt;<br>
&gt; Community Meeting Calendar:<br>
&gt;<br>
&gt; APAC Schedule -<br>
&gt; Every 2nd and 4th Tuesday at 11:30 AM IST<br>
&gt; Bridge: <a href="https://bluejeans.com/441850968">https://bluejeans.com/441850968</a><br>
&gt;<br>
&gt; NA/EMEA Schedule -<br>
&gt; Every 1st and 3rd Tuesday at 01:00 PM EDT<br>
&gt; Bridge: <a href="https://bluejeans.com/441850968">https://bluejeans.com/441850968</a><br>
&gt;<br>
&gt; Gluster-users mailing list<br>
&gt; Gluster-users@gluster.org<br>
&gt; <a href="https://lists.gluster.org/mailman/listinfo/gluster-users">https://lists.gluster.org/mailman/listinfo/gluster-users</a><br>
</div>
</span></font></div>
</body>
</html>