<div dir="ltr"><div class="gmail_default" style="font-family:arial,helvetica,sans-serif">&gt;&gt; What&#39;s your workload? </div><div class="gmail_default" style="font-family:arial,helvetica,sans-serif">I have 6 KVM VMs which have Windows and Linux installed on it.</div><div class="gmail_default" style="font-family:arial,helvetica,sans-serif"><br></div><div class="gmail_default" style="font-family:arial,helvetica,sans-serif">&gt;&gt; Read?</div><div class="gmail_default" style="font-family:arial,helvetica,sans-serif">&gt;&gt; Write? </div><div class="gmail_default" style="font-family:arial,helvetica,sans-serif">iostat (I am using sdc as the main storage)</div><div class="gmail_default" style="font-family:arial,helvetica,sans-serif">cavg-cpu:  %user   %nice %system %iowait  %steal   %idle<br>           9.15    0.00    1.25    1.38    0.00   88.22<br><br>Device            r/s     w/s     rkB/s     wkB/s   rrqm/s   wrqm/s  %rrqm  %wrqm r_await w_await aqu-sz rareq-sz wareq-sz  svctm  %util<br>sdc              0.00    1.00      0.00      1.50     0.00     0.00   0.00   0.00    0.00    0.00   0.00     0.00     1.50 <br></div><div class="gmail_default" style="font-family:arial,helvetica,sans-serif"><br></div><div class="gmail_default" style="font-family:arial,helvetica,sans-serif"><br></div><div class="gmail_default" style="font-family:arial,helvetica,sans-serif">&gt;&gt; sequential? random? </div><div class="gmail_default" style="font-family:arial,helvetica,sans-serif">sequential</div><div class="gmail_default" style="font-family:arial,helvetica,sans-serif">&gt;&gt; many files? </div><div class="gmail_default" style="font-family:arial,helvetica,sans-serif"><span style="color:rgb(80,0,80);font-family:Arial,Helvetica,sans-serif">6 files  500G 200G 200G 250G 200G 100G size each.</span><br></div><div class="gmail_default" style="font-family:arial,helvetica,sans-serif">With more bricks and nodes, you should probably use sharding.</div><div class="gmail_default" style="font-family:arial,helvetica,sans-serif">For now I have only two bricks/nodes.... 
>> What are your expectations, btw?
I have run many environments with Proxmox Virtual Environment, which uses QEMU directly (not libvirt) and LXC, but the majority of my machines are KVM (QEMU) virtual machines.
My goal is to use GlusterFS because I think it is less demanding on resources such as memory, CPU and NIC when compared to ZFS or Ceph.

---
Gilberto Nunes Ferreira

(47) 3025-5907
(47) 99676-7530 - Whatsapp / Telegram

Skype: gilberto.nunes36


On Tue, 18 Aug 2020 at 10:29, sankarshan <sankarshan.mukhopadhyay@gmail.com> wrote:

On Tue, 18 Aug 2020 at 18:50, Yaniv Kaul <ykaul@redhat.com> wrote:
>
> On Tue, Aug 18, 2020 at 3:57 PM Gilberto Nunes <gilberto.nunes32@gmail.com> wrote:
>>
>> Hi friends...
>>
>> I have a 2-node GlusterFS setup, which has the following configuration:
>> gluster vol info
>>

I'd be interested in the chosen configuration for this deployment -
the 2-node setup. Was there a specific requirement which led to this?

>> Volume Name: VMS
>> Type: Replicate
>> Volume ID: a4ec9cfb-1bba-405c-b249-8bd5467e0b91
>> Status: Started
>> Snapshot Count: 0
>> Number of Bricks: 1 x 2 = 2
>> Transport-type: tcp
>> Bricks:
>> Brick1: server02:/DATA/vms
>> Brick2: server01:/DATA/vms
>> Options Reconfigured:
>> performance.read-ahead: off
>> performance.io-cache: on
>> performance.cache-refresh-timeout: 1
>> performance.cache-size: 1073741824
>> performance.io-thread-count: 64
>> performance.write-behind-window-size: 64MB
>> cluster.granular-entry-heal: enable
>> cluster.self-heal-daemon: enable
>> performance.client-io-threads: on
>> cluster.data-self-heal-algorithm: full
>> cluster.favorite-child-policy: mtime
>> network.ping-timeout: 2
>> cluster.quorum-count: 1
>> cluster.quorum-reads: false
>> cluster.heal-timeout: 20
>> storage.fips-mode-rchecksum: on
>> transport.address-family: inet
>> nfs.disable: on
>>
>> Drives are SSD and SAS.
>> Network connections between the servers are dedicated 1 Gbps links (no switch!).
>
> You can't get good performance on 1 Gbps.
>>
>> Files are 500G, 200G, 200G, 250G, 200G and 100G in size.
>>
>> Performance so far is OK...
>
> What's your workload? Read? Write? sequential? random? many files?
> With more bricks and nodes, you should probably use sharding.
>
> What are your expectations, btw?
> Y.
>
>>
>> Any other advice that could point me in the right direction, let me know!
>>
>> Thanks
>>
>> ---
>> Gilberto Nunes Ferreira
>>
>
> ________
>
> Community Meeting Calendar:
>
> Schedule -
> Every 2nd and 4th Tuesday at 14:30 IST / 09:00 UTC
> Bridge: https://bluejeans.com/441850968
>
> Gluster-users mailing list
> Gluster-users@gluster.org
> https://lists.gluster.org/mailman/listinfo/gluster-users

--
sankarshan mukhopadhyay
<https://about.me/sankarshan.mukhopadhyay>