<div dir="ltr"><div class="gmail_default" style="font-family:arial,helvetica,sans-serif">>> What's your workload? </div><div class="gmail_default" style="font-family:arial,helvetica,sans-serif">I have 6 KVM VMs, with Windows and Linux installed on them.</div><div class="gmail_default" style="font-family:arial,helvetica,sans-serif"><br></div><div class="gmail_default" style="font-family:arial,helvetica,sans-serif">>> Read?</div><div class="gmail_default" style="font-family:arial,helvetica,sans-serif">>> Write? </div><div class="gmail_default" style="font-family:arial,helvetica,sans-serif">iostat (sdc is the main storage device):</div><div class="gmail_default" style="font-family:arial,helvetica,sans-serif">avg-cpu: %user %nice %system %iowait %steal %idle<br> 9.15 0.00 1.25 1.38 0.00 88.22<br><br>Device r/s w/s rkB/s wkB/s rrqm/s wrqm/s %rrqm %wrqm r_await w_await aqu-sz rareq-sz wareq-sz svctm %util<br>sdc 0.00 1.00 0.00 1.50 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 1.50 <br></div><div class="gmail_default" style="font-family:arial,helvetica,sans-serif"><br></div><div class="gmail_default" style="font-family:arial,helvetica,sans-serif">>> sequential? random? </div><div class="gmail_default" style="font-family:arial,helvetica,sans-serif">Sequential.</div><div class="gmail_default" style="font-family:arial,helvetica,sans-serif">>> many files? </div><div class="gmail_default" style="font-family:arial,helvetica,sans-serif">6 files, of 500G, 200G, 200G, 250G, 200G and 100G respectively.</div><div class="gmail_default" style="font-family:arial,helvetica,sans-serif">>> With more bricks and nodes, you should probably use sharding.</div><div class="gmail_default" style="font-family:arial,helvetica,sans-serif">For now I have only two bricks/nodes.
Planning for more is out of the question right now!</div><div class="gmail_default" style="font-family:arial,helvetica,sans-serif"><br></div><div class="gmail_default" style="font-family:arial,helvetica,sans-serif">>> What are your expectations, btw?</div><div class="gmail_default" style="font-family:arial,helvetica,sans-serif"><br></div><div class="gmail_default" style="font-family:arial,helvetica,sans-serif">I run many environments with Proxmox Virtual Environment, which uses QEMU (not virt) and LXC, but the majority of my virtual machines are KVM (QEMU).</div><div class="gmail_default" style="font-family:arial,helvetica,sans-serif">My goal is to use GlusterFS, since I think it is less demanding on resources such as memory, CPU and NIC when compared to ZFS or Ceph.</div><div class="gmail_default" style="font-family:arial,helvetica,sans-serif"><br></div><div class="gmail_default" style="font-family:arial,helvetica,sans-serif"><br></div><div><div dir="ltr" class="gmail_signature" data-smartmail="gmail_signature"><div dir="ltr"><div><div dir="ltr"><div><div dir="ltr"><div dir="ltr"><div dir="ltr"><div>---</div><div><div><div>Gilberto Nunes Ferreira</div></div><div><span style="font-size:12.8px"><br></span></div><div><span style="font-size:12.8px">(47) 3025-5907</span><br></div><div><span style="font-size:12.8px">(47) 99676-7530 - Whatsapp / Telegram</span><br></div><div><p style="font-size:12.8px;margin:0px">Skype: gilberto.nunes36</p></div></div><div><br></div></div></div></div></div></div></div></div></div></div><br></div><br><div class="gmail_quote"><div dir="ltr" class="gmail_attr">On Tue, Aug 18, 2020 at 10:29, sankarshan <<a href="mailto:sankarshan.mukhopadhyay@gmail.com">sankarshan.mukhopadhyay@gmail.com</a>> wrote:<br></div><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex">On Tue, 18 Aug 2020 at 18:50, Yaniv Kaul <<a href="mailto:ykaul@redhat.com" target="_blank">ykaul@redhat.com</a>> wrote:<br>
><br>
><br>
><br>
> On Tue, Aug 18, 2020 at 3:57 PM Gilberto Nunes <<a href="mailto:gilberto.nunes32@gmail.com" target="_blank">gilberto.nunes32@gmail.com</a>> wrote:<br>
>><br>
>> Hi friends...<br>
>><br>
>> I have a 2-node GlusterFS setup, which has the following configuration:<br>
>> gluster vol info<br>
>><br>
<br>
I'd be interested in the chosen configuration for this deployment -<br>
the 2-node setup. Was there a specific requirement which led to this?<br>
<br>
>> Volume Name: VMS<br>
>> Type: Replicate<br>
>> Volume ID: a4ec9cfb-1bba-405c-b249-8bd5467e0b91<br>
>> Status: Started<br>
>> Snapshot Count: 0<br>
>> Number of Bricks: 1 x 2 = 2<br>
>> Transport-type: tcp<br>
>> Bricks:<br>
>> Brick1: server02:/DATA/vms<br>
>> Brick2: server01:/DATA/vms<br>
>> Options Reconfigured:<br>
>> performance.read-ahead: off<br>
>> performance.io-cache: on<br>
>> performance.cache-refresh-timeout: 1<br>
>> performance.cache-size: 1073741824<br>
>> performance.io-thread-count: 64<br>
>> performance.write-behind-window-size: 64MB<br>
>> cluster.granular-entry-heal: enable<br>
>> cluster.self-heal-daemon: enable<br>
>> performance.client-io-threads: on<br>
>> cluster.data-self-heal-algorithm: full<br>
>> cluster.favorite-child-policy: mtime<br>
>> network.ping-timeout: 2<br>
>> cluster.quorum-count: 1<br>
>> cluster.quorum-reads: false<br>
>> cluster.heal-timeout: 20<br>
>> storage.fips-mode-rchecksum: on<br>
>> transport.address-family: inet<br>
>> nfs.disable: on<br>
>><br>
>> The drives are a mix of SSDs and SAS HDDs.<br>
>> Network connections between the servers are dedicated 1Gb links (no switch!).<br>
><br>
><br>
> You can't get good performance on 1Gb.<br>
>><br>
>> Files are 500G 200G 200G 250G 200G 100G size each.<br>
>><br>
>> Performance so far is OK...<br>
><br>
><br>
> What's your workload? Read? Write? sequential? random? many files?<br>
> With more bricks and nodes, you should probably use sharding.<br>
><br>
> What are your expectations, btw?<br>
> Y.<br>
><br>
>><br>
>> Any other advice which could point me, let me know!<br>
>><br>
>> Thanks<br>
>><br>
>><br>
>><br>
>> ---<br>
>> Gilberto Nunes Ferreira<br>
>><br>
>> ________<br>
>><br>
>><br>
>><br>
>> Community Meeting Calendar:<br>
>><br>
>> Schedule -<br>
>> Every 2nd and 4th Tuesday at 14:30 IST / 09:00 UTC<br>
>> Bridge: <a href="https://bluejeans.com/441850968" rel="noreferrer" target="_blank">https://bluejeans.com/441850968</a><br>
>><br>
>> Gluster-users mailing list<br>
>> <a href="mailto:Gluster-users@gluster.org" target="_blank">Gluster-users@gluster.org</a><br>
>> <a href="https://lists.gluster.org/mailman/listinfo/gluster-users" rel="noreferrer" target="_blank">https://lists.gluster.org/mailman/listinfo/gluster-users</a><br>
><br>
<br>
<br>
<br>
-- <br>
sankarshan mukhopadhyay<br>
<<a href="https://about.me/sankarshan.mukhopadhyay" rel="noreferrer" target="_blank">https://about.me/sankarshan.mukhopadhyay</a>><br>
</blockquote></div>
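The point about 1Gb being the limiting factor can be quantified from the numbers in the thread: the link speed, the six VM image sizes, and the fact that the volume is configured with cluster.data-self-heal-algorithm: full. A back-of-the-envelope sketch in Python (the ~6% protocol-overhead factor is an assumption, not a measured value):

```python
# Rough ceiling for GlusterFS replication traffic over the dedicated 1 Gb link.
GIGABIT = 1_000_000_000                  # link speed, bits per second
raw_bytes_per_s = GIGABIT / 8            # 125 MB/s raw
usable = raw_bytes_per_s * 0.94          # assume ~6% TCP/IP + RPC overhead (rough guess)
print(f"usable throughput ~{usable / 1e6:.0f} MB/s")  # ~118 MB/s

# Time to fully re-sync all six VM images (sizes taken from the thread),
# relevant because the volume uses cluster.data-self-heal-algorithm: full.
total_bytes = sum(gb * 1e9 for gb in (500, 200, 200, 250, 200, 100))
hours = total_bytes / usable / 3600
print(f"full heal of {total_bytes / 1e12:.2f} TB ~ {hours:.1f} hours")  # ~3.4 hours
```

Even a single SATA SSD can sustain several times 118 MB/s, so in this setup the network, not the disks, sets the replication and healing ceiling.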