<html><head><meta http-equiv="Content-Type" content="text/html; charset=utf-8"></head><body style="word-wrap: break-word; -webkit-nbsp-mode: space; line-break: after-white-space;" class=""><div class="">A few thoughts from another ZFS backend user:</div><div class=""><br class=""></div>ZFS:<div class="">use arcstat to watch your cache use over time and consider:<br class=""><div class=""><span class="Apple-tab-span" style="white-space:pre">	</span>Don’t mirror your cache drives; use them as two separate cache devices to increase the available cache.<div class=""><span class="Apple-tab-span" style="white-space:pre">	</span><span style="caret-color: rgb(0, 0, 0); color: rgb(0, 0, 0);" class="">Add more RAM. Lots more RAM (if I’m reading that right and you have 32 GB of RAM per ZFS server).</span></div><div class=""><span class="Apple-tab-span" style="caret-color: rgb(0, 0, 0); color: rgb(0, 0, 0); white-space: pre;">	</span><font color="#000000" class=""><span style="caret-color: rgb(0, 0, 0);" class="">Adjust ZFS’s maximum ARC size (zfs_arc_max) upwards if you have lots of RAM.</span></font></div><div class=""><font color="#000000" class=""><span class="Apple-tab-span" style="caret-color: rgb(0, 0, 0); white-space: pre;">	</span>Try more metadata caching and less content caching if your workload is find-heavy.</font></div><div class=""><font color="#000000" class="">Compression on these volumes could help improve I/O on the RAIDZ2s, but compression only applies to newly written data, so you’ll have to copy the data on again if it wasn’t already enabled. Different zstd levels are worth evaluating here.</font></div><div class=""><font color="#000000" class="">Read up on recordsize and consider whether you would get any performance benefit from 64K, or maybe something larger for your large files; it depends on where the reads are being done.
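For concreteness, the knobs above look roughly like this on the command line. This is only a sketch: the pool "tank" and dataset "tank/gluster" are placeholder names, and the ARC size is an example to scale to your actual RAM.

```shell
# Placeholder names: pool "tank", brick dataset "tank/gluster".
# Run as root on each ZFS server; adjust sizes to your hardware.

# Watch ARC hit rates over time (5-second intervals)
arcstat 5
# or read the raw counters directly:
cat /proc/spl/kstat/zfs/arcstats

# Raise the ARC ceiling (example: 24 GiB) at runtime, then persist it
echo 25769803776 > /sys/module/zfs/parameters/zfs_arc_max
echo "options zfs zfs_arc_max=25769803776" > /etc/modprobe.d/zfs.conf

# Favor metadata caching for find-heavy workloads
zfs set primarycache=metadata tank/gluster

# zstd compression -- only affects data written after it's enabled
zfs set compression=zstd-3 tank/gluster

# Larger records for the big sequential files
zfs set recordsize=1M tank/gluster

# Cheaper atime handling
zfs set relatime=on tank/gluster   # or: zfs set atime=off tank/gluster
```

These are config/admin commands that need a live ZFS pool and root, so treat them as a template rather than something to paste blindly.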
</font></div><div class="">Use relatime, or disable atime tracking entirely.</div><div class="">Upgrade to ZFS 2.0.6 if you aren’t already on 2.0 or 2.1.</div><div class=""><br class=""></div><div class="">For Gluster, it sounds like Gluster 10 would be good for your use case. Without knowing what your workload is (VMs, gluster mounts, NFS mounts?), I don’t have much else at that level, but you can probably play with cluster.read-hash-mode (try 3) to spread the read load out amongst your servers. Search the list archives for general performance hints too; server.event-threads and client.event-threads are probably good targets, and the various performance.*threads options may or may not help depending on how the volumes are being used.</div><div class=""><br class=""></div><div class="">More details (ZFS version, Gluster version, volume options currently applied, more details on the workload) may help if others use similar setups. You may be getting into the territory where you just need to get your environment set up for some A/B testing with different options, though.</div><div class=""><br class=""></div><div class="">Good luck!</div><div class=""><br class=""></div><div class=""> -Darrell</div><div class=""><br class=""><div><br class=""><blockquote type="cite" class=""><div class="">On Dec 11, 2021, at 5:27 PM, Arman Khalatyan &lt;<a href="mailto:arm2arm@gmail.com" class="">arm2arm@gmail.com</a>&gt; wrote:</div><br class="Apple-interchange-newline"><div class=""><div dir="auto" class="">Hello everybody,<div dir="auto" class="">I was looking for some performance considerations on GlusterFS with ZFS.</div><div dir="auto" class="">The data diversity is as follows: 90% &lt;50kb and 10% &gt;10GB-100GB.
In total over 100 million files, about 100 TB.</div><div dir="auto" class="">3 replicated JBODs, each one with:</div><div dir="auto" class="">2x8 disks RAIDZ2 + special device mirror 2x1TB NVMe + cache mirror 2x SSD + 32 GB RAM.</div><div dir="auto" class=""><br class=""></div><div dir="auto" class="">Most operations are reads and "find file".</div><div dir="auto" class="">I set some ZFS parameters like: xattr=sa, primarycache=all, secondarycache=all.</div><div dir="auto" class="">What else could be tuned?</div><div dir="auto" class="">Thank you in advance.</div><div dir="auto" class="">Greetings from Potsdam,</div><div dir="auto" class="">Arman.</div><div dir="auto" class=""><br class=""></div></div>
________<br class=""><br class=""><br class=""><br class="">Community Meeting Calendar:<br class=""><br class="">Schedule -<br class="">Every 2nd and 4th Tuesday at 14:30 IST / 09:00 UTC<br class="">Bridge: <a href="https://meet.google.com/cpu-eiue-hvk" class="">https://meet.google.com/cpu-eiue-hvk</a><br class="">Gluster-users mailing list<br class=""><a href="mailto:Gluster-users@gluster.org" class="">Gluster-users@gluster.org</a><br class="">https://lists.gluster.org/mailman/listinfo/gluster-users<br class=""></div></blockquote></div><br class=""></div></div></div></body></html>